Redefining Society and Technology Podcast

How AI-Enhanced Phishing Changes the Economic Dynamics of Phishing Attacks | A Conversation with Marco Ciappelli and Fred Heiding | Redefining CyberSecurity with Sean Martin

Episode Summary

Explore how AI is revolutionizing phishing attacks and the defensive strategies needed to counter them, as Sean Martin, Marco Ciappelli, and Fred Heiding of Harvard discuss the alarming rise of hyper-personalized phishing and its societal impact.

Episode Notes

Guests: 

Fred Heiding, Research Fellow, Harvard

On LinkedIn | https://www.linkedin.com/in/fheiding/

On Twitter | https://twitter.com/fredheiding

On Mastodon | https://mastodon.social/@fredheiding

On Instagram | https://www.instagram.com/fheiding/

Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli

____________________________

Host: Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/sean-martin

View This Show's Sponsors

___________________________

In today's digital era, AI-enhanced phishing attacks are transforming the landscape of cybersecurity. An insightful episode of The Redefining CyberSecurity Podcast features host Sean Martin alongside ITSPmagazine co-founder Marco Ciappelli, and guest Fred Heiding, a research fellow in computer science at Harvard School of Engineering and Applied Sciences, and a fellow at the Harvard Kennedy School.

Fred Heiding shares updates on the evolution of phishing attacks using AI, highlighting both the technical facets and the societal implications. He explains how advanced language models can now automate the creation of highly realistic phishing emails, making it easier and more cost-effective for attackers to target individuals and organizations.

Heiding discusses the concept of hyper-personalization, where attackers gather granular information about their targets, such as their communication patterns and personal interests, to craft emails that seem authentic and trustworthy. This hyper-personalization poses significant challenges.

Heiding provides an example where attackers mimicked a Black Hat organizer's email, highlighting the precision and timing crucial for successful phishing. The use of open-source language models, which can be adjusted by developers to remove any built-in protections, further exacerbates the issue.

Marco Ciappelli ponders the potential solutions by leveraging AI for defensive strategies. Heiding acknowledges this is an area with promise, particularly in personalized spam filters, yet notes the inherent advantages attackers hold over defenders due to the unpatchable nature of human intuition. Defense mechanisms using AI can marginally enhance current spam filters but face limitations in practicality and widespread adoption because of people's reluctance toward continuous training and complex defense mechanisms.

Sean Martin evaluates the potential of AI in monitoring patterns of human vulnerability over time, which could redefine phishing training by focusing on specific, individualized principles. However, he also stresses the economic aspect, citing that cheaper and more efficient phishing methods increase the attack's scale and frequency, further complicating defensive strategies.

Heiding and Ciappelli both emphasize that while technological advancements provide tools for protection, they also require more personal data to be effective—a trade-off that involves significant privacy concerns. The future of online trust, according to Heiding, appears precarious. As phishing attacks become more sophisticated, the very nature of how people trust digital communications must evolve.

Overall, this episode underscores the critical need for ongoing research and dialogue in cybersecurity, focusing on balancing innovation in defense mechanisms against the ever-advancing sophistication of attacks.

___________________________

Sponsors

Imperva: https://itspm.ag/imperva277117988

LevelBlue: https://itspm.ag/attcybersecurity-3jdk3

___________________________

Watch this and other videos on ITSPmagazine's YouTube Channel

Redefining CyberSecurity Podcast with Sean Martin, CISSP playlist:

📺 https://www.youtube.com/playlist?list=PLnYu0psdcllS9aVGdiakVss9u7xgYDKYq

ITSPmagazine YouTube Channel:

📺 https://www.youtube.com/@itspmagazine

Be sure to share and subscribe!

___________________________

Resources

Harvard Business Review article: https://hbr.org/2024/05/ai-will-increase-the-quantity-and-quality-of-phishing-scams

IEEE Access article: https://ieeexplore.ieee.org/document/10466545

BSides presentation: https://bsideslv.org/talks#8WK8P3

Hacking Humans Using LLMs with Fredrik Heiding: Devising and Detecting Phishing: Large Language Models vs. Smaller Human Models | Las Vegas Black Hat 2023 Event Coverage | Redefining CyberSecurity Podcast With Sean Martin and Marco Ciappelli: https://redefining-cybersecurity.simplecast.com/episodes/hacking-humans-using-llms-with-fredrik-heiding-devising-and-detecting-phishing-large-language-models-vs-smaller-human-models-las-vegas-black-hat-2023-event-coverage-redefining-cybersecurity-podcast-with-sean-martin-and-marco-ciappelli

A Framework for Evaluating National Cybersecurity Strategies | A Black Hat USA 2024 Conversation with Fred Heiding | On Location Coverage with Sean Martin and Marco Ciappelli: https://redefining-cybersecurity.simplecast.com/episodes/a-framework-for-evaluating-national-cybersecurity-strategies-a-black-hat-usa-2024-conversation-with-fred-heiding-on-location-coverage-with-sean-martin-and-marco-ciappelli

Deep Backdoors in Deep Reinforcement Learning Agents | A Black Hat USA 2024 Conversation with Vas Mavroudis and Jamie Gawith | On Location Coverage with Sean Martin and Marco Ciappelli: https://itsprad.io/redefiningcybersecurity-454

___________________________

To see and hear more Redefining CyberSecurity content on ITSPmagazine, visit: 

https://www.itspmagazine.com/redefining-cybersecurity-podcast

Are you interested in sponsoring this show with an ad placement in the podcast?

Learn More 👉 https://itspm.ag/podadplc

Episode Transcription

How AI-Enhanced Phishing Changes the Economic Dynamics of Phishing Attacks | A Conversation with Marco Ciappelli and Fred Heiding | Redefining CyberSecurity with Sean Martin

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time, we provide it “as it is,” and we hope it can be helpful for our audience.

_________________________________________

Sean Martin: [00:00:00] And here we are. You're all very welcome to a new episode of Redefining CyberSecurity here on ITSPmagazine. This is Sean Martin, your host, where I get to talk to all kinds of cool people about cool things, cyber related. So hopefully we can do, uh, do what we need to, to protect the businesses and, uh, the people we serve in society, with all the technology we use. 
 

And for this episode, you probably see that, uh, Marco snuck his way in somehow  
 

Marco Ciappelli: First the cool people, and then, and then me,  
 

Sean Martin: and then Marco. Exactly. So Fred, if you haven't figured out, Fred's the cool one, and, uh, Marco, Marco snuck in. Well, Marco's on because of the topic. Two things for this topic. One is we, we spoke to Fred last year as part of our Black Hat coverage, event coverage, and, uh. 
 

Fred has some updates, and it's very connected to, uh, [00:01:00] sociology, so society and the impact it has, not just on business, but on us as humans overall. So I figured Marco would have some fun having this chat with me. So Fred, it's good to have you on again.  
 

Fred Heiding: Yeah. Thanks for having me. It's great to be here and I'm pretty excited to share some of our updates and what's happened since last year. 
 

Sean Martin: Absolutely. And for those wondering, this is not to be confused with the episode that's part of this year's coverage that we just had with Fred, so be sure to check that one out as well. This is a follow-up to the episode we called Hacking Humans Using LLMs: Devising and Detecting Phishing, Large Language Models vs. Smaller Human Models. 
 

So that was the talk last year. And, uh, we're going to dig into a few, a few points from some of your recent findings and understanding of what's going on. But before we do that, Fred, maybe a quick word just to refresh folks on what you're up to these [00:02:00] days.  
 

Fred Heiding: Yeah, that makes sense. I'm a research fellow in computer science over at Harvard School of Engineering and Applied Science. 
 

I've since very recently also started a fellowship at the policy school at Harvard, the Harvard Kennedy School, and we're going to talk a little bit about that in regard to this AI phishing project as well, because in the last few years I've really started branching out to look at the intersection of policy, technology, and business in the context of cybersecurity. 
 

So, historically, a lot of my research has been purely technical, but as of last year, I do more and more projects related to, you know, the social aspects of security, but also to society in general, because there are a lot of complex twists and turns to cybersecurity and to solving cyber problems. 
 

And I'm really living in that intersection now, seeing how to take the technical solutions and make them useful for society and also feasible for businesses to implement.  
 

Sean Martin: And so what, [00:03:00] what were some of the biggest changes in the last year? I mean, I have a lot of questions of, has it changed here? 
 

Has it changed there? But instead of me guessing, maybe you can highlight some of the changes you've seen. Have things panned out as you expected based on the last chat we had, or what's going on?  
 

Fred Heiding: I think they did. The first thing I have to say, to sort of further my own cause a little bit: you probably remember this, but a few weeks after Black Hat last year, when I talked about AI phishing, there was a phishing attack that resulted in all of the MGM casinos being shut down. 
 

And that's kind of fun in a way (it's of course not fun, it's terrible), but at this big security conference I talked about how social engineering is a big nuisance and how we should really increase our protections, and just a couple of weeks later, you know, the very casino where we had our conference was shut down. 
 

I think they estimated they lost a hundred million dollars in revenue because, if I remember correctly, there were [00:04:00] a couple of folks who worked there who were targeted with pretty savvy social engineering attacks, where the attackers looked up information online, found things to use, and then could trick them. 
 

And this is exactly a segue into what we're doing now, because last year we proved that, you know, phishing emails can be automated using language models. That's quite intuitive. Language models are great at creating textual content that appears realistic; oftentimes the content is not true, but that is exactly what you need to trick people. 
 

And we talked a little bit about how we can add human models to make it even better, and so forth. But what we've looked at in the year since then is how we can take this further. Okay, we know that language models can create phishing emails, but that's not all, because you also need to find out background information about the target, send the emails, and analyze the results. And we've been playing around with the concept of what attackers are likely to do. Well, attackers are likely to do the worst thing [00:05:00] they can do. So we've been thinking about this, and that's why we created a tool that now automates all parts of the phishing chain. 
 

So as we see it, there are five steps to it: first, you collect targets; then you gather information about each participant; then you create the personalized email; then you send out the email; and finally you validate and learn from the results. And this is really, really cool; there's a lot that happens here. And one thing that we started working on quickly after Black Hat last year is a collaboration with some economics researchers to see what this means in practice, and we added some work in terms of, you know, what an hour of an attacker's time is worth. 
 

By doing this, we try to quantify various aspects of phishing attacks. We could quite quickly see that we have a unique situation, which very rarely happens, which is that phishing attacks become more powerful, they become more capable using AI, but they also become cheaper. And this is really bad. 
 

For the past 20 years, you had to [00:06:00] choose: do you want the cheap phishing attack that's not as powerful, or do you want a really good phishing attack that's a bit more expensive because it takes more time to launch? Right now you can combine them. And we wanted to see, well, what does that mean in practice? 
 

We did a bunch of economic analysis, which has led us to the study we're doing now. And we find that it's up to 99 percent cheaper to launch these attacks, sometimes way more than that; you can add a few more decimals to the end of that number. And that's really bad. So, to contrast, I'm going to stop and pause for you guys soon, but the last thing I want to say is that I collaborated with some people working on, for example, North Korea as a cybercrime actor. 
 

And North Korea actually gets a substantial amount of its GDP through cyber attacks, which is very, very interesting. And that's problematic for a lot of reasons. For example, the West places a lot of sanctions on North Korea to stop its nuclear programs. Well, these sanctions are of course not useless, but they're less effective when North Korea can launch cyber attacks and go [00:07:00] around legitimate trade. 
 

And a lot of these cyber attacks are enabled by phishing attacks. And again, we now stand in a position where phishing attacks will be cheaper and better. So there are a lot of problems scattered around this, everything from how nations should cope with this and stop these kinds of cyber actors, to how the little guy should protect themselves. Scams against individuals are increasing rapidly. Just a couple of days ago, a friend of mine had his email compromised by a scam email that appeared to come from another student but wasn't actually from that student. And these things happen all over the world, of course. 
 

And so I think it's safe to say that since last year, most of our predictions have been quite accurate. And the AI deception market, and I have to say it is a market now, because that's what it is, an illegal market, has been growing quite a lot. And to the best of my knowledge, it will continue to grow way more in the coming years. 
 

Sean Martin: So I'm going to, and I'll let Marco broaden this out as we [00:08:00] continue, but I'm going to stick with the first point, on collecting information, which I believe is the first point, right? Yeah. Well, collecting targets, collecting information about them. Because I think a year ago we were looking at things like ChatGPT, and we could see that the creation of an email is easy. 
 

You give it the right prompt, the right information, and you can have it create something realistic that would be readily received by a target recipient. You pump that through some automation and, to your point, you can scale that out for very little cost. But I've heard stories where collections of information, multiple sources of information, have been used to determine the organizational structure, who reports to whom. You get your hands on some email [00:09:00] communications, see how they speak to each other, and use that not just to write an email faster, but to actually have context and insight into how these communications normally sound. So, I don't know, have you seen any of that? And if so, what impact does it have? 
 

I mean, do you see bad actors getting access to data and using it in this way?  
 

Fred Heiding: Yeah, Sean, that's a fantastic question. It's really good, because I said a mouthful there, but you really picked up one of the key points, which is that personalization is a key difference in our research now compared to last year. 
 

Just as a quick reminder, last year we had pretty broad emails that the language models created, such as, you know, offering the person a gift card at Starbucks, and so forth. And it's quite easy to figure out whether that will be relevant; it will honestly be relevant for most people, but it's [00:10:00] quite simple. 
 

What we're doing this year, which is exactly what you're saying, is mimicking what a lot of criminals and attackers do, which is to find way more granular information. You dig deep down and see not only whether this person is likely to go to Starbucks; you can find out really detailed information, such as what type of language this person uses with different coworkers, friends, and family members. 
 

What type of projects are they working on right now? What goals do they have? What dreams do they have? All this kind of information, because it's very often publicly available. Most people's digital footprint is massive, and your digital footprint is just the information about you that's available. Even if you try to restrict it, it's very, very hard. 
 

Only a few people have no information whatsoever published online. And this is also a security-versus-exposure trade-off from a personal perspective: we want to market ourselves, we want to be out there, and it's often not feasible to remove ourselves from the internet. But if you're out there, which most people are, then there's so much information that can be [00:11:00] found out, and it is being found out. Exactly as you say, attackers use this maliciously. For example, with my friend who was recently hacked, they knew the people he communicated with daily, so they sent an email from the legitimate email address of one of his friends who had been hacked. So it's sort of a chain hack; these are not just empty, half-relevant attacks. 
 

And in our presentation this year, we talk about different levels of personalization. What we did last year we now define as mild personalization, but the type of attack that you mentioned, and that we treat now, we call hyper-personalization: you find out very granular information and feed it to the language model over time. You can imagine all kinds of horror stories when the language model creates a perfect linguistic style to mimic a friend. 
 

And then you combine this with phishing best practices such as relevancy, so you write an email in the linguistic style of a friend or a coworker, with perfect timing, when it's your birthday or when you're about to give a business presentation. [00:12:00] For example, next week, before I go up on the Black Hat stage, maybe the phisher could mimic one of the Black Hat organizers and send me: Hey, Fredrik, you need to update these slides now before you go up. 
 

And if that sounds right and comes from the right email address, I'm going to be a little bit stressed, and I'll probably do it. So these attacks are becoming way more difficult to spot. And the worst thing about all this is that it's easy to do: we have a proof of concept showing that this is cheap and easy with the technologies out there. There are some security restrictions in these language models (we talk about that as well), but they're quite easy to bypass, and there are open-source models. So these tools are basically readily available for any crime actor to use, and I think we'll see much more usage of them. 
 

Sean Martin: Can you, can you highlight that point? Because I guess if you're using something like ChatGPT, it's going to have built-in walls and barriers and rules. Um, but [00:13:00] you're saying there are open-source ones that developers can tweak, right? That they can remove, remove some of those protections.  
 

Fred Heiding: That's a fantastic point. 
 

And then, yes, the short answer is yes. So basically there are two ways that companies try to protect against this. First of all, we can restrict the models. We can say that if you ask ChatGPT to create a phishing email or to build a bomb, it's going to say, no, I'm not going to do this. In the case of phishing, that's very, very difficult. 
 

And there's a lot of brilliant research showing how to get around this: if you say "create a marketing email" instead of "create a phishing email," you can bypass it. And there are other ways too, but basically it's really hard, because in the context of phishing and deception, the only real difference between a phishing email and a marketing email is the intention. 
 

And it's hard to know the intention. Back in the day, when these emails had poor grammar and weren't so reasonable or logical, you could spot them. But if it's a perfect email and only the intention matters, these language models still have to be useful, so it's very [00:14:00] hard to block reasonable content. 
 

But even if the security mechanisms were perfect, which I don't think they will ever be, there's a lot of research on jailbreaking these models, which is basically taking them and removing the security features. And that works pretty well, unfortunately. My team has even done some small demonstrations showing how you can remove the security modifications very easily. 
 

And then you can outright say: create a phishing email that targets depressed people, for example, which is terrible. So yes, there are some security mechanisms, but they're unfortunately pretty easy to bypass.  
 

Marco Ciappelli: Wow. Um, so many places I could go, but I'm going to connect this with a conversation we just had about ransomware as a service. And I can totally see phishing as a service, of course. You take the good AI, you turn it bad, and there are no [00:15:00] guardrails anymore, and then you can give it to anyone who wants to do phishing. So I think the big issue, and it's what I grasped from what you said, is of course that it makes it easier to do the research. 
 

It makes it easier to do the email, even for those who don't have the knowledge to do it. And it's scalable, right? So, I mean, I look back and remember I used to have these conversations with social engineers, you know, back 10 years ago when we started ITSPmagazine, and I would go back to, yep, there's the Spanish prisoner letter. 
 

There's the Nigerian prince, and people fall for it. Those were, I mean, pretty easy to spot, but people would fall for them anyway. And then there were the really high-level targets, where you would have research and intelligence, [00:16:00] kind of like the stuff that spy movies are made of. 
 

And now you're giving the spy stuff to everyone, to any criminal that wants it. So where I want to go with this, the question is: there's the AI, the good one, air quotes, that you can turn into a bad one. And then there's the good one that you can use, I'm assuming, to fight the bad one, to spot this kind of stuff. 
 

Where are we standing with that? Because if there's economic value in cybercrime, I feel like there's really big economic value for a company that comes up with a solution to this.  
 

Fred Heiding: Yeah, that's a great question. And there are a lot of anti-phishing providers and spam-filter detection services that try to solve this. 
 

And the short summary is that there is some hope, and there are some really good potential [00:17:00] solutions. One problem that my team and I find really bad, and we talk about this in our latest article in the Harvard Business Review, is that in the context of phishing, we really find there's an asymmetrical advantage that benefits the attacker way more than the defender. 
 

And the reason for this is that if we start with an example outside phishing, so technical cybersecurity, then the attackers can use AI to improve their attacks, but the defenders will of course use AI to improve their defenses, and you can find code vulnerabilities and so forth. But in phishing, we're talking about humans. 
 

We can't patch the human brain; it's an old biological construct, right? So it doesn't work the same way. There are workarounds, and we can use AI for good, but the attack methods are so much more powerful. Again, you can use AI to train humans, but that is far less efficient than using it for attacking. 
 

So in this sense, I'm far [00:18:00] more skeptical about how this will play out in the context of deception than in technical cybersecurity.

Marco Ciappelli: What about, sorry, what about the technical aspect? Let's say the spam filter, so, you know, an AI-enhanced one. I mean, you can't fix the brain, at least not that we know of yet. 
 

Fred Heiding: So that's a great question, right? We have something we talk about in our presentation this year called personalized spam filters, which we're working on, and they're quite promising. Basically, by knowing all this information that we collect about our participants, we can take that information and create spam filters that target your specific needs, because there is no one-size-fits-all phishing email. 
 

Some people think a given email is good and some think it's bad, and so forth. And a language model that knows these things is pretty powerful. It can also give recommendations: it doesn't have to be a binary spam filter saying yes or no, good or bad. It can say that this email could be legitimate, but it could also be malicious. 
 

So, [00:19:00] just to be sure, go to the official website and find the links yourself, and so forth. And this is really good; I'm quite optimistic about it. The downside is that there are two things that make me a bit more skeptical. The first is that, again, it's very expensive, not necessarily in a monetary sense, but in terms of people's time. People are relatively stressed, and we will probably not ask a language model about every email. And quite frankly, a lot of emails ask you to take some call to action, so you would have to run this check for almost every email. So I don't think it will be widely used in practice. 
 

Again, if you have a security mechanism that takes two seconds, I think most people won't use it, because people are just too stressed. So that's one problem. The other problem is that spam filters already exist, and they're good. You know, there are a lot of state-of-the-art algorithms with 95 to 99-plus percent accuracy. 
 

So language models can improve spam-filter accuracy, [00:20:00] but these techniques aren't novel; they will marginally improve the filters at best. Maybe they will improve them much more in the future, but so far, spam filters are already good. It would have been a different thing if we had no spam filters and then the language models came. These things are good and promising. But on the attacker side, we didn't really have an automatic tool to create phishing emails before, and now we do. For attackers it's a game changer, but for defenders it's, I'd say, at best an incremental benefit for now. 
 

Oh, we'll see. We'll see. It's, um, we obviously have to do what we can.  
 

Sean Martin: Yeah. It reminds me, or it's making me think of, I often think of healthcare when I think of this kind of thing, um, where we're looking to use technology like AI to provide better public health, right? So, a broad view of what's going on publicly. 
 

[00:21:00] Are there any trends, any peaks or spikes, or any other anomalies that could point to something interesting? And then, at the other end, there's precision healthcare, where, specifically for somebody, kind of like your point about an individual email targeting a certain person, you ask what you can learn from that particular case. And in between there are maybe some different slices: let's look at cancer, let's look at this type of threat. Um, so I guess my question is, do we have enough? Because one of the things you pointed to, sorry, I'm rambling a little bit here, was validating the results, which was one of the five steps. 
 

So clearly they have knowledge of what's being done and what the result is. But I'm just wondering if there's something we can do in [00:22:00] the context of what I just described, in terms of public health and precision health, and the space in the middle. Can we leverage something that they have to get a better view of those results across the different spectrums? I don't know if that makes sense or not, but  
 

Fred Heiding: it makes a lot of sense. And I'm going to deviate a little bit from what you say, but mostly answer it, because you raise a very good point: what can we do, in terms of validating the results, to use this for good? 
 

And that's probably the most exciting thing I find about our new study: in these personalized spam filters, as we call them, we have a bunch of vulnerability categories, which we take from the traditional literature. There's a lot of marketing and psychological literature on why people are influenced by things. 
 

These are things such as, you know, authority, social peer pressure, et cetera. And a lot of research has found, and again, these are old things that were [00:23:00] already found by other people, that people tend to have different influence criteria that matter to them. You know, perhaps I am more vulnerable to peer pressure, and Marco is more vulnerable to authority. 
 

And that means, for example, if you want to convince me, show that all my friends are doing something and maybe I will do it; if you want to convince Marco, you should get a police officer or some authority figure to tell him something, just as an example. And my theory is that these things are context-dependent, so I don't have one influence principle that always works for me. 
 

And they're probably different based on where I am, who I am, and so forth. But what we do in the tool we're discussing now, which we created over the past year, is have it randomly create phishing emails with different influence principles. Then it measures over time which emails I fall for, which Marco falls for, which Sean falls for, and so forth. 
 

And that's super exciting, because then you can see the patterns that humans follow. [00:24:00] Right now, in this pattern analysis, I'm very interested to see more of this. For example, if you do this on a large enough scale over a couple of months or years, maybe we'll see that every Monday I am very susceptible to social peer pressure, but every Wednesday I'm susceptible to authority. Who knows? 
 

There are a lot of different patterns we could find, and if we find them, then we can make phishing training way more efficient. Then we can say, you know, I don't need to think about this whole big set of best practices, because that takes too much time and I'm never going to do it; instead, we can pin down one or two super simple principles, and those are what I focus on. 
 

Then we can implement that in our daily lives. I hope that makes sense. That was a mouthful, but I'm pretty excited about this new defense strategy.  
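The workflow Fred describes here (randomly assigning an influence principle to each training email, tracking which ones each person falls for, and surfacing the one or two principles their training should focus on) could be sketched roughly like this. To be clear, this is an illustrative sketch, not Fred's actual tool; the principle list, the class, and the method names are all hypothetical.

```python
import random
from collections import Counter, defaultdict

# Cialdini-style influence principles like the ones mentioned in the
# conversation (list is illustrative, not Fred's actual taxonomy).
PRINCIPLES = ["authority", "social_proof", "scarcity", "reciprocity"]


def assign_principle() -> str:
    """Randomly pick an influence principle for the next training email."""
    return random.choice(PRINCIPLES)


class SusceptibilityTracker:
    """Tally which influence principles each user falls for over time."""

    def __init__(self):
        self.sent = defaultdict(Counter)    # user -> principle -> emails sent
        self.clicks = defaultdict(Counter)  # user -> principle -> emails clicked

    def record(self, user: str, principle: str, clicked: bool) -> None:
        """Log one training email and whether the user fell for it."""
        self.sent[user][principle] += 1
        if clicked:
            self.clicks[user][principle] += 1

    def click_rate(self, user: str, principle: str) -> float:
        """Fraction of emails with this principle that the user clicked."""
        sent = self.sent[user][principle]
        return self.clicks[user][principle] / sent if sent else 0.0

    def weakest_principles(self, user: str, n: int = 2) -> list:
        """The one or two principles this user's training should focus on."""
        rates = {p: self.click_rate(user, p) for p in PRINCIPLES}
        return sorted(rates, key=rates.get, reverse=True)[:n]
```

With enough recorded results per user, `weakest_principles` distills the "big set of best practices" Fred mentions down to the couple of principles a given person actually falls for.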
 

Marco Ciappelli: It makes sense, but it's always about training, and I don't think people want to be trained about this stuff. [00:25:00] I think they...  
 

Sean Martin: I was gonna say, Marco won't listen to anybody. 
 

So, peers, police...  
 

Marco Ciappelli: No, here's the thing. You get stuff in email all the time, you get it in texts, and, you know, I just block everybody. I made my own decision. And sometimes it's like, well, you know what? If I catch a good fish and I don't eat it, that's okay. I'll take the risk, because I think defensively, like a lot of people who are in this industry or have been talking about this stuff long enough. But here's the thing: it's all psychology based, right? 
 

I mean, you're just describing something that isn't technical. It's phishing, but technology makes it easier. So I still go back to this: I don't think we should use AI to do better training. I think we should use AI to build, again, [00:26:00] better filters. In my experience, and I know we've been dealing with some spam issues lately, I don't think spam filtering is working that well, honestly. 
 

What you mentioned in terms of knowing the individual, and knowing how to protect where the individual is more vulnerable because of their psychological traits, that's where I think the solution lies. Take, for example, Apple Intelligence, where you'll finally have a Siri that's going to be upgraded, after 10 years, to actually be useful. Hopefully that will be one aspect that could help. 
 

As a personal assistant, in order to perform better, it needs to know more things about you. And we need to decide how much we trust it with our [00:27:00] privacy, and where we go with that, because we don't want to give away information. But how is it going to protect us if we don't give away that information? How are you going to book a restaurant for me if I don't tell you that I like sushi and don't like something else? 
 

And it goes deeper and deeper. If we can trust the AI, I think we can have a really strong defense against phishing. It will be an extension of ourselves.  
 

Sean Martin: Well, it's interesting, because for decades we've talked about analyzing the infrastructure and identifying where the exposures are: what's internal and can be compromised internally, and what's publicly available and could be used to compromise the systems externally. 
 

At a personal level, we might bring in threat intelligence from an executive perspective or a company perspective. Is the company being targeted? Is the industry being targeted? Are the executives [00:28:00] being mentioned in threads, or whatever it is? But what I hear you describing, Marco, is kind of an analysis of us as humans. 
 

So the same type of analysis...  
 

Marco Ciappelli: Applying that to ourselves.  
 

Sean Martin: Yeah, so all the stuff they're doing to target us. And marketers have been doing it for ages, right? What's a good ad for us? Which one works? A/B testing, all that stuff.  
 

Marco Ciappelli: Well, you know, I just...  
 

Sean Martin: We need to do that for employees and citizens, or...  
 

Marco Ciappelli: I just read an interesting book called The Battle for Your Brain, which talked about brain scans and all the ways we're using monitoring, invasive or non-invasive, but mostly non-invasive, to detect, for example, whether a person driving a bus is falling asleep. But in order to do that, we need to give access to our thoughts, which becomes scary, because then you bring in the thought police and all of that. But there is also that [00:29:00] cognitive liberty, freedom of thought. Where do you draw the line, when it helps society keep 50 people on a bus from dying because you detected that the driver was about to fall asleep and crash? 
 

So, I don't know. You want to talk about the societal repercussions of how we need to address certain things, Sean? That's exactly what I had in mind: applying it to the company, to the infrastructure, to the organization, to ourselves. And I don't know; it would make a good movie to watch, I think. 
 

Sean Martin: Fred, what do you think about that?  
 

Fred Heiding: Yeah, I think it's a fantastic point. There were quite a lot of things said there, but one thing you said early on that really resonates with me is that you don't think people want to do this. That's actually a fantastic point that a lot of folks like me forget sometimes. 
 

During the past year, I've been talking a lot with different educational researchers, trying to collaborate with them and see how we can [00:30:00] solve this, because, as you said, somehow we have to make people want to do this. And that's very interesting. They say a lot of things that for me have been unintuitive, such as: you have to make this fun for people. 
 

You have to find some way for people to find it useful. You know, why should they undergo training to learn this thing? And I think there's an inherent problem here: even if I think about myself, I'm pretty busy, I don't have too much time. And cybersecurity is always a game of defense, right? 
 

That's a problem, because people want to add value to their lives, and defense is never fun, because it just means training yourself to avoid a potential loss. I don't think there's a clear-cut answer to that. Your Apple example is very interesting, right? You can add a tool like Siri, and there are a lot of people who wouldn't trust that, of course, but if it works well, you can add a tool that adds value to your life because it helps you book restaurants, find people to talk to, and so forth. And as a sort of [00:31:00] side consequence of that, it could also help protect you against phishing. And that's brilliant, right? 
 

I really like that suggestion. So overall, I can only agree about the problem: making this fun and interesting, and incentivizing people to think about it, is quite tricky.  
 

Marco Ciappelli: Yeah, it's not fun.  
 

Sean Martin: Well, what I'm hearing is that at some point, the technology companies that are enhancing our lives, and the ones protecting us from all those enhancements when they go awry, will need more information about us in order to protect us as humans, not just the machines we're running. And, I mean, spam and phishing, well, let's forget spam, but phishing is still very prevalent. It's probably one of the most [00:32:00] used ways to penetrate an organization and gain access to things. 
 

I don't know. We talk about it as a human problem, and we have the training and all this stuff that Marco was talking about. And I don't know that we have an answer; maybe the answer is that we'll have to analyze ourselves as humans. I don't know. Interesting. I'm trying to think of what, uh...  
 

Marco Ciappelli: Sean, we haven't even touched on voice phishing with artificial intelligence that can mimic a voice, or visuals. And imagine, Fred, talking about the timing of an attack: you're going to get a phishing email, 
 

I don't know, "click here to get a discount on this toy," probably the last week before Christmas, when you're panicking and freaking out because you can't find the toy for your kids, or whatever it is. I mean, there is a timing to all these things. And I remember we were on with [00:33:00] the former CISO and military guy, Roland Cloutier, who knows what can be done, and he almost got phished by voice, because it was such an emotional moment for him. He knew 99.9 percent that this wasn't somebody in his family, but can you be 100 percent sure, when the value of what you're worrying about, that individual, is so high? I mean, it was a moment where you realize we can all fall for this. And if you're not into this kind of thinking all the time, like we are, the regular person isn't going to care. 
 

Sean Martin: Yeah, well, to me it all comes back to the economics. I know this is not a Black Hat conversation, but I don't know how many of those, five or six, even the one we had with you, [00:34:00] Fred, all touched on economics at some level. And I think it's going to come down to that: where do we make the investment, and how do we make the investment? 
 

Is it in the human, in the tech, or in the tech applied to the human? It's going to be interesting to see how this all plays out. I'm grateful that you're continuing to do the research, Fred, and providing us updates on what's going on. Anything else you want to share from the work you've done in the last year before we wrap? 
 

Fred Heiding: Well, yeah, thanks for this discussion. It's been, as always, fantastically interesting to be here. There's a lot to share, but I think we captured, to some degree, a lot of the interesting things there are. There's definitely way more. I mean, Marco, you mentioned voice phishing there. 
 

Of course, all the types of deepfakes and other types of phishing, that's a big can of worms to open, right? Because protecting against these attacks is incredibly difficult. [00:35:00] And perhaps one last word to say, and you both touched on this to some degree, is that what's happening is a change in trust: the way we trust online and how we behave online, right? 
 

Because if these attacks continue to happen, and happen at massive scale, it's going to be hard. Somehow we perhaps have to change the way we operate, or at least change the way we trust content and material. And hopefully some brilliant technical solutions will come; as you both also mentioned, technical solutions are important. 
 

I believe that seamless technical measures that don't disrupt the user experience but make users more secure are very important. But I think online trust is facing a pretty rough future. It's already relatively low, and there are no clear answers; there are a lot of problems and, yeah, a lot of work ahead. 
 

Sean Martin: And what I love [00:36:00] about this conversation, if people are watching, is that I can see the wheels spinning about what you're going to do in your next project, which is really cool. I love these kinds of conversations. Well, listen, Fred, it's great to have you on. Of course, you're welcome back anytime; any updates you have to share, feel free to just drop me a note and let me know, and we'll have you on again. In addition, I'll link to the Black Hat episode we recently recorded, and last year's too, because it's still very relevant. It's not like that stuff goes away; we're just adding to the mix in this conversation with new and exciting stuff. 
 

But yeah, super fun chat. Marco, glad you were on as well.  
 

Marco Ciappelli: I think, yeah...  
 

Sean Martin: I think we've given businesses stuff to think about, and society some stuff to think about, and [00:37:00] yeah, hopefully we'll have better answers soon. But definitely stay connected to Fred and the work he's doing, and obviously pay attention to the phishing stuff you have going on in your organization. 
 

All right. Well, thanks, everybody, for listening to this episode of Redefining CyberSecurity. Please do subscribe and share with your friends and enemies, and we'll see you on the next one. Thanks, everybody.