Deception 2.0

This article was originally published in Hakin9 Vol. 20, No. 01 (February 2025). Aside from minor writing corrections, it has not been modified.

Social Engineering (SE) is one of the most widely used, effective, and dangerous cybersecurity threats, as it exploits humans rather than technical vulnerabilities. Through deception and by leveraging human traits, targets – people – are tricked into revealing information, providing access, or performing actions they otherwise wouldn’t do. In practice, this is done using techniques such as phishing (e.g., deceptive messages or websites), impersonation (e.g., posing as a trusted individual), and pretexting (e.g., fabricating scenarios).

The recent advent of multimodal generative Artificial Intelligence (GAI) has the potential to fundamentally change how SE attacks are planned and performed and how individuals and organizations need to prepare for them. In this article, loosely aligned with the framework proposed by Marc Schmitt and Ivan Flechais in 2024, we will investigate how GAI might change SE reconnaissance and OSINT, content generation, personalized targeting, and the orchestration and automation of attacks. The last point is also closely linked to infrastructure and the availability and role of open models and systems, allowing us to modify them to our needs and use GAI locally and without the risk of exposure.

Supercharged Reconnaissance and OSINT

SE attacks can be carried out on individual high-value targets as well as whole organizations. This, for example, differentiates phishing, which usually targets many people at once in a generic manner, from spear phishing attacks targeted at individuals. In both cases, the success of an SE attack depends heavily on the information available about the targets and their environments. The more information we have, the easier it is to identify targets, fit in, build trust, or exert pressure.

Gathering information often involves a combination of Open-Source Intelligence (OSINT), observation, and speaking with employees under a pretext. Ideally, during the reconnaissance phase, we learn about the organization and, more importantly, the people and how they behave and communicate. GAI has the potential to drastically change how reconnaissance is performed.

Large Language Models (LLMs) can analyze vast amounts of data, such as publicly available documents, in seconds. For instance, we could use an LLM to identify and classify potential targets based on public records and criteria such as job roles, personal interests, networks, etc. We could also use an LLM to automatically and continuously monitor a company’s website or social media profiles (e.g., LinkedIn) for potential targets. Such analysis would allow us to augment our communication, identify an ideal timeframe for an SE attack, or find meaningful pretexts, such as a new product or a role change. Using multimodal models, we can also automatically analyze photos, videos, podcasts, interviews, etc. As a further step, LLMs might be used to learn and reproduce linguistic patterns based on the data and information gathered. LLM-powered chatbots could also be used to automatically engage in conversation with potential targets, e.g., posing as recruiters or journalists.

Simply put, GAI allows us to supercharge reconnaissance and OSINT. What previously required a team and substantial resources can now be done by individuals and at scale.

Next-Generation Content Generation

Many SE attacks hinge on high-quality content to convince targets and back up narratives. This content ranges from realistic website clones to perfectly crafted and personalized messages to sophisticated deepfakes. Attackers might use such content to trick targets into unwanted actions (e.g., entering credentials on a cloned website), support false narratives (e.g., using fake photos and videos), or impersonate people, including bypassing biometrics.

GAI drastically simplifies this kind of content creation and unlocks previously impractical options, such as sophisticated deepfakes produced in minutes. Multimodal models could be used to create realistic images, videos, and audio that enhance the attacker’s credibility by faking relationships with trusted individuals. LLMs can also craft messages perfectly tuned to the preferences, language, and style of both the impersonated sender and the receiver. All of this could be used to blackmail targets (e.g., using a compromising deepfake) or to build immense pressure and a sense of urgency – ideal conditions for SE attacks – by faking a convincing and urgent cry for help from someone important to the target.

While most of these attack vectors aren’t new or innovative, GAI allows attackers to execute them at unprecedented speed, scale, and agility. For example, with GAI at hand, attackers can convincingly clone websites in minutes without deep technical knowledge. Similarly, they can quickly shift between targets without manually adjusting messages and narratives.

Personalized Targeting – Hacking Humans at Scale

We’ve already discussed the fact that GAI enables SE at scale. This becomes particularly interesting when considering the option of personalized attacks. Until now, highly personalized attacks have mainly been carried out against high-value targets – if at all. Due to the resources required, most less sophisticated attacks would rely on, for example, generic phishing emails sent to hundreds of company employees in the hope that a few would fall into the trap.

By combining the advanced reconnaissance capabilities of GAI with its ability to craft messages and content, highly personalized, context- and target-aware SE attacks become achievable at scale and at low cost. For example, it has become easy to generate hundreds of highly customized phishing emails, each with its own pretext based on automated reconnaissance and in a language and style best suited to the target. Furthermore, LLMs can also be used to engage in conversation with many targets simultaneously, enabling fully automated SE in conversational settings (e.g., email, chat, or even telephone/voice).

Of course, this also frees up human resources to perform even more in-depth SE. While GAI-powered systems attack, exploit, and leverage hundreds of employees simultaneously, human attackers, co-creating and collaborating with those systems, can focus on the truly valuable targets.

Fully Automated Deception and the Power of Open Models

As outlined above, GAI not only qualitatively changes the SE landscape but also unlocks a whole suite of new possibilities for orchestrating and automating attacks. While SE has traditionally been one of the less technical cybersecurity disciplines, GAI – capable of simulating many human traits and performing tasks such as communication at a high level – allows us to automate and technologically support SE in a completely unprecedented way.

By leveraging GAI to streamline processes like reconnaissance, content generation, and personalized targeting, attackers can rapidly prototype and deploy tailored campaigns at scale. In addition, GAI-powered systems can adjust campaigns and change tactics on the fly, for example, by incorporating freshly gathered information into all ongoing interactions or tailoring messages based on target behavior. Of course, this also allows attackers with less experience, fewer competencies, or missing language skills to perform significantly more sophisticated and convincing SE attacks.

Openly available AI models (e.g., Meta’s Llama or DeepSeek’s DeepSeek series) have become increasingly powerful while hardware requirements have gone down. This, especially from a technical point of view, is changing the game. While using cutting-edge GAI until now meant relying on commercial providers such as OpenAI, current developments around open models and systems allow us to modify and run GAI models entirely independently and without vendor-imposed guardrails (e.g., OpenAI’s content policy). Therefore, all the use cases and examples outlined above can now be run on attacker-controlled infrastructure, significantly improving their capabilities and OpSec.
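As a minimal illustration of this shift, the sketch below queries a locally hosted open model instead of a commercial API. It assumes an Ollama server running on the local machine with a pulled model; the endpoint, model name, and prompt are illustrative assumptions rather than a prescribed setup.

```python
# Minimal sketch: querying a locally hosted open model via Ollama's HTTP API.
# Assumes Ollama is running locally and a model (here "llama3") has been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint (assumption)
MODEL = "llama3"  # any locally available open model

def ask_local_model(prompt: str) -> str:
    payload = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Neutral demo prompt; the point is that no data ever leaves your own infrastructure.
    print(ask_local_model("In two sentences, explain why running models locally improves privacy."))
```

The same pattern works with other local inference servers (e.g., llama.cpp or vLLM); only the endpoint and payload format change.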

Hardening Humans and Leveraging GAI as a Blue Team

As we have seen, GAI has dramatically expanded the potential for SE attacks, both quantitatively and qualitatively. While previously most people only needed to recognize a catch-all phishing email or question whether a strange phone call was legitimate, we are now faced with highly individualized and dynamic SE attacks.

Even if you are not a high-value target, it is now entirely possible that you will receive a tailor-made message containing factual information about yourself, your social network, and your place of work. The message, prompting you to give up sensitive information or take action, may also include convincing multimedia content to create legitimacy. This could go as far as you calling back to verify the integrity of the message, only to end up on the phone with a convincing, fully autonomous AI agent trained to talk you into trusting it. It might sound like science fiction, but it is the world we find ourselves in.

Of course, scenarios like the one above change how we approach security awareness training and blue team tactics. On the training side, we need to go beyond traditional heuristics to detect phishing emails and other SE attempts (e.g., suspicious grammar and URLs or overly general information) and move towards more holistic approaches, both social and technical, of verifying the integrity of messages and content.

Fortunately, blue teams also have GAI at their disposal. While attackers use GAI to craft convincing SE content, blue teams can leverage the same tools to conduct realistic training and adversary simulations. Similarly, while attackers use GAI’s data analysis capabilities to analyze public information, blue teams can use it to detect malicious messages, deepfakes, and various types of disinformation.
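As one illustration of this defensive use, the hedged sketch below reuses the locally hosted model from the earlier example to triage an incoming email for common SE red flags. The prompt, scoring scheme, and model name are illustrative assumptions, not a production-grade detector.

```python
# Minimal sketch: asking a locally hosted LLM (same assumed Ollama setup as above)
# to triage an incoming email for social-engineering red flags.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"

TRIAGE_PROMPT = (
    "You are assisting a corporate security team. Rate the following email from 0 (benign) "
    "to 10 (almost certainly a social-engineering attempt) and list the red flags you see "
    "(urgency, impersonation, unusual requests, mismatched links). "
    'Reply as JSON with the keys "score" and "red_flags".\n\nEMAIL:\n{email}'
)

def triage_email(email_text: str) -> dict:
    payload = json.dumps({
        "model": MODEL,
        "prompt": TRIAGE_PROMPT.format(email=email_text),
        "stream": False,
        "format": "json",  # ask Ollama to constrain the output to valid JSON
    }).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(json.loads(resp.read())["response"])

if __name__ == "__main__":
    sample = "Hi, this is your CEO. I need you to buy gift cards within the next hour and send me the codes."
    print(triage_email(sample))
```

In practice, such a triage step would complement, not replace, technical controls like SPF/DKIM checks and user reporting.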

Ultimately, as I have tried to show, GAI is not changing the fundamental principles of SE; instead, it changes the scale and quality at which SE can be performed – especially with significantly reduced cost and competencies. Also, excitingly, GAI brings Social Engineering and Engineering, in the technical and technological sense, closer than ever before.