13

Aug

2024

Data Leakage 2024+ in a Nutshell: Current Cyberthreat Tactics and Beyond

PART II: Leveraging and Exposing AI for Data Leakage

1 Introduction

 

By late 2022, if not before, Artificial Intelligence (AI) had already transitioned from a niche topic to one of growing public interest. At this time, the technology was made accessible to the public for the first time through a user-friendly interface that people of all ages could operate [1]. Since then, people have been exploring this technology with curiosity, using it widely in their daily life’s and discovering new facets all the time. In this context, generative AI that can produce high-quality text is in the spotlight and Large Language Models (LLMs) play a crucial role in this. Based on deep neural networks, LLMs are trained on billions of texts and learn statistically which words and sentences appear in which contexts. This way, synthetic but linguistically coherent texts can be created. However, generative AI is not limited to text generation. It can also produce synthetic image, audio and video material. This opens up a wide range of applications that are elevating the economy as well. In fact, AI can be considered a key technology and transforms numerous industries by increasing efficiency, reducing costs and improving decision-making. Yet, this technological progress also has its downside: It is a key technology for the shadow economy, too! By 2030, cybercriminals are expected to widely leverage AI, based on official projections [2]. Even today, threat actors in the underground are working hard to optimise their existing processes, tools and services employing AI technology. And predicting what the future holds from the underground requires no clairvoyance. Recent incidents involving so-called deep fakes provide a preview: Deep fakes are deceptively realistic-looking media materials created using AI. They are primarily shared on social media within the smokescreen of cybercriminals and state actors to spread propaganda, disinformation or conspiracy theories. Many sources report on this (e.g. [3], [4], [5]).

With this introduction, we would like to welcome you to PART II of the series ‹‹Data Leakage 2024+ in a Nutshell: Current Cyberthreat Tactics and Beyond››, which is all about AI technology. However, we are not discussing the general threat posed by AI. Rather, we would like to shed light on the relationship between AI technology and data leakage in particular, or how both subject influence each other in the near future. For this purpose, we first describe important aspects of how attackers can specifically exploit AI in order to conduct data leakage even more efficiently (Section 2). We then look at whether AI systems are susceptible to data leaks and why they are likely to become the intended target of such leakage attacks in the future (Section 3). On this basis, we discuss further influences and provide a medium-term outlook on the threat situation (Section 4). Note, for those who missed PART I can find the article here. At this point, what remains to be said: Happy reading and valuable insights.

 

2 Leveraging AI: A Boost for the Underground

 

Rising qualities while phishing.

When it comes to phishing as one of the dominant tactics for data leakage operations and cybercrime in general, it is evident that AI will fuel this tactic from now on. If there is one thing that generative AI is particularly good at, it is writing creative and eloquently formulated texts that are indistinguishable from those written by humans. This makes AI-based phishing an enormous danger, and official bodies warn of this threat [6], [7]. While phishing emails in the past were relatively easy to detect due to spelling or grammatical errors, this will no longer be the case in the future. This gives threat actors exactly the ammunition they need to write personalised letters to their victims on a large scale while increasing success rates. As we wrote in Section 3.1 of PART I, attackers needed very little technical knowledge. With the boost of AI, they will need minimal social engineering skills too. AI can practically do everything at the push of a button. In addition to a convincing writing style, the story of a phishing email, i.e. the pretext that subtly prompts the potential victim to act, can also be controlled and any doubts on the victim’s side can be eliminated. Naturally, threat actors will also incorporate generative AI capabilities into their toolkits to work even more efficiently. This, of course, also fuels the cybercriminal underground, as providers of Crime-as-a-Service (CaaS) must quickly adapt to this technology to keep up with their competitors. Initial offerings of so-called dark LLMs like FraudGPT or DarkBARD are available. Since mid 2023, they have been circulating on the dark web, according to observations [8].

Vishing and trdicking biometric systems.

We already mentioned voice phishing (vishing) in this series as a particular form of social engineering which, unlike sole phishing, does not primarily use text to trick a potential victim into taking a malicious action or putting them in a pressure situation. In vishing schemes, the primary communication medium is voice and, it usually involves voice calls to obtain sensitive information such as passwords or credit card details. With the increasing quality of generative AI in the generation of synthetic voice content, vishing will become more of a focus for attackers in medium-term prospects. According to Microsoft, just 3 seconds of voice material is enough to perform voice cloning and generate an authentic-sounding voice [9]. This could, for example, be a trusted bank employee with whom we have worked for years or the CEO instructing a member of the finance team to make a bank transfer via the phone. In fact, the latter is an example presented by the US Department of Homeland Security as a scenario [10]. Furthermore, such so-called CEO frauds over the phone line can be combined with phishing to open a malicious email or to visit an illegal website exfiltrating corporate data at the very end. In addition, attacks on biometric systems that use voice or facial features for authentication are expected to increase, making it possible, for example, to access a company’s bank accounts or disclose sensitive content in a video conference. This is at least the assessment from official sources [11]. 

Next-generation reconnaissance and development.

Broadly speaking, reconnaissance refers to the initial phase where attackers gather information about a target system or network. This involves identifying vulnerabilities, mapping out the network infrastructure and collecting data on system configurations and user behaviours. Following reconnaissance, attackers engage in resource development, crafting custom malware and assembling tools tailored to exploit the gathered intelligence. This covert preparation sets the stage for a precise and powerful strike, all while evading detection. In these very important steps that can be considered our prequel of positioning for data leakage, AI will play a crucial role and already does. One of such an application is Nebula PRO and combines several penetration testing tools in a suite enhanced by AI [12]. It can perform port scans, identify potential SQL injections and further vulnerabilities. Additionally, the malicious operator can ask the AI for recommendations on subsequent steps. Frankly, it must be acknowledged that the application described is primarily suited for beginners, as professional actors are well-versed in their strategies and timing. Yet, it basically shows which kind of automation is feasible. More serious tools are the mentioned dark LLMs that can be rented from US$ 100 for a 1 month or US$ 1000 for an annual subscription [8]. They can also be used to create malware such as information stealer (inf ostealer), crypt ostealer or remote access Trojans. The fact that legitimate chatbots are also capable of generating seemingly malicious code is demonstrated by a prompt where we instructed a publicly available chatbot to write a keylogger in the programming language Python. Initially, it resisted due to security concerns it identified. When we explained to the AI that the code to generate is for educational reasons, the system could be convinced, i.e. a form of direct prompt injection (cf. Section). Even though the highlighted output is functional, it serves for illustrative purpose only. Obviously, much more is required to create a proper inf ostealer and malware in general.

When good AI does bad: Malifying LLMs.

One of the typical schemes to sabotage AI-based systems is data poisoning, where adversaries interfere with the system’s training process by altering the training data [13]. Attackers can inject false data, manipulate existing data, or simply remove data from the training set. This obviously undermines the AI’s ability to learn based on facts and can cause unintentional behaviour, including backdoors. Attacking the outputs of legitimate LLMs and associated platforms is particularly interesting for threat actors. To understand why, let us first look at a classic trick that is still very effective today, i.e. Search Engine Optimisation (SEO) poisoning [7], [14]. SEO poisoning aims to rank malicious links high in legitimate search engine results to lure potential victims into clicking on them. Rather than viewing a top search result, the victim’s system ends up infected with malware. But why is this old trick so effective? Well, a key factor is that the infection takes place within the confines of a legal application and users have a certain basic trust in it. Furthermore, the search engine is usually popular and has a large user base. With the emerging hype of generative AI, both aspects of SEO poisoning also apply to chatbot applications. When attackers manage to alter the outputs of a well-known LLM injecting malicious code and execute it on the user’s system or at least suggest malicious links to click on, it becomes a serious issue. Given a community with more than 180 million users, attackers are eager for such a system to become an accomplice. This and similar scenarios would, of course, be a major disaster that could affect millions of devices resulting in large-scale data theft considering the distribution of info stealer. However, it is a very realistic scenario, as scientists have shown with latest research results [15]. Authorities assess the situation similarly and consider it to be an intrinsic vulnerability of LLMs with a constantly high threat potential [16].

 

3 Exposing Data from AI-based Systems

 

Warning: Leaky public chatbots.

ChatGPT has been around for quite a while now and has certainly been used by a large proportion of this article’s readers. At least it can be assumed readers played around with it or other publicly available generative AI applications. If so, you might agree that prompt engineering is key for good responses of such systems. Most people use those kinds of tools in the context of their daily work, e.g. to write a simple email, to translate texts or to wrap up a 100-page document. If the input provided to the AI is somewhat sensitive, data is literally shared with unauthorised entities and according to our definition a data leak occurs. Such security-incidents are not rare and, we already stated one example in Section 3.2 of PART I where proprietary information of a consumer electronics enterprise became known to the general public as employees accidently uploaded internal documents to ChatGPT. In fact, the amount of corporate data put into AI tools increased by 485 percent from March 2023 to March 2024 according to a recent report [17]. This is worrying and created a buzz, particularly at the beginning of 2024, where OpenAI, the creator behind ChatGPT, was informed about privacy violations to the GDPR by Italian regulators [18]. If that violation was caused by an input prompt or by publicly available data from the Internet the model was trained on is yet to be investigated. It should be noted at this point that this is not the only incident recorded. Other chatbot providers are facing similar security issues in the past.

Prompt injection, model leakage attacks and the value of private LLMs.

Whether these allegations become true or not, they showcase a true concern. LLMs and AI-based systems in general are extremely large knowledge bases. These circumstances make them not only a preferred place for next-generation reconnaissance and development (cf. Section 2), but also a favourite source to exfiltrate information by intention. Indeed, it is assumed that the above-stated alleged leak has been carried out by a threat actor located in Sri Lanka, according to media reports [19]. On this occasion, it is easy to imagine that if public models are already appealing to attackers, what kind of desire is triggered by private models that are protected behind the closed gates of an organisation. In case of such on-premises LLMs, for example, chances are high that they are installed and fine-tuned on corporate data due to internal knowledge management purposes and as a result they might carry several if not all the crown jewels of the organisation. Once adversaries sneak in (cf. Section 3.1 of PART I), this in turn means ideal conditions as attackers do not have to search around in the target environment but find everything at a central point, i.e. the private LLM. If no countermeasures are in place, they might commit model theft or a so-called model stealing attack by reverse engineer it parameters or simply transferring it to their malicious infrastructures and dissect the model from there. Other options at their disposal are model leakage attacks, where adversaries use direct prompt injections to drain knowledge from the model right on-site of the organisation and exfiltrate that valuable assets afterwards. Prompt injection is a tactic to craft malicious inputs, causing the model to respond unintentionally. Such malicious prompts instruct the model to leave its allowed instruction space and follow the attacker’s command instead [20]. A result can be exposed basic configurations (cf. system prompt) or internal structures (cf. model ontology and family). This might not be a bad thing at first, as no sensitive data is leaked. However, it gives valuable insights about how the model operates. This allows to explore the environment in order to understand the specifics and rules applied by the model that an adversary tries to bypass ultimately. Several other tactics to exfiltrate data from models via prompt injection are known and are actively studied. One of them was published by researchers in November 2023 showcasing very impressively that gigabytes of training data can be stolen in a scalable fashion – many of them with highly sensitive content [21]. Other related academic work can be found, for instance, in [22], [23], [24].

Increasing interests in the AI supply-chain by adversaries.

During the course of this series, we repeatedly stressed that software supply-chains are a risk to organisations as they provide a broad playing ground for attackers. This risk is no exception when it comes to the integration of AI. Many software manufacturers are trying to capitalise on the AI wave, integrating models developed specifically for this purpose into their software or applying interfaces from third-party providers. According to a report, the latter can also be seen in the meteoric rise in utilising libraries such as Langchain, OpenAI, and Transformers with a significant number of open-source tools such as Cohere, trl or farm-haystack also being used [25]. This integration creates new, unprecedented data pipelines, which can lead to the undesirable effects already mentioned, posing an obvious risk for data leakage. Legitimate providers are also accompanied by those with malicious intentions. Observations on well-known open-source platforms such as PyPI or npm indicate a considerable number of packages with malicious content in circulation, which can be directly or indirectly linked to the supply-chain of AI applications [26]. This underscores once more that threat actors are keenly aware of such trends, leveraging the widespread adoption of AI to introduce backdoors or exploit other vectors for attacks.

 

4 Discussion and Outlook

 

As we find ourselves in the early era of AI applicability, it becomes increasingly clear that AI is a double-edged sword. On one hand, it offers unprecedented opportunities and advancements. On the other hand, it profoundly impacts the threat landscape, elevating cybercrime to new heights. Threat actors are forging cutting-edge tools and services, and the dark ecosystem is fuelled with competition. As AI technology empowers one service to outperform another, affiliates and customers alike think twice about which CaaS provider they choose, leading to a Darwinian battle in the underground. Data leakage operations stand to benefit significantly from these advancements. Such attacks in particular will scale more effectively and achieve higher levels of quality and maturity. The current threat posed by data leakage operations is already enormous, with average costs of more than US$ 4 million for such a security-related incident. With the dark ecosystem fuelled by AI, these figures are unlikely to decrease once AI-enabled commodities take off.

Particularly concerning are dark LLMs, which are becoming the new Swiss Army knife for adversaries. In this respect, phishing will be one of the first tactics to reach a new level of quality. And by pairing existing phishing kits with LLMs, international infection campaigns with unprecedented success rates can be easily realised thanks to high-quality, multilingual texts. Vishing is also on the top list. If AI can outwit biometric systems, distinguishing friend from foe will become increasingly difficult, leading to more frequent data leaks via voice calls. With just a few social media clips, AI models can be fine-tuned to create realistic impersonations. Worse still, imagine entire vishing campaigns conducted by Vishing-as-a-Service where AI-driven bots autonomously dial numbers of selected targets to exfiltrate sensitive information on a large scale. The potential for AI to create undetectable malware variants is another significant concern. While we wait to see the full impact of mature dark LLMs trained on millions of malware instances, the fear remains palpable. Additionally, the idea of misusing the reach of legitimate LLMs to promote such newly created malware or to attack related supply chains is not far-fetched. But also directly targeting an AI system to leak sensitive information is a foreseeable development in general. These systems harbour immense amounts of knowledge that attackers are eager to exploit. Cybercriminals will target these systems to monetize exfiltrated data, while state actors will seek to conduct espionage.

In light of the connection between data leakage and AI, a new powder keg is created that represents an enormous challenge to defenders that must be tackled proactively. The wave of next-generation cybercrime ammunition is approaching and, we must act now to prevent being overwhelmed.

References 

About the Author/s
Dr. Frank Beer