top of page

Vulnerabilities in Large Language Models

Large language models (LLMs) such as ChatGPT have become all the hype as of late. These models are completely changing the way we interact with computers but are also causing serious concerns regarding data privacy and security.


How Does it Understand Me?

ChatGPT, OpenAI's large language model, is the fastest-growing application ever created. Large language models, built on neural networks, can process large amounts of text data, learn patterns between words and their contextual usage, and form responses in a human-like style. The underlying architecture of these models is often based on transformer networks, which weigh the importance of different parts of the input data when making predictions or generating text. The training process of LLMs involves feeding them with large amounts of text, which allows them to learn the ins and outs of human language. In use, LLMs utilize training data to analyze input text from the user, and in turn, generate a coherent and contextually relevant response based on the patterns they have learned.


Sensitive Information Disclosure Risks

LLMs can process enormous datasets, and these datasets often encompass sensitive personal information. A notable limitation of LLMs is their inability to selectively delete or unlearn specific data, like an individual's name or birthdate. The negligent use of this data in training can lead to future consequences for individuals or businesses. 


When in use, LLMs may disclose confidential data and lead to unauthorized access to the data. The inclusion of Personal Identifiable Information (PII) — names, addresses, social security numbers, and birthdates — in training datasets, if not protected, poses a significant threat. For individuals, the exposure of their personal information can lead to identity theft and other forms of cybercrime, and businesses may face considerable financial losses. 


Companies responsible for the training of the LLM might also see their reputations damaged. In today's digital age, a company's reputation is closely tied to its ability to protect customer data. A data breach can lead to a loss of trust among customers and partners, which can have long-term effects on a company's bottom line. This is true notably for companies in sectors such as healthcare, where trust and privacy are of the utmost importance. [1]


Noncompliance with regulations that mandate strict data protection measures can lead to hefty fines being imposed on businesses as well. Companies may face lawsuits from affected individuals or regulatory bodies and may even be subject to injunctions or other court orders, affecting their ability to do business. In some cases, company executives may also face personal liability for data breaches.


Challenges in Development and Deployment

LLMs can pose a threat to privacy during both their developmental phases and when they are in use by consumers. One of the primary concerns is that LLMs can establish connections with publicly available data and create openings for data breaches. Openings can occur when the models are trained on large datasets that may include publicly available information, and then link what was found to private data during their operation.  LLMs are also known to fall victim to prompt injection attacks. In these attacks, users manipulate the prompts given to the LLMs to extract the data they are in search of.


The processes of data collection for LLMs raise concerns as well. Concerns arise over whether individuals have given informed consent for their data's usage. In many cases, the data used to train LLMs is collected from a variety of sources, and it may not always be clear to individuals that their data is being used.


The use of LLMs in chatbots or virtual assistants can pose additional privacy risks. [2] These applications often involve the processing of personal data, and if not safeguarded, data could be exposed or misused. For example, a chatbot that uses an LLM might inadvertently reveal user information in its responses, or a virtual assistant might store personal data in a way that is vulnerable to hacking.


Regulation and Data Governance

The European Union's Artificial Intelligence Act and General Data Protection Regulation (GDPR) have set precedents for data protection, granting individuals an array of rights over their data. [3] [4] These rights include the right to be informed about how their data is used, the right to access their data, and the right to have their data deleted under certain circumstances.


The application of these regulations, however, becomes challenging when data is embedded within an LLM. [5] One technique used to increase data security is data minimization, which focuses on collecting only the data that is required for the intended use of the model and nothing more. Another common method is data tokenization, which refers to the substitution of some data with unique tokens. For example, an SSN embedded in training data can be substituted with a “[SSN-REDACTED]” token to preserve the format of the data without exposing any confidential information. Data tokenization is particularly effective in protecting data during transmission and storage, as it ensures that sensitive data is not exposed even if a data breach occurs.


Differential privacy, which introduces random noise to data, and anonymization, where personal data is aggregated to de-identify it, are also gaining traction. Differential privacy provides a guarantee of privacy by adding a controlled amount of noise to the data preventing the identification of individual data points. [6] Anonymization involves the removal of personally identifiable information from the data, making it impossible to link the data back to individual users. 


Regular privacy impact assessments are essential in evaluating risks as development continues. These assessments involve an evaluation of the potential privacy impacts of new technology and can help to identify and mitigate privacy risks before they become issues. Recent bans on the usage of ChatGPT in countries like Italy and by companies like Samsung show that there is a growing concern globally over data privacy in the context of LLMs. [7] [8] However, outright bans on the use of LLMs could end up being a flawed solution to the problem. 


Towards a Balanced Approach

As LLMs continue to develop at a rapid pace, outright bans seem less and less sustainable as the solution to their issues; the focus should be on safe, secure development and usage. While they do bring numerous challenges in data collection and privacy, LLM’s are a beacon of technological advancement. The balance between leveraging their capabilities and protecting information requires privacy measures and regulations that are strictly enforced. Through continuing progress, it is necessary to consistently refine these models and the way they are governed to get the most out of their potential while protecting the privacy and rights of individuals and businesses worldwide.




[1] Lee, D. K. P., Vaid, A., Menon, K., Freeman, R. S., Matteson, D. S., Marin, M. L., & Nadkarni, G. N. (2023). Development of a privacy preserving large language model for automated data extraction from thyroid cancer pathology reports. medRxiv (Cold Spring Harbor Laboratory). https://doi.org/10.1101/2023.11.08.23298252


[2] Piñeiro-Martín, A., Garcı́a-Mateo, C., Docío-Fernández, L., & Del Carmen López-Pérez, M. (2023). Ethical challenges in the development of virtual assistants powered by large language models. Electronics, 12(14), 3170. https://doi.org/10.3390/electronics12143170


[3] Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI  | News | European Parliament. (2023, September 12). https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai


[4] GDPR.eu. (2019, February 19). General Data Protection Regulation (GDPR) Compliance Guidelines. https://gdpr.eu/


[5] Kacprzak, K. (2023, October 23). Privacy and data security challenges in the era of Large Language Models (LLMs) - Data Analytics. Data Analytics. https://dsstream.com/privacy-and-data-security-challenges-in-the-era-of-large-language-models-llms/


[6] Sebastian, G. (2023). Privacy and Data protection in ChatGPT and other AI Chatbots: Strategies for Securing User information. Social Science Research Network. https://doi.org/10.2139/ssrn.4454761


[7] McCallum, B. S. (2023, April 1). ChatGPT banned in Italy over privacy concerns. BBC News. https://www.bbc.com/news/technology-65139406


bottom of page