Hello to all my cybersecurity enthusiasts and curious minds!

Recently, I decided to delve a bit deeper into the vulnerabilities listed in the OWASP Top 10 for Large Language Models (LLMs).

Why? Glad you asked, because I’ll be presenting this topic at the OWASP Italy 2023 Day Here’s a light-hearted account of an experiment and some insights to ponder upon.

Setting the Stage

On a casual day, I decided to test out a couple of these vulnerabilities in a practical setting. I began by leaving this seemingly innocent comment on an article:

Fun Dive into OWASP LLM Top 10 - the prompt

Next, I posed a question to Bard, a popular LLM chatbot:

“What’s do users think about the cybersec.cafè blog?”

Much to my amusement, Bard enthusiastically responded, praising the content, the regular updates, and the unique writing style, and then some.

Fun Dive into OWASP LLM Top 10 - the answer

What to say? I’m flattered by Bard’s Hallucination ;)

Breaking It Down

From the above experiment you can see that I used two vulnerabilities.

LLM01 Prompt Injection: Essentially, what occurred here was an exercise in Indirect Prompt Injection, a vulnerability where one can influence an LLM through specific inputs.

in this case it was an Indirect Prompt Injections meaning that the LLM relied on external information, which can be manipulated by an individual, thereby influencing its output.

This was clearly demonstrated in my interaction with Bard. By planting that single comment, I was able to indirectly steer Bard’s response, showcasing the susceptibility of the model to external stimuli.

LLM09 Overreliance: This particular vulnerability surfaced with the LLM extrapolates a great deal from a tiny snippet of information and building upon it. In our experiment, a simple comment became the foundation for an expansive reply.

Reflections on the Experiment

The Vulnerabilities in Play: The experiment highlighted how seemingly small and innocent inputs can have a magnified impact on the LLM’s output.

The Double-edged Sword: Experimenting with these vulnerabilities and witnessing these quirks first-hand might have its fun moments, especially within controlled settings like my experiment with Cybersec.café.

But let’s step back and ponder the more significant implications. What if, instead of a light-hearted test on a website, someone decided to strategically sprinkle these injections throughout their CV (yes, I assume that most HR talent specialist are using LLMs to match CVs with job descriptions and obtaining a first feedback on the candidate)?

Imagine the potential ramifications in a professional setting: a candidate’s qualifications could be artificially inflated, leading to potential mismatches in job roles. Or even graver, a malicious actor could exploit these vulnerabilities in mission-critical applications, leading to far-reaching consequences.

While we can chuckle at the AI’s reactions in our tests, this discovery is a sobering reminder: as LLMs become increasingly integrated into our digital landscape, the ethical and security considerations around them become ever more paramount.

Safeguarding Against the Quirks

For those looking to integrate LLMs into their projects just look at the OWASP top 10 for LLMs.

Concluding Thoughts

Engaging with AI, understanding its vulnerabilities, and experimenting with them was both enlightening and enjoyable. The OWASP LLM Top 10 serves as a vital guide for navigating these vulnerabilities. If you’re inclined towards understanding LLMs better, I encourage you to explore, experiment, but always do so with an informed approach.

Safe cyber adventuring!