Category: Application Security
Securing AI – Addressing Web-Based Attacks on Large Language Models
Securing AI involves two key aspects: first, protecting the models themselves, ensuring that the data they are trained on is safe and resilient against manipulation; and second, safeguarding the underlying application layer, including the APIs that LLMs interact with. In this post, we'll explore common web-based attacks on LLMs and introduce how frameworks like NIST Dioptra…