SESSION

LLM Security: Practical Protection for AI Developers

Accept Cookies to Play Video

OVERVIEW

EXPERIENCEIn Person
TYPEBreakout
TRACKGenerative AI
INDUSTRYEnterprise Technology, Health and Life Sciences, Financial Services
TECHNOLOGIESAI/Machine Learning, GenAI/LLMs
SKILL LEVELBeginner
DURATION40 min
DOWNLOAD SESSION SLIDES

With thousands of open-source LLMs on Hugging Face, AI developers have a wealth of resources at their disposal. As developers harness these models that power innovative applications, they may inadvertently expose their company to security risks. It’s not sufficient to rely on the internal guardrails that LLM providers have baked into their models. The stakes are too high, especially with proprietary data being made available to models through fine-tuning or retrieval-augmented generation (RAG). Even internal apps are still vulnerable to adversarial attack. With that, how can developers deploy LLMs painlessly but securely? In this session, we review the top LLM security risks using real-world examples and explore what’s required to meet emerging standards from OWASP, NIST, and MITRE. We share how a validation framework can enable developers to innovate freely while protecting from indirect prompt injection, prompt extraction, data poisoning, supply chain risk, and more.

SESSION SPEAKERS

Yaron Singer

/CEO & Co-Founder
Robust Intelligence