SESSION

Mitigating LLM Hallucination Risk Through Research-Backed Metrics


OVERVIEW

EXPERIENCE: In Person
TYPE: Breakout
TRACK: Generative AI
INDUSTRY: Enterprise Technology, Health and Life Sciences, Financial Services
TECHNOLOGIES: AI/Machine Learning, GenAI/LLMs, Governance
SKILL LEVEL: Intermediate
DURATION: 40 min

In the context of LLMs, "hallucination" refers to the model generating text that is incorrect, nonsensical, or not grounded in reality. Identifying and mitigating hallucinations is critical for deploying trustworthy LLM applications at scale. In this talk, we will showcase ChainPoll, a powerful methodology for evaluating the quality of LLM outputs, with a focus on RAG and fine-tuning use cases. ChainPoll-based metrics have shown roughly 85% correlation with human feedback while remaining low-cost and low-latency to compute. Expected takeaways:


  • A deep dive into research-backed metrics for evaluating the quality of inputs (data quality, RAG context quality, etc.) and outputs (hallucinations) when building LLM-powered applications
  • An evaluation and experimentation framework for prompt engineering with RAG and for fine-tuning on your own data
  • A demo-led practical guide to building guardrails and mitigating hallucinations
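As a rough illustration of the ChainPoll idea, the sketch below polls a chain-of-thought LLM judge several times on whether an answer is supported by its retrieved context, then reports the fraction of "hallucination" verdicts as a score. This is a minimal sketch, not Galileo's implementation: the model name, prompt wording, and helper names (chainpoll_score, JUDGE_PROMPT) are illustrative assumptions, and it assumes an OpenAI-compatible Python client.

    # Minimal ChainPoll-style hallucination score (illustrative only).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    JUDGE_PROMPT = """You are checking an answer for hallucinations.

    Context:
    {context}

    Question:
    {question}

    Answer:
    {answer}

    Think step by step about whether every claim in the answer is supported by
    the context. Finish with a single line that is exactly "VERDICT: yes" if the
    answer contains a hallucination, or "VERDICT: no" if it does not."""

    def chainpoll_score(context: str, question: str, answer: str,
                        n_polls: int = 5, model: str = "gpt-4o-mini") -> float:
        """Fraction of chain-of-thought judges that flag a hallucination."""
        votes = 0
        for _ in range(n_polls):
            resp = client.chat.completions.create(
                model=model,
                temperature=1.0,  # encourage diverse reasoning paths across polls
                messages=[{
                    "role": "user",
                    "content": JUDGE_PROMPT.format(
                        context=context, question=question, answer=answer),
                }],
            )
            text = resp.choices[0].message.content.lower()
            if "verdict: yes" in text:
                votes += 1
        return votes / n_polls

    if __name__ == "__main__":
        score = chainpoll_score(
            context="The Eiffel Tower is 330 metres tall and located in Paris.",
            question="How tall is the Eiffel Tower?",
            answer="The Eiffel Tower is 450 metres tall.",
            n_polls=3,
        )
        print(f"hallucination score: {score:.2f}")  # closer to 1.0 means more likely hallucinated

Averaging verdicts across several independently sampled reasoning chains, rather than relying on a single yes/no call, is what makes a polling-style metric more stable and better aligned with human judgments.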

SESSION SPEAKERS

Vikram Chatterji

CEO and Co-founder
Galileo Technologies Inc