Generative AI Application Evaluation and Governance
This course introduces you to evaluating and governing generative AI systems. First, you’ll explore the meaning of and motivation for building evaluation and governance/security systems. Next, we’ll connect evaluation and governance systems to the Databricks Data Intelligence Platform. Then, we’ll cover a variety of evaluation techniques for specific components and types of applications. Finally, the course concludes with an analysis of evaluating entire AI systems with respect to performance and cost.
Note: This is the third course in the ‘Generative AI Engineering with Databricks’ series.
The content was developed for participants with the following skills, knowledge, and abilities:
- Familiarity with natural language processing concepts
- Familiarity with prompt engineering and its best practices
- Familiarity with the Databricks Data Intelligence Platform
- Familiarity with RAG (preparing data, building a RAG architecture, concepts like embeddings, vectors, vector databases, etc.)
- Experience with building LLM applications using multi-stage reasoning LLM chains and agents
Registration options
Databricks has a delivery method for wherever you are on your learning journey
Self-Paced
Custom-fit learning paths for data, analytics, and AI roles and career paths, delivered through on-demand videos
Instructor-Led
Public and private half-day to two-day courses taught by expert instructors
Blended Learning
Self-paced content combined with weekly instructor-led sessions for every style of learner, optimizing course completion and knowledge retention. Go to the Subscriptions Catalog tab to purchase
Skills@Scale
Comprehensive training offering for large-scale customers that includes learning elements for every style of learning. Inquire with your account executive for details