Published: February 12, 2025
by Kelly Albano, Omar Khawaja and Arun Pamulapati
We are excited to announce the second edition of the Databricks AI Security Framework (DASF 2.0—download now)! Organizations racing to harness AI’s potential need both the ‘gas’ of innovation and the ‘brakes’ of governance and risk management. The DASF bridges this gap, enabling secure and impactful AI deployments for your organization by serving as a comprehensive guide on AI risk management.
This blog will provide an overview of the DASF, explore key insights gained since the original version was released, introduce new resources to deepen your understanding of AI security and provide updates on our industry contributors.
The DASF is a framework and whitepaper for managing AI security and governance risks. It enumerates the 12 canonical AI system components, their respective risks, and actionable controls to mitigate each risk. Created by the Databricks Security and ML teams in partnership with industry experts, it bridges the gap between business, data, governance, and security teams with practical tools and actionable strategies to demystify AI, foster collaboration, and ensure effective implementation.
Unlike other frameworks, DASF 2.0 builds on existing standards to provide an end-to-end risk profile for AI deployments. It delivers defense-in-depth controls that simplify AI risk management, are straightforward for your organization to operationalize, and can be applied to your chosen data and AI platform.
In the DASF 2.0, we’ve identified 62 technical security risks and mapped them to 64 recommended controls for managing the risk of AI models. We’ve also expanded mappings to leading industry AI risk frameworks and standards, including MITRE ATLAS, OWASP LLM & ML Top 10, NIST 800-53, NIST CSF, HITRUST, ENISA’s Securing ML Algorithms, ISO 42001, ISO 27001:2022, and the EU AI Act.
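To make the risk-to-control structure concrete, here is a minimal sketch of how such a mapping could be represented in code. The identifiers, component names, and descriptions below are placeholders for illustration only, not actual DASF 2.0 risks, controls, or external references.

```python
from dataclasses import dataclass, field

# Hypothetical illustration: the values below are placeholders, not published
# DASF 2.0 content. The DASF maps each of its 62 risks to one or more of its
# 64 controls and to external standards such as MITRE ATLAS, OWASP, and NIST.

@dataclass
class Risk:
    risk_id: str                                             # placeholder risk identifier
    component: str                                           # one of the 12 AI system components
    description: str
    controls: list[str] = field(default_factory=list)        # placeholder DASF control IDs
    external_refs: list[str] = field(default_factory=list)   # placeholder standard references

# Placeholder entry showing the shape of a mapping, not real DASF content
example_risk = Risk(
    risk_id="RISK-EXAMPLE-01",
    component="Raw data",
    description="Placeholder: untracked lineage for training data",
    controls=["CONTROL-EXAMPLE-01", "CONTROL-EXAMPLE-02"],
    external_refs=["NIST 800-53 (example family)", "OWASP LLM Top 10 (example entry)"],
)

if __name__ == "__main__":
    print(f"{example_risk.risk_id} maps to {len(example_risk.controls)} controls")
```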
We’ve received valuable feedback as we’ve shared the DASF at industry events, workshops, and customer meetings. Many of you have asked for more resources to make it easier to navigate the DASF, operationalize it, and map your controls effectively.
In response, we’re excited to announce the release of the DASF compendium document (Google sheet, Excel). This resource is designed to help operationalize the DASF by organizing and applying its risks, threats, controls, and mappings to industry-recognized standards from organizations such as MITRE, OWASP, NIST, ISO, HITRUST, and more. We’ve also created a companion instructional video that provides a guided walkthrough of the DASF and its compendium.
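As a quick illustration of how the compendium might be used programmatically, the sketch below filters an exported copy by one of the mapped standards. The file name and column names are assumptions for this example and may not match the published spreadsheet’s layout.

```python
import pandas as pd

# A minimal sketch, assuming the DASF compendium has been exported to CSV and
# that it contains columns like "Risk", "Control", and "NIST 800-53".
# These names are assumptions for illustration, not the published schema.
compendium = pd.read_csv("dasf_compendium.csv")

# Example: list DASF risks and controls that reference a given external standard
nist_mapped = compendium[compendium["NIST 800-53"].notna()]
print(nist_mapped[["Risk", "Control", "NIST 800-53"]].head())
```

The same approach extends to filtering by risk, control, or any other mapped standard when building your own tracking views.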
Our goal with these updates is to make the DASF easier to adopt, empowering organizations to implement AI systems securely and confidently. If you’re eager to dive in, our team recommends starting with the DASF whitepaper, then using the compendium and instructional video to guide implementation.
According to a recent Economist Impact study, surveyed data and AI leaders have identified upskilling and fostering a growth mindset as key priorities for driving AI adoption in 2025. As part of the DASF 2.0 launch, we have resources to help you understand AI and ML concepts and apply AI security best practices to your organization.
Ensuring that the DASF evolves in step with the current AI regulatory environment and emerging threat landscape is a top priority. Since the launch of DASF 1.0, we have formed an AI working group of industry colleagues, customers, and partners to stay closely aligned with these developments. We want to thank our colleagues in the working group and our pre-reviewers, including Complyleft, The FAIR Institute, Ethriva Inc, Arhasi AI, Carnegie Mellon University, and Rakesh Patil from JPMC. You can find the complete list of contributors in the acknowledgments section of the DASF. If you want to participate in the DASF AI Working Group, please contact our team at [email protected].
Here’s what some of our top advocates have to say:
"AI is revolutionizing healthcare delivery through innovations like the CLEVER GenAI pipeline, which processes over 1.5 million clinical notes daily to classify key social determinants and impacting veteran care. This pipeline is built with a strong security foundation, incorporating NIST 800-53 controls and leveraging the Databricks AI Security Framework to ensure compliance and mitigate risks. Looking ahead, we are exploring ways to expand these capabilities through Infrastructure as Code and secure containerization strategies, enabling agents to be dynamically deployed and scaled from repositories while maintaining rigorous security standards." - Joseph Raetano, Artificial Intelligence Lead, Summit Data Analytics & AI Platform, U.S. Department of Veteran Affairs
“DASF is the essential tool in transforming AI risk quantification into an operational reality. With the FAIR-AI Risk approach now in its second year, DASF 2.0 enables CISOs to bridge the gap between cybersecurity and business strategy—speaking a common language grounded in measurable financial impact.” - Jacqueline Lebo, Founder AI Workgroup, The FAIR Institute and Risk Advisory Manager, Safe Security
“As AI continues to transform industries, securing these systems from sophisticated and unique cybersecurity attacks is more critical than ever. The Databricks AI Security Framework is a great asset for companies to lead from the front on both innovation and security. With the DASF, companies are equipped to better understand AI risks, and find the tools and resources to mitigate those risks as they continue to innovate.” - Ian Swanson, CEO, Protect AI
“With the Databricks AI Security Framework, we’re able to mitigate AI risks thoughtfully and transparently, which is invaluable for building board and employee trust. It’s a game changer that allows us to bring AI into the business and be among the 15% of organizations getting AI workloads to production safely and with confidence.” — Coastal Community Bank
"Within the context of data and AI, conversations around security are few. The Databricks AI Security Framework addresses the often neglected side of AI and ML work, serving both as a best-in-class guide for not only understanding AI security risks, but also how to mitigate them." - Josue A. Bogran, Architect at Kythera Labs & Advisor to SunnyData.ai
“We have used the Databricks AI Security Framework to help enhance our organization's security posture for managing ML and AI security risks. With the Databricks AI Security Framework, we are now more confident in exploring possibilities with AI and data analytics while ensuring we have the proper data governance and security measures in place." - Muhammad Shami, Vice President, Jackson National Life Insurance Company
The Databricks AI Security Framework 2.0 and its compendium (Google sheet, Excel) are now available for download. To learn about upcoming AI Risk workshops or to request a dedicated in-person or virtual workshop for your organization, contact us at [email protected] or your account team. We also have additional thought leadership content coming soon to provide further insights into managing AI governance. For more insights on how to manage AI security risks, visit the Databricks Security and Trust Center.