Vadim is a Principal Architect at Collibra focusing on autonomous data quality and cloud-native architecture. He previously led Research and Development at OwlDQ, where he was responsible for enabling data quality at terabyte scale, wherever the data lives. Vadim also spent several years evangelizing Hadoop and Spark at Hortonworks and later Cloudera. His background includes analytics, distributed processing, stream processing, and enterprise architecture. Vadim also holds a US patent for Service Platform Architecture.
Modern data lakehouses have enabled scalable data engineering that brings together more data than ever. But many organizations are discovering that more data doesn’t mean better data. In fact, data quality and trust issues become more prevalent and harder to solve as the volume of data increases.
Enter continuous, self-service data quality, powered by Collibra Data Quality (formerly OwlDQ), which leverages Spark parallel processing across large and diverse data sources. By combining this solution with Databricks, organizations can create end-to-end high-quality data pipelines for scalable and trusted analytics and AI.
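To make the idea concrete, here is a minimal sketch of the kind of Spark-parallel data quality check described above: a completeness (null-rate) rule evaluated across every column of a lakehouse table in a single pass. The table name and threshold are hypothetical, and this is not Collibra Data Quality's actual API; it simply illustrates how such a rule scales with Spark on Databricks.

```python
# Minimal sketch of a Spark-parallel completeness check (illustrative only;
# the table name and threshold are assumptions, not Collibra DQ's API).
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("sales.orders")   # hypothetical Databricks lakehouse table
threshold = 0.95                   # minimum acceptable non-null ratio per column

# Single pass over the data: count(col) ignores nulls, count(lit(1)) counts all rows.
completeness = df.agg(
    *[(F.count(F.col(c)) / F.count(F.lit(1))).alias(c) for c in df.columns]
).first().asDict()

failing = {c: round(ratio, 3) for c, ratio in completeness.items() if ratio < threshold}
print("Columns below completeness threshold:", failing)
```

In practice, a continuous data quality layer runs many such rules (completeness, uniqueness, schema drift, outliers) on a schedule and surfaces failures before they reach downstream analytics and AI.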
In this deep dive, you’ll learn how to combine Collibra Data Quality with Databricks to deliver continuous, trusted data quality at scale.