Aemro is a Resident Solution Architect at Databricks. He is interested in building highly scalable, fault-tolerant data pipelines, data lakes, and complex algorithms for both real-time and batch processing. He is also a hands-on lead data engineer with a track record of turning user and business requirements into cost-effective, secure, and highly optimized solutions known for their scalability and durability, built on cutting-edge cloud stacks and the open source ecosystem.
November 18, 2020 04:00 PM PT
Quby, an Amsterdam-based technology company, offers solutions that empower homeowners to stay in control of their electricity, gas, and water usage. Using Europe’s largest energy dataset, consisting of petabytes of IoT data, the company has developed AI-powered products used by hundreds of thousands of users on a daily basis. Delta Lake ensures the quality of incoming records through schema enforcement and evolution. But it is the data engineer’s role to check that the expected data is ingested into the Delta Lake at the right time and with the expected metrics, so that downstream processes can perform their duties. Retraining models and serving them on the fly can go wrong unless the right monitoring infrastructure is in place.

Quality data without a well-performing model, or the best model without quality data, brings no value. Our use cases require training more than half a million models on a daily basis. These models are automatically used in production environments without human intervention. We also stream training data to our Delta Lake in near real time. Before training the models, we have to make sure there is enough quality data to pass the minimum thresholds we set. Even though we monitor our data quality, the accuracy of our models varies depending on multiple variables observed in the daily collected data.
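The quality gate described above can be sketched as a simple check run before each training job. This is a minimal illustration, not Quby's actual implementation: the metric names (`row_count`, `null_fraction`) and the threshold values are assumptions chosen for the example.

```python
# Hypothetical pre-training data quality gate. The metrics and
# thresholds below are illustrative assumptions, not Quby's values.

def passes_quality_gate(row_count: int,
                        null_fraction: float,
                        min_rows: int = 1000,
                        max_null_fraction: float = 0.05) -> bool:
    """Return True only if the freshly ingested batch is both large
    enough and clean enough to be used for model training."""
    return row_count >= min_rows and null_fraction <= max_null_fraction

# A batch of 5,000 rows with 1% nulls passes; a sparse, dirty batch
# is rejected and training is skipped for that model.
print(passes_quality_gate(5000, 0.01))  # True
print(passes_quality_gate(200, 0.20))   # False
```

In a real pipeline the inputs would come from aggregations over the Delta table (for example, row counts and null ratios per device), and a failed gate would skip training and raise an alert instead of silently producing a model from bad data.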
Therefore, we need to monitor the performance of our models too. Finally, we also need to evaluate the results produced by our algorithms, in terms of both quality and quantity, before we serve them. In this presentation, we will demonstrate how we use Databricks dashboards to monitor our raw and processed data quality metrics. We will also present how we use MLflow to keep track of model performance. Finally, we will show how we have integrated Slack to receive alerts when there is a failure at any stage of our data crunching process.
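The Slack integration mentioned above typically works through an incoming webhook: when a pipeline stage fails, a small JSON message is POSTed to a webhook URL. The sketch below is a generic illustration of that pattern using only the standard library; the webhook URL, stage names, and message format are hypothetical, not Quby's actual setup.

```python
# Minimal sketch of a Slack incoming-webhook alert for pipeline
# failures. The webhook URL and message wording are hypothetical.
import json
import urllib.request


def build_alert_payload(stage: str, error: str) -> dict:
    """Build the JSON body for a Slack incoming-webhook message."""
    return {"text": f":rotating_light: Pipeline failure at stage "
                    f"'{stage}': {error}"}


def send_slack_alert(webhook_url: str, stage: str, error: str) -> None:
    """POST the alert to a Slack incoming webhook."""
    body = json.dumps(build_alert_payload(stage, error)).encode("utf-8")
    req = urllib.request.Request(
        webhook_url,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


# Example payload for a failed quality gate (not actually sent here).
payload = build_alert_payload("model-training", "quality gate failed")
print(payload["text"])
```

Calling a helper like `send_slack_alert` from the failure branch of each stage (ingestion, quality check, training, serving) gives the team a single notification channel across the whole data crunching process.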
Speakers: Aemro Amare and Shekh Morshed Akther