Shekh Akther is a Senior Software Engineer/Data Engineer at Quby, a leading company offering data-driven home services technology, known for creating the in-home display and smart thermostat Toon. He is responsible for building end-to-end data products and works closely with Data Scientists and other Data Engineers, from innovating new concepts to making them production-ready and serving them to end users. Shekh has a Bachelor's degree in Computer Science and Engineering and loves to work on complex challenges that have high impact.
November 18, 2020 04:00 PM PT
Quby, an Amsterdam-based technology company, offers solutions that empower homeowners to stay in control of their electricity, gas and water usage. Using Europe's largest energy dataset, consisting of petabytes of IoT data, the company has developed AI-powered products that are used by hundreds of thousands of users on a daily basis. Delta Lake ensures the quality of incoming records through schema enforcement and evolution, but it is the Data Engineer's role to check that the expected data is ingested into the Delta Lake at the right time and with the expected metrics, so that downstream processes can perform their duties. Re-training models and serving them on the fly can also go wrong unless the right monitoring infrastructure is in place.
Quality data without a well-performing model, or the best model without quality data, does not bring any value. Our use cases require training more than half a million models on a daily basis. These models are used automatically in production environments without human intervention. We also stream training data to our Delta Lake in near real time. Before training the models, we have to make sure that there is enough quality data passing the minimum threshold we have set. Even though we monitor our data quality, the accuracy of our models varies depending on multiple variables observed in the daily collected data.
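The idea of gating training on a minimum data-quality threshold can be sketched in plain Python. This is a simplified illustration only, not Quby's actual pipeline (which runs on Spark and Delta Lake); the function and threshold names, and the default values, are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class QualityThresholds:
    # Hypothetical defaults; real thresholds depend on the use case.
    min_rows: int = 1000
    max_null_fraction: float = 0.05

def enough_quality_data(row_count: int, null_count: int,
                        thresholds: QualityThresholds) -> bool:
    """Return True when the day's ingested data passes the minimum bar
    for kicking off model training."""
    if row_count < thresholds.min_rows:
        return False
    return null_count / row_count <= thresholds.max_null_fraction

# 10,000 rows with 200 nulls (2%) passes the default bar.
print(enough_quality_data(10_000, 200, QualityThresholds()))  # True
# 500 rows is below the minimum row count, regardless of nulls.
print(enough_quality_data(500, 0, QualityThresholds()))       # False
```

In a real setup the row and null counts would come from aggregations over the Delta table, and a False result would block the training job rather than just print.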
Therefore, we need to monitor the performance of our models as well. Finally, we also need to evaluate the results produced by our algorithms, both in terms of quality and quantity, before we serve them. In this presentation, we will demonstrate how we use Databricks dashboards to monitor our raw and processed data quality metrics. We will also present how we use MLflow to keep track of model performance. Finally, we will show how we have integrated Slack to receive alerts when there is a failure at any stage of our data-crunching process.
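Slack failure alerts of the kind described above are typically sent through an incoming-webhook URL. A minimal sketch using only the Python standard library is shown below; the function names, the message format, and the stage labels are assumptions for illustration, not the speakers' actual implementation.

```python
import json
import urllib.request

def build_alert_payload(stage: str, error: str) -> dict:
    """Format a Slack message describing where the pipeline failed."""
    return {
        "text": f":rotating_light: Pipeline failure at stage '{stage}': {error}"
    }

def send_slack_alert(webhook_url: str, stage: str, error: str) -> None:
    """POST the alert to a Slack incoming-webhook URL."""
    payload = json.dumps(build_alert_payload(stage, error)).encode("utf-8")
    req = urllib.request.Request(
        webhook_url,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Build (but do not send) an example alert for a hypothetical training stage.
payload = build_alert_payload("model-training", "accuracy below threshold")
print(payload["text"])
```

In practice each pipeline stage would call something like `send_slack_alert` from its failure handler, so that a broken ingestion, training or serving step surfaces in a Slack channel immediately.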
Speakers: Aemro Amare and Shekh Morshed Akther