Mike Del Balso


Mike Del Balso is the co-founder of Tecton.ai, where he is focused on building next-generation data infrastructure for operational ML. Before Tecton.ai, Mike was the PM lead for the Uber Michelangelo ML platform. He was also a product manager at Google, where he managed the core ML systems that power Google’s Search Ads business. Prior to that, he worked on Google Maps. He holds a BSc in Electrical and Computer Engineering, summa cum laude, from the University of Toronto.

Past sessions

Summit 2021 Rethinking Feature Stores

May 27, 2021 04:25 PM PT

Feature stores have emerged as a key component in the modern machine learning stack. They solve some of the toughest challenges in data for machine learning, namely feature computation, storage, validation, serving, and reuse.
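The responsibilities listed above (computation, storage, serving, and reuse) can be illustrated with a deliberately minimal sketch. Everything below is a hypothetical toy, not the API of any real feature store product: a single registry where a feature is defined once, materialized from raw data, and then served by name.

```python
class ToyFeatureStore:
    """Minimal in-memory illustration (hypothetical, not a real product API):
    one place to register feature logic, store computed values, and serve
    them for both training and inference."""

    def __init__(self):
        self.definitions = {}   # feature name -> computation function
        self.values = {}        # (feature name, entity id) -> value

    def register(self, name, fn):
        # Reuse: any model can depend on a feature that was registered once.
        self.definitions[name] = fn

    def materialize(self, name, entity_id, raw):
        # Computation + storage: run the definition and persist the result.
        value = self.definitions[name](raw)
        self.values[(name, entity_id)] = value
        return value

    def serve(self, name, entity_id):
        # Serving: low-latency lookup of a precomputed value at inference time.
        return self.values[(name, entity_id)]


store = ToyFeatureStore()
store.register("total_spend", lambda orders: sum(orders))
store.materialize("total_spend", "user_42", [10.0, 25.0, 5.0])
print(store.serve("total_spend", "user_42"))  # 40.0
```

A production feature store adds what the toy omits: validation of incoming values, versioning of definitions, and durable online/offline storage behind the same lookup interface.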

However, the deployment of feature stores still requires a coordinated effort from multiple teams, comes with a large infrastructural footprint, and leads to integration costs and significant operational overheads. This large investment places feature stores completely out of reach for the average data team. What’s needed is a fundamental redesign of the feature store.

In this talk we will introduce a new lightweight feature store framework that allows any data source to be operationalized by declaring it as a dependency of production ML applications, without coupling those applications to environment-specific infrastructure. By publishing model-centric logical feature definitions, this framework allows data scientists to build ML applications that depend on any data source, using their tools of choice, and deploy them to their existing production infrastructure.
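The idea of a logical feature definition decoupled from infrastructure can be sketched as follows. The decorator and registry here are illustrative inventions, not the framework's actual API: the point is that the feature declares a dependency on a named logical source, and the binding to concrete infrastructure happens at deploy time.

```python
# Hypothetical sketch of a declarative, model-centric feature definition.
FEATURE_REGISTRY = {}

def feature(source):
    """Register a feature's transform against a named logical data source."""
    def wrap(fn):
        FEATURE_REGISTRY[fn.__name__] = {"source": source, "transform": fn}
        return fn
    return wrap

@feature(source="transactions")  # logical name, resolved to real infra at deploy time
def avg_transaction_amount(rows):
    amounts = [r["amount"] for r in rows]
    return sum(amounts) / len(amounts)

# The application depends only on the logical name "transactions"; the
# framework maps it to whatever backing store each environment provides.
rows = [{"amount": 20.0}, {"amount": 40.0}]
print(FEATURE_REGISTRY["avg_transaction_amount"]["transform"](rows))  # 30.0
```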

In this talk we will also demonstrate how this new paradigm empowers individual data scientists to develop and serve a production-grade ML application in less than one minute.


Productionizing real-time ML models poses unique data engineering challenges for enterprises that are coming from batch-oriented analytics. Enterprise data, which has traditionally been centralized in data warehouses and optimized for BI use cases, must now be transformed into features that provide meaningful predictive signals to our ML models. Enterprises face the operational challenges of deploying these features in production: building the data pipelines, then processing and serving the features to support production models. ML data engineering is a complex and brittle process that can consume upwards of 80% of our data science efforts, all too often grinding ML innovation to a crawl.

Based on our experience building the Uber Michelangelo platform, and currently building next-generation ML infrastructure at Tecton.ai, we’ll share insights on building a feature platform that empowers data scientists to accelerate the delivery of ML applications. Spark and Databricks provide a powerful and massively scalable foundation for data engineering. Building on this foundation, a feature platform extends your data infrastructure to support ML-specific requirements. It enables ML teams to track and share features with a version-control repository, process and curate feature values into a single, centralized source of data, and instantly serve features for model training, batch, and real-time predictions.
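One ML-specific requirement behind "serve features for model training, batch, and real-time predictions" is that the same curated feature values must feed historical training joins and fresh online lookups. The plain-Python sketch below (hypothetical data, not any platform's API) shows one common technique for this: a point-in-time lookup that returns the latest feature value at or before a given timestamp.

```python
from bisect import bisect_right

# Entity -> time-sorted (timestamp, value) history of a curated feature.
feature_history = {
    "user_1": [(1, 100.0), (5, 120.0), (9, 90.0)],
}

def as_of(entity, ts):
    """Point-in-time lookup: latest feature value at or before ts.
    Using event-time lookups when building training sets avoids
    leaking future information into the labels."""
    hist = feature_history[entity]
    i = bisect_right(hist, (ts, float("inf")))
    return hist[i - 1][1] if i else None

# Training: join historical feature values to labeled events by event time.
events = [("user_1", 4, 0), ("user_1", 10, 1)]  # (entity, timestamp, label)
training_rows = [(as_of(e, ts), label) for e, ts, label in events]
print(training_rows)  # [(100.0, 0), (90.0, 1)]

# Real-time serving: the same lookup at "now" returns the freshest value.
print(as_of("user_1", ts=999))  # 90.0
```

Because training and serving share one lookup path over one curated history, the model sees consistent feature values offline and online.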

Atlassian will join us to provide a first-hand perspective from an enterprise that has successfully deployed a feature platform in production. The platform powers real-time, ML-driven personalization and search services for a popular SaaS application.