Sameer Farooqui

Product Marketing Manager, OctoML

I’m a Product Marketing Manager at OctoML, the company behind Apache TVM (an open source machine learning compiler that optimizes models to run inference much faster). Prior to this, I was a Strategic Cloud Engineer for Big Data at Google Cloud, a Trainer and Evangelist at Databricks, and a Solutions Architect at Hortonworks.

Past sessions

Summit 2021 How to use Apache TVM to optimize your ML models

May 27, 2021 03:15 PM PT

Apache TVM is an open source machine learning compiler that distills large, powerful deep learning models into lightweight software that can run on the edge. The optimized model runs inference much faster on a variety of target hardware (CPUs, GPUs, FPGAs, and accelerators), which can yield significant cost savings.
In this deep dive, we'll discuss how Apache TVM works, share the latest and upcoming features, and run a live demo of optimizing a custom machine learning model.
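As a rough illustration (not from the session itself), the sketch below compiles a model with TVM's Python API, assuming a recent release (~0.8); the model file, input name, and shapes are placeholders:

```python
# Hedged sketch: compiling an ONNX model with Apache TVM (~0.8 API).
# "resnet50.onnx", the "data" input name, and the shapes are assumptions.
import onnx
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("resnet50.onnx")            # placeholder model file
shape_dict = {"data": (1, 3, 224, 224)}            # assumed input name/shape

# Import the model into Relay, TVM's high-level intermediate representation.
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a target; "llvm" means a generic CPU. GPUs, FPGAs, and
# accelerators are selected with other target strings.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

# Run inference with the compiled module via the graph executor.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()
```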

In this session watch:
Sameer Farooqui, Product Marketing Manager, OctoML

Summit 2018 Separating Hype from Reality in Deep Learning

June 5, 2018 05:00 PM PT

Deep Learning is all the rage these days, but where does the reality of what Deep Learning can do end and the media hype begin? In this talk, I will dispel common myths about Deep Learning and help you decide whether Deep Learning has a practical place in your software stack.

I'll begin with a technical overview of common neural network architectures like CNNs, RNNs, and GANs, and their common use cases, such as computer vision, language understanding, and unsupervised machine learning. Then I'll separate the hype from reality around questions like the following (a short code sketch illustrating several of these techniques appears after the list):

• When should you prefer traditional ML systems like scikit-learn or Spark.ML over Deep Learning?
• Do you no longer need careful feature extraction and standardization when using Deep Learning?
• Do you really need terabytes of data to train neural networks, or can you 'steal' pre-trained lower layers from public models via transfer learning?
• How do you decide which activation function (like ReLU, leaky ReLU, or ELU) or optimizer (like Momentum, AdaGrad, RMSProp, or Adam) to use in your neural network?
• Should you randomly initialize the weights in your network or use more advanced strategies like Xavier or He initialization?
• How easy is it to overfit/overtrain a neural network, and what are the common techniques to avoid overfitting (like l1/l2 regularization, dropout, and early stopping)?
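
To ground a few of these questions, here is a minimal Keras sketch (my illustration, not part of the talk) that combines transfer learning from pre-trained lower layers with He initialization, an ELU activation, the Adam optimizer, l2 regularization, dropout, and early stopping; the dataset, shapes, and hyperparameters are placeholder assumptions:

```python
# Hedged sketch combining several of the techniques above; the data,
# class count, and hyperparameters below are placeholder assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# "Steal" pre-trained lower layers: a frozen ImageNet backbone.
base = keras.applications.MobileNetV2(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

model = keras.Sequential([
    base,
    layers.Dense(128, activation="elu",               # ELU activation
                 kernel_initializer="he_normal",      # He initialization
                 kernel_regularizer=regularizers.l2(1e-4)),  # l2 penalty
    layers.Dropout(0.5),                              # dropout vs. overfitting
    layers.Dense(10, activation="softmax"),           # assumed 10 classes
])

model.compile(optimizer=keras.optimizers.Adam(),      # Adam optimizer
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Early stopping halts training once validation loss stops improving.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

x = np.random.rand(320, 224, 224, 3).astype("float32")  # placeholder data
y = np.random.randint(0, 10, size=320)
model.fit(x, y, validation_split=0.2, epochs=20, callbacks=[early_stop])
```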

Session hashtag: #DDSAIS14
