Kim Hazelwood is the West Coast Head of Engineering for Facebook AI Research, as well as the Technical Lead for Facebook Systems and Machine Learning (SysML) Research. Her expertise lies at the intersection of systems (compute and software platforms) and machine learning, and she has been focusing on optimizing the performance and developer efficiency of Facebook's many machine-learning-based products and services. Prior to Facebook, Kim held positions including tenured Associate Professor at the University of Virginia, Software Engineer at Google, and Director of Systems Research at Yahoo Labs. She received a PhD in Computer Science from Harvard University, and is the recipient of an NSF CAREER Award, the Anita Borg Early Career Award, the MIT Technology Review Top 35 Innovators under 35 Award, and the ACM SIGPLAN 10-Year Test of Time Award. She currently serves on the Board of Directors of the Computing Research Association (CRA), MIT SystemsThatLearn, and EPFL EcoCloud. She has authored over 50 conference papers and one book.
Kim Hazelwood - Deep Learning: It's Not All About Recognizing Cats and Dogs (Facebook) - 5:22
Hany Farid - Creating, Weaponizing, and Detecting Deep Fakes (UC Berkeley) - 24:40
Deep Learning: It’s Not All About Recognizing Cats and Dogs
Based on a recent blog post and paper, this talk focuses on the fact that recommendation systems are underinvested in by the research community, and why that is problematic.
Creating, Weaponizing, and Detecting Deep Fakes
The past few years have seen a startling and troubling rise in the fake-news phenomenon, in which everyone from individuals to nation-sponsored entities can produce and distribute misinformation. The implications of fake news range from a misinformed public to horrific violence and an existential threat to democracy. At the same time, recent and rapid advances in machine learning are making it easier than ever to create sophisticated and compelling fake images, videos, and audio recordings, making the fake-news phenomenon even more powerful and dangerous. I will provide an overview of the creation of these so-called deep fakes, and I will describe emerging techniques for detecting them.