Deduplication and Author-Disambiguation of Streaming Records via Supervised Models Based on Content Encoders

Here we present a general supervised framework for record deduplication and author disambiguation via Spark. This work differentiates itself in three ways:

- Running on Databricks and AWS makes this a scalable implementation, with considerably lower compute costs than traditional legacy systems that keep large machines running 24/7. Scalability is crucial, as Elsevier's Scopus data, the largest repository of scientific abstracts, covers roughly 250 million authorships from 70 million abstracts spanning a few hundred years.

- We create a fingerprint for each piece of content with deep learning and/or word2vec encoders to expedite pairwise similarity calculation. Unlike traditional TF-IDF or predefined taxonomies, these encoders substantially reduce compute time while preserving semantic similarity. We will briefly discuss how to optimize word2vec training with high parallelization. Moreover, we show how these encoders can be used to derive a standard representation for all of our entities, such as documents, authors, users, and journals. This standard representation reduces the recommendation problem to a pairwise similarity search, and hence offers a basic recommender for cross-product applications where no dedicated recommender engine has been designed (see the first two sketches after this list).

- Traditional author-disambiguation and record-deduplication algorithms run as batch processes with little to no training data. We, however, have roughly 25 million authorships that are manually curated or corrected from user feedback. Since it is crucial to maintain historical profiles, we have developed a machine learning implementation that deals with data streams, processing them in mini-batches or one document at a time. We will discuss how to measure the accuracy of such a system, how to tune it, and how to turn the raw output of the pairwise similarity function into final clusters (see the clustering and streaming sketches below).

Lessons learned from this talk can help any company that wants to integrate its data or deduplicate its user, customer, or product databases.
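As a rough illustration of the fingerprinting step, here is a minimal sketch using Spark ML's Word2Vec, which averages the learned word vectors into a fixed-size document vector. The table and column names (scopus_abstracts, abstract_text, doc_id) and all parameter values are illustrative assumptions, not the talk's actual configuration; numPartitions is the knob that parallelizes training across the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, Word2Vec

spark = SparkSession.builder.getOrCreate()  # ambient session on Databricks

# Hypothetical source table with columns: doc_id, abstract_text
abstracts = spark.table("scopus_abstracts")

tokenizer = Tokenizer(inputCol="abstract_text", outputCol="tokens")
word2vec = Word2Vec(
    vectorSize=200,    # fingerprint dimensionality (assumed)
    minCount=5,        # drop rare tokens
    numPartitions=64,  # parallelize training across executors
    inputCol="tokens",
    outputCol="fingerprint",
)

model = Pipeline(stages=[tokenizer, word2vec]).fit(abstracts)
fingerprints = model.transform(abstracts).select("doc_id", "fingerprint")
```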
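Once every entity shares this vector representation, the basic recommender described above becomes a nearest-neighbour search. One plausible Spark-native way to do the pairwise similarity join at scale (an assumption, not necessarily the speaker's method) is locality-sensitive hashing on length-normalized fingerprints, where Euclidean distance is a monotone proxy for cosine similarity:

```python
from pyspark.ml.feature import BucketedRandomProjectionLSH, Normalizer

# Reuses `fingerprints` from the previous sketch.
normalizer = Normalizer(inputCol="fingerprint", outputCol="unit_fp", p=2.0)
docs = normalizer.transform(fingerprints)

lsh = BucketedRandomProjectionLSH(
    inputCol="unit_fp",
    outputCol="hashes",
    bucketLength=0.5,  # tuning parameter (assumed)
    numHashTables=3,
)
lsh_model = lsh.fit(docs)

# On unit vectors, squared Euclidean distance = 2 * (1 - cosine similarity),
# so a distance threshold corresponds to a cosine-similarity threshold.
pairs = (
    lsh_model.approxSimilarityJoin(docs, docs, threshold=0.6, distCol="dist")
    .filter("datasetA.doc_id < datasetB.doc_id")  # drop self and mirror pairs
)
```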
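To go from pairwise match decisions to final author clusters, one standard approach (again an assumption about the implementation, not a claim about the talk) is to treat accepted pairs as edges and take connected components, for example with the external GraphFrames package:

```python
from graphframes import GraphFrame  # external package: graphframes

# Reuses `fingerprints` and `pairs` from the sketches above.
vertices = fingerprints.selectExpr("doc_id as id")
edges = pairs.selectExpr("datasetA.doc_id as src", "datasetB.doc_id as dst")

spark.sparkContext.setCheckpointDir("/tmp/cc_checkpoints")  # required by the algorithm
clusters = GraphFrame(vertices, edges).connectedComponents()
# `clusters` carries a `component` column: the cluster (author profile) id.
```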
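For the streaming side, here is a minimal Structured Streaming sketch of mini-batch processing (assuming Spark 2.4+ for foreachBatch and Spark 3.1+ for the table-backed source; the supervised pairwise model and profile update are omitted, and score_and_assign, incoming_abstracts, and author_assignments are hypothetical names):

```python
from pyspark.sql import DataFrame

def score_and_assign(batch_df: DataFrame, batch_id: int) -> None:
    # Fingerprint the new records with the fitted pipeline from above, then
    # (not shown) compare them against stored author profiles and update them.
    model.transform(batch_df).write.mode("append").saveAsTable("author_assignments")

query = (
    spark.readStream.table("incoming_abstracts")  # hypothetical streaming source
    .writeStream
    .option("checkpointLocation", "/tmp/dedup_checkpoint")
    .foreachBatch(score_and_assign)
    .start()
)
```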
Session hashtag: #EUai2

About Reza Karimi

Dr. Reza Karimi is currently a lead data scientist in Elsevier's Search and Data Science Division. His work focuses on content modeling with deep learning, entity resolution, author disambiguation, and network analysis of research communities. Formerly, he was a research scientist and project lead at Philips Research, where he worked on predictive maintenance of remote devices as well as healthcare productivity and quality analysis. He holds a PhD in mechanical engineering from MIT, with extensive experience in parallel processing of multi-dimensional images and in statistical analysis and data mining of molecular trajectories during transport into the nucleolus.