I am a committer, data scientist, and PMC member on the Apache Metron project on the engineering team at Hortonworks. In the past, I've worked as an architect and senior engineer at a healthcare informatics startup spun out of the Cleveland Clinic, as a developer at Oracle, and as a research geophysicist in the oil and gas industry. I specialize in writing software and solving problems with scalability concerns due to large amounts of traffic or large amounts of data. I have a particular passion for data science problems and anything mathematical.
Natural language processing techniques are well established due to their obvious utility, and the rise in unstructured textual data has led to mature, distributed, and scalable implementations. While textual data is extremely common, much apparently unstructured data has underlying structure, just as the words that compose sentences follow an underlying grammatical structure. This talk explores borrowing natural language processing techniques to analyze the structure in non-textual data. In particular, we consider the Word2Vec implementation in MLlib to help us organize and analyze non-textual clinical event data (e.g., diagnoses, drugs prescribed). We will explore connections between diseases and drugs in an unsupervised way with Python, Spark, and MLlib.
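The core trick the abstract describes is treating each patient's time-ordered sequence of clinical events as a "sentence" of event tokens. A minimal sketch of that preprocessing step is below; the patient records, event codes, and the `events_to_sentences` helper are hypothetical, invented for illustration. The resulting token sequences are exactly the shape of input a Word2Vec trainer (e.g., `pyspark.ml.feature.Word2Vec`) expects.

```python
# Sketch: turning clinical event records into "sentences" for Word2Vec.
# All record structure and event codes here are hypothetical examples.

def events_to_sentences(patient_events):
    """Order each patient's events by timestamp and emit the event codes
    as a token sequence ("sentence") suitable for Word2Vec training."""
    sentences = []
    for events in patient_events.values():
        ordered = sorted(events, key=lambda e: e[0])  # sort by timestamp
        sentences.append([code for _, code in ordered])
    return sentences

# Hypothetical data: patient id -> [(timestamp, event code), ...]
records = {
    "p1": [(2, "RX:metformin"), (1, "DX:type2_diabetes")],
    "p2": [(1, "DX:hypertension"), (2, "RX:lisinopril")],
}

sentences = events_to_sentences(records)
# These token sequences play the role of sentences; a diagnosis code and a
# drug code that co-occur in many patients' histories end up with nearby
# embeddings, which is what lets us surface disease-drug connections.
```

In the Spark version, the same transformation would run over a DataFrame of events grouped by patient before fitting MLlib's Word2Vec.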
Detecting outliers and anomalies in data is one of the most common tasks the working data scientist is asked to do. It is especially common, and especially challenging, with fast streaming data coming from many IoT sources. Despite this, library support for problems of this variety is woefully lacking; data scientists are often forced to go to research papers and implement their own solutions. This talk covers Spark Streaming coupled with a novel algorithmic approach to detecting outliers at scale, using a composition of distributional sketches alongside more classical techniques, along with off-the-shelf UI components, to demonstrate how this common but challenging task might be accomplished for IoT data as well as more traditional streaming data.
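To make the idea of a "distributional sketch" concrete, here is a simplified, self-contained stand-in: a fixed-size reservoir sample as the sketch, with a classical Tukey-fence rule for flagging outliers against the sketched distribution. This is not the talk's actual algorithm (which composes more sophisticated sketches); the class and function names are hypothetical, and in a real deployment the sketch would be updated per micro-batch inside Spark Streaming.

```python
import random

class ReservoirSketch:
    """A toy distributional sketch: a fixed-size uniform reservoir sample
    of the stream, from which quantiles can be estimated."""

    def __init__(self, capacity=256, seed=42):
        self.capacity = capacity
        self.samples = []
        self.count = 0
        self.rng = random.Random(seed)

    def add(self, value):
        """Standard reservoir sampling: keep each item with prob capacity/count."""
        self.count += 1
        if len(self.samples) < self.capacity:
            self.samples.append(value)
        else:
            j = self.rng.randrange(self.count)
            if j < self.capacity:
                self.samples[j] = value

    def quantile(self, q):
        """Estimate the q-th quantile from the reservoir."""
        ordered = sorted(self.samples)
        idx = min(int(q * len(ordered)), len(ordered) - 1)
        return ordered[idx]

def is_outlier(sketch, value, k=1.5):
    """Classical Tukey-fence test against the sketched distribution."""
    q1, q3 = sketch.quantile(0.25), sketch.quantile(0.75)
    iqr = q3 - q1
    return value < q1 - k * iqr or value > q3 + k * iqr

# Hypothetical sensor stream: readings 0..99, then a spike.
sketch = ReservoirSketch()
for x in range(100):
    sketch.add(float(x))
print(is_outlier(sketch, 50.0))   # in-range reading -> False
print(is_outlier(sketch, 500.0))  # spike -> True
```

The appeal of sketch-based approaches for streaming is that the state is small and mergeable, so per-partition sketches can be combined across executors rather than shipping raw data.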