Marek Novotny

Senior .NET developer, Barclays Africa Group Limited

Marek obtained his Bachelor's and Master's degrees in computer science at Charles University in Prague. His Master's studies focused mainly on the development of distributed and dependable systems. In 2013, Marek joined Barclays Africa Group Limited in Prague to develop a scalable data integration platform and a framework for calculating regulatory reports. While working on those two projects, he gained experience with many NoSQL and distributed technologies (e.g. Kafka, ZooKeeper, Spark). He is now a member of the Big Data Engineering team and focuses primarily on the development of the Spline project.


Extending Spark SQL API with Easier to Use Array Types Operations

Big companies typically integrate data from various heterogeneous systems when building a data lake as a single point of access to their data. To achieve this goal, technical teams often deal with data defined by complex schemas and stored in various formats. Spark SQL Datasets are currently compatible with formats such as XML, Avro and Parquet by providing primitive and complex data types such as structs and arrays. Although the Dataset API offers a rich set of functions, general manipulation of arrays and deeply nested data structures is lacking. We will demonstrate this by providing examples of data that are currently very hard to process efficiently in Spark. We designed and developed an extension of the Dataset API that allows developers to work with array and complex type elements in a more straightforward and consistent way. The extension should help users dealing with complex, structured big data to use Apache Spark as a truly generic processing framework.
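To illustrate the kind of friction the talk refers to, here is a hedged sketch (not from the talk itself) of a common workaround in the Spark 2.x Dataset API for transforming every element of an array column. The `orders` DataFrame and its schema are hypothetical, and an active SparkSession is assumed:

```scala
import org.apache.spark.sql.functions._

// Hypothetical DataFrame `orders` with schema:
//   id: long, items: array<struct<name: string, price: double>>
// Goal: apply a 10% discount to the price of every element of `items`.
// Lacking a direct array-transform operation, a typical workaround is
// explode + per-row transformation + groupBy/collect_list:
val discounted = orders
  .select(col("id"), explode(col("items")).as("item"))
  .withColumn("item", struct(
    col("item.name").as("name"),
    (col("item.price") * 0.9).as("price")))
  .groupBy("id")
  .agg(collect_list("item").as("items"))
```

The shuffle introduced by `groupBy`, the loss of any other top-level columns, and the boilerplate of rebuilding the struct are exactly the kinds of overhead a dedicated array-aware API can avoid.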

Spline: Apache Spark Lineage, Not Only for the Banking Industry

Data lineage tracking is one of the significant problems that financial institutions face when using modern big data tools. This presentation describes Spline, a data lineage tracking and visualization tool for Apache Spark. Spline captures lineage information from internal Spark execution plans, stores it, and visualizes it in a user-friendly manner. Session hashtag: #EUent3