Gidon Gershinsky designs and builds data security solutions at Apple. He plays a leading role in the Apache Parquet community's work on big data encryption and integrity verification technologies. He earned a PhD at the Weizmann Institute of Science in Israel and was a post-doctoral fellow at Columbia University in New York City.
May 26, 2021 12:05 PM PT
Big data presents new challenges for protecting the privacy and integrity of sensitive information. Straightforward application of traditional file encryption and MAC techniques cannot cope with the staggering volumes of data flowing through modern analytic pipelines.
Apple addresses these challenges by leveraging new capabilities in the Apache Parquet format. We work with the Apache Parquet community on a modular data security mechanism that provides privacy and integrity guarantees for sensitive information at scale; the encryption specification has been approved and released by the Apache Parquet Format project. Today, there are two open source implementations of this specification, in the Apache Arrow (C++) and Apache Parquet-MR (Java) repositories. The latter has just been released in parquet-mr 1.12, which means that Apache Spark and other Java/Scala-based analytic frameworks can start working with Apache Parquet encryption.
In this talk, Gidon Gershinsky and Tim Perelmutov will outline the challenges of protecting the privacy of data at scale and describe the security approach of the Apache Parquet encryption technology. We will give a quick introduction to the Apache Parquet encryption API in pure Java and in Apache Spark applications. We will also discuss the roadmap of the community's work on new encryption features and on deeper integration with Apache Spark and other analytic frameworks. Finally, we will show a demo of Apache Parquet modular encryption in action, sharing our learnings from using it at scale.
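As a sketch of what Spark-side usage can look like, the following Scala fragment configures Parquet modular encryption through the properties-driven crypto factory shipped with parquet-mr 1.12. The key names (`k1`, `k2`), key material, column names, and output path are illustrative assumptions, and the `InMemoryKMS` client is a parquet-mr mock intended for testing only:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("parquet-encryption-sketch").getOrCreate()
val conf = spark.sparkContext.hadoopConfiguration

// Activate the properties-driven encryption factory shipped with parquet-mr
conf.set("parquet.crypto.factory.class",
  "org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory")

// KMS client; InMemoryKMS is a parquet-mr mock for testing only.
// Production deployments plug in a client for their own KMS.
conf.set("parquet.encryption.kms.client.class",
  "org.apache.parquet.crypto.keytools.mocks.InMemoryKMS")
conf.set("parquet.encryption.key.list",
  "k1:AAECAwQFBgcICQoLDA0ODw==,k2:AAECAAECAAECAAECAAECAA==")  // demo keys only

// Hypothetical DataFrame with two sensitive columns
val df = spark.range(100).selectExpr("id", "id * 2 as location", "id * 3 as speed")

// Encrypt the sensitive columns with k1; protect the footer with k2
df.write
  .option("parquet.encryption.column.keys", "k1:location,speed")
  .option("parquet.encryption.footer.key", "k2")
  .parquet("/tmp/encrypted_cars")  // illustrative path
```

In a real deployment, only the master keys live in the key management service; Parquet handles envelope encryption of the data keys, so the KMS is not on the per-file hot path.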
October 3, 2018 05:00 PM PT
Enterprises and non-profit organizations often work with sensitive business or personal information that must be stored in an encrypted form due to corporate confidentiality requirements, the new GDPR regulations, and other reasons. Unfortunately, straightforward encryption doesn't work well for modern columnar data formats, such as Apache Parquet, that are leveraged by Spark to accelerate data ingest and processing. When Parquet files are bulk-encrypted at the storage layer, their internal modules can't be extracted, leading to a loss of column/row filtering capabilities and a significant slowdown of Spark workloads.
Existing solutions suffer from either performance or security drawbacks. We work with the Apache Parquet community on a new modular encryption mechanism that enables full columnar projection and predicate push-down (filtering) functionality on encrypted data in any storage system. Besides confidentiality, the mechanism supports data authentication, where the reader can verify that a file has not been tampered with or replaced with a wrong version. Different columns can be encrypted with different keys, allowing for fine-grained access control.
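To illustrate how projection and filtering survive encryption, here is a minimal Scala sketch of the read side. It assumes a Spark session whose Hadoop configuration already grants key access (e.g. via the properties-driven crypto factory); the dataset path and column names are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: assumes decryption key access has already been configured
// in the Hadoop configuration (crypto factory + KMS client properties).
val spark = SparkSession.builder().appName("read-encrypted-sketch").getOrCreate()
import spark.implicits._

// Decryption is transparent to the application. Because encryption is
// modular (per Parquet module, not whole-file), the reader decrypts only
// the columns it projects, and predicate push-down works as on plaintext.
val cars = spark.read.parquet("/secure/connected_cars")  // hypothetical path
cars.select("location", "speed")   // projection: only these columns are decrypted
    .filter($"speed" > 120)        // predicate pushed down to the Parquet reader
    .show()
```

A reader holding the key for only some columns can still run queries restricted to those columns, which is the basis of the fine-grained access control mentioned above.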
In this talk, I will demonstrate Spark integration with the Parquet modular encryption mechanism, running efficient analytics directly on encrypted data. The demonstration scenarios are derived from use cases in our joint research project with a number of European companies that work with sensitive data such as connected-car messages (location, speed, driver identity, etc.). I will describe the encryption mechanism and the observed performance implications of encrypting and decrypting data in Spark SQL workloads.
Session hashtag: #SAISDev14