Principal Data Engineer - Databricks

Principal Data Engineer

Strong experience with at least two of the following technologies: Python, Scala, SQL, Java
Commercial client-facing project experience is helpful, including working in close-knit teams
Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
Meaningful experience in multiple database technologies such as Distributed Processing (Spark, Hadoop, EMR), Traditional RDBMS (MS SQL Server, Oracle, MySQL, PostgreSQL), MPP (AWS Redshift, Teradata), and NoSQL (MongoDB, DynamoDB, Cassandra, Neo4J, Titan)
Ability to clearly communicate complex solutions
Strong understanding of Information Security principles to ensure compliant handling and management of client data
Experience and interest in cloud platforms such as AWS, Azure, Google Cloud Platform, or Databricks
Strong experience with traditional data warehousing / ETL tools (Informatica, Talend, Pentaho, DataStage)