
Data Transformation

What Is Data Transformation?

Data transformation is the process of taking raw data that has been extracted from data sources and turning it into usable datasets. Data pipelines often include multiple data transformations, changing messy information into clean, high-quality, trusted data that organizations can use to meet operational needs and create actionable insights. Data transformation is a key step in data engineering.


Why is data transformation important?

The data transformation process manipulates datasets so they can be used by business data applications. It’s critical for realizing value from raw data by generating insights that can be used to make business decisions. Transformation gives organizations the data they need to better understand the past, present and future of their business and pursue opportunities in an agile way.

Data transformation and ETL

Data transformation is at the heart of ETL, which stands for extract, transform and load. This is the process data engineers use to pull data from different sources, transform it into a usable and trusted resource, and load that data into the systems that end users can access downstream to solve business problems. In the transform stage, data is cleansed, mapped and converted, often to a specific target schema.
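
As a rough illustration, the sketch below walks through each ETL stage in PySpark; the source path, column names and target table are hypothetical, and the Delta write assumes Delta Lake is available.

```python
# Minimal ETL sketch (illustrative names; assumes Delta Lake is available).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: read raw data from a source system (hypothetical path).
raw = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: cleanse, cast and map to the target schema.
clean = (
    raw.dropna(subset=["order_id"])                        # remove incomplete rows
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumnRenamed("cust", "customer_id")           # map to target column name
)

# Load: write the trusted dataset where downstream users can query it.
clean.write.format("delta").mode("overwrite").saveAsTable("orders_clean")
```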

Data transformation examples

Several different types of transformation can be used to ensure the quality and integrity of data. Common examples include data de-duplication, which improves data quality and performance by removing duplicate records; data refining, which improves quality by filtering out irrelevant data; and data integration, in which different data types are merged into the same structure. For more types of data transformation and their definitions, see the next section.

Types of data transformation

The data transformation process can be accomplished via several different techniques, depending on the data and end transformation goal. These may include:

Bucketing/binning: Dividing a numeric series into smaller “buckets” or “bins.” This is done by changing numeric features into categorical features using a set of thresholds.
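
As an illustrative sketch, PySpark's Bucketizer can turn a numeric column into categorical bins; the thresholds and column names below are hypothetical.

```python
# Binning a numeric "age" column into buckets (illustrative thresholds).
from pyspark.sql import SparkSession
from pyspark.ml.feature import Bucketizer

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(23,), (37,), (61,)], ["age"])

bucketizer = Bucketizer(
    splits=[0, 18, 35, 65, float("inf")],  # bin boundaries
    inputCol="age",
    outputCol="age_bucket",
)
bucketizer.transform(df).show()
```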

Data aggregation: The process of summarizing data so it can be better used in reporting and visualization. Aggregation can be achieved by using different methods, such as using the sum, average or median of values over time, space or other dimensions.
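
A minimal sketch of aggregation in PySpark, summing and averaging a hypothetical sales dataset by day:

```python
# Aggregating sales by date with sum and average (illustrative data).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
sales = spark.createDataFrame(
    [("2024-01-01", "EU", 120.0), ("2024-01-01", "US", 80.0), ("2024-01-02", "EU", 95.0)],
    ["date", "region", "amount"],
)

daily = sales.groupBy("date").agg(
    F.sum("amount").alias("total_amount"),
    F.avg("amount").alias("avg_amount"),
)
daily.show()
```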

Data cleansing: The process of increasing data accuracy and quality by deleting inaccurate, incomplete or out-of-date information.
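
For example, a cleansing step might drop incomplete rows and filter out stale records, as in this illustrative sketch (column names and cutoff date are hypothetical):

```python
# Dropping incomplete and out-of-date records (illustrative rules).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("a@example.com", "2024-05-01"), (None, "2024-05-02"), ("c@example.com", "2019-01-01")],
    ["email", "updated_at"],
)

cleansed = (
    df.dropna(subset=["email"])                            # remove incomplete rows
      .filter(F.col("updated_at") >= F.lit("2023-01-01"))  # remove out-of-date rows
)
cleansed.show()
```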

Data de-duplication: A compression process in which duplicate copies of data are identified and removed to speed up the data transfer process.
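
A minimal de-duplication sketch using PySpark's dropDuplicates; the key columns are illustrative:

```python
# Removing duplicate records based on key columns (illustrative keys).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(1, "a@example.com"), (1, "a@example.com"), (2, "b@example.com")],
    ["customer_id", "email"],
)

deduped = df.dropDuplicates(["customer_id", "email"])
deduped.show()
```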

Data derivation: Creating rules to extract only the specific information needed from the data source.
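
As one hypothetical derivation rule, the sketch below extracts only the domain from an email column:

```python
# Deriving an email domain field with a simple extraction rule (illustrative).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("a@example.com",), ("b@shop.io",)], ["email"])

derived = df.withColumn("email_domain", F.regexp_extract("email", r"@(.+)$", 1))
derived.show()
```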

Data enrichment: A process of enhancing existing data using external data sources to expand a data field or supplement missing data fields.

Data filtering: Refining data to eliminate irrelevant data to display only the information that is needed.

Data integration: Merging different data types into the same structure. Data integration standardizes disparate data so that it can be analyzed as a whole.
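
The sketch below standardizes two differently shaped sources to one schema before merging them; the source layouts are hypothetical:

```python
# Standardizing two sources to a common schema, then merging them (illustrative).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
crm = spark.createDataFrame([("Ada", "a@example.com")], ["name", "email"])
web = spark.createDataFrame([("b@example.com", "Grace")], ["email_address", "full_name"])

web_std = web.select(
    F.col("full_name").alias("name"),
    F.col("email_address").alias("email"),
)
integrated = crm.unionByName(web_std)
integrated.show()
```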

Data joining: An operation that merges multiple database tables using a common data field into a single dataset.
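
A minimal join sketch on a shared key column; the tables and key are illustrative:

```python
# Joining two tables on a common "customer_id" field (illustrative data).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
orders = spark.createDataFrame([(1, 100.0), (2, 250.0)], ["customer_id", "order_total"])
customers = spark.createDataFrame([(1, "Ada"), (2, "Grace")], ["customer_id", "name"])

joined = orders.join(customers, on="customer_id", how="inner")
joined.show()
```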

Data splitting: Dividing a single column into multiple columns in order to analyze the data. This can be useful for analyzing large amounts of data gathered over time.
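
For instance, a single full-name column can be split into first and last name columns, as in this illustrative sketch:

```python
# Splitting one column into two (illustrative column layout).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Ada Lovelace",), ("Grace Hopper",)], ["full_name"])

parts = F.split(F.col("full_name"), " ")
split_df = (
    df.withColumn("first_name", parts.getItem(0))
      .withColumn("last_name", parts.getItem(1))
)
split_df.show()
```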

Data summarization: A type of data aggregation in which different business metrics are created by calculating value totals.

Data validation: Ensuring data quality by creating automated rules that generate responses to specific data issues.
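
A minimal validation sketch that flags rows failing a rule rather than silently dropping them; the rule and columns are hypothetical:

```python
# Flagging rows that fail an automated validation rule (illustrative rule).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1, 19.99), (2, -5.0)], ["order_id", "amount"])

validated = df.withColumn("is_valid", F.col("amount") >= 0)  # rule: non-negative amounts
validated.filter(~F.col("is_valid")).show()                  # route failures for review
```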

Format revision: The process of changing formats to solve problems related to fields containing different data types.

Key restructuring: The process of changing keys with built-in meanings to generic keys (random numbers that reference the information in the source database) to prevent slowdowns in the data system.

Data transformation performance optimizations

The data transformation process can be both time- and resource-consuming, so optimizing data transformation performance is crucial to reduce costs and save time. Performance optimization techniques include:

  • Data compaction
  • Partitioning
  • File size tuning
  • Data skipping and pruning
  • Data caching
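
As one illustration of these techniques, the sketch below partitions a Delta write by a date column so queries that filter on that column can skip unrelated files; the table and column names are hypothetical, and Delta Lake is assumed to be available.

```python
# Partitioning a table by date to enable data skipping (illustrative names).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
events = spark.read.table("raw_events")   # hypothetical source table

(events.write
       .format("delta")
       .mode("overwrite")
       .partitionBy("event_date")         # queries filtering on event_date prune files
       .saveAsTable("events_partitioned"))
```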

Learn more about optimizing data transformation performance with our comprehensive guide.

Data transformation tools

Data transformation is critical for creating reliable data that organizations can use for insights. However, the data transformation process and the entire ETL process present serious challenges, from building and maintaining reliable data pipelines to managing data quality in increasingly complex pipeline architectures. Data transformation tools streamline and democratize transformation and simplify the ETL lifecycle. 

Data transformation on the Data Intelligence Platform

Databricks offers essential tools for implementing and orchestrating data transformations and ETL on the Data Intelligence Platform.

Delta Live Tables (DLT) helps data engineering teams easily build and manage reliable batch and streaming data pipelines that deliver high-quality data on the Data Intelligence Platform. DLT provides automatic data quality testing, makes streaming more cost-effective through efficient compute autoscaling, and offers deep visibility for pipeline monitoring and observability.
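
As a minimal sketch, a DLT dataset is defined as a decorated Python function with a declarative data quality expectation; this runs only inside a DLT pipeline on Databricks, and the source path and expectation shown are hypothetical.

```python
# Illustrative Delta Live Tables dataset with a quality expectation
# (runs only inside a DLT pipeline; source path is hypothetical).
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Cleansed orders with a basic quality check.")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def orders_clean():
    return (
        spark.read.format("delta").load("/data/raw/orders")
             .withColumn("amount", F.col("amount").cast("double"))
    )
```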

Databricks Workflows is a fully integrated managed orchestration service that empowers data teams to better automate and orchestrate data pipelines. Workflows helps users easily define, manage and monitor multitask workflows for ETL, analytics and machine learning pipelines to boost productivity. Workflows integrates seamlessly with dbt, a tool that allows anyone on a data team to safely contribute to production-grade data pipelines and makes transformation faster and more secure.
