
Understanding data quality

More than ever, organizations rely on a variety of complex datasets to drive their decision-making. It’s crucial that this data is reliable, accurate and relevant so that businesses can make effective, strategic decisions. This becomes even more important as industries adopt AI capabilities, since AI and analytics rely on clean, high-quality data to make accurate predictions and decisions.

Unreliable data not only makes AI algorithms less trustworthy; it can also have broader implications for your organization. Data quality issues — such as incomplete or missing data — can lead to inaccurate conclusions and material financial losses. According to Gartner, organizations lose an average of nearly $13 million a year as a result of poor data quality.

Data must also have integrity, meaning it remains accurate, complete and consistent at every point in its lifecycle. Data integrity is also the ongoing process of ensuring that new data does not compromise the overall quality of a dataset, as well as protecting existing data against loss or corruption.

Benefits of good data quality

Maintaining data quality is important for many reasons, including: 

Operational efficiency: Having high-quality data means you can reduce the time and resources spent on correcting errors, addressing discrepancies and identifying redundancies. Good data quality also lowers costs by letting employees focus on higher-level, strategic tasks rather than dealing with data-related issues.

Informed decision-making: Good data quality gives key stakeholders confidence that their decisions are based on accurate information. Accurate, complete and timely data is also imperative for analytics and AI, as both rely on quality data for meaningful results.

Enhanced data governance: Good data quality is critical to effective data governance, which ensures that datasets are consistently managed and comply with regulatory requirements.

Key elements of data quality

Data quality can be broken down into six key dimensions:

  1. Consistency: Data should be consistent across different databases and datasets. This includes data across subject areas, transactions and time. As datasets scale and grow, curating data that eliminates duplication and conflict is key. 
  2. Accuracy: Data should reflect the real-world scenario it’s meant to represent. Whether the data references a physical measurement or a reference source, quality data must be error-free and accurately represent the source. 
  3. Validity: Data must also conform to the defined formats, standards and rules. This usually means the data matches the designed range or pattern — including any relevant metadata. 
  4. Completeness: A dataset is only as good as its completeness. Missing or unavailable data points can compromise overall data quality, leading to insufficient or incomplete insights.  
  5. Timeliness: Data needs to be up-to-date and available when it’s needed. Any delays or lags can lead to reporting that is stale or inaccurate. Systems need to capture new information, process it and store it accurately so it can be recalled later.  
  6. Uniqueness: When data is aggregated from various sources, it’s crucial that data quality processes account for any duplications or redundancies. Datasets that lack uniqueness can lead to misleading insights and strategies. 

It’s important to note that any data entering an analytics platform will likely not meet these requirements. Data quality is achieved by cleaning and transforming data over time. 
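
To make these dimensions concrete, here is a minimal sketch of how a few of them (completeness, uniqueness and validity) might be checked with pandas. The orders table and its columns are hypothetical, and real pipelines would run checks like these against much larger datasets.

```python
import pandas as pd

# Hypothetical orders data, used only to illustrate the checks
orders = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [120.0, None, 45.5, 45.5],
    "country": ["US", "US", "XX", "DE"],
})

# Completeness: share of non-null values per column
completeness = orders.notna().mean()

# Uniqueness: duplicate rows on the assumed business key
duplicate_ids = orders.duplicated(subset=["order_id"]).sum()

# Validity: values must fall within an agreed reference list
valid_countries = {"US", "DE", "FR"}
invalid_countries = (~orders["country"].isin(valid_countries)).sum()

print(completeness, duplicate_ids, invalid_countries, sep="\n")
```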

Another way to ensure data quality is to use the “seven Cs of data quality” framework, which outlines how to prepare data for sharing, processing and use. 

  • Collect: The initial phase is data collection. This is the process of capturing, formatting and storing data in a proper data repository.  
  • Characterize: Once data has been collected, the second step is characterizing additional metadata, such as the time the data was created, the method of collection and even the location or specific sensor settings. 
  • Clean: The next step is to clean the data by addressing any issues or corruption within it. ETL (extract, transform, load) is a common process, but others may be used to address additional issues, including duplication, typos or unnecessary data. A minimal cleaning sketch appears after this list. 
  • Contextualize: Not all data is relevant to your business or initiative. Contextualizing the data determines what additional metadata may be required. 
  • Categorize: This further identifies key factors in datasets and extracts them based on the problem domain. 
  • Correlate: This step connects disparate data and concepts across various data stores. For instance, two datasets may refer to the same data point: a customer’s phone number might be stored under different field names or types in each database. Correlation resolves these conflicts by linking the records to a single data point. 
  • Catalog: The final step is to ensure data and metadata are securely stored, preserved and accessible across search and analysis platforms.
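
As an illustration of the “Clean” step, the sketch below removes duplicates, normalizes a free-text field and filters out impossible values. It reuses the hypothetical orders columns from the earlier example and covers only one small piece of what a full ETL process would do.

```python
import pandas as pd

def clean_orders(raw: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical 'Clean' step: deduplicate, normalize text, drop invalid values."""
    cleaned = raw.copy()
    # Remove exact duplicates, then duplicates on the assumed business key
    cleaned = cleaned.drop_duplicates().drop_duplicates(subset=["order_id"])
    # Normalize a free-text column to reduce typo-driven mismatches
    cleaned["country"] = cleaned["country"].str.strip().str.upper()
    # Drop rows with impossible values rather than silently keeping them
    cleaned = cleaned[cleaned["amount"].isna() | (cleaned["amount"] >= 0)]
    return cleaned
```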

Assessing data quality

Data quality should be measured against a framework of established standards and dimensions. Four major frameworks include: 

  • Data Quality Assessment Framework (DQAF) 
  • Total Data Quality Management (TDQM) 
  • Data Quality Scorecard (DQS) 
  • Data downtime 

These standards identify gaps in data and guide improvement over time. Some of the common metrics these frameworks address include the following; a few are sketched in code after the list: 

  • Error rate: The frequency of errors found in the data 
  • Completeness rate: The percentage of data that’s complete and available 
  • Consistency rate: The degree to which data is consistent across different datasets 
  • Timeliness rate: How current the data is
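
As a rough sketch of how a few of these metrics might be computed, the function below uses the same hypothetical orders columns. The validity rule, the updated_at column and the one-day freshness window are all assumptions, and a consistency rate is omitted because it requires comparing at least two datasets.

```python
import pandas as pd

def quality_metrics(df: pd.DataFrame, max_age_days: int = 1) -> dict:
    """Assumed, simplified definitions of a few data quality metrics."""
    # Completeness rate: share of cells that are populated
    completeness_rate = df.notna().sum().sum() / df.size
    # Error rate: share of rows violating a simple validity rule (assumed: non-negative amount)
    error_rate = (df["amount"] < 0).mean()
    # Timeliness rate: share of rows refreshed within the assumed freshness window
    timeliness_rate = None
    if "updated_at" in df.columns:
        age = pd.Timestamp.now(tz="UTC") - pd.to_datetime(df["updated_at"], utc=True)
        timeliness_rate = (age <= pd.Timedelta(days=max_age_days)).mean()
    return {
        "completeness_rate": completeness_rate,
        "error_rate": error_rate,
        "timeliness_rate": timeliness_rate,
    }
```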

Improving data quality

With huge, growing datasets and complex issues to resolve, improving data quality can be a challenge. Monitoring data quality should take place throughout the entire data lifecycle. Over the long term, this can result in more accurate analytics, smarter decisions and increased revenue. 

  • Data quality during ETL: The processes that ingest and transform data can themselves introduce mistakes. Checking data quality throughout the ingest, transformation and orchestration process can ensure ongoing accuracy and compliance. While data cleansing tools can automate the process of correcting or removing inaccurate or incomplete data from a dataset, no automation is perfect. Continual testing throughout this process can further ensure overall accuracy and quality; a sketch of such pipeline-level checks appears after this list.
  • Data quality and governance: Good data governance is essential to protect data and support data quality. Decide what the organizational standard for data quality should be and identify key stakeholders to own different parts of the process. It’s also important to develop a culture of data quality to ensure that everyone understands their role in maintaining data integrity. 
  • Data quality in testing: Data quality testing attempts to anticipate specific and known problems in any given dataset, while data profiling tools analyze data for quality issues and provide insights into patterns, outliers and anomalies. This should be done prior to any real-world deployment to ensure the accuracy of your results.
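
One way to apply checks during ingest and transformation is with pipeline-level expectations. The sketch below uses Delta Live Tables expectations as an example; the raw_orders source table and column names are hypothetical, and the code assumes it runs inside a Delta Live Tables pipeline where the dlt module and the spark session are available.

```python
import dlt

@dlt.table(comment="Orders with quality checks applied during transformation")
@dlt.expect("non_negative_amount", "amount >= 0")                # record violations but keep the rows
@dlt.expect_or_drop("order_id_present", "order_id IS NOT NULL")  # drop rows that fail the check
def orders_clean():
    # Hypothetical raw source table ingested elsewhere in the pipeline
    return spark.read.table("raw_orders").select("order_id", "amount", "country")
```

Expectations like these record how many rows violate each rule, which feeds directly into the monitoring and testing practices described above.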

Emerging data quality challenges

In a competitive business environment, organizations need to stay ahead by leveraging their data. AI and machine learning initiatives are becoming crucial for enterprises to generate insights and innovation from their data to stay competitive. Meanwhile, the shift to cloud-first capabilities and an explosion in the Internet of Things (IoT) have led to exponentially more data.  

The need for robust data quality practices has never been greater, but organizations face common challenges around building and maintaining good data quality: 

  • Incomplete or inaccurate data: Aggregating data from multiple sources may feature missing attributes, errors or duplications, which can lead to misleading or inaccurate decisions 
  • Poor data governance: Without strong data management best practices, data quality can suffer due to unclear roles or accountability 
  • Data volume and velocity: A growing amount of data presents challenges in real-time processing and reporting, potentially delaying insights 
  • Complex data sources: Systems increasingly collect unstructured data, such as photos and videos, which can challenge even the most carefully constructed data quality processes 
  • Monitoring practices: Organizations that lack rigorous data monitoring may allow quality issues to go undetected

As organizations double down on a data-driven approach led by AI and analytics, it will be crucial to centralize and streamline data quality practices. The better the data quality, the better organizations can make effective decisions, minimize errors and compete in a technologically advanced environment.
