The Data Accuracy market (traditionally defined in terms of Data Quality and Master Data Management) is currently undergoing a paradigm shift from complex, monolithic, on-premise solutions to nimble, lightweight, cloud-first solutions. As the production of data accelerates, the costs associated with maintaining bad data will grow exponentially and companies will no longer have the luxury of putting data quality concerns on the shelf to be dealt with “tomorrow.”

To meet these challenges, companies will be tempted to turn to traditional Data Quality (DQ) and Master Data Management (MDM) solutions for help. However, it is now clear that traditional solutions have not made good on their promise of helping organizations achieve their data quality goals. In 2017, the Harvard Business Review reported that only 3 percent of companies’ data met basic data quality standards, even though traditional solutions had been on the market for well over a decade.1

The failure of traditional solutions to help organizations meet these challenges is due to at least two factors. First, traditional solutions typically require exorbitant amounts of time, money, and human resources to implement. Installations can take months or years and often require prolonged involvement from the IT department. Extensive infrastructure changes must be made, and substantial amounts of custom code must be written, just to get the system up and running. As a result, only a small subset of the company’s systems may be enrolled in the data quality effort, making it nearly impossible to demonstrate progress against data quality goals.

Second, traditional solutions struggle to interact with big data, which is an exponentially growing source of low-quality data within modern organizations. Traditional systems typically require source data to be organized into relational schemas and formatted under traditional data types, whereas most big data is semi-structured or unstructured. Furthermore, these solutions can only connect to data at rest, which means they cannot interact with data streaming directly out of IoT devices, edge services, or click logs.
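As a rough illustration of this mismatch (the event fields below are hypothetical, not taken from any particular product), a semi-structured click-log record carries nested and optional attributes that vary from one event to the next, so it cannot be forced into a fixed relational schema without lossy flattening. A schema-on-read check can still validate whatever fields happen to be present:

```python
import json

# Hypothetical click-log event: nested, with optional attributes that vary
# per record, so it does not map cleanly onto a fixed relational schema.
event = json.loads("""
{
  "event_id": "c9f1-77a2",
  "timestamp": "2018-06-04T12:31:07Z",
  "user": {"id": 1842, "segment": "trial"},
  "page": "/pricing",
  "utm": {"source": "newsletter", "campaign": null},
  "device": {"type": "mobile", "os": "iOS"}
}
""")

# Schema-on-read: quality rules are applied to whatever fields are present,
# instead of first forcing the record into predefined relational columns.
issues = []
if not event.get("event_id"):
    issues.append("missing event_id")
if event.get("user", {}).get("id") is None:
    issues.append("missing user.id")
if event.get("timestamp", "").count(":") != 2:
    issues.append("malformed timestamp")

print(issues or "record passed basic checks")
```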

Yet demand for data quality continues to grow. Gartner predicts that by 2023, the focus of intercloud and hybrid environments will shift from managing data stores to data integration.

Therefore, a new generation of cloud-native Data Accuracy solutions is needed to meet the challenges of digital transformation and modern data governance. These solutions must be capable of ingesting massive quantities of real-time, semi-structured, or unstructured data, and of processing that data both in place and in motion.2 They must also be easy for companies to install, configure, and use, so that ROI can be demonstrated quickly. As such, the Data Accuracy market will be won by vendors who can empower business users with point-and-click installations, best-in-class usability, and exceptional scalability, while also enabling companies to capitalize on emerging trends in big data, IoT, and machine learning.
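As a minimal sketch of what “in-motion” processing can look like (the device names, fields, and thresholds are assumptions, and a simple generator stands in for a real streaming consumer such as Kafka or Kinesis), the snippet below applies lightweight completeness and validity rules to each record as it streams past, rather than waiting for the data to land in a warehouse:

```python
import json
import random
import time
from collections import Counter

def sensor_stream(n=1000):
    """Simulated IoT readings; a stand-in for a real streaming consumer."""
    for _ in range(n):
        reading = {"device_id": f"dev-{random.randint(1, 20)}",
                   "temp_c": round(random.uniform(-5.0, 45.0), 1),
                   "ts": time.time()}
        if random.random() < 0.05:          # inject some incomplete records
            reading.pop("device_id")
        if random.random() < 0.03:          # inject some out-of-range values
            reading["temp_c"] = 999.0
        yield json.dumps(reading)

def validate(raw):
    """Apply simple quality rules to one record while it is in motion."""
    rec = json.loads(raw)
    if "device_id" not in rec:
        return "missing_device_id"
    if not -40.0 <= rec["temp_c"] <= 85.0:
        return "temp_out_of_range"
    return "ok"

# Tally results as the stream flows by; no batch load or relational schema needed.
tally = Counter(validate(msg) for msg in sensor_stream())
print(dict(tally))   # e.g. {'ok': 921, 'missing_device_id': 53, 'temp_out_of_range': 26}
```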

1. Tadhg Nagle, Thomas C. Redman, David Sammon (2017). Only 3% of Companies’ Data Meets Basic Data Quality Standards. Retrieved from https://hbr.org/2017/09/only-3-of-companies-data-meets-basic-quality-standards

2. Michael Ger, Richard Dobson (2018). Digital Transformation and the New Data Quality Imperative. Retrieved from https://2xbbhjxc6wk3v21p62t8n4d4-wpengine.netdna-ssl.com/wp-content/uploads/2018/08/Digital-Transformation.pdf