What is the 1-10-100 rule of data quality? 

The 1-10-100 Rule pertains to the cost of bad data quality. As digital transformation becomes more prevalent and necessary for businesses, data across a company is integral to operations, executive decision-making, strategy, execution, and outstanding customer service. Yet many enterprises are plagued by data riddled with errors: duplicate records containing different information for the same customer, different spellings of names, different addresses, more than one account for the same vendor (with inconsistent pricing), and conflicting information about a customer’s lifetime value or purchasing history. As a result, reports and dashboards are often not trusted because the data underlying them is not trusted. Business operations inevitably involve manual data entry, and errors come with it.

The true cost to an organization of conducting operations and making decisions based on error-riddled data is difficult to calculate. That is why G. Labovitz and Y. Chang set out to study enterprises and measure the cost of bad data quality. The 1-10-100 Rule was born from that research.

In data quality, the cost of verifying a record as it is entered is $1 per record. The cost of remediation to fix errors after they are created is $10 per record. The cost of inaction is $100 per record per year.

The Harvard Business Review reveals that on average, 47% of newly created records contain errors significant enough to impact operations.

If we combine the 1-10-100 Rule, using $100 per record for failing to fix data errors, with the Harvard Business Review statistic on how common such errors are, the cost of poor data quality adds up rapidly. For an enterprise with 1,000,000 records, 470,000 contain errors, each costing the enterprise $100 per year in opportunity cost, operational cost, and more. That amounts to $47,000,000 per year. Had the enterprise cleansed the data after the fact, the clean-up effort would have cost $4,700,000; had the records been verified upon entry, the cost would have been $470,000. Errors caused by manual data entry are inherent in business operations, and even with humans eyeballing records as they are entered, errors slip through. This is why investing in an automated data management platform with built-in data quality delivers huge cost savings to an organization. Our solution, Aunsight Golden Record, can help organizations mitigate these data issues by automating data integration and cleansing.
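
To see how those figures compound, here is a quick back-of-the-envelope calculation in Python. The error rate and per-record costs come from the rule and the HBR statistic above; the 1,000,000-record enterprise is simply this article's illustrative example.

```python
# Back-of-the-envelope cost comparison using the 1-10-100 Rule and the
# HBR error-rate figure cited above. Record count is the article's example.
TOTAL_RECORDS = 1_000_000
ERROR_RATE = 0.47            # ~47% of newly created records contain errors

COST_VERIFY_AT_ENTRY = 1     # $ per record, verified as it is entered
COST_REMEDIATE_LATER = 10    # $ per record, fixed after creation
COST_OF_INACTION = 100       # $ per record, per year, if never fixed

bad_records = int(TOTAL_RECORDS * ERROR_RATE)   # 470,000 records with errors

print(f"Verify at entry:      ${bad_records * COST_VERIFY_AT_ENTRY:,}")   # $470,000
print(f"Remediate afterwards: ${bad_records * COST_REMEDIATE_LATER:,}")   # $4,700,000
print(f"Do nothing (1 year):  ${bad_records * COST_OF_INACTION:,}")       # $47,000,000
```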


Daybreak's built-in data connectors and integrations speed insights for financial institutions

Got data? There’s a Daybreak connector for that! Our customer intelligence data platform, Daybreak™ for Financial Services, has built-in connectors for the financial services industry so credit unions and banks can put an end to siloed, disparate data. Daybreak connects to most relevant data sources, including core, lending, wealth, CRMs, and mobile banking. Whether structured or unstructured, on-prem or in the cloud, Daybreak can handle all types of data and sources.

In addition, Daybreak can feed cleaned, updated data to other systems you may be using, including BI platforms or analytics tools, through pre-built integrations. Watch the video below to learn more about Daybreak’s connectors and integrations.


Failure to follow Data Privacy Compliance requirements can be costly. How can you prepare your business?

GDPR, CCPA, and the upcoming CPRA (which goes into effect January 1, 2023) require rigorous data management, and the cost of non-compliance can rise to $1,000 per record. These data privacy laws pertain broadly to consumers’ personal information, including:

  • Account and login information
  • GPS and other location data
  • Social security numbers
  • Health information
  • Driver’s license and passport information
  • Genetic and biometric data
  • Mail, email, and text message content
  • Personal communications
  • Financial account information
  • Browsing history, search history, or online behavior
  • Information about race, ethnicity, religion, union membership, sex life, or sexual orientation

CPRA requires businesses to inform consumers how long they will retain each category of data and the criteria used to determine what retention period is “reasonably necessary.” In short, you have to be prepared to justify data collection, processing, and retention, and tie each directly to a legitimate business purpose.
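
As a rough sketch of what that looks like in practice, a retention policy can be expressed per data category, with each period tied to a documented business purpose. The categories, periods, and purposes below are purely illustrative assumptions, not legal guidance.

```python
# Hypothetical per-category retention policy (illustrative only, not legal guidance).
# Each category ties a retention period to a documented business purpose, which is
# what the CPRA asks businesses to be able to disclose and justify.

RETENTION_POLICY = {
    "account_login":      {"retain_days": 730,  "purpose": "account servicing"},
    "location_data":      {"retain_days": 90,   "purpose": "fraud detection"},
    "browsing_history":   {"retain_days": 365,  "purpose": "site analytics"},
    "financial_accounts": {"retain_days": 2555, "purpose": "regulatory record-keeping"},
}

def is_expired(category: str, age_days: int) -> bool:
    """Return True when a record in this category has exceeded its retention period."""
    return age_days > RETENTION_POLICY[category]["retain_days"]

print(is_expired("location_data", age_days=120))   # True: 120 days exceeds the 90-day period
```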

The CPRA also restricts “sharing” data (beyond buying and selling it), which is defined as: sharing, renting, releasing, disclosing, disseminating, making available, transferring, or otherwise communicating orally, in writing, or by electronic or other means, a consumer’s personal information by the business to a third party for cross-context behavioral advertising, whether or not for monetary or other valuable consideration, including transactions between a business and a third party for the benefit of a business in which no money is exchanged. Arguably, this covers a business sharing data about leads or prospects with a marketing agency to employ cross-context behavioral advertising in a campaign.

Companies now need to be able to tell consumers what types of data they hold about them, the business purposes that data is used for, and how long it is retained. This information allows them to meet the expanded consumer rights granted by the CPRA, including deleting data, limiting how certain types of data may be used, correcting data errors in every location where the data lives across the organization, automating and executing data retention policies, and being ready for audits. With these data privacy requirements in force, companies must weigh possible non-compliance costs on top of operational and opportunity costs.

As a result of these requirements, companies need a data management system with built-in data governance to stay in compliance. Such platforms must be capable of identifying, on a field-by-field basis, where the data originates, every change made to it, and each downstream user of the data (databases, apps, analytics, queries, extracts, dashboards). Without automated data management and governance, it is practically impossible to find this information manually within the required deadlines. You also need to automatically replicate changes to every location where the data resides throughout your organization, so that consumer-directed corrections and deletions can be made within the prescribed time limits.
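
To make the field-level requirement concrete, lineage metadata might be captured along these lines. This is a simplified sketch under assumed names (the FieldLineage structure, the example systems, and the consumers are hypothetical), not a specific product’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Simplified sketch of field-level lineage metadata: where a value came from,
# every change made to it, and which downstream consumers use it.

@dataclass
class FieldLineage:
    field_name: str                                 # e.g. "customer.email"
    source_system: str                              # originating system of record
    changes: list = field(default_factory=list)     # (timestamp, old, new, changed_by)
    consumers: list = field(default_factory=list)   # dashboards, extracts, apps using this field

    def record_change(self, old, new, changed_by: str):
        """Log one change so every edit to the field remains auditable."""
        self.changes.append((datetime.utcnow(), old, new, changed_by))

lineage = FieldLineage("customer.email", source_system="core_banking")
lineage.consumers += ["crm_sync", "marketing_dashboard"]
lineage.record_change("j.doe@old.com", "j.doe@new.com", changed_by="csr_142")
print(lineage.consumers, len(lineage.changes))   # downstream users and audit trail length
```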


Critical Success Factors for Data Accuracy Platforms

The data accuracy market is currently undergoing a paradigm shift from complex, monolithic, on-premise solutions to nimble, lightweight, cloud-first solutions. As the production of data accelerates, the costs associated with maintaining bad data will grow exponentially and companies will no longer have the luxury of putting data quality concerns on the shelf to be dealt with “tomorrow.”

When analyzing critical success factors for data accuracy platforms in this rapidly evolving market, four are especially important to evaluate in every buyer’s journey: installation, business adoption and use, return on investment, and business intelligence and analytics.

Installation

When executing against corporate data strategies, it is imperative to show measured progress quickly. Complex installations that require cross-functional technical teams and invasive changes to your infrastructure will prevent data governance leaders from demonstrating tangible results within a reasonable time frame. That is why it is critical that next-gen data accuracy platforms be easy to install.

Business Adoption & Use

Many of the data accuracy solutions available on the market today are packed with so many complicated configuration options and features that they require extensive technical training in order to be used properly. When the barrier to adoption and use is so high, showing results fast is nearly impossible. That is why it is critical that data accuracy solutions be easy to adopt and use.

Return on Investment

The ability to demonstrate ROI quickly is a critical enabler for securing executive buy-in and garnering organizational support for an enterprise data governance program. In addition to being easy to install, adopt, and use, next-gen data accuracy solutions must also make it easy to track progress against your enterprise data governance goals.

Business Intelligence & Analytics

At the end of the day, a data accuracy program will be judged on the extent to which it can enable powerful analytics capabilities across the organization. Having clean data is one thing. Leveraging that data to gain competitive advantage through actionable insights is another. Data accuracy platforms must be capable of preparing data for processing by best-in-class machine learning and data analytics engines.

Look for solutions that offer data profiling, data enrichment and master data management tools and that can aggregate and cleanse data across highly disparate data sources and organize it for consumption by analytics engines both inside and outside the data warehouse.


Daybreak's Predictive Smart Features Add Additional Value for Credit Unions

Is your credit union in a position to hire experienced data scientists to develop predictive algorithms that enrich your member relationships? With Aunalytics, you can take advantage of an entire team of data science talent. Our customer intelligence data platform, Daybreak™ for Financial Services, includes industry-relevant Smart Features™: high-value data fields created by our Innovation Lab using advanced AI to provide high-impact insights.

Watch the video below to learn more about how Smart Features provide additional value for credit unions.


How Cybersecurity Mitigation Efforts Affect Insurance Premiums, and How to Keep Your Business Secure

Cyberattacks have increased sharply over the past year. According to an August 2021 survey by IDC, more than one-third of organizations globally experienced a ransomware attack or a breach that blocked access to systems or data in the preceding twelve months. As a result, insurance companies are tightening eligibility requirements for cybersecurity coverage and requiring policyholders to maintain higher standards of data security to qualify for better rates, and sometimes for renewal at all. Rates are increasing for 2022, in some cases by as much as 100%, even for companies without any cyber incidents.

If you have received a renewal notice with a shocking sticker price for 2022, it is time to review your internal controls and security to see whether additional data protections could lower your rate. Worse, if you have received notice that your business insurance policies now exclude cyber coverage, data theft, or privacy breaches, you may be forced to shop for new cyber coverage at a time when attacks are at an all-time high. Without adequate security controls, obtaining coverage may be impossible. Given the high cost of data breach incidents, you need to make sure that you are eligible for cyber coverage, but what does that take for 2022?

Aunalytics compliance and security experts are ready to help. We provide Advanced Security and Advanced Compliance managed services, including auditing your practices and helping you mature your cybersecurity processes, technology, and safeguards to meet the latest standards and counter new cyberattack threats as they emerge. Security maturity is a journey, and best practices have changed dramatically over the years. Threats evolve over time, and so too must your cyber protection for your business to remain compliant and operational.


Think your financial institution is immune to ransomware? Think again.

Many organizations in the financial services sector don’t expect to be hit by ransomware. In the recent State of Ransomware in Financial Services 2021 survey by Sophos, 119 financial services respondents indicated that their organizations were not hit by any ransomware attacks in the past year, and they do not expect to be hit by them in the future either.

The respondents mentioned that their confidence relied on the following beliefs:

  • They are not targets for ransomware
  • They possess cybersecurity insurance against ransomware
  • They have air-gapped backups to restore any lost data
  • They work with specialist cybersecurity companies which run full Security Operations Centers (SOC)
  • They have anti-ransomware technology in place
  • They have trained IT security staff who can hinder ransomware attacks

However, it’s not all good news; some results are cause for concern. Many financial services respondents that don’t expect to be hit (61%) are putting their faith in approaches that offer no protection against ransomware.

  • 41% cited cybersecurity insurance against ransomware. Insurance helps cover the cost of dealing with an attack, but doesn’t stop the attack itself.
  • 42% cited air-gapped backups. While backups are valuable tools for restoring data after an attack, they don’t stop you from getting hit.

While many organizations believe they have the correct safeguards in place to mitigate ransomware attacks, 11% believe that they are not a target of ransomware at all. Sadly, this is not true. No organization is safe. So, what are financial institutions to do?

While advanced and automated technologies are essential elements of an effective anti-ransomware defense, stopping hands-on attackers also requires human monitoring and intervention by skilled professionals. Whether in-house staff or outsourced pros, human experts are uniquely able to identify some of the tell-tale signs that ransomware attackers have you in their sights. It is strongly recommended that all organizations build up their human expertise in the face of the ongoing ransomware threat.


Credit Unions Realize Business Outcomes Sooner by Utilizing Aunalytics' Expertise

Aunalytics speeds time to value for credit unions looking to dive into analytics. By offering a well-rounded team that includes technology experts, such as data engineers and data scientists, and subject matter experts with experience in the financial services industry, Aunalytics is able to implement our Daybreak solution quickly and get credit unions actionable business answers using machine learning and AI.

Watch the video below to learn more about how our team gets credit unions the answers they need faster.


Data Scientists: Take Data Wrangling and Prep Out of Your Job Description

It is a well-known industry problem that data scientists typically spend at least 80% of their time finding and prepping data instead of analyzing it. The IBM study that originally published this statistic dates to even before most organizations adopted separate best-of-breed applications in functional business units. Typically, there is not one central data source used by the entire company; instead, multiple data silos exist throughout an organization due to decentralized purchasing and the adoption of applications best suited to a particular use or business function. This means that data scientists must cobble together data from multiple sources, often with separate “owners,” negotiate with IT and the various data owners to extract the data and get it into their analytics environment, and then make it usable. Having data analysts and data scientists build data pipelines is not only a complex technical problem but also a complex political problem.

To visualize analytics results, a data analyst is not expected to build a new dashboard application. Instead, Tableau, Power BI and other out-of-the-box solutions are used. So why require a data analyst to build data pipelines instead of using an out-of-the-box solution that can build, integrate, and wrangle pipelines of data in a matter of minutes? Because it saves time, money, and angst to:

  • Automate data pipeline building and wrangling for analytics-ready data to be delivered to the analysts;
  • Automatically create a real-time stream of integrated, munged, and cleansed data, and feed it into analytics so that current and trusted data is available for immediate decision-making;
  • Have automated data profiling that gives analysts insight into metadata, outliers, and ranges and automatically detects data quality issues (see the sketch after this list);
  • Have built-in data governance and lineage so that a single piece of data can be tracked to its source; and
  • Automatically detect when changes are made to source data and update the analytics without blowing up algorithms.
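
As an illustration of the profiling bullet above, here is a minimal sketch using pandas. The column names and sample values are hypothetical; a platform would generate this kind of summary automatically for every dataset.

```python
import pandas as pd

# Minimal sketch of automated profiling on an analytics-ready table
# (column names and values are illustrative, not from a specific system).

def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize each column: type, null rate, distinct count, and numeric range."""
    rows = []
    for col in df.columns:
        s = df[col]
        rows.append({
            "column": col,
            "dtype": str(s.dtype),
            "null_pct": round(s.isna().mean() * 100, 2),
            "distinct": s.nunique(),
            "min": s.min() if pd.api.types.is_numeric_dtype(s) else None,
            "max": s.max() if pd.api.types.is_numeric_dtype(s) else None,
        })
    return pd.DataFrame(rows)

sample = pd.DataFrame({
    "member_id": [101, 102, 103, 103],
    "balance": [2500.0, None, 130000.0, 130000.0],   # a null and an outlier surface in the profile
})
print(profile(sample))
```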

Building data pipelines is not the best use of a data scientist’s time. Data scientists instead need to spend their highly compensated time developing models, examining statistical results, comparing models, checking interpretations, and iterating. This is particularly important given the labor shortage of these highly skilled workers in both North America and Europe; adding more FTEs to the cost of analytics projects makes them harder for management to justify.

According to David Cieslak, Ph.D., Chief Data Scientist at Aunalytics, without investment in automation and data democratization, the rate at which an organization can execute on data analytics use cases, and realize the business value, is directly proportional to the number of data engineers, data scientists, and data analysts hired. Using a data platform with built-in data integration and cleansing to automatically create analytics-ready pipelines of business information allows data scientists to concentrate on creating analytical results. This enables companies to rapidly build insights based upon trusted data and deliver those insights to clients in a fraction of the time it would take if they had to wrangle the data manually.

Our Solution: Aunsight Golden Record

Aunsight™ Golden Record turns siloed data from disparate systems into a single source of truth across your enterprise. Built for data accuracy, our cloud-native platform cleanses data to reduce errors, and Golden Record as a Service matches and merges data into a single source of accurate business information, giving you access to consistent, trusted data across your organization in real time. With this self-service offering, unify all your data to ensure enterprise-wide consistency and better decision-making.
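
For a sense of the underlying idea, here is a minimal match-and-merge sketch: two records for the same customer from different systems are matched on a normalized email and merged field by field, preferring the most recent non-empty value. The record fields and survivorship rule are illustrative assumptions, not the actual Golden Record implementation.

```python
from datetime import date

# Illustrative match-and-merge: two records for the same customer are matched on a
# normalized email and merged field by field, keeping the newest non-empty value.
# Fields and merge rule are assumptions, not the Golden Record implementation.

crm_rec  = {"email": "Jane.Doe@Example.com", "name": "Jane Doe", "phone": "",
            "updated": date(2022, 6, 15)}
core_rec = {"email": "jane.doe@example.com", "name": "J. Doe", "phone": "555-0101",
            "updated": date(2022, 3, 1)}

def match_key(rec: dict) -> str:
    """Normalize the field used to decide whether two records are the same customer."""
    return rec["email"].strip().lower()

def merge(records: list[dict]) -> dict:
    """Merge matched records, keeping the newest non-empty value for each field."""
    golden: dict = {}
    for rec in sorted(records, key=lambda r: r["updated"], reverse=True):
        for field_name, value in rec.items():
            if field_name not in golden and value not in ("", None):
                golden[field_name] = value
    return golden

assert match_key(crm_rec) == match_key(core_rec)    # same customer across both systems
print(merge([crm_rec, core_rec]))
# {'email': 'Jane.Doe@Example.com', 'name': 'Jane Doe',
#  'updated': datetime.date(2022, 6, 15), 'phone': '555-0101'}
```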


Meeting the Challenges of Digital Transformation and Data Governance

The Data Accuracy market (traditionally defined in terms of Data Quality and Master Data Management) is currently undergoing a paradigm shift from complex, monolithic, on-premise solutions to nimble, lightweight, cloud-first solutions. As the production of data accelerates, the costs associated with maintaining bad data will grow exponentially and companies will no longer have the luxury of putting data quality concerns on the shelf to be dealt with “tomorrow.”

To meet these challenges, companies will be tempted to turn to traditional Data Quality (DQ) and Master Data Management (MDM) solutions for help. However, it is now clear that traditional solutions have not made good on the promise of helping organizations achieve their data quality goals. In 2017, the Harvard Business Review reported that only 3 percent of companies’ data currently meets basic data quality standards even though traditional solutions have been on the market for well over a decade.1

The failure of traditional solutions to help organizations meet these challenges is due to at least two factors. First, traditional solutions typically require exorbitant quantities of time, money, and human resources to implement. Traditional installations can take months or years, and often require prolonged interaction with the IT department. Extensive infrastructure changes need to be made and substantial amounts of custom code need to be written just to get the system up and running. As a result, only a small subset of the company’s systems may be selected for participation in the data quality efforts, making it nearly impossible to demonstrate progress against data quality goals.

Second, traditional solutions struggle to interact with big data, which is an exponentially growing source of low-quality data within modern organizations. This is because traditional systems typically require source data to be organized into relational schemas and formatted under traditional data types, whereas most big data is semi-structured or unstructured. Furthermore, these solutions can only connect to data at rest, which means they cannot interact with data streaming directly out of IoT devices, edge services, or click logs.

Yet, demand for data quality grows. Gartner predicts that by 2023, intercloud and hybrid environments will realign from primarily managing data stores to integration. 

Therefore, a new generation of cloud-native Data Accuracy solutions is needed to meet the challenges of digital transformation and modern data governance. These solutions must be capable of ingesting massive quantities of real-time, semi-structured or unstructured data, and of processing that data both in-place and in-motion.2 These solutions must also be easy for companies to install, configure, and use, so that ROI can be demonstrated quickly. As such, the Data Accuracy market will be won by vendors who can empower business users with point-and-click installations, best-in-class usability, and exceptional scalability, while also enabling companies to capitalize on emerging trends in big data, IoT, and machine learning.
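
As a final sketch of what “in-motion” processing can look like, the snippet below validates semi-structured JSON events as they arrive rather than cleansing them later at rest. The event fields and rules are illustrative assumptions, not a reference to any particular product.

```python
import json

# Minimal sketch of in-motion quality checks on a stream of semi-structured events.
# Each JSON record is validated as it arrives instead of being cleansed later at rest.
# Field names and rules are illustrative assumptions, not a specific product's schema.

REQUIRED_FIELDS = {"device_id", "timestamp", "reading"}

def validate(event: dict) -> list[str]:
    """Return a list of quality issues found in a single streaming event."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS - event.keys()]
    if "reading" in event and not isinstance(event["reading"], (int, float)):
        issues.append("reading is not numeric")
    return issues

stream = [
    '{"device_id": "iot-7", "timestamp": "2022-01-05T10:00:00Z", "reading": 21.4}',
    '{"device_id": "iot-7", "reading": "n/a"}',
]

for raw in stream:
    problems = validate(json.loads(raw))
    print("OK" if not problems else f"REJECTED: {problems}")
```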

1. Tadhg Nagle, Thomas C. Redman, and David Sammon (2017). Only 3% of Companies’ Data Meets Basic Quality Standards. Harvard Business Review. Retrieved from https://hbr.org/2017/09/only-3-of-companies-data-meets-basic-quality-standards

2. Michael Ger and Richard Dobson (2018). Digital Transformation and the New Data Quality Imperative. Retrieved from https://2xbbhjxc6wk3v21p62t8n4d4-wpengine.netdna-ssl.com/wp-content/uploads/2018/08/Digital-Transformation.pdf