Over time, the integrity of information erodes and can be called into question due to unintended events: loss, corruption, manipulation, degradation, and human error in handling. Depending on the type, use, and requirements of the information, this life cycle may be relatively brief or may span years or decades. Data integrity, or data quality, is the maintenance and assurance of the accuracy of information over its entire life cycle, and the practice is becoming increasingly crucial to the enterprise.

Why? Consider the effect on data as control changes within an organization, as it passes through departments, or as it is passed outside the organization to counterparties. Now consider scenarios where the creators or intermediaries of data are not the ultimate users of the information and have no incentive to ensure that it is reliable. Quite commonly, changes to data, and lapses in its reliability, go unnoticed until the information is recalled for further use. This is why data integrity has become a crucial practice.

The costs are significant

Considering the vast amount of data an enterprise creates every day, the task of ensuring accurate data is becoming increasingly complex and expensive. A recent IBM survey estimates that:

  • Poor data quality costs the US economy $3.1 trillion a year
  • 27% of the respondents were unsure of how much of their data was accurate
  • 1 in 3 business leaders don’t trust the information that they use to make decisions

The costs are commonly concealed with routine activities

Often the costs are not obvious, because the activities required to prove the accuracy of information have become all too commonplace. We have all spent time searching for sources to confirm information we do not trust, and considerable time locating, correcting, cleaning, and re-creating it.

There are also regulatory and legal considerations

Aside from being a best practice, a data integrity program that mitigates data risks and costs is often a federal and state requirement for an enterprise. Many regulated entities face specific data integrity and audit trail requirements, along with associated retention requirements. And when faced with legal challenges or e-discovery requests, producing inaccurate information can carry legal and financial consequences.

Data realities

  • Data is one of the enterprise’s most valuable and important assets    
  • Enterprise brand and reputation are tied to the quality of data
  • Decisions at every level are based on data that may be inaccurate
  • Data growth is exponential

Why ULedger Data Integrity?   

  • ULedger creates a third-party, tamper-proof audit trail of the complete life cycle of data
  • All data activity (content author, time and date, device, edits, transfers) is captured
  • ULedger captures any file type and file size
  • ULedger adds certainty, proof, integrity, and the relative order of events to data

About ULedger

ULedger is designed to be minimally invasive to an existing technology infrastructure: it exposes RESTful APIs that allow easy integration with existing data management environments. Through this process, each database underpinned by ULedger becomes its own Blockchain, so an entity can have more than one Blockchain. ULedger hashes and timestamps the metadata of every transaction that occurs on the database(s); the hash, timestamp, and metadata are then posted to a public network of ULedger Blockchain nodes.
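The hash-and-timestamp step can be sketched roughly as follows. This is an illustrative sketch only, not ULedger's actual implementation: the field names, the use of SHA-256, and canonical JSON serialization are all assumptions made for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

def hash_transaction_metadata(metadata: dict) -> dict:
    """Canonicalize transaction metadata, hash it, and attach a UTC timestamp.

    Only a record like this (hash + timestamp + metadata) would be posted to
    the public nodes; the underlying file contents stay in the source database.
    """
    # Canonical serialization: sorted keys, no whitespace, so the same
    # metadata always produces the same digest.
    canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return {
        "hash": digest,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "metadata": metadata,
    }

# Hypothetical transaction metadata for a single edit event.
record = hash_transaction_metadata({
    "author": "j.doe",
    "device": "workstation-12",
    "action": "edit",
    "file": "q3_report.xlsx",
})
```

Because the digest depends only on the canonicalized metadata, any two parties holding the same metadata can recompute and compare it independently.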

Our hybrid approach ensures that the underlying data remains secure and private while benefiting from a distributed and tamper-proof ledger.  This approach delivers a highly scalable solution intended for enterprise data loads and security requirements.