DevOps Test Data: Why Synthetic Is Wrong and Policy-Based Masking Is Right
To compete in digital markets, businesses must get new code into production quickly, frequently and with high confidence. That’s why DevOps is such an important part of IT — and test/QA is such an important part of DevOps.
But as DevOps teams test more code, they have to get more test data to more test/QA teams. That creates new challenges, including:
- Getting the right test data to test/QA teams more often
- Mitigating the security risks and meeting the regulatory constraints associated with test data
- Eliminating DevOps delays caused by test/QA teams waiting for test data
- Responsibly governing and auditing test data use — even as test/QA activity increases
DevOps teams that meet these challenges will be faster, safer and more efficient. So DevOps leaders should carefully re-evaluate their test data strategies before pushing too hard on the DevOps gas pedal.
Three decision factors are particularly important:
Decision #1: Masked vs. Synthetic
Some vendors claim that DevOps test/QA needs can be fully met with synthetic data. In practice, this rarely holds up. Anyone who has relied on synthetic test data generation knows what a time-consuming pain it is to re-configure that generation for every test case. Plus, synthetic data invariably raises questions about how faithfully it reflects the characteristics of real production data.
Masking is far more practical. With masking, you use data that already exists and already fulfills your requirements. Plus, if you use modern masking techniques and standardize your masking policies, your test data creation will be highly replicable and efficient. The result is testing that is cheaper, faster and more reliable.
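The replicability claim above hinges on the masking rules being deterministic: the same input always produces the same masked output, across every test/QA environment. A minimal sketch of that idea, assuming a keyed hash as the deterministic rule (the key and function name here are illustrative, not any particular vendor's implementation):

```python
import hashlib
import hmac

# Hypothetical masking key; in practice this would be managed centrally,
# never hard-coded in source.
SECRET_KEY = b"rotate-me-outside-source-control"

def mask_digits(value: str, key: bytes = SECRET_KEY) -> str:
    """Deterministically replace each digit in a value, preserving its
    length, punctuation, and digit positions so formats stay valid."""
    out = []
    for i, ch in enumerate(value):
        if ch.isdigit():
            # Keyed hash of (position, digit) -> a replacement digit.
            digest = hmac.new(key, f"{i}:{ch}".encode(), hashlib.sha256).digest()
            out.append(str(digest[0] % 10))
        else:
            out.append(ch)  # keep separators like '-' intact
    return "".join(out)
```

Because the replacement depends only on the key and the input, two teams masking the same source record get identical test data, which is what makes standardized masking policies replicable.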
Decision #2: Centralized Policy vs. Ad Hoc
Some masking solutions do a good job of protecting data while preserving the alphanumeric attributes essential for accurate testing. Unfortunately, they don’t do a good job of managing the use of test data across an ever-growing number of increasingly dispersed DevOps test/QA events.
The key to effective test data management is metadata-based policy. Such policies empower enterprise data governance leaders — rather than dispersed test/QA teams — to determine how data is used anywhere and everywhere. On-the-fly policy-based masking also enforces separation of duties by eliminating the need for test/QA staff to touch underlying datasets or masking rules. Plus, with centralized management, test data usage can be audited via a single, credible reporting mechanism.
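One way to picture a metadata-based policy is as a central mapping from data classifications to masking rules, applied on the fly as rows are delivered — so test/QA teams consume masked output without ever touching the rules. The classification names and rules below are hypothetical illustrations, not a real product's schema:

```python
# Hypothetical central policy: data-governance owners map metadata
# classifications to masking rules in one place.
MASKING_POLICY = {
    "PII.SSN":   lambda v: "XXX-XX-" + v[-4:],          # partial redaction
    "PII.EMAIL": lambda v: "masked@example.com",        # value replacement
    "PUBLIC":    lambda v: v,                           # pass through
}

def apply_policy(row: dict, classifications: dict) -> dict:
    """Mask each field per its metadata classification, on the fly.
    Fields with no classification default to PUBLIC pass-through."""
    return {
        field: MASKING_POLICY[classifications.get(field, "PUBLIC")](value)
        for field, value in row.items()
    }
```

Because the policy lives in one place, changing a rule changes it for every downstream test/QA event at once, and a single audit trail can record which rule was applied to which field.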
Decision #3: Any Rules for Any Data, Anywhere – Including the Mainframe
Successful data masking ultimately requires the ability to apply any desired rules to any type of data, regardless of source. So DevOps decision-makers should carefully evaluate solution attributes such as format-preserving encryption, flexible value replacement, and built-in intelligent value interrogation.
Mainframe databases and applications can be especially challenging in this regard. COBOL REDEFINES and OCCURS clauses, among other platform idiosyncrasies, often limit the effectiveness of masking solutions. Those limitations are unacceptable given the mainframe’s role as any large enterprise’s most important data host.
The bottom line: You can’t accelerate and scale DevOps unless you also accelerate and scale DevOps test data delivery. That’s why every DevOps leader should take a close look at Compuware’s Test Data Privacy solution, which addresses the complexities of mainframe data more effectively than any other solution on the market.
Latest posts by John Crossno
- DevOps Test Data: Why Synthetic Is Wrong and Policy-Based Masking Is Right - November 17, 2016