Pragmatic Works Blog

You Don't Have Time for Data Testing?!

Written by John Welch | Sep 02, 2015

I’ve been a big advocate of testing for applications, databases, data warehouses, BI and analytics for a while now. Not just any testing, but real tests that help you truly verify the state of your code, applications and data. I like Test Driven Development, but I find any approach that focuses on automated, repeatable tests verifying meaningful functionality hugely beneficial. And almost no one I’ve ever talked to about this topic has disagreed with me. (There was that one guy, but he was a FoxPro developer, so...) But there’s often a point where the conversation goes sideways. Usually, it starts something like this:

Me: Automated testing is good. It verifies that what we build does what it’s designed to do, gives us confidence in what we create and gives us verifiable evidence of testing that we can share with management and the business to prove that we’re doing our jobs correctly.

BI Developer: Yeah, that does sound good. I really like knowing that the code I’ve written does what it’s supposed to and that I can verify that at any time. But it’s really tough to do for data and BI.

Me: It can be difficult because the cool tools we have on the app dev side for testing aren’t as easy to find for data-centric applications. But that’s getting better because companies are realizing there’s a need here and are working to address it. And you can do a lot with some effort on your part, even if you don’t go with a tool. All automated testing requires some investment of time – the tests can’t create themselves.

Here’s where we take the left turn into Crazy Town

Way too often, in this supposedly enlightened day and age, I get one of the following responses:

  • “If only we had time to implement tests”
  • “Management won’t let us have the time to test this”
  • “There’s not any testing time allocated in the project timeline”

Really? You don’t have time to verify that what you are creating actually works, and does what it’s supposed to? You don’t have time to set it up so that you can continue to verify that your creation works as other changes are made? Your business users don’t care if the data they see is accurate or not?

I can’t answer those first two questions for you. They depend on whether you take your job seriously or not. However, the answer to the third is “Yes, your business users do care whether they are seeing accurate data.” Having trusted data enables business leaders to make more informed decisions and sadly, only 50% of global leaders actually admit to knowing how to get value from their company’s data.1

Benefits of Data Testing

There are a few important things to keep in mind about testing. Yes, it takes some time up front. However, that time is an investment and it’s also insurance.

The investment pays off in time down the road. First, it reduces defects during the initial development. In fact, 60% of defects already exist during the design phase.2 Testing enables you to catch more of those up front. Second, it helps when you have to revisit that same code later and make updates. Now you can validate that you haven’t inadvertently broken something, and you can also easily prove that the tested application functionality hasn’t changed in unexpected ways. It also lets you validate that your code continues to work, even when other people are making changes. The end result is that you can make future changes faster, because you have a safety net of automated tests that make sure you don’t mess up.

The insurance part comes into play, because you have increased confidence that the requirements are met, the system works as expected, and the results the business users expect are delivered – and with automated tests, you can verify and prove that this is the case. It also gives you and your end users confidence in what you have produced.

In the responses mentioned above, I particularly like the one about management not giving the time to test. On occasion this is true – management doesn’t understand the consequences of not testing. I generally take the time to explain the potential outcomes of skipping automated tests:

  • You can’t prove that the development team actually tested anything.
  • You can’t have confidence that you are getting consistent, verifiable results from your system.
  • You can’t show the business that the results are verifiable.
  • You may not find out there are problems until the system has been in production for a while. This one is particularly important for systems with financial implications. Producing wrong numbers there tends to be associated with changes of employment.

There is also a cost to having poor data, somewhere in the range of 10-25% of total revenues.3 Start talking dollar savings and management tends to listen. But if management still doesn’t take testing seriously, maybe it’s time to consider whether you want to be around when something blows up - and it will, sooner or later.


At the core of this, it usually comes down to people not wanting to make the investment in testing upfront, or right now. It’s often seen as something that can be kicked down the road. Yet that inevitably leads to real problems being missed, and to systems that become more of a ticking time bomb the longer they are around. Do you want to be the one working on it when it goes off?

 

1. http://www.informationweek.com/big-data/big-data-analytics/4-ways-it-can-inspire-business-confidence/a/d-id/1320292

2. http://www.stevemcconnell.com/articles/art04.htm

3. http://download.101com.com/pub/tdwi/Files/DQReport.pdf