Rigorous Testing Leads to Trust

I want to go back and revisit a topic that we touched on in a previous episode: trusting your reports. We’re building Reportopia, and there’s not much that’s more important than being able to trust the reports that are being generated. At the end of the day, you need to look at those reports, or someone needs to utilize those reports, to make decisions that are going to drive your business. It’s either going to make you more successful or it’s not. Whether those reports are accurate has a lot of repercussions. If people find that they’re not accurate, they’re not going to use the reports. Even worse, maybe they do use the reports and they make decisions that have bad outcomes. So, trust is important when it comes to reporting.

How do we get to the point where we can really trust our reports? We talked about that a little bit in the past, and a big part of it is testing. We’re going to focus on testing. Specifically, I want to cover testing from your perspective, the business user’s perspective. What is your role in testing? I also want to make sure that you understand what the developer’s role is, so that you have a comprehensive view of what’s being done to ensure that the data being presented in these reports is accurate. That also forces us to define what “accurate” means.

Now, as a little bit of background, let’s assume that we have already collected requirements. The business analyst has collected the requirements, the developers have consumed those requirements, and we’re at the point in the process where the initial round of development is complete. The developer looked at the requirements, did whatever had to be done, and ultimately has generated an output that is believed to meet the requirements.

Now, how does a developer know if requirements have been met? That’s the first question, and it gets into what a validation source is. The developer is working with some type of coding language. They are moving data from point A to point B, transforming it, restructuring it, and there are a lot of things that can go wrong in this process. So, how does the developer know that the results are valid? The way they know is through some type of validation source.

Now, think back to the requirements collection point, where hypothetically you were interfacing with the business analyst. That analyst likely would’ve asked you about the expected results. How can we ensure that we are meeting expectations? That’s typically done through a validation source. This can take a number of different paths. Let’s talk about a couple of them.

In what I would call the “happy path,” we’re recreating something that already exists. Maybe you have a report that your business application is producing, and it has accurate information that we can use to tie back. If the results of the development match the results of this report that we’re able to get out of your business system, then we know we’re accurate. Again, that’s the happy path.

The report that we’re getting out of the business system could come in several different forms. It may be a large, very detailed data extract, what I would call an atomic data extract, where we’re getting all the detailed data out. Or it could be a high-level number, the aggregation of several different detailed data elements. The detailed data extract is a lot more valuable, and a lot easier to work with, than just a high-level number. Obviously, if you have the detail, then the developer can join the validation source to the outputs of what was developed to determine whether we have a match at each individual detailed data point.

If there’s not a match, it’s obvious where the issue is. Maybe it’s just on one or two elements or records, or maybe it’s across the board. That gives the developer something to go on. We know we are generally right, but there are a couple of outliers. Do we have a couple of outliers because the requirements didn’t encompass something? Or is it an outlier that is expected because, based on the requirements, we’re deviating from what the validation source is telling us? There are a lot of things that can happen. At the end of the day, if we have a detailed data set that we can compare against, that’s the best situation. We can now see on a record-by-record basis whether we’re meeting expectations.
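
To make that record-by-record comparison concrete, here is a minimal sketch of the kind of check a developer might run, assuming pandas and hypothetical file and column names (order_id, sale_amount). The real keys and measures come from your requirements and your validation source.

```python
import pandas as pd

# Hypothetical detail-level comparison: the validation extract from the
# business system versus the output of the development effort.
validation = pd.read_csv("business_system_extract.csv")   # assumed export
developed = pd.read_csv("development_output.csv")         # assumed export

# Join on an assumed business key (order_id) so every detailed record in
# either source is accounted for.
compared = validation.merge(
    developed,
    on="order_id",
    how="outer",
    suffixes=("_validation", "_developed"),
    indicator=True,
)

# Records that exist in only one source are immediate findings.
missing = compared[compared["_merge"] != "both"]

# For records present in both, compare the measure of interest
# (round money columns first if tiny rounding differences are expected).
both = compared[compared["_merge"] == "both"]
mismatched = both[both["sale_amount_validation"] != both["sale_amount_developed"]]

print(f"{len(missing)} records exist in only one source")
print(f"{len(mismatched)} records match on key but differ on amount")
```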

On the other hand, if you have something like an aggregate report, maybe even just one number, it gets a lot more difficult. Now the developer has to aggregate the results of the development up to whatever number or numbers we’re given and see if we match. Even if you do match, you still can’t be quite certain that you’re correct at the atomic level. For example, say we have a number that we’re getting out of a source system, let’s use sales, and we have one number per region. The developer is working on detailed data to generate those regional-level numbers. Even if we match those regional-level numbers, there are still several ways that the development could be inaccurate. It may be that the detailed numbers are aggregating to the total correctly, but those details are assigned to the wrong individual sales reps.
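
As a rough illustration of why the aggregate tie-out is weaker, the sketch below rolls the detailed development output up to the same grain as the numbers provided, one total per region, and compares at that level. The regions, file names, and column names are all assumed for illustration.

```python
import pandas as pd

# Hypothetical regional totals supplied by the business as the validation source.
validation_totals = pd.DataFrame({
    "region": ["East", "West"],
    "sales": [125000.00, 98000.00],
})

# Detailed development output rolled up to the same grain as the validation numbers.
detail = pd.read_csv("development_output.csv")             # assumed export
developed_totals = detail.groupby("region", as_index=False)["sale_amount"].sum()

check = validation_totals.merge(developed_totals, on="region", how="outer")
check["difference"] = check["sales"] - check["sale_amount"]
print(check)

# Even if every difference is zero, detail rows could still be assigned to the
# wrong sales rep and simply net out correctly at the regional level.
```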

It’s always better to have the detailed data as a validation source, but we don’t always get what we want. Sometimes you have to take what you can get. Often when building Reportopia, we’re not going to already have exactly what we’re trying to build. This just makes sense. We’re often building something new. Not always, but often we’re building something new, which means that we’re not going to have an apples-to-apples comparison when we test. But that doesn’t mean we can’t test. We absolutely can, and we must, test.

Take the case where we have a report that’s giving us a high-level number; that’s what the business is able to give us to validate against because that’s what’s available today. That’s step one. We tie to that high-level number, and that looks good. Now, we need to figure out how we can do a little bit more validation. It may be that we must log into the business application and do some amount of sampling. We’ll use the sales rep example again: we check whether the numbers that are coming out of the development effort match, on a sample basis, what we’re seeing in the business application. There are other ways to handle that, but we’ve got to have some type of validation source that gives us confidence that the development that was completed matches the expectations.
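
A sampling check can be as simple as the hypothetical sketch below: pull a repeatable random sample from the development output, look those records up in the business application’s screens, and compare what you see. The file and column names are assumptions.

```python
import pandas as pd

# Hypothetical spot check: pull a repeatable random sample from the
# development output, then look those same records up in the business
# application and record what its screens show.
developed = pd.read_csv("development_output.csv")           # assumed export
sample = developed.sample(n=20, random_state=42)            # fixed seed makes the sample repeatable
sample[["sales_rep", "order_id", "sale_amount"]].to_csv("sample_to_verify.csv", index=False)

# After the values seen in the application are keyed into sample_to_verify.csv
# (say, as an app_amount column), the comparison is a simple filter.
verified = pd.read_csv("sample_to_verify.csv")
discrepancies = verified[verified["sale_amount"] != verified["app_amount"]]
print(f"{len(discrepancies)} of {len(verified)} sampled records do not match the application")
```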

There are times when you simply don’t have a report in your business application that you can give as a validation source, and you can’t just go into the business application to validate the results. This is when you’re building something that is truly new. A good example would be forecasting information. You’re not going to have the forecast in your business application, or the report may involve some type of logic that simply does not exist in your business application.

Then what we have to do is back up. We have no way to validate the final results, but at some point we can back up in the business logic and find a point where we can validate. Maybe we’re trying to project sales, for example. In that case, maybe we can tie out the actuals, because the actuals should be in the business application. Maybe we can get a report out of the business application, whether it’s high level or detailed level, or we may need to go into the front end to find sample test cases, but we should be able to get the actuals tied.

What I call this is just proving your work from there. Once you can tie at the detail level, you must prove the work that gets you to the results, because there’s nothing to tie the final results to. We can at least tie the starting point, which is the actuals in this case. Then we can show the computations that have occurred between that point that we were able to validate and the final results that we’re presenting. That’s detail-to-results type validation.
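
Here is a minimal sketch of what “proving your work” could look like, assuming a deliberately simple projection rule (a trailing three-month average) purely for illustration; the actual business rule and file names would come from your requirements.

```python
import pandas as pd

# "Prove your work" sketch: the actuals can be tied to the business
# application, but the projection cannot, so each computation step is shown.
actuals = pd.read_csv("development_actuals.csv")            # assumed export, tied out first

# Step 1: the validated starting point, monthly actual sales.
monthly = (
    actuals.groupby("month", as_index=False)["sale_amount"].sum().sort_values("month")
)

# Step 2: an assumed, purely illustrative rule: project next month as the
# trailing three-month average. The real rule comes from the requirements.
projection = monthly["sale_amount"].tail(3).mean()

# Step 3: present the chain: validated inputs, the rule applied, the result.
print(monthly.tail(3))
print(f"Projected next month (trailing three-month average): {projection:,.2f}")
```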

Validation sources are important. This is something that the analysts you’re working with will often be asking about. Hopefully that gives you some understanding of what options you have. Again, it really comes down to what you have available for us to make sure that the development that’s being done meets your expectations. Now, let’s talk about the actual testing process itself. There are whole disciplines that focus on this type of thing, so I’m not going to do it justice, but I’m going to try to give an overview. I think it is going to be sufficient for business users to understand what’s going on. This is not that complicated.

First, when a developer is building something, they are going to want to know if they’ve met expectations as they’re developing. It just makes sense that they’re going to check their work as they go. Even if they don’t have a formal process, even if it’s just complete cowboy coding, which we don’t recommend, if you tell a developer to build a widget and make sure that the results do A, B, and C, then as the developer is building, they’re testing as well. They’re running their code to see if it’s doing A, B, and C. That’s a very down-and-dirty type of testing, but we are often a lot more formal about this process.

There are disciplines of how you do software testing, like test-driven development, behavior-driven development, and acceptance testing. There are a lot of different ways of testing, and I’m not going to go into all these methodologies. Just know that some of those things are likely what the development team is using as they’re going through this test process.
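
As a hypothetical example of the more formal side, a development team might capture the “A, B, and C” expectations as small automated tests. The sketch below uses a pytest-style test and an assumed, made-up transform called assign_region.

```python
import pandas as pd

# A minimal, hypothetical example of the kind of automated check a development
# team might write (pytest style). The transform under test, assign_region,
# is a made-up stand-in for a real piece of business logic.

def assign_region(df: pd.DataFrame) -> pd.DataFrame:
    """Assumed rule for illustration: a few states map to the East region."""
    east = {"NY", "NJ", "CT"}
    out = df.copy()
    out["region"] = out["state"].apply(lambda s: "East" if s in east else "West")
    return out

def test_assign_region_matches_expectations():
    data = pd.DataFrame({"state": ["NY", "TX"]})
    result = assign_region(data)
    assert list(result["region"]) == ["East", "West"]
```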

The first tester is the developer. The developer built something, they have read the requirements, and hopefully they have a validation source, so the developer does that first round of validation. As you can imagine, a developer is testing their own effort, and it’s quite easy for a developer to be biased. It’s easy for a developer to overlook things, because they’re very deep into this effort, and sometimes it’s just not as easy for a developer to catch a problem as it would be for someone on the outside. That’s where peer testing comes in. The developer will give the results of their effort and the validation source to someone else, often another developer, maybe a senior consultant, who will read the requirements, run the code, look at the validation source, and see if the validation effort succeeded. It’s a second set of eyes from a technical perspective to see if we met requirements. That’s the peer-testing process.

If the developer’s own testing passes and peer testing passes, then testing will come to you, the client, the business user. At that point, what you’re going to be asked to do is determine if the results of development have met requirements. Now, how are you going to do this? There are a couple of things that you must keep in mind. Number one, you already provided a validation source, so you should be familiar with what you provided. You know the nuances of that validation source, and you also know what the requirements were for the development effort. Now, you have some type of output; some type of data has been given to you. We’re asking you to say: is this correct? Are these numbers accurate?

You’re going to be comparing the development output to your validation source. It’s the same thing that the developer did and the same thing that the peer reviewer did, but now you’re asked to do it as well. Again, it could take several different paths, but often it’s just a data dump coming out of the development effort, and you already have a data dump coming out of your business application. Do the numbers match up? Sometimes the developer and/or the peer reviewer will provide their evidence as well, so you can see what they did to compare the validation source to the development effort.

I don’t recommend relying on those results. It’s fine to see them, because you shouldn’t even have to test as a business user until the development team believes they’ve developed something that meets your requirements. But you should start from scratch, because you don’t want a mistake that was made upstream of your testing to bias your testing. You really do need to start from scratch: pull in your validation source, get the raw data dump from the development team, and do your comparison to see if we have accurate results.

Now, let’s talk about where this testing is being done. There are a couple of things that are important to keep in mind, mainly around timing. A development environment is typically going to be used to do all this work. We’re not doing development in your production environment. Your production environment is an area where everything has already been tested. All reports there should be reliable, accurate, current, and so on. We don’t want to impact that production environment. In other words, nothing goes to production until we get done with all this testing and we’re satisfied with the results. How are we going to test if we’re not doing this in production? We do that by having a parallel environment. Sometimes you’ll have an actual test environment. Sometimes you’ll even have a QA environment. For now, let’s just call it a “development environment.” Essentially, it’s just an environment that’s completely decoupled from your production environment. If a mistake is made in development, it doesn’t impact production.
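
One hedged sketch of what that decoupling can look like in practice: the same pipeline code points at a different database depending on the environment it runs in, so nothing done in development can touch production. The connection strings and environment variable name below are assumptions.

```python
import os

# Hypothetical illustration of keeping environments decoupled: the same
# pipeline code points at a different database depending on where it runs.
CONNECTIONS = {
    "development": "postgresql://dev-server/reporting_dev",   # assumed connection strings
    "production": "postgresql://prod-server/reporting",
}

environment = os.environ.get("REPORTING_ENV", "development")  # default to the safe side
connection_string = CONNECTIONS[environment]
print(f"Running against the {environment} environment")
```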

However, because we’re working in a different environment, there are often time-based issues that must be taken into consideration during testing. This can take several different paths. What often happens is that we’re not refreshing the development environment; we’re not getting current data into that environment at the same interval as we are in production. In our case, a lot of times we’ll just refresh development as needed to get the test results that we’re looking for.

Something similar is true for that validation source that you, the business user, provided: you pulled that data out of your business application at a certain point in time. It’s very important that we understand exactly what point in time that validation source was extracted, because after that report was extracted, the business has continued to operate and things have changed. It’s very important that we have an as-current-as-possible validation source and as-current-as-possible development data, but it’s critical that those two are in sync.

In some cases it’s not practical to get them completely in sync. In that case, you must have some way of identifying what we’ll call “timing issues.” Those timing issues must then be run down. You can’t just dismiss everything as a timing issue, or your testing is useless. You must identify the cases where things don’t validate completely, and then run them down. You have to do that so you understand, for example, whether 20 failed test cases are all due to timing, meaning that the business application is ahead of the test data that we’re comparing against.
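
One way to take timing off the table, sketched below with assumed file names and an assumed extract timestamp, is to trim both datasets to the moment the validation source was pulled before comparing; anything that still fails after that cutoff is not a timing issue and has to be run down.

```python
import pandas as pd

# Hypothetical way to take timing out of the comparison: trim both datasets
# to the point in time the validation source was extracted before comparing.
validation_extracted_at = pd.Timestamp("2023-06-30 17:00")   # assumed; record the real extract time

validation = pd.read_csv("business_system_extract.csv", parse_dates=["transaction_date"])
developed = pd.read_csv("development_output.csv", parse_dates=["transaction_date"])

validation_cut = validation[validation["transaction_date"] <= validation_extracted_at]
developed_cut = developed[developed["transaction_date"] <= validation_extracted_at]

# The detail-level comparison shown earlier can now run on the trimmed
# datasets; anything that still fails is not a timing issue.
```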

Again, the third step in the testing process is your testing, the client testing, the business user testing. You are trying to ensure that the results of the development match your validation source. At the same time, you also have to keep in mind what we’re trying to achieve from a business perspective. Maybe the test results are matching the validation source, but something in the back of your mind is telling you, “We really also need to do A, B, and C, because that’s what it’s going to take to add value. I didn’t think about that upfront when we were talking about requirements, but now that I look at this data and do the validation, I realize that we really need these additional things.” You need to stop the process right there. You need to speak up right there. You need to say the results tie and match, this is what was asked for, but now we should back up and do another round of development, because now you have more information.

This is a completely natural process. People learn things when they look at the results that weren’t top of mind when the initial requirements were collected, because you’re looking at very detailed information and it makes you think. But for whatever reason, sometimes people are hesitant to give us this type of feedback that is going to change the requirements. I guess that’s because in some development shops you’ll have change requests. Change requests must get approval, and they have costs associated with them. That might lead back to the person that’s changing the requirements and make them look bad. We’ve got to set all of that aside and focus on what we’re trying to get done. You’ll very rarely have all the requirements correct on the first try for any significant development effort. You might be very close, but often you, the business, are going to learn something just like the development team will as the development process progresses.

At this point, where client/user testing is happening, we’re downstream of the developer testing and the peer testing, which have been successful, and now we’re doing your testing. It’s important that you provide not only the results of your testing from an “A matches B” type of scenario, but that you also think about, “Does this actually get me to where I want to be? Am I actually going to be able to use these results to get to the value that I’m ultimately trying to achieve?”

Let’s just say that we tie exactly like we think we are supposed to tie and the data is lining up perfectly, but you learned something new that we need to add to make this report as valuable as it can be. We have a couple of choices at that point. We could stop moving forward and not push this thing into production because it’s not as good as it could be, then go back and do another round of development, go back through testing, and see if we meet the mark. Or, if there is value in what’s already developed, we can push that into production and then go back and add to it.

That second option is what we really like to do. We like to constantly be improving. If it’s wrong, we’re not going to push it into production. But if we have something of value, even if it’s not as good as it can possibly ever be, let’s get it into production, because we don’t want to throw away something that is valuable. Let’s get that thing producing results and then let’s improve it.

It’s kind of an interesting situation, because everybody wants it to be perfect. Everybody wants it to be as good as it can possibly be as soon as it goes out. I think that’s natural, but it leads to analysis paralysis. It’s often going to slow you down and reduce the overall value. If we’re not going to put something out in production until it’s as good as it can possibly ever be, frankly, it will never go out, because we’re always going to find ways to make it better. As soon as you get that report out into production and you think it’s perfect, business user, guess what? Your colleagues are going to look at it and give us more insight into what’s going on. Maybe there are things that the person who gave us requirements doesn’t know about. That’s going to feed additional requirements, go through another round of testing, further improve this report, and add more value for the business.

That’s kind of long-winded, but back to the testing process. This is the third round of testing, which is client or business user testing. Your role here is to make sure not only that we’re matching from an “A equals B” type of standpoint, but also to think about the requirements that you gave and whether those requirements need to be adjusted at this point.

Now, assuming that the feature passes the developer test, the peer review test, and the client test, it’s going to be promoted to production. At LeapFrogBI, we have an actual queue on our board where we push features across, and one of those queues is called “promote to prod.” Those are the cards that have passed all testing and are ready to go into production.

Now, we are going to schedule this promotion into production. It really is up to each individual business to decide when we push into production. We want to do it, of course, when it’s not going to impact the business users, and we want to make sure that we have time to recover if something goes wrong. There are other considerations that we’re not going to go into. Ultimately, we’re going to schedule a promotion into production so that the feature, or the report, is pushed into production and is ready for use by everybody it’s intended for.

At the point that the feature goes into production, we’re going to do another round of testing. Everything up to now has gone through development, which means that we have a parallel environment: development has these results in it, and now that we’ve promoted to production, production has the results in it as well.

At that point, we must do another round of validation. We’re not typically going to find anything, but we need to make sure that the results in production also match the results that we’re looking for, in case there was some anomaly or some mistake during the promotion process. We just call this round “validation,” and it’s done in production. Once it’s out there in production and the business has looked at it and approved it, we’re done with that feature and we’ll close that request out.

Now the business is using this feature. This report is out there, hopefully adding value, but there’s still more testing to be done. This is the ongoing testing process. We’re going to have additional requests that go through development, and those requests are making changes to this engine that is moving data around, pulling and transforming data, creating new data structures, and ultimately feeding the reports that your business is using. As those changes are made, we want to make sure that we’re not negatively impacting something that has already been developed. We don’t want to have to retest everything every time that we go through a development process.

This is where regression testing comes in. If we can define some type of known valid output, then we can create a regression test that is constantly checking whether that output is staying in line, without having to go through a full testing process every time we push a new feature out. You’ll likely hear about regression testing. Hopefully you don’t hear too much about it beyond the fact that it exists. You may hear a developer say that something failed a regression test, and it’s something you haven’t even been working on, because testing has been set up to happen continuously, including on things you did six months ago and forgot about. Testing is continuously happening to make sure that we’re staying in line.
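
A regression test can be as simple as the hypothetical sketch below: a figure that was signed off during earlier testing is stored as a baseline, and an automated check confirms it hasn’t drifted whenever new features ship. The file name, baseline value, and tolerance are placeholders.

```python
import pandas as pd

# A minimal regression-test sketch (pytest style): a figure signed off during
# earlier testing is stored as a baseline, and the check re-runs automatically
# whenever new features ship.
BASELINE_2022_SALES = 1_234_567.89    # illustrative value approved during earlier testing
TOLERANCE = 0.01                      # allow for rounding differences only

def test_closed_period_sales_total_unchanged():
    history = pd.read_csv("warehouse_sales_2022.csv")        # assumed closed-period extract
    total = history["sale_amount"].sum()
    assert abs(total - BASELINE_2022_SALES) <= TOLERANCE, (
        f"Closed-period sales total drifted from the approved baseline: {total:,.2f}"
    )
```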

There are a lot of reasons why a regression test might fail. It could be that something was changed that shouldn’t have been changed, or wasn’t changed correctly. It could be that your business application’s configuration has changed, so now the business rules need to be updated. There are a lot of reasons. But at the end of the day, we need to know not only that these results are accurate now, but also that we have some way to check this on an ongoing basis to ensure that we’re staying accurate going forward.

Okay, I hope that gives you an understanding of what’s going to be expected when we get into the testing process for these reports that we’re developing. This is super important, because testing creates trust. It’s one of the ways we create trust. If you are involved in the testing process, you’re going to have faith that the results are meeting expectations, because you did the testing, you know that the results match the expectations, and the process that we go through should also instill trust. When you, the person who is heavily involved in this process, trust the results, then you’re able to confidently go out there and relay that trust to others. That is hopefully going to instill trust in them, ensure that the report is being utilized to deliver value, and ultimately change the business process that we’re hoping to change so that we deliver value for the organization we’re working for.


 

About the Podcast

In More Than Reports, Paul and Alex explore the many ways in which data technology can be used by small and midsize businesses to promote transformational growth. We talk openly about the opportunities and challenges companies face as they try to make sense of an increasingly complex environment. Together with our expert guests we break the facts away from the hype and never lose sight of our target: value.

 
