Software verification is no simple task. How do we know the software does what it should do? Or better yet, how do we know the software does what we think it does? Perhaps even more challenging, what do the various individuals involved in creating the software think it should do? What do stakeholders expect? What do developers think? Testers? What do users expect?
Making sure everyone is on the same page about what they expect the software will do, and then verifying that it actually does that, requires a high degree of transparency. Each layer of handoff increases the likelihood that people aren’t even in agreement about what they think the software should do. But, even if there’s a common understanding of what it should do, that’s no guarantee that it will actually do that.
So how do we go about verifying that software does what's expected? Should everyone open it up and poke around to see what happens? That doesn't seem very effective. How would we know anyone actually verified things? Obviously we need to communicate about the verification process. Unfortunately, and especially in consulting, this communication is rarely transparent or organized. Those who implement software do whatever testing they feel is necessary, and then those who use the software do whatever testing they feel is necessary. It's rare for these efforts to be coordinated.
As such, it's not uncommon to find overlap or, even worse, critical components of the software that go untested. If testing isn't collaborative, individuals are left to assume what they're responsible for.
Instead, if stakeholders, developers, testers, users, and everyone else involved work together cohesively to establish a verification plan, we're much more likely to cover the important aspects of the system. We can avoid overlap, learn to rely on one another, and leverage each person's expertise to maximize the effectiveness of testing. For example, developers and testers can often automate many forms of testing. This can relieve testers, and especially users and stakeholders, from lengthy manual verification of common scenarios so they can instead invest time in exploratory testing.
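To make that concrete, a team might agree that a few common ordering scenarios are worth automating so nobody has to re-check them by hand before every release. Below is a minimal sketch of what that could look like with pytest; the `order_total` function and its discount rule are hypothetical, made up purely for illustration:

```python
# Hypothetical example: automated checks for a few common ordering scenarios.
# The order_total function and the "SAVE10" discount rule are assumptions
# made up for illustration; a real team would automate its own agreed scenarios.

def order_total(item_prices, discount_code=None):
    """Sum the item prices, applying a 10% discount for the 'SAVE10' code."""
    total = sum(item_prices)
    if discount_code == "SAVE10":
        total *= 0.90
    return round(total, 2)

def test_total_without_discount():
    assert order_total([10.00, 5.50]) == 15.50

def test_total_with_discount_code():
    assert order_total([10.00, 10.00], discount_code="SAVE10") == 18.00

def test_unknown_discount_code_is_ignored():
    assert order_total([20.00], discount_code="BOGUS") == 20.00
```

Run with `pytest`, checks like these document the scenarios everyone agreed matter and re-verify them on every change, freeing people to spend their time on exploratory testing instead.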
Over time, cohesive testing helps tackle even more challenging, fundamental questions. For instance, when we change software, how do we verify it still does what we designed it to do a week ago, a month ago, or a year ago? Which of those things should it no longer do? Which of those things should it still do?
So, what do we need to do to adopt cohesive testing? If you want to explore this question, get everyone together: stakeholders, users, developers, testers, and anyone else involved in implementing and supporting the software. Then try the following:
- Ask each person to write down the objective (purpose) for the last month or two of development. Then, have people read these aloud.
  - Were there discrepancies? How can discrepancies be avoided in the future?
  - What forms of handoff may have contributed to this?
- Have each person explain the type of testing they performed and how it aligned with the goals of the project.
  - As individuals are listening, have each person find at least one thing they tested that overlapped with what someone else tested.
- Discuss duplicated testing.
  - Was the duplication wasteful? (For critical components, duplication isn't always wasteful; that said, teams usually discuss these areas of testing and duplicate them intentionally.)
  - Who would be best qualified to handle areas of unnecessary overlap? How can everyone better communicate to avoid this overlap in the future?
- Ask everyone to explain the types of testing they assumed others would do.
  - Of this, what wasn't done?
- Ask everyone to explain the types of testing they expected others to do and explicitly communicated with the other party about.
  - Of this, what wasn't done?
- Ask everyone to write down one thing they were most uncertain about. Then, ask how that uncertainty could have been mitigated with help from someone else involved in testing.
- Ask what can be done for users and stakeholders to understand and feel confident in the testing that developers and testers perform. Ask what can be done for developers and testers to understand and feel confident in the testing that users and stakeholders perform.