Everyone at some point in their careers faces difficulties. The
problem could be not having enough resources or time to complete
projects. It could be working with people who don't think your job is
important. It could be a lack of consideration and respect from
managers and from those who report to you.
Software testers aren't exempt from this. But as Jon Bach pointed out
in "Top 10 tendencies that trap testers," the session he presented at
StarEast a couple of weeks ago, software testers often do things that
undermine their own work and how co-workers perceive them.
Bach, manager for corporate intellect and technical solutions at Quardev
Inc., reviewed 10 tendencies he's observed in software testers that
often trap them and limit how well they do their job.
"If you want to avoid traps because you want to earn credibility,
want others to be confident in you, and want respect, then you need to
be cautious, be curious and think critically," he said.
Here's a look at what Bach considers the top 10 traps and how to remedy them:
10. Stakeholder trust: This is the tendency to search for or
interpret information in a way that confirms your preconceptions. But
what if those preconceptions are wrong? You can't automatically
believe or trust people when they say, "Don't worry about it," "It's
fixed," or "I'll take care of it."
The remedy is to trust but verify -- confirm that what the person
says is actually correct. Testers should also weigh the tradeoffs
against the opportunity costs, and consider what else might be
broken.
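The "trust but verify" remedy can even be automated: instead of closing a bug on a developer's word, re-run the original failing scenario plus a few nearby checks. Here is a minimal sketch in Python; the `slugify` function and its "fixed" bug are hypothetical examples, not from Bach's talk.

```python
def slugify(title):
    # Hypothetical function under test: the developer says the
    # "leading/trailing whitespace" bug is fixed.
    return title.strip().lower().replace(" ", "-")

def verify_fix():
    """Trust but verify: re-check the reported fix AND what else might be broken."""
    # 1. The original failing case -- does the claimed fix actually hold?
    assert slugify("  Hello World  ") == "hello-world"
    # 2. Nearby cases the fix could have broken (opportunity-cost thinking).
    assert slugify("Already-Slugged") == "already-slugged"
    assert slugify("") == ""
    return "verified"

print(verify_fix())  # -> verified
```

The point is the shape of the check, not the function: one assertion for the stakeholder's claim, and a couple more for what the fix might have quietly broken.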
9. Compartmental thinking: This means thinking only about
what's in front of you. Remedies include thinking in opposite
dimensions -- light vs. dark, small vs. big, fast vs. slow, and so on.
Testers can also try a brainstorming tactic called "brute cause
analysis," in which one person thinks of an error and another person
thinks of a function.
8. Definition faith: Testers can't assume they know what is
being asked of them. If someone says, "Test this," what exactly do
you need to test for? The same goes for a term like "state," which
can mean many different things.
Testers need to push back a little and make sure they understand
what is expected of them. Is there another interpretation? What is
their mission? What is the test meant to find?
7. Inattentional blindness: This is the inability to
perceive features in a visual scene when you aren't attending to
them -- the principle behind a magic trick, where focusing on one
thing (or being distracted by it) lets everything else go on around
you unnoticed.
To remedy this, testers need to increase their situational awareness:
manage the scope and depth of their attention, look for different
things, and look at the same things in different ways.
6. Dismissed confusion: If a tester is confused by what he's
seeing, he may think, "It's probably working; it's just something I'm
doing wrong." He needs to instead have confidence in his confusion.
Fresh eyes find bugs, and a tester's confusion is more than likely picking up on something that's wrong.
5. Performance paralysis: This happens when testers are
overwhelmed by the number of choices for where to begin testing. To help get over
this, testers can look at the bug database, talk with other testers
(paired testing), talk with programmers, look at the design documents,
search the Web and review user documentation.
Bach also suggests trying a PIQ (Plunge In/Quit) cycle -- plunge in
and just do anything. If it's too hard, then stop and go back to it. Do
this several times -- plunge in, quit; plunge in, quit; plunge in, quit.
Testers can also try using a test planning checklist and a test plan
evaluation.
4. Function fanaticism: Don't get wrapped up in functional
testing. Yes, those tests are important, but don't forget about
structure tests, data tests, platform tests, operations tests and time
tests. To get out of this trap, use or invent your own heuristics.
3. Yourself, untested: Testers tend not to scrutinize their
own work. They become complacent about their testing knowledge, they
stop learning more about testing, and they end up with malformed tests
and misleading bug titles. Testers need to take a step back and test
their testing.
2. Bad oracles: An oracle is a principle or mechanism used to
recognize a problem, and you could be following a bad one. For
example, how do you know a bug is a bug? Testers should file issues as
well as bugs, and they should mention in passing to the people
involved that things might be bugs.
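An oracle can be as simple as a trusted reference implementation that the product's output is compared against: when the two disagree, you have either a candidate bug or a bad oracle. A sketch, checking a hypothetical `fast_sort` against Python's built-in `sorted` as the oracle (the function names are illustrative, not Bach's):

```python
import random

def fast_sort(items):
    # Hypothetical implementation under test; imagine a hand-rolled
    # sort here instead of this stand-in.
    return sorted(items)

def check_against_oracle(trials=100):
    """Use a trusted reference (the oracle) to recognize problems."""
    for _ in range(trials):
        data = [random.randint(-50, 50) for _ in range(random.randint(0, 20))]
        expected = sorted(data)           # the oracle's answer
        actual = fast_sort(list(data))    # the product's answer
        # A disagreement is a candidate bug -- or a sign the oracle is bad.
        assert actual == expected, f"mismatch on {data}"
    return trials

print(f"{check_against_oracle()} trials agreed with the oracle")
```

If the oracle itself is wrong, every "pass" is meaningless, which is exactly the trap Bach describes.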
1. Premature celebration: You may think you've found the
culprit -- the show-stopping bug. However, another bug may be one step
away. To avoid this, testers should "jump to conjecture, not
conclusions." They should find the fault, not just the failure.
Testers can also follow the "rumble strip" heuristic. The rumble
strip runs along most highways. It's a warning that your car is heading
into danger if it continues on its current path. Bach says, "The rumble
strip heuristic in testing says that when you're testing and you see the
product do strange things (especially when it wasn't doing those
strange things just before) that could indicate a big disaster is about
to happen."