Why do programmers, analysts, and designers not do testing?
Many will blame the program as being untestable. Others will say that the program's environment makes it impossible to construct legitimate tests. Some will use management as an excuse. Usually there are points in time where the decision was made not to test, or to postpone testing to a more convenient time.
Blame the Program
- Testing is not part of the program.
- It is not obvious where to test.
- Testing is highly repetitive.
- Testing is voluminous.
- This program constantly evolves, so testing is a moving target.
- Tests take as much effort as writing the program.
- Why test when it works?
Blame the Environment
- Test data varies widely, so we are unable to test.
- Testing is not well understood or defined.
- The need for testing appears after the program works.
Blame Management
- The program works, so just use it.
- We could not let testing slow delivery of the product.
- Good programmers do not need to test.
At what point do we decide that testing is out?
Before
- There are too many layers to test.
- It is too complex to test.
- This is just a prototype.
- A user can shake out the problems.
- Users and support can test it; it is their job.
After
- If a program works, why test it?
- If it is a simple program, why test it?
- If we have a tight schedule, why test it?
- Running the program is a test, so why test it again?
- It is finished; there is another project, so there is no time to test.
What is the truth about testing?
After all these choices about testing, there are some realities of testing that are rarely considered. Testing is not a panacea that solves programming problems or project delays. The introduction of testing into a project should be considered like the introduction of any other good tool: it has an impact on the project, and it needs management and direction or it is doomed to failure as much as any other programming effort.
Sad truths
- We all do it anyway.
- What is not tested usually doesn't work.
- Untested software gains a reputation.
- Tests will miss things.
Happy truths
- Testing is just as much fun as programming.
- Know how much to test: the three bears of testing.
    My testing is too hot – testing every function, testing every decision point.
    My testing is too cold – it works, I tried it.
    My testing is just right – test user functions, simulate layers (peel the onion), and know the law of diminishing returns (measure testing).
Reality truths
- Bugs tend to re-appear.
- The later we catch mistakes, the greater the cost.
- "If you believe you can or can't, you're right." – Henry Ford
- Good testing saves time and creates concrete deliverables.
What are the guidelines for testing?
Guidelines should be program specific. The open source teams have built the best testing tools in the history of computing. Learn from their efforts across many different languages and choose wisely from the tools available. Build only what you need to build. If possible, add what you need to an open source project that may have as many as 200 authors tuning its code. Be program specific so that the tests build into a story of testing.
The following guidelines are short. Testing is a foundational shift in thinking which enables measures to track every project. It should be program independent and release specific. At the end of this road one should not only overcome the burden of testing but also see a new set of signposts along the way: unit test done, system test done, profile complete, coverage report prepared, benchmark completed, and this release handed off to support.
Program Specific
- Use a test harness and test walls which automate testing.
- Use programs to do the test drudgery.
- Give programs the capability of testing and/or simulation.
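A test harness can be as small as a loop that feeds a table of cases to the unit under test and compares actual to expected results. A minimal sketch in Python; the `word_count` function and the case table are hypothetical stand-ins for whatever the program provides:

```python
def word_count(text):
    """Hypothetical unit under test: count whitespace-separated words."""
    return len(text.split())

def run_harness(cases, func):
    """Drive (input, expected) cases through func; return (passed, failed)."""
    passed = failed = 0
    for arg, expected in cases:
        actual = func(arg)
        if actual == expected:
            passed += 1
        else:
            failed += 1
            print(f"FAIL: {func.__name__}({arg!r}) = {actual!r}, expected {expected!r}")
    return passed, failed

cases = [
    ("one two three", 3),
    ("", 0),
    ("  spaced   out  ", 2),
]
passed, failed = run_harness(cases, word_count)
print(f"{passed} passed, {failed} failed")  # prints "3 passed, 0 failed"
```

The case table, not the harness, is where the drudgery lives; adding a test becomes adding one line of data.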
Guidelines
- Use testing to prove program correctness.
- Have the mindset that "programming includes testing."
- Use testing for profiling, coverage, benchmarks, and support.
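The last guideline can be made concrete with the standard library alone: run a test under `cProfile`, and the same suite doubles as a profile and a rough benchmark. A sketch, where the `work` function is a hypothetical stand-in for real program code:

```python
import cProfile
import io
import pstats

def work():
    """Hypothetical unit under test: some measurable computation."""
    return sum(i * i for i in range(100_000))

def test_work():
    assert work() > 0

# Run the test under the profiler so the suite yields timing data for free.
profiler = cProfile.Profile()
profiler.enable()
test_work()
profiler.disable()

report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(3)
print(report.getvalue())
```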
Program Independent
- Document your tests by description, class, files, and results.
- Define tests first as part of the programming effort.
- Write documentation and tests as part of the programming.
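Defining tests first means writing the assertion before the code that satisfies it; the test then documents the intent. A minimal sketch, where the `slugify` function and its behavior are hypothetical:

```python
# Test written first: it records what slugify must do before it exists.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim  Me  ") == "trim-me"

# Implementation written second, to make the test pass.
def slugify(title):
    """Lower-case a title and join its words with hyphens."""
    return "-".join(title.lower().split())

test_slugify()
print("test_slugify passed")
```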
Release Specific
- Version tests and test data with each formal release.
- Create tests of bug fixes to verify the next release.
- Test as part of the release process.
Overcoming the Burden of Testing – Creative Ideas
- Create a framework from which tests are derivable.
- Create the ability to create tests easily.
- Create the ability to run tests easily.
- Test changeable things but force consistent results:
    DAY MONTH YEAR
    TOTAL
    LINK
    USER
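Fields such as dates, totals, links, and user names change from run to run; forcing consistent results means masking them before comparing output against a saved baseline. A sketch using the standard `re` module; the patterns and placeholder names are illustrative:

```python
import re

# Mask run-to-run variation so output compares equal against a baseline.
MASKS = [
    (re.compile(r"\d{2} [A-Z][a-z]{2} \d{4}"), "DAY MONTH YEAR"),  # e.g. 03 Jan 2024
    (re.compile(r"total=\d+"), "TOTAL"),
    (re.compile(r"https?://\S+"), "LINK"),
    (re.compile(r"user=\w+"), "USER"),
]

def normalize(output):
    """Replace each changeable field with a fixed placeholder."""
    for pattern, placeholder in MASKS:
        output = pattern.sub(placeholder, output)
    return output

run1 = "03 Jan 2024 user=alice total=42 https://example.com/a"
run2 = "17 Feb 2025 user=bob total=99 https://example.com/b"
assert normalize(run1) == normalize(run2)
print(normalize(run1))  # prints "DAY MONTH YEAR USER TOTAL LINK"
```

Two runs that differ only in their changeable fields now normalize to the same string, so a plain text comparison is enough to check them.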
- Create the ability to check tests easily:
    All tests pass or fail.
    Reproduce failures with documentation.
    Run tests in debug mode.
- Leave testability in the programs to capture bugs:
    myprog --log "logfile.dat"
    myprog --record
    (an option to turn recording on or off)
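Built-in switches like these keep testability in the shipped program. A sketch of how such options might be wired up with Python's standard `argparse`; the flag names come from the text above, but the behavior behind them is hypothetical:

```python
import argparse

def main(argv=None):
    """Parse myprog's testability flags; argv=None reads sys.argv."""
    parser = argparse.ArgumentParser(prog="myprog")
    parser.add_argument("--log", metavar="FILE",
                        help="append diagnostic output to FILE")
    parser.add_argument("--record", action="store_true",
                        help="record inputs for later replay (off by default)")
    args = parser.parse_args(argv)

    if args.log:
        with open(args.log, "a") as fh:
            fh.write("myprog started\n")  # hypothetical diagnostic line
    if args.record:
        print("recording on")
    return args

if __name__ == "__main__":
    main()
```

Because the flags default to off, the instrumentation costs nothing in normal use but is there the moment a bug needs capturing.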
- Save your initial testing as "regression" tests.
- Automate tests by user-definable categories.
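Automating by category means tagging each test and letting the runner filter on the tags. A minimal registry sketch; the category names and test bodies are hypothetical:

```python
# Registry mapping each test to user-defined categories such as "smoke".
REGISTRY = []

def category(*tags):
    """Decorator: register a test function under one or more categories."""
    def register(func):
        REGISTRY.append((set(tags), func))
        return func
    return register

@category("smoke")
def test_startup():
    assert 1 + 1 == 2

@category("smoke", "regression")
def test_bug_fix():
    assert "abc".upper() == "ABC"

def run_category(tag):
    """Run every registered test carrying the given tag; return count run."""
    ran = 0
    for tags, func in REGISTRY:
        if tag in tags:
            func()
            ran += 1
    return ran

print(run_category("smoke"), "smoke tests ran")            # prints "2 smoke tests ran"
print(run_category("regression"), "regression tests ran")  # prints "1 regression tests ran"
```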
- Audit tests by version and date run.