Why do programmers, analysts, and designers not do testing?

 

Many will blame the program as being un-testable.  Others will say that the environment of the program makes it impossible to construct legitimate tests.  Some will use management as an excuse.  Usually there are points in time where a decision was made not to test, or to postpone testing to a more convenient time.

 

Blame the Program

 

  1. Testing is not part of the program.
  2. It is not obvious where to test.
  3. Testing is highly repetitive.
  4. Testing is voluminous.
  5. This program constantly evolves so testing is a moving target.
  6. Tests take as much effort as writing the program.
  7. Why test when it works?

 

Blame the Environment

 

  1. Test data varies widely so we are unable to test.
  2. Testing is not well understood or defined.
  3. The need for testing appears after the program works.

 

Blame Management

 

  1. The program works, so just use it.
  2. We could not let testing slow delivery of the product.
  3. Good programmers do not need to test.

 

At what point do we decide that testing is out?

 

Before

 

  1. There are too many layers to test.
  2. It is too complex to test.
  3. This is just a prototype.
  4. A user can shake out the problems.
  5. Users and support can test it; it is their job.

 

After

 

  1. If a program works, why test it?
  2. If it is a simple program, why test it?
  3. If we have a tight schedule, why test it?
  4. Running the program is a test, so why test it again?
  5. It is finished; there is another project so there is no time to test.

 

What is the truth about testing?

 

After all the excuses for not testing, there are some realities of testing that are rarely considered.  Testing is not a panacea that solves programming problems or project delays.  The introduction of testing into a project should be considered like the introduction of any other good tool: it has an impact on the project, and it needs management and direction or it is doomed to failure as much as any other programming effort.

 

Sad truths

 

  1. We all do it anyway.
  2. What is not tested usually doesn’t work.
  3. Untested software gains a reputation.
  4. Tests will miss things.

 

Happy truths

 

  1. Testing is just as much fun as programming.
  2. Know how much to test:  the three bears of testing.

    My testing is too hot – testing every function, testing every decision point.
    My testing is too cold – it works, I tried it.
    My testing is just right – test user functions, simulate layers (peel the onion), and know the law of diminishing returns (measure testing).

Reality truths

 

  1. Bugs tend to re-appear.
  2. The later we catch mistakes, the greater the cost.
  3. “If you believe you can or can’t, you’re right.” – Henry Ford
  4. Good testing saves time and creates concrete deliverables.

 

What are the guidelines for testing?

 

Guidelines should be program specific.  The open source teams have built the best testing tools in the history of computing; learn from their efforts across many different languages and choose wisely among the tools available.  Build only what you need to build.  If possible, add what you need to an open source project, where as many as 200 authors may be tuning the code.  Be program specific so that the tests build into a story of testing.

The following guidelines are short.  Testing is a foundational shift in thinking, one that enables measures to track every project.  It should also be program independent and release specific.  At the end of this road one should not only overcome the burden of testing but also see a new set of signposts along the way: unit test done, system test done, profile complete, coverage report prepared, benchmark completed, and this release handed off to support.

 

Program Specific

 

  1. Use a test harness and test walls which automate testing.
  2. Use programs to do the test drudgery.
  3. Give programs the capability of testing and/or simulation.
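A test harness need not be elaborate.  The sketch below uses Python's standard unittest module; the add function is a hypothetical stand-in for the program under test:

```python
import unittest

def add(a, b):
    """Hypothetical function under test."""
    return a + b

class AddTests(unittest.TestCase):
    """The harness discovers every test_* method and reports pass/fail."""

    def test_small_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negatives(self):
        self.assertEqual(add(-1, 1), 0)

# One command runs every test -- the repetitive drudgery is automated.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Once such a harness exists, the voluminous and repetitive parts of testing fall to the machine, not the programmer.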

 

Guidelines

 

  1. Use testing to prove program correctness.
  2. Have the mind set that “programming includes testing.”
  3. Use testing for profiling, coverage, benchmarks, and support.
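The same tests can serve double duty as benchmarks.  A sketch using Python's standard timeit module, where parse_record is a hypothetical function under test:

```python
import timeit

def parse_record(line):
    """Hypothetical function whose speed we track from release to release."""
    return line.strip().split(",")

# Correctness first: the test proves the function does what it claims.
assert parse_record("a,b,c\n") == ["a", "b", "c"]

# Then reuse the very same call as a benchmark; record the number with
# each release so a slowdown is caught just like a broken result.
elapsed = timeit.timeit(lambda: parse_record("a,b,c\n"), number=10_000)
print(f"parse_record: {elapsed:.4f}s for 10,000 calls")
```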

 

Program Independent

 

  1. Document your tests by description, class, files, and results.
  2. Define tests first as part of the programming effort.
  3. Write documentation and tests as part of the programming.

 

Release Specific

 

  1. Version tests and test data with each formal release.
  2. Create tests of bug fixes to verify the next release.
  3. Test as part of the release process.
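One way to capture a bug fix as a permanent test, so that every later release re-verifies the fix.  The bug number and function below are hypothetical:

```python
def normalize_name(name):
    """Hypothetical fixed function: bug 142 was a crash on empty input."""
    if not name:
        return ""
    return name.strip().title()

def test_bug_142_empty_input():
    """Regression test for bug 142, versioned with the release.

    Empty input must return '' instead of crashing; normal input
    must still be cleaned up as before the fix.
    """
    assert normalize_name("") == ""
    assert normalize_name(None) == ""
    assert normalize_name("  ada lovelace ") == "Ada Lovelace"

test_bug_142_empty_input()
```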

 

Overcoming the Burden of Testing – Creative Ideas

 

  1. Create a framework from which tests are derivable.
  2. Create the ability to create tests easily.
  3. Create the ability to run tests easily.

  4. Test changeable things but force consistent results

    DAY MONTH YEAR
    TOTAL
    LINK
    USER

  5. Create the ability to check tests easily.

    All tests pass or fail.
    Reproduce failures with documentation.
    Run tests in debug mode.

  6. Leave testability in the programs to capture bugs

    myprog --log "logfile.dat"
    myprog --record
    an option to turn recording on / off
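Such hooks can be left in the shipped program, off by default.  A sketch of the flags above using Python's argparse module (program name and flag behavior hypothetical):

```python
import argparse

def build_parser():
    """Testability flags stay in the shipped program, off by default."""
    parser = argparse.ArgumentParser(prog="myprog")
    parser.add_argument("--log", metavar="FILE",
                        help="write a session log for reproducing bugs")
    parser.add_argument("--record", action="store_true",
                        help="record inputs so a failure can be replayed")
    return parser

args = build_parser().parse_args(["--log", "logfile.dat", "--record"])
assert args.log == "logfile.dat" and args.record is True
```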

  7. Save your initial testing as “regression” tests.
  8. Automate tests by user definable categories.
  9. Audit tests by version and date run.
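The audit in item 9 can be as simple as writing a dated record of each test run alongside the version it ran against.  A sketch with a hypothetical record format:

```python
import datetime
import json

def audit_entry(version, results):
    """Hypothetical audit record: which tests ran, when, against what version."""
    return {
        "version": version,
        "date": datetime.date.today().isoformat(),
        "passed": sum(1 for ok in results.values() if ok),
        "failed": sum(1 for ok in results.values() if not ok),
        "results": results,
    }

entry = audit_entry("1.4.2", {"test_login": True, "test_report": False})
print(json.dumps(entry, indent=2))
```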