language


NAME - tester language

tester language - tester language description


DESCRIPTION

This page provides a detailed description of the tester language.

The tester manual overview refers to all other sections.


NOTES

TEST LANGUAGE DESCRIPTION
Tests are listed by description using keywords.

Keywords on the left margin direct testing, so comments can appear almost anywhere. The keywords are described below in related groups.

keywords
Description keywords should contain a unique description:

  • tests TO describe this file of tests
  • default TO describe a group of tests
  • test TO describe a specific test

This unique description helps find tests.

Multiple line commands can span as many lines as needed to complete the keyword. Note the special section on endings:

  • names DEFINES names for substitution, $name
  • envall FOR overall test environment commands
  • env FOR test environment commands
  • beforeall FOR overall or group set up commands
  • before FOR test set up commands
  • run FOR test run commands
  • prune FOR test pruning commands
  • after FOR test tear down commands
  • afterall FOR overall or group tear down commands

Multiple line commands can be ended with:

  • comment FOR a # symbol in column 1
  • keywords FOR a new keyword
  • end FOR the "end" keyword to end all tests
  • EOF i.e. an end of file condition

Multiple lines of commands may have blank lines sprinkled in, which are ignored unless it is the run command. Use the -all option to make blank lines significant in all commands.

Some keywords are limited to one line of information:

  • expect FOR expected output files
  • exit FOR expected exit value
  • diff FOR diff command to check test results
  • save FOR other output to save

See the Test Specification Example section for a sample.


Steps To Effective Testing
Describe the tests and define names and classes.

Define beforeall, afterall and envall commands.

Define default groups that have similar characteristics.

Define each product-specific test to succeed.

tests
The text between these two hatched lines represents the test specification. Any command must be in the first column as shown:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
tests Testing a small database
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Defining tests includes setting up names, environment variables and any set up and tear down commands that are shared.

names
Use the names keyword to define a set of names which will be substituted throughout the test specification. A tester can achieve independence of location and data by effective naming.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
names
COPY /bin/cp
DATA smalldb/contrast/data
DELETE /bin/rm -f
ECHO /bin/echo
MOVE /bin/mv
SOURCE /home/master/test
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Use names to specify command locations that vary from platform to platform. Use names to specify the location of programs to test. Anything that can be moved can be named so that a name file can be used to test out a new environment. This achieves location independence.

System defined names, discussed later, allow the tester to refer to unique names that may change with every test run but stay consistent during the run of an individual test.

Note that a name is used within the test document with a dollar symbol in front of it. If you need to use the dollar symbol in your test document then use a backslash to quote it, i.e. \$DATA.
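The substitution described above can be pictured with a small shell sketch. This is an assumption about the mechanics, not the tester's actual code, and the \$NAME escape handling is reduced to a comment:

```shell
# Hypothetical sketch of one name substitution pass: replace $NAME
# with its value wherever it appears in the test text.
# (The real tester also honors \$NAME as a literal dollar form; omitted here.)
substitute_name() {
    # $1 = name, $2 = value; filters stdin to stdout
    sed "s|\$$1|$2|g"
}
```

For example, `printf 'run $COPY a b\n' | substitute_name COPY /bin/cp` produces `run /bin/cp a b`.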

envall
The envall keyword defines environment commands which apply to the entire set of tests. These commands are ended by a new test keyword, the end of file or a shell comment.

This sample shows that the environment variable WORLD is set for this test specification.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
envall setenv WORLD mars
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

It must be stated that this environment variable may not be in force for the execution of all tests; it all depends on how you drive the tester. Check out the -queuejobs option, which ensures that the entire test session is done as one stream of commands.

The -restart option will ensure that each test is done by itself, so the WORLD environment variable would go away even before the first test is executed. Otherwise, use the env command to give a shell variable to a specific test.

All environment variables are shell specific. In this case, the setenv command applies only to the C shell, csh. The Bourne shell form would look like:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
envall WORLD=mars
export WORLD
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The default shell is the Bourne shell, so this is the form expected by default. The variable has to be exported to make its value visible to any subshells.
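The export requirement can be checked directly in any Bourne-style shell; nothing here is tester-specific:

```shell
# A variable assigned in the current shell is invisible to subshells...
WORLD=mars
before_export=$(sh -c 'echo "${WORLD:-unset}"')   # subshell does not see WORLD yet
# ...until it is exported into the environment.
export WORLD
after_export=$(sh -c 'echo "${WORLD:-unset}"')    # now the subshell sees "mars"
echo "$before_export $after_export"
```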

beforeall
This sample shows how a set of instructions may be carried out before all tests.

Even as the tester executes, it may not be actually executing a test. By default the tester just creates a shell to execute a series of tests. See the -execute option for a way to have it do things immediately.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
beforeall $ECHO Using: $SOURCE with data: $DATA
$COPY $SOURCE/$DATA .
$COPY $SOURCE/myVitalFile
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

In this case global files may be put in place to allow all tests to operate on common data so that expected results are the same each time.

This beforeall keyword can also apply to a default group of tests. In that case the beforeall commands execute before all tests in that specific group.

afterall
This sample shows a set of commands to clean up after all tests have executed.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
afterall $ECHO Finished testing
$DELETE $DATA
$DELETE $SOURCE/myVitalFile
#
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

This afterall keyword can also apply to a default group of tests. In that case the afterall commands execute after all tests in that specific group.

default
Grouping tests with the default command allows the tester to make test templates which match the logical flow of a program. The descriptions in the test document should read as an outline of the whole testing effort.

This shows a group for error testing:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
default Exception io handling missing a vital file
class regres io error exception
beforeall $MOVE myVitalFile saveVitalFile
afterall $MOVE saveVitalFile myVitalFile
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The idea of a test group is optional. The test specification will run without defining any specific test group.

The following keywords may be specified as a ``default'' test group:

  • after TO tear down commands for each test
  • afterall TO tear down commands for all tests in this group
  • before TO set up commands for each test
  • beforeall TO set up commands for all tests in this group
  • class FOR keywords describing the class of tests
  • diff FOR diff command for comparing results of each test
  • env FOR environment commands for each test
  • exit FOR expected exit code value of each test
  • expect FOR expected output by result tag
  • prune TO define commands to prune test results
  • save FOR saved output by result tag

The afterall and beforeall commands will be executed after and before all tests that follow the definition of this default test group.

Any default keywords apply to each test that follows but may be overridden by any specific test definition using the same keyword. The override of that keyword only applies to the current test. The next test goes back to the default keyword.

expect
The expect keyword specifies the expected output for a test. Expected output is compared with the diff routine by default.

This shows an expanded group for error testing:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
default Exception io handling missing a vital file
class regres io error exception
expect stderr
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

In this case any test in this group will expect Standard Error output, meaning the Standard Error output is remembered when the test is executed. Errors are often written to standard error.

The expect keyword can use the following keywords to specify expecting certain output:

  • stderr .. standard error output
  • stdout .. standard out output
  • both .. both standard error and standard out output
  • same .. standard output sent to the same file
  • tag .. user specified name of file extension

Really, the heart of testing is the ability to back up these expected results, restore files for checking results, and reproduce results.

tag
A user might expect to see other output which is related to a specific test. If so, then use any lower case value for a tag on the expect or save keyword.

The output has to be explicitly put there by the specific test as shown:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test Writing out specific logs
run grep "analysis" $FILESPEC.log > $TESTSPEC.log
expect log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

An expect file is created by the name of $TESTSPEC.log. The tester keeps expect files to compare the results to $TESTSPEC.R_LOG. Any mismatched data will be found in the $TESTSPEC.log.CMP file.

Tests typically FAIL or SUCCEED, but not always. The save keyword will still create a results file but will save it to $TESTSPEC.S_LOG.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test Writing out specific logs
run grep "analysis" $FILESPEC.log > $TESTSPEC.log
save log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

This particular test will always succeed because it has no exit or expect conditions. It is still better to check the exit value to verify that the program itself ran successfully as shown:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test Writing out specific logs
run grep "analysis" $FILESPEC.log > $TESTSPEC.log
exit 0
save log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Now as long as the grep command completes successfully, the output will be written to $TESTSPEC.log and saved to $TESTSPEC.S_LOG. Any expect or save files are automatically cleaned up in this process if the test is successful.
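The comparison behind an expect check can be pictured with a small shell sketch. This is an assumption about the mechanics, not the tester's actual code: diff the new output against the stored ``R_'' reference, keep a .CMP file on mismatch, and clean up on success.

```shell
# Compare fresh test output against a stored reference file.
# On success the .CMP file is removed; on failure it holds the diff.
check_expect() {
    new=$1 ref=$2
    cmpfile=$new.CMP
    if diff "$ref" "$new" > "$cmpfile" 2>&1; then
        rm -f "$cmpfile"          # matched: clean up the comparison file
        echo SUCCEED
    else
        echo FAIL                 # mismatched data stays in $cmpfile
    fi
}
```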

diff
Many tests require different compare routines from the standard diff. A binary output requires the use of a binary compare like the unix cmp utility.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test Executable scan product build
class final build
run cc scan.c > $TESTSPEC.bin
expect bin
diff cmp
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Notice how we have to monkey around with the name to get the expected output to be recognized. Check out the prune keyword for an alternative approach.

prune
After a test is run, its output is often not yet ready for comparison. Dates and numbers often exist in the standard test data that vary from run to run. It is essential to be able to filter out data or change file names to have consistent tests.

The prune keyword allows any number of commands for filtering the expected data before it is processed for success or failure. If the data is generated, it is also pruned before the generated data is saved.

Here is an example of testing the executable of a product without having to change the development process.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test Executable scan product build
class final build
run make scan
prune mv scan $TESTSPEC.bin
after make clean
expect bin
diff cmp
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

This makes use of the binary compare in the diff section as well as an alternate expected output.

Another example filters out dates:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test scan for analysis data
run scan "analysis"
prune dateout.pl $TESTSPEC.so
after rm -f *bak
expect both
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

This scans for lines of analysis with a scan routine and filters today's date with a dateout.pl script. The ``expect both'' will capture both standard error and standard output from the run command.

The clean up in after removes any backup file created by the dateout.pl utility. If the files do not match the previously generated data, then the test will FAIL.
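As an illustration of the kind of filter a prune command might run (dateout.pl itself is not shown; this sketch assumes ISO-style YYYY-MM-DD stamps), a sed one-liner can normalize dates so runs on different days compare equal:

```shell
# Replace YYYY-MM-DD date stamps with a fixed token so output
# generated on different days still matches the expected results.
prune_dates() {
    sed 's/[0-9]\{4\}-[0-9]\{2\}-[0-9]\{2\}/DATE/g'
}
```

For example, `printf 'analysis run 2024-01-31 ok\n' | prune_dates` produces `analysis run DATE ok`.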

after
Each test can clean up after itself by placing clean up commands which remove work files.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test delete of grok data
run delete.pl "grok" $FILESPEC.data
after rm -f *bak
expect both
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The script delete.pl leaves backup files of the before delete image for each file processed. So the clean up must come along and remove those backup files with the ``rm -f *bak'' command.

The problem with this example is that if $FILESPEC.data is shared by more than one test, the delete.pl routine has just destroyed global data. Look at the before section to see a solution.

Often the development tools like make have commands which support this clean up as well. See the prune section for a sample of ``make clean'' in the after section.

Use the -keep option to skip the after and afterall sections of the current test session. This may help in debugging tests.

before
Each test can do special set up commands which make this test safe from other tests. A key principle in testing is to make each test stand on its own, independent of other tests. See how the before area makes a unique copy of a shared file for this test, making it safe for all other tests to use the same data:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test delete of grok data
run delete.pl "grok" $TESTSPEC.data
before cp $FILESPEC.data $TESTSPEC.data
after rm -f *bak
rm -f $TESTSPEC.data
expect both
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

At this point the after section is expanded to clean up the test specific data, $TESTSPEC.data. Use the -keep option to skip the after and afterall sections of the current test session. This may help in debugging tests.

exit
A simple check for any program is the value returned by a program when it exits. This can be negative, positive or zero.

Successful tests like the one shown generally return zero.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test scan for analysis data
run scan "analysis"
exit 0
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Tests that check for user errors often return a positive number indicating the error. In this example the return of exit 2 should occur with a -bad option:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test scan for analysis data, bogus option
run scan -bad "analysis"
exit 2
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Tests that check for operating system errors often return a negative number like this example:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test scan for analysis data, no data file
run scan -file badfile "analysis"
exit -141
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

If the exit value varies from the numerical value specified, then that specific test will FAIL.
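The exit check reduces to comparing the shell status of the run command with the declared value, roughly as in this sketch (an assumption about the mechanics, not tester code; note that a Unix shell reports status in the range 0-255, so negative values are a tester convention):

```shell
# Run a command and compare its exit status with the expected value.
check_exit() {
    expected=$1
    shift
    "$@"                      # the test's run command
    actual=$?
    if [ "$actual" -eq "$expected" ]; then
        echo SUCCEED
    else
        echo FAIL
    fi
}
```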

env
Many tests require some special environment variables to be applied before running the test. This is often combined with a name to make the test run independent of location.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test scan for analysis data, no data file
env DATAFILE=bogus
export DATAFILE
run scan "analysis"
exit -141
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

See greater detail on this in envall.

class
The concept of a class allows the tester to create categories to manage tests. Anyone wanting to run tests can refer to these classes of tests by name.

It helps if these classes are documented in the test specification so that others can use them to run the tests that apply to them. The use of the -class option allows a tester to include tests that refer to that class. The use of the -unclass option allows a tester to exclude tests.

Sample classes might be:

  • basic.. basic core functionality tests
  • critical.. critical tests
  • dangerous.. dangerous tests
  • error.. tests checking out errors
  • fix.. test that has been fixed
  • hog.. tests that hog memory or resources
  • kf.. known failures
  • suite.. run the product test suite
  • regres.. regression tests
  • archive.. save the product test suite
  • utility.. utility tests

The class keyword specifies class names that apply to all tests that follow the definition of this default test group. If a test also specifies a class then it is in addition to the group class values.

Finally, here are some samples given the test classes mentioned above using -class and -unclass options:

Benchmarking critical tests: tester -x -bench -class critical t.scan

Checking all known failures: tester -x -class kf t.scan

Checking fixes of known bugs: tester -x -class kf+fix t.scan

Run test suite minus bugs: tester -x -class suite -unclass kf t.scan
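One plausible reading of the -class / -unclass selection (an assumption about the semantics, not taken from the tester itself) is that a test runs when it carries at least one included class and none of the excluded ones:

```shell
# $1 = classes on the test, $2 = -class list, $3 = -unclass list
# (all space separated). Returns 0 when the test should run.
selected() {
    for c in $3; do
        case " $1 " in *" $c "*) return 1 ;; esac   # excluded class wins
    done
    for c in $2; do
        case " $1 " in *" $c "*) return 0 ;; esac   # any included class selects
    done
    return 1
}
```

Under this reading, a test classed "suite kf" is skipped by "-class suite -unclass kf".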

system defined names
There are predefined test names created automatically for the convenience of the tester.

The following is a list:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
$DEFCLASS    - description of test group class
$DEFDESC     - description of test group
$FILENAME    - name of current file
$FILESPEC    - complete name of current file (TESTDIR/FILENAME)
$TESTCLASS   - name of current test class
$TESTCMD     - current test command before shell substitution
$TESTDESC    - description of current test
$TESTDIR     - runtime test directory, resolved by shell
$TESTID      - test identifier if specified on command line
$TESTNAME    - name of current test (FILENAME<id><test_number>)
$TESTNUM     - number of current test
$TESTSCMD    - command line invoking tests (quoted with 'cmd')
$TESTSDESC   - description of tests
$TESTSPEC    - complete name of test (TESTDIR/FILENAME<id><test_number>)
$TESTVERSION - complete tester version (major.minor date time)
$TESTVER     - short tester version (major.minor)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Things concerning a group begin with $DEF. See $DEFCLASS and $DEFDESC.

Things concerning a test begin with $TEST. See $TESTCLASS, $TESTCMD, $TESTDESC, $TESTID, $TESTNAME, $TESTNUM, $TESTSPEC.

The tester version can be seen with $TESTVER and $TESTVERSION.

The tester command is seen with $TESTSCMD.

The tester runtime directory is resolved at test time and placed in $TESTDIR. This ensures all shell output and command output will go to the test subdirectory during complex testing. Also, the tester can always return for error recovery using cd $TESTDIR.

The description of the test specification can be found in $TESTSDESC; the specification itself is in the file named by $FILENAME. At test time the name $FILESPEC includes the run time directory.

save
A test often has output that varies from test to test due to dates, numbers, a growing database and so forth. This data is often useful to a tester and can be saved for analysis. This can extend the use of a tester as a collector.

Errors may come with timestamps, file inode numbers and so forth. But the validity of the error may be just the exit number which proves the error was caught (a successful test). The save command will keep the standard error around as informative output for later analysis.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test Invalid file found
run scan "analysis" < $TESTSPEC.data
before cp $FILESPEC.bogus $TESTSPEC.data
after rm -f $TESTSPEC.data
save stderr
exit 3
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Note that the above test succeeds even though the standard error output changes from run to run. The FAIL check is looking for an exit of the value 3.

The save output is renamed in this case to $TESTSPEC.S_SE. Any save output is renamed using the ``S_'' prefix to distinguish it from expected results, ``R_''.

A user tag may be applied in a test to save user output that will vary.

test
A ``test'' is defined by a unique description following the ``test'' keyword. It must have a ``run'' command associated with it to do anything. The run command can span multiple lines. Other commands supported are:

  • after DEFINE tear down commands for a test
  • before DEFINE set up commands for a test
  • diff DEFINE diff command for comparing results
  • env DEFINE environment commands for a test
  • exit DEFINE expected exit code value
  • expect DEFINE expected output by result tag
  • prune DEFINE commands to prune test results
  • save DEFINE saved output by result tag

These are the same keywords shown as default keywords. To override a default put the specific test value after ``test''. This override only applies to the current test.

Tests are fairly simple when set up this way. Shown below are two tests with a simple run command.

run
The most simple run case is executing one command line as shown below:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test Looking for a vital file with a read option
run scan -read myVitalFile
#
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The run line must be terminated by another keyword, the end of the file, an end keyword or a comment. In this case we have put the shell comment symbol to ensure that the test is run as a single command.

A multiple line command would look like:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test looking for a keyword with grep
run grep "who"
this will look
through all these words
given as input lines
and who can guess
which line will survive
the test of this input?
end
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

This multiple line input will run the ``grep'' command and read in as input all lines up to the keyword end . As you can guess, only the line ``and who can guess'' will be found by grep which looks for any lines with the word ``who'' in it.

Name substitution is allowed anywhere in a test. So this is a test template for searching for generic things with any search routine:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test looking for a $WORD with $FIND
run $FIND $WORD < $FILESPEC.data
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The find routine looks for a word in a file which is named according to the name of the test specification. For more info see $FILESPEC. In this case the end of the file indicates this is a one line command.

EOF
As shown in the run section, the end of a file will terminate a run command. It also terminates any other multiple line command and indicates this is the last of all tests.

When running the -queuejobs option, this is the point where all tests are completed and put together as a single stream. In most cases tests are built in chunks and run as they occur, as with -execute.

end
The end keyword indicates that this is the end of the test specification and gives the tester a place to put other information which will not affect any test.

It is the natural place to put tester notes, to document classes and to discuss any special usage notes for this file.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
end
Test Overview
  Non-executable tests
    Valid Arguments
      positional arguments vs options
      duplicate options
      duplicate options long names
    Invalid Arguments
      no argument specified
      no filename specified
      invalid format
      invalid long format
      no format specified
      invalid option
  Executable tests
    Valid data with different sizes: 4, 8, 16, 32
    clean up output files afterward after saving "ls -l" & sum
    remove date and time information that varies from run to run
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

comment
Comments are easy to include in the test specification. Any white space in the first column indicates that what follows is a comment. Use white space to spread out the test and to make it readable.

Any non-keyword in column one is ignored.

And any blank line is also ignored in a test if it falls inside of any multiple line command except the run command.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        Testing sh, csh, korn, tcsh
tests testing shells
names
COMMENT #
beforeall echo Starting a test of shells
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Isn't the above easier to read than:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
# Testing sh, csh, korn, tcsh
tests testing shells
names
COMMENT #
beforeall echo Starting a test of shells
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Any shell comment is honored using the pound sign in the first column. If you need to pass through shell comments, then use name substitution with a keyword as shown below:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
names
COMMENT #
OPTIONS -xv
SHELL /bin/sh
run $SHELL $OPTIONS
$COMMENT Test of this input against a series of shell commands
$COMMENT ... first, this comment is universal
$COMMENT ... second, a command should just execute
date
exit 0
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -


predefined test names
See the system defined names list for a quick overview of all these names. The following describes each in detail.

$DEFCLASS
The $DEFCLASS variable, when used in a test specification, substitutes the exact class names given by the default class keyword.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
default valid tests
class regres io
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

For example, the entry above would substitute:

``regres io''

for this name. This lets the test creator define useful group classes.

Classes are used to select specific tests. It is recommended that these names be documented. If possible adopt names that will be in common use. Like:

regres - regression test for a complete regression suite.

io - input output test.

hog - testing group that takes up all of memory.

$DEFDESC
The $DEFDESC variable, when used in a test specification, substitutes the exact description given in the default command that describes that group of tests.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
default valid tests
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

For example, the entry above would substitute:

``valid tests''

for this name. This lets the test creator uniquely define test groups.

$FILENAME
The $FILENAME variable, when used in a test specification, substitutes the exact name given to the test specification file. This is often used to create a global resource available to all tests since they will all see the same base $FILENAME.

To test the program ``scan'' one might create a test specification called t.scan. In that case any reference to $FILENAME returns the string ``t.scan''.

In creating a large number of tests it is important to choose some conventions that allow tests and test data to stand out and be unique.

In a test specification called t.scan we see the entry below for clean up.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
afterall rm -f $FILENAME.data
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

For example, the entry above would substitute ``t.scan'' for this name. This lets the test creator remove global data used for testing.

In the next section, $FILESPEC includes the complete test file specification. It is safer to use $FILESPEC, which includes $TESTDIR.

$FILESPEC
This variable defines the name of the current test specification at runtime, which is a combination of two system predefined variables as shown below:

$TESTDIR + $FILENAME

So the t.scan file can always be found as $FILESPEC. A test running in /test/scan will resolve to:

/test/scan/t.scan

In complex tests this allows a tester to get the original $FILENAME without trying to keep track of directory movements. A tester can create a common file using $FILESPEC as a root name as shown:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
tests Testing my program
beforeall cp mydata $FILESPEC.data
afterall rm $FILESPEC.data
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

$TESTCLASS
The $TESTCLASS variable, when used in a test specification, substitutes the exact class names given by the class keywords. This is additive, so any default class names are added to a specific test's set of class names.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
default valid tests
class regres io
exit 0
test writing to a file
class write
run write mydata
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

For example, the entry above would substitute:

``regres io write''

for this name. This lets the test creator define useful test classes. Test classes are used to select specific tests. It is recommended that these names be documented. If possible adopt names that will be in common use and test specific. Like:

kf - known failures for outstanding bugs testing.

dangerous - dangerous test which may reboot the system for example.

critical - critical test that must pass for certification.

hog - testing that takes up all of memory.

$TESTCMD
The $TESTCMD variable allows the tester to record the exact command executed at the time of the test. It will not contain any shell substitutions or output redirection.

The test below shows its use to document a test:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        Testing sh, csh, korn, tcsh
tests testing shells
names
COMMENT #
beforeall echo Starting a test of shells
before echo Testing command: $TESTCMD
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

$TESTDESC
The $TESTDESC variable, when used in a test specification, substitutes the exact description given in the test command that describes a specific test.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
test changing an employee last name
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

For example, the entry above would substitute:

``changing an employee last name''

for this name. This lets the test creator uniquely define tests.

These descriptions can form an outline of the test plan so that the test specification will flow naturally from group to group and test to test.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
...
default error testing
...
test invalid filename, bogus
...
test invalid option specified, -bonkers
...
test no system library found, $MYLIB
...
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

The above is a typical set of descriptions as a program begins to wring out all possible errors that a user might see.

$TESTDIR
This is a keyword reserved for use by the shell at run time. It resolves to the directory where the tester is invoked. All tester files which manage test inputs, outputs, comparisons and results will be written to this test directory.

For complex tests that create output in strange places, this can be used to copy or move the data back to a place where the tester will see it.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
prune cp mydata.out $TESTSPEC.out
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Notice that $TESTSPEC and $FILESPEC implicitly use $TESTDIR to know where to place the test data.

An explicit use of it might be:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
after
echo "Test directory: $TESTDIR"
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

However, since the test directory is subject to change, it is usually not used directly. Most recording of tests is done with $TESTNAME or $TESTDESC to avoid test files that vary by test location.

$TESTID
The -tag id option of tester allows an identifier to be given to tests just before they run. This tag is substituted for $TESTID, which, if specified, becomes part of $TESTNAME.

tester -class regres -execute -tag foo t.scan

This example gives the identifier ``foo'', which will be substituted for any $TESTID within the test specification and becomes part of $TESTNAME.

By creating unique tags a tester can reuse the same test file over and over. For example, one file of tests may test out operations across three operating systems. MAC, PC and UNIX would be the unique tags.
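
As a hedged illustration of that reuse (the command form follows the -tag example above; the MAC, PC and UNIX tags are the hypothetical ones named in this paragraph), the same specification file might then be run once per platform:

tester -class regres -execute -tag MAC t.scan
tester -class regres -execute -tag PC t.scan
tester -class regres -execute -tag UNIX t.scan

Each run produces distinctly named tests and result files, since the tag becomes part of every $TESTNAME.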

$TESTNAME
This variable defines the name of the current test, a combination of three predefined system variables as shown below:

$FILENAME + $TESTID + $TESTNUM

So the t.scan file with the identifier foo, on the 5th test, will be named:

t.scanfoo005

Tests are labeled from 001 to 999, which sets a limit of 999 tests per test specification. See Test Ordering.
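
The composition above can be sketched in plain shell. This is an illustrative sketch, not the tester's actual code; the variable names merely mirror the tester's predefined ones:

```shell
# Hypothetical sketch of how $TESTNAME is composed, per the
# $FILENAME + $TESTID + $TESTNUM formula in this manual.
FILENAME=t.scan              # the test specification file
TESTID=foo                   # identifier given with -tag
TESTNUM=$(printf '%03d' 5)   # test numbers are zero-padded to three digits
TESTNAME="${FILENAME}${TESTID}${TESTNUM}"
echo "$TESTNAME"             # t.scanfoo005
```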

$TESTNUM
As tests are created they are given a number for each test in a test specification, numbered sequentially from 1 to 999. The value of $TESTNUM will always be three digits so that tests remain in true alphabetic order.
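
To see why the padding matters, compare how padded and unpadded numbers sort; this is a stand-alone shell demonstration, not part of the tester itself:

```shell
# Unpadded numbers break alphabetic order: "10" sorts before "2".
printf '%d\n' 2 10 | sort
# Zero-padded numbers keep alphabetic order equal to numeric order.
printf '%03d\n' 2 10 | sort
```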

Sometimes it is useful to use the ttags routine to track tests since the numbering of tests can be difficult to follow.

By default this test numbering is preserved in the test order database. See Test Ordering for more information.

$TESTSPEC
This variable defines the name of the current test at run time, a combination of four predefined system variables as shown below:

$TESTDIR + $FILENAME + $TESTID + $TESTNUM

So the t.scan result file with the identifier foo, on the 5th test, running in /test/scan will be named:

/test/scan/t.scanfoo005.result

Tests are labeled from 001 to 999, which sets a limit of 999 tests per test specification.
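
The run-time path above can likewise be sketched in shell. Again an illustrative sketch under the manual's formula, with a "/" separator assumed between the directory and the file name:

```shell
# Hypothetical sketch of the $TESTSPEC result-file path, per the
# $TESTDIR + $FILENAME + $TESTID + $TESTNUM formula in this manual.
TESTDIR=/test/scan           # where the tester was invoked
FILENAME=t.scan
TESTID=foo
TESTNUM=005
TESTSPEC="${TESTDIR}/${FILENAME}${TESTID}${TESTNUM}"
echo "${TESTSPEC}.result"    # /test/scan/t.scanfoo005.result
```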

$TESTSCMD
This variable is a string showing the exact command that called for these tests to be run or generated. It will be quoted with single quotes.

$TESTSDESC
The $TESTSDESC variable, when used in a test specification, substitutes the exact description given in the tests command that describes a test specification.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
tests testing the scan program
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

For example, the entry above would substitute:

``testing the scan program''

$TESTVERSION
The test program has a version string from RCS which shows a release number along with a date timestamp. Use $TESTVERSION to record the complete tester version along with a group of tests.

This in no way creates version control for the group of tests being run. To do that, use the names feature of the tester language to create your own $Version variable.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
names
Version 1.0 22 Jan 91
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

$TESTVER
The test program has a version string from RCS which shows a release number. Use $TESTVER to include the tester version within a group of tests.

This in no way creates version control for the group of tests being run. To do that, use the names feature of the tester language to create your own $Version variable.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
names
Version 1.0
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -


FAIL
Testing is an art, and often an ad hoc attempt to patch a series of problems. Most often, tests are lost in the process of development and rarely passed forward intact to other groups. This tester is written to address the problems of testing in general. It is widely applicable to program development. It is not fancy; it just does the job.

The distinction between failure and success is problematic, so the following guidelines apply:

test for success
The definition of failure and success is key to developing a good test strategy. Test for success even when looking for failure.

All tests that fail will show up in the $TESTNAME.FAILED file, so they should stand out like a sore thumb. A summary of failed tests will appear in the $FILESPEC.result file.

So once more: make all tests succeed when they find the expected results. Then the failing tests become work for the tester to resolve. These tests become known failures, and known failures are managed with fixes. The class and -unclass features of the tester can be used to manage these problems.

testing principles
Some other principles that apply to robust code:

o develop tests first
o test basic functionality
o test error possibilities
o test common user scenarios
o build benchmarks for critical functions

The tester achieves automation by using a test specification. Coverage is easier to achieve and maintain by automating each step:

o automate the test making
o automate the test gathering
o automate the testing

And once this exists a benchmark is just another test.

