Test Specs and Plans
There are numerous cases where you might want to run the same test program in a different configuration (that is, with different settings), in order to measure or test a different aspect of the system. One of the simplest choices of test settings is whether to run a quick test or a thorough (long) test. Selecting between quick and long is a high-level concept, and corresponds to the concept of a test plan. The test plan selects different arguments for the tests to which this distinction applies.
For example, the arguments used to run a long filesystem stress test are different from the arguments used to run a long network benchmark test. For each of these individual tests, the arguments will differ from plan to plan.
Another broad category of test difference is the kind of hardware device or media on which you are running a filesystem test. For example, you might want to run a filesystem test on a USB-based device, but the results will likely not be comparable with the results for an MMC-based device. This is due to differences in how the devices operate at the hardware layer and how they are accessed by the system. Therefore, depending on what you are trying to measure, you may wish to measure only one type of hardware or the other.
The different settings for these different plans are stored in the test spec file. Each test in the system has a test spec file, which lists one or more specifications (or "specs") that can be incorporated into a plan. Each spec lists a set of variables. When a test plan references a particular spec, the variable values for that spec are set by the Fuego overlay generator during test execution.
In general, test plan files are global and have the names of categories of tests.
NOTE: A test plan may not apply to every test. In fact, the only plan that applies to every test is the default test plan. It is important for the user to recognize which test plans may be suitably used with which tests.
FIXTHIS - the Fuego system should handle this, by examining the test plans and specs, and only presenting to the user the plans that apply to a particular test.
For example, the storage-related test plans (usbstor, sata, and mmc) select test specs named 'usb', 'sata', and 'mmc' respectively.
Fuego also includes some test-specific test plans (for the Functional.bc and Functional.hello_world tests), but these exist more as examples to show how the test plan and spec system works than for any real utility.
A test plan is specified by a file in JSON format that indicates the test plan name and, for each test to which the plan applies, the spec that should be used for that test when it is run with this plan. The test plan file should have a descriptive name starting with 'testplan_' and ending with the suffix '.json', and the file must be placed in the engine/overlays/testplans directory.
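As an illustrative sketch of this layout (the field names 'testPlanName', 'tests', 'testName', and 'spec', the plan name, and the spec names shown here are assumptions, not copied from a shipping Fuego file):

```json
{
    "testPlanName": "testplan_smoketest",
    "tests": [
        { "testName": "Functional.hello_world", "spec": "default" },
        { "testName": "Functional.bc", "spec": "default" }
    ]
}
```

Each entry in the 'tests' array names one test and the spec that this plan selects for it; tests not listed in the plan are unaffected by it.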
The Fuego system includes test plan files, named according to this convention, that select these different behaviors.
Here is testplan_hello_world_random.json
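A minimal sketch of what such a plan could contain follows (the field names and the spec name 'random' are assumptions for illustration):

```json
{
    "testPlanName": "testplan_hello_world_random",
    "tests": [
        { "testName": "Functional.hello_world", "spec": "random" }
    ]
}
```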
The set of variables, and what they contain, is highly test-specific.
The test spec file is in JSON format, and has the name "spec.json".
The test spec file is placed in the test's home directory, which is based on the test's name:
This file is located in engine/overlays/tests/Functional.hello_world/spec.json
Here is the complete spec for this test:
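The following is a sketch of the shape such a spec file could take. The spec names and the argument values '-f' and '-r' are assumptions; only the empty default value of ARG and the total of three values are described in the surrounding text:

```json
{
    "testName": "Functional.hello_world",
    "specs": [
        { "name": "default", "ARG": "" },
        { "name": "hello-fail", "ARG": "-f" },
        { "name": "hello-random", "ARG": "-r" }
    ]
}
```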
During test execution, the variable FUNCTIONAL_HELLO_WORLD_ARG will be set to one of the three values shown, depending on which testplan is used to run the test.
The name of the argument is appended to the end of the test name to form the environment variable for the test. This can then be used in the base script as arguments to the test program (or for any other use).
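The naming rule above can be sketched in shell. This is a hypothetical illustration, not code from Fuego itself; the transformation (uppercase the test name, replace '.' with '_', then append the variable name) is inferred from the example variable FUNCTIONAL_HELLO_WORLD_ARG:

```shell
# Hypothetical illustration of the environment-variable naming rule:
# test name, uppercased with '.' mapped to '_', plus the spec variable name.
TESTNAME="Functional.hello_world"
VARNAME="ARG"

# tr maps each lowercase letter to uppercase and '.' to '_'
prefix=$(echo "$TESTNAME" | tr 'a-z.' 'A-Z_')
envvar="${prefix}_${VARNAME}"
echo "$envvar"    # prints FUNCTIONAL_HELLO_WORLD_ARG
```

The resulting name can then be referenced in the base script, e.g. as arguments to the test program.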
Note that in the default spec for hello_world, the variable ('ARG' in the test spec) is left empty. This means that during execution of this test with testplan_default, the program 'hello' is called with no arguments, which causes it to perform its default operation. The default operation of 'hello' is a dummy test that always succeeds.
A variable named "fail_case" should be declared in the test's spec JSON file. It should consist of an array of objects, each specifying a 'fail_regexp' and a 'fail_message', with an optional variable ('use_syslog') indicating that the item should be searched for in the system log instead of the test log.
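For illustration, such a declaration might look like the following fragment (the regular expressions shown are assumptions; the messages match the example environment variables shown later in this section):

```json
"fail_case": [
    {
        "fail_regexp": "some test regexp",
        "fail_message": "some test message"
    },
    {
        "fail_regexp": "Oops",
        "fail_message": "Bug or Oops detected in system log",
        "use_syslog": 1
    }
]
```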
The regular expression is used with grep to scan lines in the test log. If a match is found, then the associated message is printed, and the test is aborted.
These variables are turned into environment variables by the overlay generator, and are used by the function fail_check_cases, which is called during the 'post test' phase of the test.
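The scanning mechanism can be sketched in shell. This is a hypothetical sketch of the idea, not the actual fail_check_cases implementation: walk the numbered pattern variables, grep the test log for each, and report the matching message on a hit.

```shell
# Hypothetical sketch of fail-case scanning (NOT Fuego's fail_check_cases).
# Build a fake test log to scan.
TESTLOG=$(mktemp)
printf 'starting test\nsyntax error near line 3\n' > "$TESTLOG"

# Environment variables as the overlay generator might produce them
# (names shortened here for illustration):
FAIL_PATTERN_0="syntax error"
FAIL_MESSAGE_0="bc reported a syntax error"

i=0
result="PASS"
while true; do
    # Look up FAIL_PATTERN_<i>; stop when no more patterns are defined.
    eval "pattern=\${FAIL_PATTERN_$i:-}"
    [ -z "$pattern" ] && break
    if grep -q "$pattern" "$TESTLOG"; then
        # A fail case matched: print its message and mark the test failed.
        eval "echo \"\$FAIL_MESSAGE_$i\""
        result="FAIL"
        break
    fi
    i=$((i+1))
done
echo "result: $result"
rm -f "$TESTLOG"
```

A real implementation would also consult the 'use_syslog' flag to decide whether to scan the system log instead of the test log.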
Note that the above items would be turned into the following environment variables internally in the fuego system:
- FUNCTIONAL_BC_FAIL_PATTERN_0="some test regexp"
- FUNCTIONAL_BC_FAIL_MESSAGE_0="some test message"
- FUNCTIONAL_BC_FAIL_MESSAGE_1="Bug or Oops detected in system log"
Here is the full list:
The storage-related testplans (mmc, sata, and usbstor) allow the test spec to configure the following variables, as appropriate:
Both the 'bc' and 'hello_world' testplans are example testplans to demonstrate how the testplan system works.
The 'bc' testplans select different operations to test in 'bc', and the 'hello_world' testplans select different results to test in 'hello_world'.