Test Specs and Plans
{{TableOfContents}}

= Introduction =

Fuego provides a mechanism to control test execution using "test specs" and "test plans". Together, these allow for customizing the invocation and execution of tests in the system. A test spec lists variables, with their possible values for different test scenarios, and a test plan selects the set of variable values to use.

There are numerous cases where you might want to run the same test program in a different configuration (i.e. with different settings), in order to measure or test some different aspect of the system.

One of the simplest distinctions you might make is whether to run a quick test or a thorough (long) test. Selecting between quick and long is a high-level concept, and corresponds to the concept of a test plan. The test plan selects different arguments for the tests where this makes sense. For example, the arguments to run a long filesystem stress test are different from the arguments to run a long network benchmark test. For each of these individual tests, the arguments will differ from plan to plan.

Another broad category of test difference is the kind of hardware device or media on which you run a filesystem test. For example, you might want to run a filesystem test on a USB-based device, but the results will likely not be comparable with the results for an MMC-based device, due to differences in how the devices operate at the hardware level and how they are accessed by the system. Therefore, depending on what you are trying to measure, you may wish to measure only one or the other type of hardware.

The settings for these different plans are stored in the test spec file. Each test in the system has a test spec file, which lists the different specifications (or "specs") that can be incorporated into a plan. Each spec defines a set of variables. When a test plan references a particular spec, the variable values for that spec are set by the Fuego overlay generator during test execution.

In general, test plan files are global, and are named after categories of tests.

NOTE: a test plan may not apply to every test. In fact, the only one that applies to all tests is the default test plan. It is important for the user to recognize which test plans may suitably be used with which tests.

FIXTHIS - the Fuego system should handle this, by examining the test plans and specs, and only presenting to the user the plans that apply to a particular test.

= Test plans =

The only "real" test plans currently in Fuego are the "default" test plan, and some test plans that allow for selecting between different kinds of hardware devices that provide filesystems. Fuego includes a number of different filesystem tests, and these plans allow customizing each test to run with filesystems on USB, SATA, or MMC devices. The test plans that allow this selection are:
 * testplan_usbstor
 * testplan_sata
 * testplan_mmc

These plans select test specs named 'usb', 'sata', and 'mmc', respectively.

Fuego also includes some test-specific test plans (for the Functional.bc and Functional.hello_world tests), but these exist more as examples to show how the test plan and spec system works than for any real utility.

A test plan is specified by a file in JSON format, which indicates the test plan name and, for each test to which the plan applies, the spec that should be used for that test when it is run with this plan.
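For illustration, here is a minimal sketch of what a storage test plan such as testplan_usbstor might contain, following the format just described. The test entries shown are only examples (the test list in the actual file may differ); the point is that a single plan can list multiple tests, mapping each one to the same 'usb' spec:

{{{#!YellowBox
{
    "testPlanName": "testplan_usbstor",
    "tests": [
        {
            "testName": "Benchmark.bonnie",
            "spec": "usb"
        },
        {
            "testName": "Benchmark.iozone",
            "spec": "usb"
        }
    ]
}
}}}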
The test plan file should have a descriptive name starting with 'testplan_' and ending in the suffix '.json', and the file must be placed in the engine/overlays/testplans directory.

== Example ==

The test program from the hello_world test allows for selecting whether the test always succeeds, always fails, or fails randomly. It does this using a command line argument. The Fuego system includes test plans that select these different behaviors. These test plan files are named:
 * testplan_default.json
 * testplan_hello_world_fail.json
 * testplan_hello_world_random.json

Here is testplan_hello_world_random.json:

{{{#!YellowBox
{
    "testPlanName": "testplan_hello_world_random",
    "tests": [
        {
            "testName": "Functional.hello_world",
            "spec": "hello-random"
        }
    ]
}
}}}

= Test Specs =

Each test in the system should have a 'test spec' file, which lists different specifications and, for each one, the variables that can be customized for that test. Every test is required, at a minimum, to define the "default" test spec, which is the spec used by default when running the test.

The set of variables, and what they contain, is highly test-specific.

The test spec file is in JSON format, and has a name matching the test it relates to, followed by the suffix '.spec'. An example filename would be: Functional.hello_world.spec. The test spec file is placed in the Fuego test_specs directory. This is /home/jenkins/fuego/engine/overlays/test_specs inside a Fuego container, or just engine/overlays/test_specs in the fuego-core source repository.

FIXTHIS - should specify location of the test spec file in the test package.

== Example ==

The Functional.hello_world test has a test spec that provides options for executing the test normally (the 'default' spec), for succeeding or failing randomly (the 'hello-random' spec), or for always failing (the 'hello-fail' spec). This file is located in engine/overlays/test_specs/Functional.hello_world.spec

Here is the complete spec for this test:

{{{#!YellowBox
{
    "testName": "Functional.hello_world",
    "specs": [
        {
            "name":"hello-fail",
            "ARG":"-f"
        },
        {
            "name":"hello-random",
            "ARG":"-r"
        },
        {
            "name":"default",
            "ARG":""
        }
    ]
}
}}}

During test execution, the variable FUNCTIONAL_HELLO_WORLD_ARG will be set to one of the three values shown, depending on which test plan is used to run the test.

= Variable use during test execution =

Variables from the test spec are expanded by the overlay generator during test execution. The variables declared in the test spec files may reference other variables from the environment, such as those from the board file, those relating to the toolchain, or those from the Fuego system itself.

The name of the spec variable is appended (after an underscore) to the test name, upper-cased and with its '.' replaced by '_', to form the environment variable for the test. This can then be used in the base script as arguments to the test program (or for any other use).

== Example ==

In this hello_world example, here is what the actual program invocation looks like. This is an excerpt from the base script for this test (/home/jenkins/tests/Functional.hello_world/hello_world.sh):

{{{#!YellowBox
function test_run {
    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello $FUNCTIONAL_HELLO_WORLD_ARG"
}
}}}

Note that in the default spec for hello_world, the variable ('ARG' in the test spec) is left empty. This means that during execution of this test with testplan_default, the program 'hello' is called with no arguments, which causes it to perform its default operation. The default operation for 'hello' is a dummy test that always succeeds.
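As a side note, the environment variable used above (FUNCTIONAL_HELLO_WORLD_ARG) follows the naming rule described earlier. Here is a small shell sketch (illustrative only, not part of Fuego) showing how such a name can be derived from a test name and a spec variable name:

{{{#!YellowBox
#!/bin/sh
# Illustrative only: derive the environment variable name for a
# spec variable, following the naming rule described above.
test_name="Functional.hello_world"
spec_var="ARG"

# Upper-case the test name, replace the '.' with '_', and
# append an underscore and the spec variable name.
env_var="$(echo ${test_name} | tr 'a-z.' 'A-Z_')_${spec_var}"

echo ${env_var}    # prints: FUNCTIONAL_HELLO_WORLD_ARG
}}}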
= Specifying failure cases =

The test spec file can also specify one or more failure cases. These represent string patterns that are scanned for in the test log, to detect error conditions indicating that the test failed. The syntax for this is described next.

== Example of fail case ==

The following example of a test spec (from the Functional.bc test) shows how to declare an array of failure tests for a test. There should be a variable named "fail_case" declared in the test spec JSON file, and it should consist of an array of objects, each one specifying a 'fail_regexp' and a 'fail_message', with an optional variable (use_syslog) indicating to search for the item in the system log instead of the test log. The regular expression is used with grep to scan lines in the test log. If a match is found, then the associated message is printed, and the test is aborted.

{{{#!YellowBox
{
    "testName": "Functional.bc",
    "fail_case": [
        {
            "fail_regexp": "some test regexp",
            "fail_message": "some test message"
        },
        {
            "fail_regexp": "Bug",
            "fail_message": "Bug or Oops detected in system log",
            "use_syslog": 1
        }
    ],
    "specs": [
        {
            "name":"default",
            "EXPR":"3+3",
            "RESULT":"6"
        }
    ]
}
}}}

These variables are turned into environment variables by the overlay generator, and are used with the function [[function_fail_check_cases|fail_check_cases]], which is called during the 'post test' phase of the test.

Note that the above items would be turned into the following environment variables internally in the Fuego system:
 * FUNCTIONAL_BC_FAIL_CASE_COUNT=2
 * FUNCTIONAL_BC_FAIL_PATTERN_0="some test regexp"
 * FUNCTIONAL_BC_FAIL_MESSAGE_0="some test message"
 * FUNCTIONAL_BC_FAIL_PATTERN_1="Bug"
 * FUNCTIONAL_BC_FAIL_MESSAGE_1="Bug or Oops detected in system log"
 * FUNCTIONAL_BC_FAIL_1_SYSLOG=true

= Catalog of current plans =

As of January 2017, Fuego has only a few test plans defined. Here is the full list:
 * testplan_default
 * testplan_mmc
 * testplan_sata
 * testplan_usbstor
 * testplan_bc_add
 * testplan_bc_mult
 * testplan_hello_world_fail
 * testplan_hello_world_random

The storage-related test plans (mmc, sata, and usbstor) allow the test spec to configure the following variables, as appropriate:
 * MOUNT_BLOCKDEV
 * MOUNT_POINT
 * TIMELIMIT
 * NPROCS

Both the 'bc' and 'hello_world' test plans are example test plans that demonstrate how the test plan system works. The 'bc' test plans select different operations to test in 'bc', and the 'hello_world' test plans select different results to test in 'hello_world'.
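To show how these pieces fit together for the storage plans, here is a minimal sketch of what the storage-related specs in a filesystem test's spec file might look like. The test name, device paths, mount points, and values below are hypothetical; only the variable names come from the list above:

{{{#!YellowBox
{
    "testName": "Benchmark.fs_perf",
    "specs": [
        {
            "name":"usb",
            "MOUNT_BLOCKDEV":"/dev/sda1",
            "MOUNT_POINT":"/mnt/usb",
            "TIMELIMIT":"300",
            "NPROCS":"4"
        },
        {
            "name":"mmc",
            "MOUNT_BLOCKDEV":"/dev/mmcblk0p1",
            "MOUNT_POINT":"/mnt/mmc",
            "TIMELIMIT":"300",
            "NPROCS":"4"
        },
        {
            "name":"sata",
            "MOUNT_BLOCKDEV":"/dev/sdb1",
            "MOUNT_POINT":"/mnt/sata",
            "TIMELIMIT":"300",
            "NPROCS":"4"
        }
    ]
}
}}}

With such a spec file, running the (hypothetical) Benchmark.fs_perf test with testplan_usbstor would select the 'usb' spec, and the overlay generator would set BENCHMARK_FS_PERF_MOUNT_BLOCKDEV, BENCHMARK_FS_PERF_MOUNT_POINT, and so on, following the naming rule described earlier.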