test execution flow outline
Here is an outline of the flow of control for test execution in Fuego.
This example is for the fictional test Functional.foo:
- Jenkins - decides there is a test to perform (maybe based on user input)
  - /var/lib/jenkins/jobs/<job_name>/config.xml has a shell fragment that is executed by Jenkins; this may do other setup, but ultimately it calls main.sh
- main.sh
  - source overlays.sh
  - set_overlay_vars
    - run_python $OF_OVGEN $OF_CLASSDIR_ARGS $OF_OVFILES_ARGS $OF_TESTPLAN_ARGS $OF_SPECDIR_ARGS $OF_OUTPUT_FILE_ARGS
      - ovgen.py
    - source $OF_OUTPUT_FILE
  - source functions.sh
  - source common.sh
  - source need_check.sh
  - source fuego_test.sh
    - defines the test base functions
  - pre_test - verify that board is up and test can run, also prep board and start logs
    - ov_transport_connect
    - check_needs - check test dependencies
    - test_pre_check - optional base script function to check test requirements
  - build
    - pre_build
    - test_build - base script function to build the test program from source
    - post_build
  - deploy - put the test program on the board
    - pre_deploy
    - test_deploy - base script function to deploy the test to the target
    - post_deploy
  - test_run - base script function to run the test
    - get_testlog
  - post_test - clean up after the test
    - fetch_results
      - test_fetch_results - optional base script function to retrieve test results from the target
    - cleanup
      - test_cleanup - base script function to terminate any dangling processes on the target board
  - processing - do test results formatting and analysis
    - test_processing - base script function to analyze the test results
      - log_compare
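To make the base script functions concrete, here is a minimal sketch of what fuego_test.sh for the fictional Functional.foo might contain. This is illustrative only, not an actual test from the tree: the function bodies are assumed, and it presumes the helper functions put, report and log_compare, and the variables BOARD_TESTDIR and TESTDIR, are available as set up by the steps above.

  function test_build {
      # build the test program from its source
      make foo
  }

  function test_deploy {
      # copy the test program to the target board
      put foo $BOARD_TESTDIR/fuego.$TESTDIR/
  }

  function test_run {
      # run the test on the target; report captures the output into the testlog
      report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./foo"
  }

  function test_processing {
      # pass if the testlog contains exactly 1 line matching "PASS"
      log_compare "$TESTDIR" 1 "PASS" "p"
  }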
sample paths
The overlay generator is called with the following arguments:

  run_python /fuego-core/engine/scripts/ovgen/ovgen.py \
    --classdir /fuego-core/engine/overlays/base \
    --ovfiles /fuego-core/engine/overlays/distribs/nologger.dist \
      /fuego-ro/boards/beaglebone.board \
    --testplan /fuego-core/engine/overlays/testplans/testplan_default.json \
    --specdir /fuego-core/engine/tests/Functional.bc/ \
    --output /fuego-rw/logs/Functional.bc/<run_name>/prolog.sh
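Comparing this command with the run_python line in the outline above, each $OF_*_ARGS variable presumably expands to one option together with its value. For example (an assumption based on the variable names, not verified against overlays.sh):

  OF_CLASSDIR_ARGS="--classdir /fuego-core/engine/overlays/base"
  OF_TESTPLAN_ARGS="--testplan /fuego-core/engine/overlays/testplans/testplan_default.json"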
Legacy (1.0) test information
This section has some historical information that only applies to Fuego 1.0. We no longer ship the Jenkins config.xml files for jobs, but instead require the user to create them at installation time with 'ftc add-jobs'.
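For example, a job for the fictional test Functional.foo on a beaglebone board would be created with something like the following (the option letters are recalled from ftc usage and may differ by version; consult ftc's built-in help):

  ftc add-jobs -b beaglebone -t Functional.foo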
jobs definitions
As of February 2017, there are 67 job config.xml files under fuego-core/jobs. 63 jobs source something besides functions.sh (functions.sh is sourced for the call to post_test).
The job Functional.libpng/config.xml sources 2 items (besides functions.sh). This test is very weird and runs all the phases itself, instead of running a single base script.
The following jobs don't source any shell scripts:
- Matrix.Nightly
- Matrix.Official
- Run ALL test on ALL targets
- Run ALL tests on SELECTED targets
- Service.ReloadConfiguration
There are 30 Functional job directories, and 30 Benchmark job directories.
The following job directly sources reports.sh, then does gen_report:
- Reports.make_pdf
The following jobs don't start with Functional or Benchmark:
- LTP
- netperf
- OpenSSL
test definitions
There are 31 Functional test directories, and 33 Benchmark test directories.
- Functional.fsfuzz is a test with no job
- Functional.mesa-demos is a test with no job
- Functional.libpng is a job with no test
  - apparently the test mechanisms used to be implemented as individual scripts under scripts/<phase>
    - e.g. scripts/build/Functional.libpng.sh
    - and scripts/deploy/Functional.libpng.sh
- Benchmark.fs_mark is a test with no job
- Benchmark.nbench_byte is a test with no job
  - but there's a Benchmark.nbench-byte
- Benchmark.sysbench is a test with no job
.sh scripts
There are 29 Functional tests with .sh files:
- Functional.fsfuzz has no .sh file
- Functional.mesa_demos has no .sh file
There are 31 Benchmark tests with .sh files:
- Benchmark.fs_mark has no .sh file
- Benchmark.sysbench has no .sh file
Functional tests not sourcing functional.sh
There are 21 Functional .sh files that source functional.sh. Here are the ones that don't:
- Functional.LTP.Devices
  - ltp-devices.sh sources a common script ../LTP/ltp.sh
  - it calls functional phases itself
- Functional.LTP.Filesystem
  - ltp-filesystems.sh sources a common script ../LTP/ltp.sh
  - it calls functional phases itself
- Functional.LTP.Open_Posix
  - ltp-open_posix.sh sources a common script ../LTP/ltp.sh
  - it calls functional phases itself
- Functional.ft2demos
  - ft2demos.sh calls functional phases itself
- Functional.netperf
  - netperf-func.sh sources netperf/netperf.sh
  - it calls test_run and get_testlog itself
- Functional.OpenSSL
  - openssl-func.sh sources OpenSSL/openssl.sh
  - it calls test_run, get_testlog and test_processing itself
- Functional.pi_tests
  - calls stress.sh
There are 2 Functional tests that source stress.sh
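For contrast with the exceptions listed above, the conforming pattern is for a test's .sh file to define its test_* functions and then source functional.sh, which drives the phases itself. A schematic sketch (the path variable is assumed, and the empty function bodies are placeholders):

  # Functional.foo.sh - schematic only
  function test_build { :; }        # build the test program
  function test_deploy { :; }       # copy it to the target
  function test_run { :; }          # execute it, logging output
  function test_processing { :; }   # analyze the testlog
  # functional.sh runs pre_test, build, deploy, test_run, and the rest
  . $FUEGO_SCRIPTS_PATH/functional.sh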
Benchmark tests not sourcing benchmark.sh
There are 28 Benchmark tests that source benchmark.sh. Here are the ones that don't:
- Benchmark.netperf
  - netperf-bench.sh sources netperf/netperf.sh
  - it calls test_run and bench_processing directly
- Benchmark.NetPipe
  - NetPipe.sh sources functions.sh and overlays.sh
  - it calls pre_test, build, deploy, and test_run directly
  - it does not call set_testres_file, bench_processing, or check_create_logrun
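The conforming Benchmark pattern is analogous: define the test_* functions, then source benchmark.sh, which runs the phases plus the results bookkeeping (set_testres_file, bench_processing, check_create_logrun) that Benchmark.NetPipe skips. A schematic sketch under the same assumptions as the Functional example above:

  # Benchmark.foo.sh - schematic only
  function test_build { :; }
  function test_deploy { :; }
  function test_run { :; }
  # benchmark.sh runs the phases, then bench_processing to extract the metrics
  . $FUEGO_SCRIPTS_PATH/benchmark.sh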