|
{{TableOfContents}}
|
Here is an outline of the flow of control for test execution in fuego:
|
This example is for the fictional test Functional.foo:
|
 * '''Jenkins''' - decides there is a test to perform (maybe based on user input)
  * '''/var/lib/jenkins/jobs/<job_name>/config.xml''' has a shell fragment that is executed by Jenkins; this may do other setup, but ultimately it sources the base test script for this test.
   * ''test script''
    * defines the test base functions, and then sources:
    * '''functional.sh'''
     * source '''overlays.sh'''
     * '''set_overlay_vars'''
      * '''run_python $OF_OVGEN $OF_CLASSDIR_ARGS $OF_OVFILES_ARGS $OF_TESTPLAN_ARGS $OF_SPECDIR_ARGS $OF_OUTPUT_FILE_ARGS'''
       * ovgen.py
      * source $OF_OUTPUT_FILE
     * source '''reports.sh'''
     * source '''functions.sh'''
     * '''pre_test'''
      * '''test_pre_check''' - base script function to check test requirements
     * '''build'''
      * '''pre_build'''
      * '''test_build''' - base script function to build the test program from source
      * '''post_build'''
     * '''deploy'''
      * '''pre_deploy'''
      * '''test_deploy''' - base script function to deploy the test to the target
      * '''post_deploy'''
     * '''test_run''' - base script function to run the test
     * '''get_testlog'''
     * '''test_processing''' - base script function to analyze the test results
      * '''log_compare'''
     * '''post_test'''
      * '''test_cleanup''' - base script function to terminate any dangling processes on the target board
 * Jenkins post_build actions
  * description setter plugin sets the build links
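The shell fragment in a job's '''config.xml''' is typically just a few variable settings followed by sourcing the test's base script. Here is a minimal, hypothetical sketch for Functional.foo (the variable names and exact path are illustrative, not copied from a real job):
{{{#!YellowBox
# Hypothetical config.xml shell fragment for Functional.foo
# (illustrative sketch; real jobs may set other variables first)
export Reboot=false
export Rebuild=true
export TESTDIR=Functional.foo

# source the base test script for this test
source /fuego-core/engine/tests/$TESTDIR/foo.sh
}}}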
|
|
= sample paths =
The overlay generator is called with the following arguments:
{{{#!YellowBox
run_python /fuego-core/engine/scripts/ovgen/ovgen.py \
--classdir /fuego-core/engine/overlays/base \
--ovfiles /fuego-core/engine/overlays/distribs/nologger.dist \
/fuego-ro/boards/beaglebone.board \
--testplan /fuego-core/engine/overlays/testplans/testplan_default.json \
--specdir /fuego-core/engine/tests/Functional.bc/ \
--output /fuego-rw/logs/Functional.bc/<run_name>/prolog.sh
}}}
|
|
= comparison of top-level scripts =
Here is a comparison of the 3 top-level scripts used by Fuego base scripts:
* functional.sh
* benchmark.sh
* stress.sh
|
functional.sh outline:
* source overlays.sh, set_overlay_vars
* source reports.sh
* source functions.sh
* pre_test
* if $Rebuild, build
* deploy
* test_run
* get_testlog
* test_processing
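The sequence above can be sketched as a standalone shell skeleton. Each phase is stubbed out here (in Fuego the real implementations come from the sourced scripts), so only the call ordering is demonstrated:
{{{#!YellowBox
#!/bin/sh
# Skeleton of functional.sh's phase ordering, with each Fuego phase
# stubbed out to record its name; only the call order is real here.
SEQ=""
pre_test()        { SEQ="$SEQ pre_test"; }
build()           { SEQ="$SEQ build"; }
deploy()          { SEQ="$SEQ deploy"; }
test_run()        { SEQ="$SEQ test_run"; }
get_testlog()     { SEQ="$SEQ get_testlog"; }
test_processing() { SEQ="$SEQ test_processing"; }

Rebuild=true   # when unset or false, the build phase is skipped

pre_test
if [ "$Rebuild" = "true" ]; then
    build
fi
deploy
test_run
get_testlog
test_processing

echo "order:$SEQ"
}}}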
|
benchmark.sh outline:
* source overlays.sh, set_overlay_vars
* source reports.sh
* source functions.sh
* pre_test
* if $Rebuild, build
* deploy
* test_run
* set_testres_file
* bench_processing
* check_create_logrun
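benchmark.sh shares its front half with functional.sh but diverges after test_run. The same stub technique shows its ordering (phases are stubs; in the real script, get_testlog is invoked from inside bench_processing):
{{{#!YellowBox
#!/bin/sh
# Skeleton of benchmark.sh's phase ordering; each phase is a stub
# that records its name, so only the call order is demonstrated.
SEQ=""
pre_test()            { SEQ="$SEQ pre_test"; }
build()               { SEQ="$SEQ build"; }
deploy()              { SEQ="$SEQ deploy"; }
test_run()            { SEQ="$SEQ test_run"; }
set_testres_file()    { SEQ="$SEQ set_testres_file"; }
bench_processing()    { SEQ="$SEQ bench_processing"; }
check_create_logrun() { SEQ="$SEQ check_create_logrun"; }

Rebuild=true

pre_test
if [ "$Rebuild" = "true" ]; then
    build
fi
deploy
test_run
set_testres_file
bench_processing      # in Fuego, this phase calls get_testlog internally
check_create_logrun

echo "order:$SEQ"
}}}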
|
stress.sh outline:
* source overlays.sh, set_overlay_vars
* source reports.sh
* source functions.sh
* pre_test
* if $Rebuild, build
* deploy
* test_run
* get_testlog
|
stress.sh is used by only 2 tests, and is a strict subset of the functional.sh sequence. The 2 tests are Functional.scrashme and Functional.pi_tests.
|
Here is a table showing the differences:
{{{#!Table
show_edit_links=0
show_sort_links=0
||step||stress.sh||functional.sh||benchmark.sh||
||source overlays.sh, set_overlay_vars||x||x||x||
||source reports.sh||x||x||x||
||source functions.sh||x||x||x||
||[[function_pre_test|pre_test]]||x||x||x||
||if $Rebuild, [[function_build|build]]||x||x||x||
||[[function_deploy|deploy]]||x||x||x||
||[[function_test_run|test_run]]||x||x||x||
||[[function_get_testlog|get_testlog]]||x||x||(called inside bench_processing)||
||[[function_test_processing|test_processing]]||.||x||.||
||[[function_set_testres_file|set_testres_file]]||.||.||x||
||[[function_bench_processing|bench_processing]]||.||.||x||
||[[function_check_create_logrun|check_create_logrun]]||.||.||x||
}}}
|
|
= test information =
== jobs definitions ==
As of February 2017, there are 67 job config.xml files under fuego-core/jobs
|
63 jobs source something besides functions.sh (functions.sh is sourced for the call to post_test)
|
The job Functional.libpng/config.xml sources 2 items (besides functions.sh).
This test is unusual: it runs all the phases itself, instead of
running a single base script.
|
The following jobs don't source any shell scripts:
* Matrix.Nightly
* Matrix.Official
* Run ALL test on ALL targets
* Run ALL tests on SELECTED targets
* Service.ReloadConfiguration
|
There are 30 Functional job directories, and 30 Benchmark job directories.
|
The following job directly sources reports.sh, then does gen_report:
* Reports.make_pdf
|
The following tests don't start with Functional or Benchmark:
* LTP
* netperf
* OpenSSL
|
|
== test definitions ==
There are 31 Functional test directories, and 33 Benchmark test directories.
* Functional.fsfuzz is a test with no job
* Functional.mesa-demos is a test with no job
* Functional.libpng is a job with no test
* apparently the test mechanisms used to be implemented as individual scripts under scripts/<phase>
* e.g. scripts/build/Functional.libpng.sh
* and scripts/deploy/Functional.libpng.sh
* Benchmark.fs_mark is a test with no job
* Benchmark.nbench_byte is a test with no job
* but there's a Benchmark.nbench-byte
* Benchmark.sysbench is a test with no job
|
|
=== .sh scripts ===
There are 29 Functional tests with .sh files:
* Functional.fsfuzz has no .sh file
* Functional.mesa_demos has no .sh file
|
There are 31 Benchmark tests with .sh files:
* Benchmark.fs_mark has no .sh file
* Benchmark.sysbench has no .sh file
|
|
=== Functional tests not sourcing functional.sh ===
There are 21 Functional .sh files that source functional.sh.
Here are the ones that don't:
* Functional.LTP.Devices
* ltp-devices.sh sources a common script ../LTP/ltp.sh
* it calls functional phases itself
* Functional.LTP.Filesystem
* ltp-filesystems.sh sources a common script ../LTP/ltp.sh
* it calls functional phases itself
* Functional.LTP.Open_Posix
* ltp-open_posix.sh sources a common script ../LTP/ltp.sh
* it calls functional phases itself
* Functional.ft2demos
* ft2demos.sh calls functional phases itself
* Functional.netperf
* netperf-func.sh sources netperf/netperf.sh
* it calls test_run and get_testlog itself
* Functional.OpenSSL
* openssl-func.sh sources OpenSSL/openssl.sh
* it calls test_run, get_testlog and test_processing itself
* Functional.pi_tests
* calls stress.sh
|
There are 2 Functional tests that source stress.sh
|
|
=== Benchmark tests not sourcing benchmark.sh ===
There are 28 Benchmark tests that source benchmark.sh.
|
Here are the ones that don't:
* Benchmark.netperf
* netperf-bench.sh sources netperf/netperf.sh
* it calls test_run and bench_processing directly
* Benchmark.NetPipe
* NetPipe.sh sources functions.sh and overlays.sh
* it calls pre_test, build, deploy, and test_run directly
* it does not call set_testres_file, bench_processing, check_create_logrun
|