Fuego release 1.0.9-notes in split format
{{TableOfContents}}
Here is a list of problems encountered during the 1.0.9 release testing:
= Problems =
== general ==
* /fuego-core/engine/scripts/ftc-unit-test.sh
* can't download clitest
* uses /userdata instead of /fuego-ro for accessing board stuff
* had to create testplan_fuego (for fuego-specific tests)
|
|
== Functional.fuego_board_check ==
* Jenkins jobs have the wrong TESTNAME
* e.g. docker.testplan_fuego.Functional.fuego_board_check has "TESTNAME=fuego_board_check" instead of 'fuego_test'
* fuego-create-jobs doesn't read the test.yaml file
* see if fuego-create-jobs uses the test name for the base script name
* (worked around) can't find test with default plan in spec list
* code is missing my patches for ignoring 'testplan_default'
* you get an error from ovgen.py
* docker board is missing reboot, according to Functional.fuego_board_check
* scan_form_items.sh reports HAS_REBOOT=0
* but docker has /sbin/reboot
* the test is run as user 'jenkins', which doesn't have /sbin in its PATH (see the sketch below this list)
* Functional.fuego_board_check is correct!
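A minimal sketch of how the probe could account for sbin-only binaries, if that behavior were wanted; it assumes scan_form_items.sh finds reboot with a plain PATH lookup (the actual implementation is not shown in these notes):
{{{
# sketch only: extend PATH before probing, since the jenkins user's
# default PATH omits the sbin directories where reboot lives
PATH="$PATH:/sbin:/usr/sbin"

if command -v reboot >/dev/null 2>&1 ; then
    HAS_REBOOT=1
else
    HAS_REBOOT=0
fi
echo "HAS_REBOOT=$HAS_REBOOT"
}}}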
|
|
== Benchmark.fuego_check_plots ==
* (fixed) parser.py gets an exception
* it was still using FUEGO_PARSER_PATH instead of the FUEGO_CORE environment variable
* plot.png is empty
* missing metrics.json file
* multiplot mapping requires a specific metric name pattern (metrics must share the same leading name component to be charted together in a single chart?)
* plotting is VERY fragile!!
* no one can debug this stuff
* matplotlib producing empty graphs is hard to debug (a checking sketch follows this list)
* FIXTHIS - should make the plot API much easier to use, or at least document the rules better
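As a debugging aid, here is a hypothetical input checker (not part of Fuego) that verifies what the plotter needs before blaming matplotlib; the metrics.json filename comes from the notes above, and the log-directory argument is an assumption:
{{{
#!/bin/sh
# check_plot_inputs.sh (hypothetical) - run against a test's log directory
dir="${1:?usage: check_plot_inputs.sh <log_dir>}"

if [ ! -s "$dir/metrics.json" ] ; then
    echo "metrics.json is missing or empty - plot.png will come out blank"
    exit 1
fi

# confirm the file at least parses as JSON
if python -c 'import json,sys; json.load(open(sys.argv[1]))' "$dir/metrics.json" ; then
    echo "metrics.json parses OK"
else
    echo "metrics.json is malformed"
    exit 1
fi
}}}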
|
|
= priorities for 1.1 release =
See [[Release 1.1 To Do]]
|
|
= post_test work =
List of tests that have post_test arguments (i.e., processes killed after the test):
* Benchmark.netpipe: kills iperf (should be NPtcp)
* netpipe test doesn't use benchmark.sh!! (there is no call to post_test)
* Functional.LTP.Open_Posix: kills run-posix-option-group-tst.sh run-tests.sh run-test
* this test is not in the 'next' branch
* Functional.netperf: kills netperf
* netperf doesn't use functional.sh!! (there is no call to post_test)
* Functional.LTP.Filesystem: kills run-test
* this test is not in the 'next' branch
* (OK) Benchmark.fio: kills fio
* Benchmark.OpenSSL: kills openssl
* OpenSSL doesn't use benchmark.sh (there is no call to post_test)
* (OK) Benchmark.lmbench2: kills lmbench lat_mem_rd par_mem
* (OK) Benchmark.IOzone: kills iozone
* (OK) Benchmark.iperf: kills iperf
* Benchmark.Java: kills java
* the kill was put in test_cleanup, but it didn't call kill_procs
* there could be other java programs running - the 'java' kill string is too generic (see the sketch after this list)
* (OK) Benchmark.Interbench: kills interbench
* (OK) Benchmark.gtkperf: kills gtkperf
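A hedged sketch of what the Benchmark.Java cleanup could look like; it assumes kill_procs accepts process-name patterns (its exact signature is not shown in these notes), and 'bench.jar' is a hypothetical stand-in for something that uniquely identifies the benchmark's process:
{{{
# sketch only: match the benchmark's own jar rather than every
# java process on the board
function test_cleanup {
    kill_procs bench.jar
}
}}}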
|
|
=== fix functions that need to call post_test ===
* Benchmark.netpipe
* Functional.netperf
* Benchmark.OpenSSL
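For the three tests above, the likely fix is to run through the core sequencer so that post_test is actually invoked. A minimal sketch, assuming the 1.0.9 convention that a base test script sources benchmark.sh (or functional.sh for Functional tests) as its last step:
{{{
# tail of a Benchmark base script (sketch); sourcing the sequencer
# is what drives pre_test, the test run, and finally post_test
. $FUEGO_CORE/engine/scripts/benchmark.sh
}}}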
|