This page describes a feature in progress, called "update criteria": the ability for Fuego users to update the pass criteria for a test using ftc.
The command to do this will be: ftc set-criteria.
It will read the existing criteria from the currently applicable criteria file, and write a new criteria file to /fuego-rw/boards.
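For reference, a criteria file is plain JSON. The exact schema may evolve, but a minimal board-specific file might look like the following (field names here are illustrative, not authoritative):

```json
{
    "schema_version": "1.0",
    "criteria": [
        {
            "tguid": "default",
            "max_fail": 0
        },
        {
            "tguid": "max_latency",
            "reference": {
                "value": 15000,
                "operator": "le"
            }
        }
    ]
}
```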
- test use of board-specific criteria file
- add command help to ftc
- add do_set_criteria to ftc
- parse arguments
- find input file
- find output file
- update a single testcase
- match argument to testcase name
- update a count
- read data from previous runs
- set a single criteria
- set multiple criteria
- set criteria counts
- set criteria lists
- set a benchmark reference criteria
- set the value
- set the operation?
It should be possible to set the criteria list based on currently observed behavior. That is, do something like:
- ftc set-criteria -b bbb -t Functional.LTP --add-current-fails-to-ok-list
- ftc set-criteria -b bbb -t Benchmark.signaltest --set-reference max_latency +10%
The first command finds existing testcases that fail, and adds them to the 'fail_ok_list'.
The second finds the current average max_latency, adds 10% to it, and saves the result as the new reference value.
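The "+10%" calculation could be sketched as follows. This is a hypothetical helper, not the actual ftc implementation; the measured values would come from the run data of previous runs, and the numbers below are illustrative.

```python
# Sketch of the "--set-reference max_latency +10%" calculation:
# average the values observed in previous runs, then add a margin.
def new_reference(measurements, percent_margin):
    """Return the average of measurements plus percent_margin percent."""
    avg = sum(measurements) / len(measurements)
    # multiply before dividing to avoid an inexact percentage factor
    return avg * (100 + percent_margin) / 100.0

# e.g. max_latency observed in three previous runs, plus 10%
print(new_reference([10000, 11000, 12000], 10))  # → 12100.0
```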
- ftc set-criteria -b <board> -t <test> <expression>
- expression: <tguid> <criteria> <value>
- <tguid> must_pass [+-]<child_name> - add/remove/set child_name in/from must_pass_list
- <tguid> fail_ok [+-]<child_name> - add/remove/set child_name in fail_ok_list
- <tguid> fail_ok [+-]from run <run_id> - add/remove child_names in fail_ok_list, based on their results in run <run_id>
- <tguid> <op> <value> - set new reference operation and value
- <tguid> <op> from [max|avg|min] run all [+-]<num>%
- <tguid> max_fail <value> - set max_fail for tguid to value
- <tguid> min_pass <value> - set min_pass for tguid to value
- <tguid> base from run <src> - add all child testcases to the must_pass or fail_ok list, based on their current result in run <src>
Use "from run <src>" to take the value for this tguid from an existing data set (one run, or multiple runs):
- src = run <run_id> (a specific run)
- src = run all (all previous runs)
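To make the expression syntax concrete, here is a hypothetical sketch of how the expression could be tokenized and classified. This is not the actual ftc code; the function name, return format, and operator spellings are assumptions for illustration.

```python
# Hypothetical parser for '<tguid> <criteria> <value...>' expressions.
OPS = {"le", "lt", "ge", "gt", "eq", "ne"}

def parse_expression(tokens):
    """Classify a set-criteria expression into a dict describing the edit."""
    tguid, criterion = tokens[0], tokens[1]
    rest = tokens[2:]
    if criterion in ("must_pass", "fail_ok"):
        # list edit: [+-]<child_name>; leading + adds, - removes, neither sets
        arg = rest[0]
        action = {"+": "add", "-": "remove"}.get(arg[0], "set")
        return {"tguid": tguid, "list": criterion + "_list",
                "action": action, "item": arg.lstrip("+-")}
    if criterion in ("max_fail", "min_pass"):
        # count criteria take a single integer value
        return {"tguid": tguid, "field": criterion, "value": int(rest[0])}
    if criterion in OPS:
        # reference criteria: new operation plus value (or 'from run ...')
        return {"tguid": tguid, "op": criterion, "value": rest}
    raise ValueError("unrecognized criteria: %s" % criterion)

print(parse_expression("max_latency le 15000".split()))
print(parse_expression("LTP.math fail_ok +float_bessel".split()))
```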
What can be scripted?
- cp /fuego-core/engine/tests/Benchmark.signaltest/criteria.json /fuego-rw/boards/bbb-Benchmark.signaltest-criteria.json
- vi /fuego-rw/boards/bbb-Benchmark.signaltest-criteria.json
- (edit max_latency to be 12000)
- proposed command: ftc set-criteria -b bbb -t Benchmark.signaltest max_latency le 15000
- proposed command: ftc set-criteria -b bbb -t Benchmark.signaltest max_latency <= 15000
- alternate command: ftc set-criteria -b bbb -t Benchmark.signaltest max_latency from run 3 +10%
- alternate command: ftc set-criteria -b bbb -t Benchmark.signaltest max_latency from run all +10%
- update the criteria file in /fuego-rw/boards
If you want to preserve the criteria file as part of fuego-ro, copy it from /fuego-rw/boards to /fuego-ro/boards.
Sometimes, if you have started ignoring failures, you will want to check whether they are still failing. You can:
- look at the run.json file
- temporarily ignore the custom criteria files, and see the status
We should save "reason" information in the run.json file.
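As a sketch of what that "reason" information might look like, a run.json result entry could carry a field explaining why a failure was ignored. The field names and status value below are a proposal, not an existing schema:

```json
{
    "test_case": "LTP.math.float_bessel",
    "status": "SKIP",
    "reason": "in fail_ok_list of /fuego-rw/boards/bbb-Functional.LTP-criteria.json"
}
```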