Update Criteria in split format

{{TableOfContents}}
This page has information about a feature in progress, called "update criteria". This is the ability for Fuego users to update the criteria for a test using ftc.
The command to do this will be: ftc set-criteria.
It will read existing criteria from the currently applicable criteria file, and write a new criteria file to /fuego-rw/boards
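A minimal sketch of that lookup, assuming the path conventions used elsewhere on this page (board-specific files named <board>-<test>-criteria.json under /fuego-rw/boards or /fuego-ro/boards, with the test's packaged criteria.json as the fallback); the function names here are illustrative, not existing ftc code:
{{{
import os, json

def find_input_criteria(board, test):
    # assumed search order: writable override, read-only override,
    # then the test's packaged default criteria file
    candidates = [
        "/fuego-rw/boards/%s-%s-criteria.json" % (board, test),
        "/fuego-ro/boards/%s-%s-criteria.json" % (board, test),
        "/fuego-core/engine/tests/%s/criteria.json" % test,
    ]
    for path in candidates:
        if os.path.exists(path):
            return path
    return None

def write_output_criteria(board, test, data):
    # new criteria always land in the writable area
    out_path = "/fuego-rw/boards/%s-%s-criteria.json" % (board, test)
    with open(out_path, "w") as f:
        json.dump(data, f, indent=4)
    return out_path
}}}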

= to do for this feature =
 * test use of board-specific criteria file
 * add command help to ftc
 * add do_set_criteria to ftc (a sketch follows this list)
 * parse arguments
 * find input file
 * find output file
 * update a single testcase
   * match argument to testcase name
 * update a count
 * read data from previous runs
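A rough skeleton of the handler, tying the items above together; the helper names parse_set_criteria_args and apply_expression are assumptions, and the file helpers are the ones sketched in the introduction:
{{{
def do_set_criteria(options):
    # options: the arguments after 'set-criteria', e.g.
    # ['-b', 'bbb', '-t', 'Functional.LTP', '<tguid>', 'max_fail', '2']
    board, test, expression = parse_set_criteria_args(options)

    # find the input file; fall back to an empty criteria set
    in_path = find_input_criteria(board, test)
    criteria = json.load(open(in_path)) if in_path else {"criteria": []}

    # apply the single (tguid, criterion, value) expression
    apply_expression(criteria, expression)

    # write the result to /fuego-rw/boards
    out_path = write_output_criteria(board, test, criteria)
    print("wrote criteria file %s" % out_path)
}}}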

= operation =
This command should allow users to:
 * set a single criteria
 * set multiple criteria
 * set criteria counts
   * max_fail
   * min_pass
 * set criteria lists
   * must_pass_list
   * fail_ok_list
 * set a benchmark reference criteria
   * set the value
   * set the operation?
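For illustration, criteria of the kinds listed above might live in a file shaped roughly like the data below; the exact criteria.json schema is not documented on this page, so the field names and child testcase names are assumptions:
{{{
# illustrative only; real criteria.json field names may differ
criteria_data = {
    "criteria": [
        {   # counts and lists for a functional test
            "tguid": "Functional.LTP",
            "max_fail": 0,
            "min_pass": 10,
            "must_pass_list": ["childA", "childB"],   # placeholder names
            "fail_ok_list": ["childC"],
        },
        {   # benchmark reference: a value plus a comparison operation
            "tguid": "Benchmark.signaltest.max_latency",
            "reference": {"value": 15000, "operator": "le"},
        },
    ]
}
}}}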
Should be able to set the criteria list based on currently observed
behavior.  That is, do something like:
 * ftc set-criteria -b bbb -t Functional.LTP --add-current-fails-to-ok-list
 * ftc set-criteria -b bbb -t Benchmark.signaltest --set-reference max_latency +10%
The first one finds existing testcases that fail, and adds them to the 'fail_ok_list'.
The second one finds the current average max_latency, adds 10% to it, and saves it as the new reference value.
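A sketch of the arithmetic for the second case, assuming the max_latency values from previous runs have already been collected (how run data is located and read is left out here):
{{{
def new_reference_from_runs(values, percent):
    # values: max_latency measurements from previous runs
    avg = sum(values) / float(len(values))
    return avg * (1.0 + percent / 100.0)

# e.g. measurements of 9000, 10000 and 11000 with '+10%' give 11000.0
print(new_reference_from_runs([9000, 10000, 11000], 10))
}}}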

= command syntax =
 * ftc set-criteria -b <board> -t <test>  <expression>
 * args:
   * expression: <tguid> <criteria> <value>
   * <tguid> must_pass [+-]<child_name> - add/remove/set child_name in/from must_pass_list
   * <tguid> fail_ok [+-]<child_name> - add/remove/set child_name in fail_ok_list
   * <tguid> fail_ok [+-]from run <run_id>
   * <tguid> <op> <value> - set new reference operation and value
   * <tguid> <op> from [max|avg|min] run all [+-]<num>%
   * <tguid> max_fail <value> - set max_fail for tguid to value
   * <tguid> min_pass <value> - set min_pass for tguid to value
   * <tguid> base from run <src> - add all child testcases to the must_pass or fail_ok list, based on their current results in <src>
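As an illustration of the [+-]<child_name> forms above, a leading '+' could mean add, a leading '-' remove, and a bare name set the list to just that name; a hypothetical helper:
{{{
def apply_list_edit(entry, list_name, arg):
    # entry: the criteria entry for one tguid
    # list_name: 'must_pass_list' or 'fail_ok_list'
    # arg: '+name' adds, '-name' removes, a bare name sets the list
    items = entry.get(list_name, [])
    if arg.startswith("+"):
        items.append(arg[1:])
    elif arg.startswith("-"):
        items = [i for i in items if i != arg[1:]]
    else:
        items = [arg]
    entry[list_name] = items
}}}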
use "from run <src>", to use the value for this tguid from an existing data set (one run, or multiple runs)
use "from run <src>", to use the value for this tguid
from an existing data set (one run, or multiple runs)
src = run <run_id> (a specific run) src = run all (all previous runs)
src = run <run_id> (a specific run)
src = run all (all previous runs)
What can be scripted?

= manual operations =
 * cp /fuego-core/engine/tests/Benchmark.signaltest/criteria.json /fuego-rw/boards/bbb-Benchmark.signaltest-criteria.json
 * vi /fuego-rw/boards/bbb-Benchmark.signaltest-criteria.json
   * (edit max_latency to be 12000)
 * proposed command: ftc set-criteria -b bbb -t Benchmark.signaltest max_latency le 15000
 * proposed command: ftc set-criteria -b bbb -t Benchmark.signaltest max_latency <= 15000
 * alternate command: ftc set-criteria -b bbb -t Benchmark.signaltest max_latency from run 3 +10%
 * alternate command: ftc set-criteria -b bbb -t Benchmark.signaltest max_latency from run all +10%
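The same edit could be scripted; a sketch under the assumption that the benchmark reference is stored as a value/operator pair for the max_latency tguid (the actual criteria.json layout may differ):
{{{
import json, shutil

# same effect as the cp + vi steps above
src = "/fuego-core/engine/tests/Benchmark.signaltest/criteria.json"
dst = "/fuego-rw/boards/bbb-Benchmark.signaltest-criteria.json"
shutil.copy(src, dst)

with open(dst) as f:
    data = json.load(f)
for entry in data.get("criteria", []):
    if entry.get("tguid", "").endswith("max_latency"):
        entry["reference"]["value"] = 12000
with open(dst, "w") as f:
    json.dump(data, f, indent=4)
}}}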

= Notes =
== Expected Workflow ==
A user sees that a test fails, and then determines they want to ignore it.
 * update the criteria file in /fuego-rw/boards
If they want to preserve the criteria file as part of fuego-ro, they can copy it from /fuego-rw/boards to /fuego-ro/boards.
Sometimes, if you have started ignoring failures, you want to check to see
if they are still failing.  You can:
 * look at the run.json file
 * temporarily ignore the custom criteria files, and see the status
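A sketch of the first option, walking a run.json file and reporting anything that still carries a FAIL status, without assuming much about the file's layout:
{{{
import json

def report_failures(node, path=""):
    # generic walk over run.json; prints any record whose 'status'
    # is FAIL, regardless of what the criteria file says about it
    if isinstance(node, dict):
        name = node.get("name", "")
        if node.get("status") == "FAIL":
            print("still failing: %s/%s" % (path, name))
        for value in node.values():
            report_failures(value, path + "/" + name if name else path)
    elif isinstance(node, list):
        for item in node:
            report_failures(item, path)

report_failures(json.load(open("run.json")))
}}}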

== More information for users ==
It would be nice to put more information into the run.json file about
the reason for a failure, or the reason that a failure was ignored.
We could copy the comment from the criteria.json file into the run.json
file, which might help users see what's going on.
We should save "reason" information in the run.json file.
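A minimal sketch of the idea, assuming the criteria entry carries a 'comment' field and the run.json record can take a 'reason' field (neither is confirmed ftc behavior today):
{{{
def annotate_reason(run_record, criteria_entry):
    # carry the criteria comment into the run.json record so a reader
    # can see why a failure was counted, or ignored
    comment = criteria_entry.get("comment")
    if comment:
        run_record["reason"] = comment
}}}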