{{TableOfContents}}
= NAME =
process

= SYNOPSIS =
 * process(results)

= DESCRIPTION =
This function takes a dictionary of results that have been prepared by
parsing the test log, and does the following processing on them:
 * it converts the results into structured data for the result file (run.json) for this test
 * it uses a criteria file to determine the final disposition of the test (PASS or FAIL)
 * it places the data into per-test aggregate data files
 * it regenerates the chart data for the test
The chart data ends up displayed in the job page for the test, and
can be either a plot (or set of plots) of measurement data for the test,
or a table of testcase results, or a table of testset summary results,
over time.  See [[Fuego charting]] for more details.
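To make this flow concrete, here is a minimal sketch of how a parser.py typically drives process(). The regex, testcase name, and the "import common as plib" line are assumptions for illustration only; a complete Benchmark invocation appears in the EXAMPLES section below.
{{{#!YellowBox
  import common as plib   # parser library (common.py); this import style is assumed

  # parse the test log; the pattern and testcase name here are placeholders
  matches = plib.parse_log("^(testcase_1): (PASS|FAIL|SKIP)$")

  results = {}
  if matches:
      # each match is a tuple of regex groups: (testcase name, result string)
      results["default." + matches[0][0]] = matches[0][1]

  # convert the results, check them against the criteria file,
  # update the aggregate data files, and regenerate the chart data
  return_code = plib.process(results)
}}}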

== Argument ==
process() takes as input a Python dictionary of results for the testcases in this test. For each item in the dictionary, the key is the item's unique test identifier, and the value is either:
 * a string indicating the testcase result (for a testcase)
 * a list of "measure" dictionaries (for numeric benchmark measurements) for the testcase
In the case of an individual testcase result, the result string should
be one of:
 * PASS, FAIL, SKIP, ERROR
These have the following meanings:
 * PASS - the test results matched the expected behavior
 * FAIL - the test executed, but the results of the test did not match expected behavior
 * SKIP - the test was not executed.  Usually this is due to the environment or configuration of the system not being correct for the indicated testcase (from LTP this corresponds to TCONF)
 * ERROR - the test did not execute successfully.  This is not the same as FAIL, but indicates some problem executing the test.  Usually this is due to some condition outside the control of the test, or a forced interruption of the test (like a user abort, a program fault, or a machine hang).
A measure dictionary has the following structure:
 { "name": "<measure name>", "measure": <numeric value> }
A single testcase for a Benchmark test can report multiple measures (which is why measures for a testcase are reported in a list).

== Input files ==
The following files are used by the process function:
 * criteria.json
 * reference.json
 * chart_config.json
The [[criteria.json]] file is used to specify the pass criteria for the test.
This can consist of a number of different evaluation criteria.
The default filename for the file containing the evaluation data is "criteria.json", but a different criteria file can be specified.
For example, a criteria file can be specified
for a particular board.
The [[reference.json]] file has data indicating the expected testcases
and measure names, and, for measures, indicates their units.
The [[chart_config.json]] file is used to specify the type of chart
to prepare for display on the Jenkins page for this test job.
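For illustration only, a criteria file for a single Benchmark measure might look roughly like the following. The key names (schema_version, criteria, tguid, reference, operator) and the value shown are assumptions based on typical usage, not a definitive specification; see the [[criteria.json]] page for the authoritative schema.
{{{#!YellowBox
  {
      "schema_version": "1.0",
      "criteria": [
          {
              "tguid": "default.Dhrystone.Speed",
              "reference": { "value": 500000, "operator": "ge" }
          }
      ]
  }
}}}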

== Output files ==
The output data from process() is placed in several files, which reside in the log directory for a particular test.
There are 4 different files manipulated by the process operation:
 * [[run.json]] is in the log directory for the individual test run, and holds the data and final results for the run of the test.
Three other files hold aggregate data, which means data from multiple runs
of the test.  These are stored in the log directory for the test (not the individual run log directory), and are used for generating charts, generating
reports, and for querying the run data for the system.
 * results.json holds the data for multiple runs, in json format
 * flat_plot_data.txt is a flat list of results for a particular test, and is stored in the log directory for a test
 * flot_chart_data.json has charts (either plots or html tables) for the runs of a particular test (for presentation in the Jenkins interface).
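Most of these files are plain JSON, so they can be inspected with standard tools. A minimal sketch, assuming only that run.json is valid JSON and using a hypothetical relative path (the real file lives in the log directory for the run):
{{{#!YellowBox
  import json

  # hypothetical path; substitute the run's actual log directory
  with open("run.json") as f:
      run_data = json.load(f)

  # show the top-level fields that process() recorded for this run
  print(sorted(run_data.keys()))
}}}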

= EXAMPLES =

== results dictionaries ==
Here are samples of valid results dictionaries:
{{{#!YellowBox
  {
     "test_set1.testcase_A" : "PASS",
     "test_set1.testcase_B" : "SKIP",
     "test_set2.testcase_FOO" : "PASS"
  }
}}}
{{{#!YellowBox
  {
     "Sequential_Output.Block": [
         { "name": "speed", "measure": 123 },
         { "name": "cpu", "measure": 78 }
      ],
     "Sequential_Output.Char": [
         { "name": "speed", "measure": 456 },
         { "name": "cpu", "measure": 99 }
      ]
  }
}}}

== Dhrystone invocation ==
Here is a sample invocation:
{{{#!YellowBox
  regex_string = "^(Dhrystones.per.Second:)(\ *)([\d]{1,8}.?[\d]{1,3})(.*)$"
  matches = plib.parse_log(regex_string)
  measurements = {}
  measurements["default.Dhrystone"] = [{"name": "Speed", "measure": float(matches[0][2])}]
  return_code = plib.process(measurements)
}}}
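For a Functional test, the same pattern is used with result strings instead of measure lists. This is a hedged sketch: the log format, regex, and testcase names are made up for illustration, and only parse_log() and process() are taken from the real library.
{{{#!YellowBox
  regex_string = "^TEST-(\d+): (PASS|FAIL|SKIP)$"
  matches = plib.parse_log(regex_string)

  results = {}
  for m in matches:
      # m is a tuple of regex groups: (testcase number, result string)
      results["default.testcase_" + m[0]] = m[1]
  return_code = plib.process(results)
}}}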

= ENVIRONMENT and ARGUMENTS =
The "common.py" module uses the following environment variables to find
the test log path and to populate the test's run.json file:
 * FUEGO_RW
 * FUEGO_RO
 * FUEGO_CORE
 * NODE_NAME
 * TESTDIR
 * TESTSPEC
 * BUILD_NUMBER
 * BUILD_ID
 * BUILD_TIMESTAMP
 * PLATFORM
 * FWVER
 * LOGDIR
 * FUEGO_START_TIME
 * FUEGO_HOST
 * Reboot
 * Rebuild
 * Target_PreCleanup
 * WORKSPACE
 * JOB_NAME
Here is a special variable that can be set in a job:
 * FUEGO_CRITERIA_JSON_PATH
FIXTHIS - document FUEGO_CRITERIA_JSON_PATH for process()

= OUTPUT =
The process function is the main routine to analyze and save results for a test.
It outputs the test results for this individual test in the "run.json" file, which is located
in the log directory for this test run.

= RETURN =
Returns an integer indicating the overall result of the test.  0=PASS and any other
number indicates FAIL or some error in processing the test.
This number is intended to be used as the return code from the calling parser.py program,
and is used by Fuego and Jenkins to determine overall status of the test.
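For example, a parser.py commonly ends by propagating this value as its own exit status (a sketch; the surrounding variable names are assumed):
{{{#!YellowBox
  import sys

  return_code = plib.process(measurements)
  # exit with the value from process() so Fuego and Jenkins see the overall result
  sys.exit(return_code)
}}}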

= SOURCE =
Located in ''scripts/parser/common.py''

= SEE ALSO =
 * [[parser.py]], [[parser_func_parse_log|parse_log]], [[Fuego charting]]