parser func process

NAME

process

SYNOPSIS
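
A minimal sketch of the call, based on the Argument and RETURN descriptions below (the parameter name is an assumption):

    process(results)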

DESCRIPTION

This function takes a dictionary of results that have been prepared by parsing the test log, and does the following processing on them:

* evaluates the results against the pass criteria for the test
* saves the results for this run in the run.json file
* updates the aggregate data files for the test
* prepares the chart data for the test

The chart data ends up displayed in the job page for the test, and can be either a plot (or set of plots) of measurement data for the test, or a table of testcase results, or a table of testset summary results, over time. See Fuego charting for more details.

Argument

process() takes as input a python dictionary of results for the testcases in this test. For each item in the dictionary, the key is the item's unique test identifier, and the value is either:

* a result string, in the case of an individual testcase result, or
* a list of measure dictionaries, in the case of a Benchmark testcase

In the case of an individual testcase result, the result string should be one of: PASS, FAIL, SKIP, or ERROR. These have the following meanings:

* PASS - the testcase ran and produced the expected result
* FAIL - the testcase ran but did not produce the expected result
* SKIP - the testcase was intentionally not run
* ERROR - there was an error running the testcase or evaluating its result

A measure dictionary has the following structure:

    { "name": "<measure name>", "measure": <numeric value> }

A single testcase for a Benchmark test can report multiple measures (which is why measures for a testcase are reported in a list).

Input files

The following files are used during the process function:

* criteria.json
* reference.json
* chart_config.json

The criteria.json file is used to specify the pass criteria for the test. This can consist of a number of different evaluation criteria. The default filename for the file containing the evaluation data is "criteria.json", but a different criteria file can be used; for example, one can be specified for a particular board.
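
As an illustration, a criteria file for a Benchmark measure might look something like the following sketch; the tguid, threshold value, and exact field names are assumptions based on typical Fuego criteria files, not definitive:

    {
        "schema_version": "1.0",
        "criteria": [
            {
                "tguid": "default.Dhrystone.Score",
                "reference": {
                    "value": 500000,
                    "operator": "ge"
                }
            }
        ]
    }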

The reference.json file has data indicating the expected testcases and measure names, and, for measures, indicates their units.
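
Here is a sketch of a possible reference.json, listing an expected testcase and the unit for its measure (the testcase name and unit shown are illustrative assumptions):

    {
        "test_sets": [
            {
                "name": "default",
                "test_cases": [
                    {
                        "name": "Dhrystone",
                        "measurements": [
                            {
                                "name": "Score",
                                "unit": "Dhrystones/second"
                            }
                        ]
                    }
                ]
            }
        ]
    }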

The chart_config.json file is used to specify the type of chart to prepare for display on the Jenkins page for this test job.
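
For example, a chart_config.json selecting a plot of measurement data might look roughly like this (the "measure_plot" chart type name is an assumption; see Fuego charting for the actual supported types):

    {
        "chart_type": "measure_plot"
    }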

Output files

The output data from process() is placed in several files, which reside in the log directory for a particular test.

There are 4 different files manipulated by the process operation. One is the run.json file, which holds the results for an individual test run (see OUTPUT below).

The three other files hold aggregate data, which means data from multiple runs of the test. These are stored in the log directory for the test (not the individual run log directory), and are used for generating charts, generating reports, and for querying the run data for the system.

EXAMPLES

results dictionaries

Here are samples of valid results dictionaries:
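
(The testcase identifiers and values below are hypothetical, but the structure follows the Argument description above.)

    # results for a Functional test - each value is a result string
    results = {
        "default.mytest.option_parsing": "PASS",
        "default.mytest.error_handling": "FAIL",
    }

    # results for a Benchmark test - each value is a list of
    # measure dictionaries
    results = {
        "default.Dhrystone": [
            {"name": "Score", "measure": 1234567.0}
        ]
    }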

Dhrystone invocation

Here is a sample invocation:
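
This sketch shows roughly what a Benchmark Dhrystone parser.py looks like; the regular expression, the parse_log() helper, and the exact module path are assumptions based on typical Fuego test parsers, and may differ from the shipped test:

    #!/usr/bin/python
    import os
    import sys

    # make the common parser module (common.py) importable
    sys.path.insert(0, os.environ['FUEGO_CORE'] + '/scripts/parser')
    import common as plib

    # capture the Dhrystone score from the test log
    regex_string = r'^Dhrystones per Second: +([\d.]+)'

    measurements = {}
    matches = plib.parse_log(regex_string)
    if matches:
        measurements['default.Dhrystone'] = [
            {'name': 'Score', 'measure': float(matches[0])}
        ]

    # process() evaluates, saves, and charts the results; its return
    # value becomes the exit code of this parser program
    sys.exit(plib.process(measurements))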

ENVIRONMENT and ARGUMENTS

The "common.py" module uses the following environment variables, to find the test log path, and to populate the tests run.json file:

There is a special variable, FUEGO_CRITERIA_JSON_PATH, that can be set in a job to indicate the location of the criteria file to use.

FIXTHIS - document FUEGO_CRITERIA_JSON_PATH for process()

OUTPUT

The process function is the main routine to analyze and save results for a test.

It outputs the test results for this individual test in the "run.json" file, which is located in the log directory for this test run.
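
As a rough illustration only (the actual run.json schema is documented separately, and some field names here are assumptions), a heavily abbreviated run.json might look like:

    {
        "schema_version": "1.0",
        "status": "PASS",
        "test_sets": [
            {
                "name": "default",
                "status": "PASS",
                "test_cases": [
                    {
                        "name": "Dhrystone",
                        "status": "PASS",
                        "measurements": [
                            { "name": "Score", "measure": 1234567.0 }
                        ]
                    }
                ]
            }
        ]
    }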

RETURN

Returns an integer indicating the overall result of the test. 0=PASS and any other number indicates FAIL or some error in processing the test.

This number is intended to be used as the return code from the calling parser.py program, and is used by Fuego and Jenkins to determine overall status of the test.

SOURCE

Located in scripts/parser/common.py

SEE ALSO