= PROGRAM =
parser.py

= DESCRIPTION =
parser.py is a Python program that is used by each benchmark test to
parse the test log for a test run, check the threshold(s) for success or
failure, and store the data used to generate plots.
Each benchmark should include an executable file called 'parser.py'
in the test directory (/home/jenkins/tests/Benchmark.<testname>).
The test log for the current run is parsed by this routine, and one or
more metrics are extracted (using regular expressions).

== input files ==
The file used to specify the threshold and comparison operator for each
metric is called '''reference.log'''.  The metric names for the benchmark
are stored in a file called '''metrics.json'''.  
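
As a rough illustration only (the section-header layout is inferred from the
ref_section_pat regular expression in the sample below; the threshold value and
the JSON layout are assumptions, not taken from the Fuego source), a
'''reference.log''' for the GLMark benchmark might pair a ''<metric>.<operator>''
section with a threshold value, while '''metrics.json''' might simply list the
metric names:
{{{#!YellowBox
reference.log (hypothetical):
  [GLMark_Score.ge]
  30

metrics.json (hypothetical):
  ["GLMark_Score"]
}}}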

== output files ==
The output data is placed in a file called '''plot.data''', and a plot image
is created in the file '''plot.png'''.

== outline ==
The program has the following rough outline:
 * import the parser library
 * specify a search pattern for finding one or more metrics from the test log
 * call the parse() function
 * build a dictionary of metric values
 * call the process_data function
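
A minimal skeleton following this outline (condensed from the GLMark sample in
the SAMPLES section below; the search pattern, metric name, and units string
are placeholders):
{{{#!YellowBox
#!/bin/python
import os, re, sys

# import the parser library
sys.path.insert(0, os.environ['FUEGO_PARSER_PATH'])
import common as plib

# specify a search pattern for finding one or more metrics from the test log
cur_search_pat = re.compile("^(some log text )([\d]+)", re.MULTILINE)

# call the parse() function
pat_result = plib.parse(cur_search_pat)

# build a dictionary of metric values
cur_dict = {}
if pat_result:
    cur_dict["metric_name"] = pat_result[0][1]

# call the process_data function
ref_section_pat = "\[[\w]+.[gle]{2}\]"
sys.exit(plib.process_data(ref_section_pat, cur_dict, 's', 'units'))
}}}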

== metric names ==
The parser.py program provides the name for the metric(s) read from the
benchmark test log file.  It also provides a value for each metric, and
passes the parsed data value for each metric to the processing routine.
This metric name must be consistent in the tests.info file and the
reference.log file.

= SAMPLES =
Here is a sample parser.py that does simple processing of a single
metric.  This is for Benchmark.GLMark.
{{{#!YellowBox
#!/bin/python
# See common.py for description of command-line arguments
import os, re, sys
sys.path.insert(0, os.environ['FUEGO_PARSER_PATH'])
import common as plib

ref_section_pat = "\[[\w]+.[gle]{2}\]"
cur_search_pat = re.compile("^(Your GLMark08 Score is )([\d]{1,3})",re.MULTILINE)

cur_dict = {}
pat_result = plib.parse(cur_search_pat)
if pat_result:
        cur_dict["GLMark_Score"] = pat_result[0][1]

sys.exit(plib.process_data(ref_section_pat, cur_dict, 's', 'FPS'))
}}}
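
In this sample, plib.parse() applies the compiled pattern to the test log and
returns any matches; the score captured by the second group of the first match
becomes the value of the '''GLMark_Score''' metric.  plib.process_data() then
(presumably) looks up the threshold and comparison operator for that metric in
'''reference.log''' (using ref_section_pat to locate the section), records the
value for plotting, and returns a status code, which the script passes back
through sys.exit().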

= ENVIRONMENT and ARGUMENTS =
parser.py takes positional arguments on its command line:
 * $JOB_NAME
 * $PLATFORM
 * $BUILD_ID
 * $BUILD_NUMBER
 * $FW
 * $PLATFORM
 * $NODE_NAME
parser.py is called with the following invocation, from [[function_bench_processing|bench_processing]]:
{{{#!YellowBox
  run_python $PYTHON_ARGS $FUEGO_TESTS_PATH/${JOB_NAME}/parser.py $JOB_NAME $PLATFORM $BUILD_ID $BUILD_NUMBER $FW $PLATFORM $NODE_NAME
}}}
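
The parser support library (common.py) is what actually consumes these values,
but as a sketch of the assumed calling convention, the arguments arrive in
sys.argv in the order shown above (note that $PLATFORM is passed twice):
{{{#!YellowBox
# Illustrative only -- the real argument handling is in common.py.
import sys

(job_name, platform, build_id, build_number,
 fw, platform_again, node_name) = sys.argv[1:8]
}}}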

= SOURCE =
Located in ''/home/fuego/tests/<testname>/parser.py'', which is the same as ''engine/tests/<testname>/parser.py''.

= SEE ALSO =
 * [[dataload.py]], [[function_bench_processing|bench_processing]], [[Parser module API]], [[Benchmark_parser_notes]]