Benchmark parser notes
FIXTHIS - add information about the metrics.json file when that goes live in the next Fuego release
files
There are several files used:
- system-wide:
  - tests.info
  - dataload.py
- per-test:
  - parser.py
  - reference.log
  - metrics.json
- per-run:
  - <test>.info.json
  - <test>.<metric>.json
  - plot.data
  - plot.png
overall flow
A benchmark test will do the following:
- source benchmarks.sh
- after running the test, calls bench_processing
  - calls parser.py
    - imports parser/common.py
    - gets the test log
    - parses it into a python dictionary (key/value pairs)
    - saves it to plot.data
    - generates plot.png
    - writes the res.json file
  - calls dataload.py
    - reads data from plot.data
    - writes data to <test>.<metric>.json files
    - writes data to the <test>.info.json file
In the Jenkins interface:
- On the status page for the Benchmark test, there are plots automatically generated from the metric data from the test runs for the benchmark.
- Jenkins calls the flot (jquery plotter) plugin to show plots for the data, based on the <test>.<metric>.json, <test>.info.json, and tests.info files
  - the code that reads the .info and .json files is mod.js
- Also on the status page for a test, a user can click on the 'graph' link on the Benchmark.test page in Jenkins, to see plot.png
  - this image is created by scripts/parser/common.py, and is generated using matplotlib, from data in the plot.data file.
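The exact layout of plot.data is not documented in these notes, but the matplotlib step can be illustrated with a minimal, hypothetical sketch (the run ids, values, metric name, and output filename below are placeholders, not the actual common.py code):

import matplotlib
matplotlib.use('Agg')            # render without a display, as on a Jenkins server
import matplotlib.pyplot as plt

# placeholder data standing in for the values read from plot.data
run_ids = [1, 2, 3]
values = [2500000.0, 2500000.0, 2500000.0]

plt.plot(run_ids, values, marker='o')
plt.xlabel('test run id')
plt.ylabel('Dhrystone')
plt.savefig('plot.png')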
parser elements
tests.info
This is a system-wide file that defines the metrics associated with each benchmark test. It is in json format, with each line consisting of a test name as the key, followed by a list of metric names.
ex: "Dhrystone": ["Dhrystone"],
In this example, the metric name is the same as the test name.
parser.py
This is a python program that parses the log file for the value (otherwise known as the Benchmark 'metric'), and calls a function called 'process_data' in the common.py benchmark parser library. (A rough sketch of a parser appears at the end of this section.) The arguments to process_data are:
- ref_section_pat = pattern used to find the threshold expression in the reference.log file
- dictionary = map of values parsed from the log file. There should be a key:value pair for each metric gathered by this parser
- 3rd arg = ???
- 4th arg = ???
plib.TEST_LOG is the filename for the current log file
See parser.py and Parser module API
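As a rough illustration only (not the actual Fuego code), a parser.py for a single-metric benchmark might look like the sketch below. The library path, the log pattern, the metric name, and the last two process_data arguments are assumptions, since those details are not covered in these notes:

import re
import sys

# location of the common.py parser library; adjust for the actual installation
sys.path.insert(0, '/home/jenkins/scripts/parser')
import common as plib

# pattern that matches the threshold sections in reference.log
ref_section_pat = r"^\[[\w.]+\|[gl]e\]"

# parse the metric out of the current test log (plib.TEST_LOG)
cur_dict = {}
with open(plib.TEST_LOG) as f:
    m = re.search(r"([\d.]+) lps", f.read())   # hypothetical log pattern for this benchmark
if m:
    cur_dict['Dhrystone'] = float(m.group(1))

# hand the parsed values to the common processing code;
# the meaning of the last two arguments is not documented here (placeholders)
plib.process_data(ref_section_pat, cur_dict, 's', 'lps')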
reference.log
Each section in the reference.log file has a threshold specifier, which consists of 2 lines:
- [metric|operator]
- threshold value
The metric operator line specifies the name of the metric, a vertical bar ('|'), and either 'ge' or 'le', all inside square brackets.
The threshold value is a numeric value to compare the benchmark result with, to indicate success or failure.
Example:
[SciMark.FFT|ge]
0
[SciMark.LU|ge]
0
See reference.log
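To make the comparison semantics concrete, here is an illustrative Python sketch (not the actual common.py logic) that reads threshold specifiers in the format above and applies them; the file path and the pairing of lines are assumptions:

import re

def read_thresholds(path='reference.log'):
    """Return a {metric: (operator, threshold)} map from a reference.log file."""
    thresholds = {}
    with open(path) as f:
        lines = [line.strip() for line in f if line.strip()]
    # the file is a sequence of [metric|operator] / threshold-value line pairs
    for spec, value in zip(lines[0::2], lines[1::2]):
        m = re.match(r"\[(.+)\|(ge|le)\]$", spec)
        if m:
            thresholds[m.group(1)] = (m.group(2), float(value))
    return thresholds

def check(metric, value, thresholds):
    """True if the benchmark value passes its 'ge' or 'le' threshold."""
    op, limit = thresholds[metric]
    return value >= limit if op == 'ge' else value <= limit

For example, check('SciMark.FFT', 12.3, read_thresholds()) is True with the reference.log above, since 12.3 is greater than or equal to the 0 threshold.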
tests.info
This file is a json-formatted file that lists all the metrics for each benchmark test in the system. It is found at:
- /fuego-rw/logs/tests.info
- /var/lib/jenkins/userContent/fuego.logs/tests.info (this is the same file, obtained via a symlink to the log directory)
The following javascript program:
- fuego/frontend-install/plugins_src/flot-plotter-plugin/src/main/webapp/flot/mod.js
- same as: /var/lib/jenkins/plugins/flot/flot/mod.js
does the following lookup:
- jQuery.ajax({ url: jenurl+'/tests.info', method: 'GET', datatype: 'json', async: false, success: getSuitesInfo});
var jenurl = 'http://'+'/'+location['host'] + '/' + prefix +'/userContent/fuego.logs/';
The resulting url is: http://<host>/<prefix>/userContent/fuego.logs//tests.info
For example, on a default installation the following URL retrieves the fuego-core tests.info file: http://localhost:8080/fuego/userContent/fuego.logs/tests.info
file format
json, with a map of:
- key: list of metrics
- the key is the second part of the test name
- the metric is the name of the value parsed from the log file
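For illustration, a tests.info fragment with the Dhrystone entry from above plus a hypothetical multi-metric test (the second entry and its metric names are made up) could look like:

{
    "Dhrystone": ["Dhrystone"],
    "some_io_test": ["Seq_Read", "Seq_Write"]
}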
Benchmark.[testname].info.json
mod.js has:
- jQuery.ajax({url: jenurl+'/'+testname+'/'+testname+'.info.json', method: 'GET', dataType: 'json', async: false, success: getBuildsInfo});
The resulting url is:
- http://localhost:8080/fuego/userContent/fuego.logs/Benchmark.Dhrystone/Benchmark.Dhrystone.info.json
file format
list of device maps:
Each device map has:
- key 'device': <target name>
- key 'info': list of 3 lists:
  - test run id list (number)
  - firmware list
  - platform list
The 3 lists have to have the same number of elements
Sample:
[ { "device": "bbb-poky-sdk", "info": [ ["1", "2", "3"], ["3.8.13-bone50", "3.8.13-bone50", "3.8.13-bone50"], ["poky-qemuarm", "poky-qemuarm", "poky-qemuarm"] ] }, { "device": "qemu-test-arm", "info": [ ["4"], ["4.4.18-yocto-standard"], ["poky-qemuarm"] ] } ]
Benchmark.[testname].[metricname].json file
Each benchmark test run has one or more metric value json files (depending on the number of metrics defined for the test). They are placed in the log file directory for this test, by the dataload.py program.
They have a name that includes the test name and the metric name:
- ex: Benchmark.Dhrystone.Dhrystone.json
These are used by the 'flot' plugin to do dynamic charting for benchmark tests.
file format
Contents:
- list of maps:
  - each map has keys: 'data', 'label', 'points'
    - data has a list of lists:
      - each list has a test run id, and a value
    - label has a string: <target>-<test>.<metric>[.ref]
    - points has a map: with 'symbol': <symbol-name>
  - each target has 2 data sets, one for the metric, and one for the metric.ref value
Sample:
[ { "data": [ ["1", 2500000.0], ["2", 2500000.0], ["3", 2500000.0] ] "label": "bbb-poky-sdk-Dhrystone.Dhrystone", "points": {"symbol": "circle"} }, { "data": [ ["1", 1.0], ["2", 1.0], ["3", 1.0]], "label": "bbb-poky-sdk-Dhrystone.Dhrystone.ref", "points": {"symbol": "cross"} }, { "data": [ ["4", 909090.9]], "label": "qemu-test-arm-Dhrystone.Dhrystone", "points": {"symbol": "circle"} }, { "data": [ ["4", 1.0]], "label": "qemu-test-arm-Dhrystone.Dhrystone.ref", "points": {"symbol": "cross"} } ]