
Benchmark parser notes

{{TableOfContents}}
Here are notes about different elements of the Benchmark log parser and
plot processing.
FIXTHIS - add information about the metrics.json file, when that goes live in the next Fuego release

= files =
There are several files used:
 * system-wide:
   * tests.info
   * [[dataload.py]]
 * per-test:
   * [[parser.py]]
   * [[reference.log]]
   * metrics.json
 * per-run:
   * <test>.info.json
   * <test>.<metric>.json
   * plot.data
   * plot.png

= overall flow =
A benchmark test will do the following:
 * source benchmarks.sh
   * after running the test, calls [[function_bench_processing|bench_processing]]
     * calls parser.py
        * imports parser/common.py
        * gets test log
        * parses it into a python dictionary (key/value pairs)
        * saves it to plot.data
        * generates plot.png
     * calls dataload.py
        * reads data from plot.data
        * writes data to test.metric.json files
        * writes data to test.info.json file
In the Jenkins interface:
 * On the status page for the Benchmark test, there are plots automatically generated from the metric data from the test runs for the benchmark.
   * Jenkins calls the flot (jquery plotter) plugin to show plots for the data, based on the test.metric.json, test.info.json, and tests.info files
     * the code that reads the .info and .json files is mod.js
 * Also on the status page for a test, a user can click on the 'graph' link on the Benchmark.test page in Jenkins, to see plot.png
   * this image is created by scripts/parser/common.py, and is generated using matplotlib, from data in the plot.data file.

= parser elements =
== tests.info ==
This is a system-wide file that defines the metrics associated with each benchmark test.
It is in json format, with each line consisting of a test name as the key, followed by a list of metric names.
ex: "Dhrystone": ["Dhrystone"],
ex: "Dhrystone": ["Dhrystone"],
In this example, the metric name is the same as the test name.
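A test with more than one metric lists several metric names for its key. For example, a file might look like the following (the Signaltest entry and its metric names are hypothetical, for illustration only):
{{{
{
    "Dhrystone": ["Dhrystone"],
    "Signaltest": ["avg_latency", "max_latency"]
}
}}}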

== parser.py ==
This is a python program that parses the log file for the value (otherwise
known as the Benchmark 'metric'), and calls a function called 'process_data'
in the common.py benchmark parser library.
The arguments to process_data are:
 * ref_section_pat = pattern used to find the threshold expression in the reference.log file
 * dictionary = map of values parsed from the log file. There should be a key:value pair for each metric gathered by this parser
 * 3rd arg = ???
 * 4th arg = ???
plib.CUR_LOG is the filename for the current log file
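For illustration, here is a minimal sketch of what such a parser.py could look like for a hypothetical one-metric Dhrystone-style test. The regex, metric name, log line format, and the FUEGO_PARSER_PATH variable are assumptions, and the undocumented 3rd and 4th process_data arguments are left as placeholders:
{{{
#!/usr/bin/env python
# Hypothetical parser.py sketch; the regex, metric name, and
# FUEGO_PARSER_PATH are illustrative assumptions, not actual Fuego code.
import os, re, sys

sys.path.insert(0, os.environ['FUEGO_PARSER_PATH'])  # assumed path to parser/common.py
import common as plib

# pattern used by process_data to find threshold sections in reference.log
ref_section_pat = r"^\[[\w.]+\|[gl]e\]"

# read the current test log; plib.CUR_LOG holds its filename
with open(plib.CUR_LOG) as f:
    log = f.read()

# parse the metric value out of the log (hypothetical log line format)
cur_dict = {}
m = re.search(r"^Dhrystones per Second:\s*([\d.]+)", log, re.MULTILINE)
if m:
    cur_dict["Dhrystone"] = m.group(1)

# hand the results to the common parser library; the 3rd and 4th
# arguments are not documented on this page (see the Parser module API)
sys.exit(plib.process_data(ref_section_pat, cur_dict, ..., ...))
}}}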
See [[parser.py]] and [[Parser module API]]

== reference.log ==
Each section in the reference.log file has a threshold specifier, which consists
of 2 lines:
 * [metric operator]
 * threshold value
The metric operator line specifies the name of the metric, a vertical bar ('|'), and either 'ge' or 'le', all inside square brackets.
The threshold value is a numeric value to compare the benchmark result with, to indicate success or failure.
Example:
{{{#!YellowBox
[SciMark.FFT|ge]
0
[SciMark.LU|ge]
0
}}}
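To make the comparison concrete, the pass/fail rule each section describes amounts to the following (an illustrative sketch, not the actual common.py code):
{{{
# Illustrative sketch of the threshold check; not the actual common.py code.
def check_threshold(metric_value, operator, threshold):
    """Return True (pass) if the measured value satisfies the threshold.

    operator is 'ge' or 'le', taken from the [metric|op] line.
    """
    if operator == 'ge':
        return float(metric_value) >= float(threshold)
    if operator == 'le':
        return float(metric_value) <= float(threshold)
    raise ValueError("unknown operator: %s" % operator)

# the [SciMark.FFT|ge] / 0 section above means:
#   check_threshold(fft_score, 'ge', 0)
}}}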
See [[reference.log]]

== tests.info ==
This is found in 5 places:
 * 1. /userdata/logs/tests.info (the one from fuego git repository)
 * 2. /home/jenkins/logs/tests.info (same as 1, via 'logs' symlink)
 * 3. /var/lib/jenkins/userContent/fuego.logs/tests.info (same as 1, via fuego.logs symlink)
 * 4. /home/jenkins/fuego/jobs/tests.info (the one from fuego-core git repo)
 * 5. /var/lib/jenkins/userContent/tests.info (same as 4 via symlink)
The following javascript program:
 * fuego/frontend-install/plugins_src/flot-plotter-plugin/src/main/webapp/flot/mod.js
 * same as: /var/lib/jenkins/plugins/flot/flot/mod.js
does the following lookup:
 * jQuery.ajax({ url: jenurl+'/tests.info', method: 'GET', datatype: 'json', async: false, success: getSuitesInfo});
var jenurl = 'http://'+'/'+location['host'] + '/' + prefix +'/userContent/fuego.logs/';
The resulting url is: http://<host>/<prefix>/userContent/fuego.logs//tests.info
The following URL retrieves the fuego-core tests.info file:
 * http://localhost:8080/fuego/userContent/tests.info
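The same lookup can be reproduced outside Jenkins; for example, this small Python 3 sketch fetches and prints the file (the URL assumes a default local Fuego/Jenkins install, as above):
{{{
# Fetch tests.info the way mod.js does, but from Python (Python 3 sketch;
# the URL assumes a default local Fuego/Jenkins install).
import json
from urllib.request import urlopen

url = "http://localhost:8080/fuego/userContent/tests.info"
with urlopen(url) as resp:
    suites = json.load(resp)

# print the metric names defined for each benchmark
for test, metrics in sorted(suites.items()):
    print(test, metrics)
}}}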

=== file format ===
json, with a map of:
 * key: list of metrics
 * the key is the second part of the test name
 * the metric is the name of the value parsed from the log file
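For example, the key for Benchmark.Dhrystone is 'Dhrystone'. A sketch of the lookup (a hypothetical helper, not actual Fuego code):
{{{
# Hypothetical helper showing the tests.info lookup described above.
import json

def metrics_for_test(tests_info_path, testname):
    """Return the metric list for a test such as 'Benchmark.Dhrystone'."""
    key = testname.split('.', 1)[1]      # the second part of the test name
    with open(tests_info_path) as f:
        suites = json.load(f)
    return suites[key]                   # e.g. ["Dhrystone"]
}}}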

== Benchmark.[testname].info.json ==
mod.js has:
 * jQuery.ajax({url: jenurl+'/'+testname+'/'+testname+'.info.json', method: 'GET', dataType: 'json', async: false, success: getBuildsInfo});
The resulting url is:
 * http://localhost:8080/fuego/userContent/fuego.logs/Benchmark.Dhrystone/Benchmark.Dhrystone.info.json

=== file format ===
list of device maps:
Each device map has:
 * key 'device': <target name>
 * key 'info': list of 3 lists:
    * test run id list (number)
    * firmware list
    * platform list
The 3 lists must have the same number of elements.
Sample:
{{{
[
  { "device": "bbb-poky-sdk",
    "info": [
      ["1", "2", "3"],
      ["3.8.13-bone50", "3.8.13-bone50", "3.8.13-bone50"],
      ["poky-qemuarm", "poky-qemuarm", "poky-qemuarm"]
    ]
  },
  { "device": "qemu-test-arm",
    "info": [
      ["4"],
      ["4.4.18-yocto-standard"],
      ["poky-qemuarm"]
    ]
  }
]
}}}
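Because the three lists must stay parallel, a consumer of this file can sanity-check it as follows (an illustrative sketch):
{{{
# Illustrative sanity check for the info.json structure shown above:
# each device map's 'info' value holds 3 parallel lists (run ids,
# firmware strings, platform strings) that must have equal lengths.
import json

def check_info_json(path):
    with open(path) as f:
        devices = json.load(f)
    for entry in devices:
        run_ids, firmwares, platforms = entry["info"]
        assert len(run_ids) == len(firmwares) == len(platforms), \
            "unbalanced lists for device %s" % entry["device"]
    return devices
}}}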

== Benchmark.[testname].[metricname].json file ==
Each benchmark test run has one or more metric value json files (depending on the number of metrics defined for the test).
They are placed in the log file directory for this test by the dataload.py program.
They have a name including the test name and the metric name:
 * ex: Benchmark.Dhrystone.Dhrystone.json
These are used by the 'flot' plugin to do dynamic charting for benchmark tests.

=== file format ===
Contents:
 * list of maps:
   * each map has keys: 'data', 'label', 'points'
     * data has a list of lists:
       * each list has a test run id, and a value
     * label has a string: <target>-<test>.<metric>[.ref]
     * points has a map: with 'symbol': <symbol-name>
 * each target has 2 data sets, one for the metric, and one for the metric.ref value
Sample:
{{{
[
  { "data": [
      ["1", 2500000.0],
      ["2", 2500000.0],
      ["3", 2500000.0]
     ],
"label": "bbb-poky-sdk-Dhrystone.Dhrystone", "points": {"symbol": "circle"} }, { "data": [ ["1", 1.0], ["2", 1.0], ["3", 1.0]], "label": "bbb-poky-sdk-Dhrystone.Dhrystone.ref", "points": {"symbol": "cross"} }, { "data": [ ["4", 909090.9]], "label": "qemu-test-arm-Dhrystone.Dhrystone", "points": {"symbol": "circle"} }, { "data": [ ["4", 1.0]], "label": "qemu-test-arm-Dhrystone.Dhrystone.ref", "points": {"symbol": "cross"} } ] }}}
    "label": "bbb-poky-sdk-Dhrystone.Dhrystone",
    "points": {"symbol": "circle"}
  },
  { "data": [
     ["1", 1.0],
     ["2", 1.0],
     ["3", 1.0]],
    "label": "bbb-poky-sdk-Dhrystone.Dhrystone.ref",
    "points": {"symbol": "cross"}
  },
  { "data": [
     ["4", 909090.9]],
    "label": "qemu-test-arm-Dhrystone.Dhrystone",
    "points": {"symbol": "circle"}
  },
  { "data": [
     ["4", 1.0]],
    "label": "qemu-test-arm-Dhrystone.Dhrystone.ref",
    "points": {"symbol": "cross"}
  }
]
}}}
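As a final illustration, this sketch loads one of these files and lists the data series that flot would chart (the file name is taken from the example above):
{{{
# Sketch: read a Benchmark.<test>.<metric>.json file and list its series.
# Each map is one flot data set: a label <target>-<test>.<metric>[.ref],
# a list of [run_id, value] points, and a point-symbol hint.
import json

with open("Benchmark.Dhrystone.Dhrystone.json") as f:
    series = json.load(f)

for s in series:
    print("%-45s %d points, symbol=%s"
          % (s["label"], len(s["data"]), s["points"]["symbol"]))
}}}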