Benchmark parser notes

Here are notes about different elements of the log parser and plot processing.
{{TableOfContents}}

= files =
There are several files used (a sample layout is sketched after this list):
 * system-wide (parser and results processing library):
   * [[common.py]]
   * fuego_parser_results.py
   * fuego_prepare_chart_data.py
 * per-test:
   * inputs and control files:
     * [[parser.py]]
     * [[chart_config.json]]
       * replaces metrics.json
     * [[criteria.json]]
       * replaces reference.log
     * [[reference.json]]
       * replaces reference.log
   * outputs:
     * results.json
     * flat_plot_data.txt
     * flot_chart_data.json
 * per-run:
   * run.json
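For orientation, here is a sketch of one plausible on-disk layout for the per-test files. The test name Benchmark.mybench and the directory path are illustrative assumptions, not something this page defines:
{{{#!YellowBox
# hypothetical layout (assumed for illustration)
fuego-core/engine/tests/Benchmark.mybench/
    parser.py            # input: extracts measures from the test log
    chart_config.json    # input: controls chart generation (replaces metrics.json)
    criteria.json        # input: pass/fail thresholds (replaces reference.log)
    reference.json       # input: replaces reference.log
# outputs (results.json, flat_plot_data.txt, flot_chart_data.json) and the
# per-run run.json are written during result processing
}}}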

= overall flow =
The following occurs during the processing phase of a test (a minimal parser.py sketch follows this list):
   * after running the test, Fuego calls [[function_processing|processing]]
     * this calls parser.py
        * imports common.py (usually as plib)
        * calls plib.parse_log with a search pattern
          * reads the test log and returns a list of matches
        * puts the results into a python dictionary
        * calls plib.process()
          * evaluates the results according to the '''criteria.json''' file
          * writes the results to '''results.json'''
          * writes the results to '''flat_plot_data.txt'''
          * generates chart data
            * uses '''chart_config.json''' as input
            * puts the output in '''flot_chart_data.json'''
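As a concrete illustration of this flow, here is a minimal parser.py sketch. The search pattern, the measure name SciMark.FFT, and the exact shapes of plib.parse_log() and plib.process() are assumptions made for this sketch; see [[parser.py]] and [[common.py]] for the real interface.
{{{#!YellowBox
# hypothetical minimal parser.py (sketch; interface details assumed)
import os
import sys

# make the parser library importable (this path is an assumption)
sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
import common as plib

# scan the test log for lines such as "FFT Mflops: 123.45"
regex_string = r"^FFT Mflops:\s+([\d.]+)"
matches = plib.parse_log(regex_string)

measurements = {}
if matches:
    # record the first captured value as the SciMark.FFT measure
    # (whether matches are re.Match objects or tuples is assumed here)
    measurements['SciMark.FFT'] = float(matches[0].group(1))

# process() evaluates the criteria, writes results.json, flat_plot_data.txt
# and flot_chart_data.json, and returns the overall result code
sys.exit(plib.process(measurements))
}}}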
In the Jenkins interface:
 * On the status page for a Benchmark test, plots are automatically generated from the measure data collected over the test runs for the benchmark.
   * Jenkins calls the flot (jquery plotter) plugin to show charts for the data, based on the flot_chart_data.json file
     * the code that reads flot_chart_data.json is in /fuego-core/engine/scripts/mod.js

= parser elements =
FIXTHIS - These are described elsewhere.

== legacy files ==
These files are deprecated, but are still supported by Fuego:

=== reference.log ===
This file has been replaced by '''criteria.json'''.  However, if a test
does not have a criteria.json file, this file will be read and converted
into the necessary structure by the parser library.
Each section in the reference.log file has a threshold specifier, which consists
of 2 lines:
 * [metric operator]
 * threshold value
The metric operator line specifies the name of the metric, a vertical bar ('|'), and either 'ge' or 'le', all inside square brackets.
The threshold value is a numeric value that the benchmark result is compared against to determine success or failure.
Example:
{{{#!YellowBox
[SciMark.FFT|ge]
0
[SciMark.LU|ge]
0
}}}
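For comparison, the reference.log example above would correspond to a criteria.json along the following lines. The schema shown here (a "criteria" list with "tguid" and "reference" entries) is an assumption about the converted structure, so consult [[criteria.json]] for the authoritative format:
{{{#!YellowBox
{
    "criteria": [
        {
            "tguid": "SciMark.FFT",
            "reference": {
                "value": 0,
                "operator": "ge"
            }
        },
        {
            "tguid": "SciMark.LU",
            "reference": {
                "value": 0,
                "operator": "ge"
            }
        }
    ]
}
}}}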
See [[reference.log]]