Benchmark parser notes

Here are notes about different elements of the log parser and plot processing.

files

There are several files used:
  • system-wide (parser and results processing library):
    • common.py
    • fuego_parser_results.py
    • fuego_prepare_chart_data.py
  • per-test:
    • parser.py
    • criteria.json
    • chart_config.json
    • reference.log (legacy; see below)
  • per-run:
    • run.json

overall flow

The following occurs during the processing phase of a test (a minimal parser.py sketch follows this list):
  • after running the test, Fuego calls processing
    • this calls parser.py
      • imports common.py (usually as plib)
      • calls plib.parse_log with a search pattern
        • reads the test log and returns a list of matches
      • puts the results into a python dictionary
      • calls plib.process()
        • evaluates the results according to the criteria.json file
        • writes the results to results.json
        • writes the results to flat_plot_data.txt
        • generates chart data
          • uses chart_config.json as input
          • puts the output in flot_chart_data.json
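
To make this flow concrete, here is a minimal parser.py sketch. The regular expression, the metric name, the FUEGO_CORE path, and the exact layout of the measurements dictionary are illustrative assumptions; consult an existing test's parser.py for the authoritative form.

    #!/usr/bin/python
    # Minimal parser.py sketch (illustrative; the regex, metric name,
    # FUEGO_CORE path, and dictionary layout are assumptions)
    import os
    import sys

    # common.py lives with the other parser scripts (path is an assumption)
    sys.path.insert(0, os.environ['FUEGO_CORE'] + '/engine/scripts/parser')
    import common as plib

    # parse_log reads the test log and returns a list of matches
    matches = plib.parse_log(r"FFT\s+score:\s+(\d+\.\d+)")

    # put the results into a python dictionary
    measurements = {}
    if matches:
        measurements['SciMark.FFT'] = [{'name': 'score',
                                        'measure': float(matches[0])}]

    # process() evaluates the results against criteria.json and writes
    # results.json, flat_plot_data.txt, and the flot chart data
    sys.exit(plib.process(measurements))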

In the Jenkins interface:

  • On the status page for a Benchmark test, plots are automatically generated from the measure data collected during the test runs of the benchmark.
    • Jenkins uses the flot (jQuery plotting) plugin to show charts based on the data in the flot_chart_data.json file
      • the code that reads flot_chart_data.json is in /fuego-core/engine/scripts/mod.js (an illustration of flot's series format follows this list)
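
The flot plugin consumes an array of series objects, each with a label and a list of [x, y] data points. The following is only an illustration of that general shape, with made-up values; Fuego's actual flot_chart_data.json may wrap the series in additional metadata:

    [
        { "label": "SciMark.FFT",
          "data": [ [1, 52.7], [2, 53.1], [3, 52.9] ] },
        { "label": "SciMark.LU",
          "data": [ [1, 61.2], [2, 60.8], [3, 61.5] ] }
    ]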

parser elements

FIXTHIS - These are described elsewhere.

legacy files

These files are deprecated, but are still supported by Fuego:

reference.log

This file has been replaced by criteria.json. However, if a test does not have a criteria.json file, the reference.log file will be read and converted into the necessary structure by the parser library (a sketch of this conversion appears at the end of this section).

Each section in the reference.log file has a threshold specifier, which consists of two lines:

  • [metric|operator]
  • threshold value

The metric operator line specifies the name of the metric, a vertical bar ('|'), and either 'ge' (greater than or equal) or 'le' (less than or equal), all inside square brackets.

The threshold value is a numeric value that the benchmark result is compared against, using the operator, to indicate success or failure.

Example:

    [SciMark.FFT|ge]
    0
    [SciMark.LU|ge]
    0

See reference.log
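
To illustrate the conversion that the parser library performs when only a reference.log is present, here is a minimal Python sketch. load_reference_log is a hypothetical helper written for this page, not Fuego's actual code:

    import re

    def load_reference_log(path):
        # Hypothetical helper (not Fuego's actual code): convert a legacy
        # reference.log into a criteria.json-style dictionary.
        criteria = []
        with open(path) as f:
            lines = [line.strip() for line in f if line.strip()]
        # each threshold specifier is two lines: [metric|operator], then a value
        for i in range(0, len(lines) - 1, 2):
            m = re.match(r"\[(.+)\|(ge|le)\]$", lines[i])
            if m:
                criteria.append({
                    "tguid": m.group(1),
                    "reference": {
                        "value": float(lines[i + 1]),
                        "operator": m.group(2),
                    },
                })
        return {"criteria": criteria}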
