Introduction

"Monitors" is a proposed feature for Fuego to execute separate processes (or use external hardware) to perform measurements before, after, and/or during a test.

This data can be used to stress the system, or to gather more information about the execution of the target board related to the test. However, the primary intended use is to add an extra dimension to testing, by allowing the data from a monitor to be used for results determination.

For example, imagine that you wish to capture data from a power measuring device (a separate piece of hardware attached to the device under test) during test execution. Suppose the measuring device reported data via a serial connection to the host. A test could start a monitor, which would collect data during the test, and then the test could check that data to see if the power exceeded some threshold. In this case, the test "log" data would come from the monitor, rather than the program being executed on the target.
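The scenario above might look something like the following sketch. The device path, log location, and function name are illustrative assumptions, not part of Fuego:

```shell
# Hypothetical power-monitor sketch: read lines from a power meter's
# serial port and append them, timestamped, to a log for later checking.
# The DEV and LOG defaults below are assumptions, not Fuego settings.
DEV=${POWER_METER_DEV:-/dev/ttyUSB0}
LOG=${POWER_LOG:-/tmp/power-monitor.log}

monitor_power() {
    # Prefix each reading with a Unix timestamp so the test can later
    # correlate power samples with events in its own log.
    while IFS= read -r line; do
        printf '%s %s\n' "$(date +%s)" "$line"
    done < "$DEV" >> "$LOG"
}
```

A test would start `monitor_power` in the background before running its workload, then examine the log in its post-processing step.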

Ideas

There needs to be a mechanism to start a monitor, stop a monitor, and tell the monitor where to place data (probably the log directory).

There needs to be a mechanism for a test to check the monitor data after the test (either by converting the monitor data into the test data, or by telling the parser the alternate data stream to check).

There could be multiple monitors running.
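As a sketch of how these mechanisms might fit together (the function names, the use of PID files, and the log-directory convention are all assumptions, not an existing Fuego interface):

```shell
# Hypothetical monitor start/stop sketch. Each monitor's output goes
# into the log directory, and its PID is recorded so it can be stopped
# later. Multiple monitors can run at once, each under its own name.
LOGDIR=${LOGDIR:-/tmp/fuego-logs}

start_monitor() {
    name=$1; shift
    mkdir -p "$LOGDIR"
    # Run the monitor command in the background, capturing its output
    # in the log directory so the test (or parser) can check it later.
    "$@" > "$LOGDIR/$name.log" 2>&1 &
    echo $! > "$LOGDIR/$name.pid"
}

stop_monitor() {
    name=$1
    kill "$(cat "$LOGDIR/$name.pid")" 2>/dev/null
    rm -f "$LOGDIR/$name.pid"
}
```

With per-name log and PID files, several monitors can run concurrently, and a parser can be pointed at `$LOGDIR/<name>.log` as an alternate data stream.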

Details

Where is meta-data for the monitor setup kept?

What should the monitor output?

How should the monitor be defined?

Aligning monitor data with testlog data

There will need to be some kind of system to synchronize the monitor data with the testlog data.

For now, ignore this issue, but here is one idea:

Annotate each line of monitor output (and perhaps each line of the testlog) with a timestamp, so the two data streams can be matched up by time.
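One minimal form of the time-annotation idea, as a sketch (the helper name is made up):

```shell
# Prefix each line read on stdin with a Unix timestamp, so monitor
# output and testlog output can later be aligned by time.
annotate() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date +%s)" "$line"
    done
}
```

Both the monitor and the test would pipe their output through `annotate`, and alignment becomes a merge-by-timestamp over the two logs.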

How do monitors stop

The test can terminate them manually.

Fuego should have the capability to stop them automatically when the test_run is complete. This means that Fuego must track monitors and manage their lifecycles.
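One way the lifecycle tracking could work, as an assumption-laden sketch (this is not Fuego's actual implementation):

```shell
# Hypothetical lifecycle tracking: Fuego-side code records each monitor
# PID, and a single cleanup function kills whatever is still running
# when the test_run completes (or the harness exits early).
MONITOR_PIDS=""

track_monitor() {
    MONITOR_PIDS="$MONITOR_PIDS $1"
}

stop_all_monitors() {
    for pid in $MONITOR_PIDS; do
        kill "$pid" 2>/dev/null
    done
    MONITOR_PIDS=""
}

# Run cleanup automatically when the harness process exits, so monitors
# are stopped even if the test terminates abnormally.
trap stop_all_monitors EXIT
```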

Use of monitor data for test results

A monitor could be used for diagnostic data only (not used by the test, but only by a human for analysis after the test), or the post-processing step of the test might use the data to determine the result.

For example, here are two different scenarios:

A power monitor logs samples that are never parsed by the test; an engineer inspects them by hand after the run, purely for diagnostics.

The test's post-processing step parses the power monitor's log and marks the test as failed if any sample exceeds a threshold.
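The results-determination scenario could be sketched as follows. The log format ("timestamp milliwatts" per line) and the function name are assumptions for illustration:

```shell
# Scan a monitor log whose lines are "<timestamp> <milliwatts>" (an
# assumed format) and return nonzero if any sample exceeds the limit.
check_power_log() {
    log=$1; limit=$2
    # awk exits nonzero on the first over-threshold sample, so the
    # function's exit status directly gives a pass/fail result.
    awk -v max="$limit" '$2 + 0 > max { exit 1 }' "$log"
}
```

A test's post-processing step could then use this exit status to report the result, with the monitor log acting as the test "log" data.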

Resources

0day

0day has monitors. They are implemented as shell scripts.

They have features for automatically adding the sampling loop.

A monitor can be as simple as a mention of a file in /proc. Apparently, the framework will sample that file at a regular frequency.
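Such a sampler could be sketched like this (the interval and count arguments are assumptions about the idea, not 0day's actual interface):

```shell
# 0day-style sampler sketch: dump a file's contents at a fixed
# interval, a given number of times. A framework could generate this
# loop automatically from just the file name.
sample_file() {
    file=$1; interval=$2; count=$3
    i=0
    while [ "$i" -lt "$count" ]; do
        cat "$file"
        sleep "$interval"
        i=$((i + 1))
    done
}
```

For example, `sample_file /proc/meminfo 5 12` would capture a minute of memory statistics at five-second intervals.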

There is some kind of timeout for a monitor.

(I'm not sure whether monitors stop themselves, or are stopped by the system)

There is also some mechanism where the monitor will do setup, then wait for a signal (not a Unix signal, but some indication) that they should start.
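The setup-then-wait idea might look like the following sketch, where the start indication is a flag file; the real signaling mechanism in 0day is not specified here, so this is purely an assumption:

```shell
# A monitor does its setup, then polls for a start flag (a file here)
# before it begins collecting data.
wait_for_start() {
    flag=$1
    while [ ! -e "$flag" ]; do
        sleep 1
    done
}
```

The framework would create the flag file once the workload actually begins, so the monitor's samples cover only the interesting window.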

Workload automation

wa has a feature called "augmentations" that is similar to monitors (I believe). It has a plugin type called an "instrument", which can be used to gather data during a run (?).