Using Fuego with LAVA

This page has notes about using Fuego with LAVA (Linaro Automated Validation Architecture).

Some users have pre-existing testing labs that use LAVA, or want to use LAVA as a board management layer or provisioning layer for their boards.

This page has information about different ways to use Fuego in a LAVA environment.

Most of this is based on a presentation by Liu Wenlong of Fujitsu. A link to the presentation is below.

There are five main approaches to integration with LAVA, which I will refer to as:

- the LAVA hacking session approach
- the Functional.lava approach
- the Functional.Linaro test
- Fuego multi-node
- Fuego native

Presentations

See

Different Approaches

LAVA hacking session approach

Fuego includes some scripts that you can reference in a board description to use LAVA as the board manager for that board.

To do this, you have to create two additional files for each board that is under management by LAVA:

- a $BOARD.lava file
- a $BOARD.lava.yaml file

Then, you modify Fuego's board file to use special "setup" and "teardown" routines, which will be called as part of a test execution for that board.

Add a $BOARD.lava file

The $BOARD.lava file resides in fuego-ro/boards (alongside the Fuego board definition file). It contains several shell environment variables that are used for the LAVA integration.

Here is a list of the variables used:

There is a sample $BOARD.lava file in the Fuego repository, in the fuego-ro/boards directory. The most common way to create your own $BOARD.lava file is to copy this existing example and edit the values to match your board and LAVA configuration.
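
Purely as an illustration (the variable names below are assumptions, not the actual list from the sample file), a $BOARD.lava file might look something like this:

    # Hypothetical $BOARD.lava sketch. The variable names here are
    # assumptions for illustration; consult the sample file in
    # fuego-ro/boards for the real ones.
    LAVA_SERVER="lava.example.com"        # LAVA server to submit jobs to
    LAVA_USER="fuego"                     # LAVA account used for submission
    LAVA_AUTH_TOKEN="not-a-real-token"    # API token for that account
    LAVA_DEVICE_TYPE="beaglebone-black"   # LAVA device type for this board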

Add a $BOARD.lava.yaml file

The $BOARD.lava.yaml file is a job definition file for LAVA, which Fuego populates with information from other sources before submitting the job.
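
To give a feel for the flow (this is not Fuego's actual mechanism; the placeholder names and paths are invented for the example), populating such a template from the board's .lava variables might look like:

    # Hypothetical sketch of filling a LAVA job template. The placeholder
    # names, file paths, and use of sed are assumptions for illustration
    # only; Fuego's real population step may differ.
    . fuego-ro/boards/$BOARD.lava
    sed -e "s|__DEVICE_TYPE__|$LAVA_DEVICE_TYPE|" \
        -e "s|__JOB_NAME__|fuego-$BOARD|" \
        fuego-ro/boards/$BOARD.lava.yaml > /tmp/$BOARD-job.yaml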

Use LAVA setup and teardown scripts

These scripts are called:

To use these, add the following lines to your Fuego board file:
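
The actual lines are not reproduced here; as a purely hypothetical sketch (the variable and script names below are invented, not Fuego's real API), the additions might look like:

    # Hypothetical board-file additions. The names below are assumptions
    # for illustration; see Fuego's LAVA integration scripts for the
    # real setup/teardown hook names.
    TARGET_SETUP_SCRIPT="$FUEGO_RO/scripts/lava_setup.sh"
    TARGET_TEARDOWN_SCRIPT="$FUEGO_RO/scripts/lava_teardown.sh"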

Functional.lava approach

... FIXTHIS - finish this page ...

This approach was proposed by Fujitsu (in the above presentation), but as of this writing (December 2018) it has not been integrated into Fuego.

It uses the "release engineering" scripts from AGL to create a set of jobs for Fuego, using a new Fuego test called "Functional.lava".

Functional.lava will create a LAVA test job.

Then lava-tool is used to submit the job to LAVA for execution.
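
For reference, submitting a job with lava-tool looks roughly like this (the user name, server, and job file name are placeholders):

    # Submit a job definition to a LAVA server for execution.
    # <user> and lava.example.com stand in for your LAVA account and
    # server; myboard-job.yaml is the generated job file.
    lava-tool submit-job http://<user>@lava.example.com myboard-job.yaml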

Functional.Linaro test

This is a Fuego test that allows one to run a Linaro test in Fuego.

A test variable is used to indicate which test from the Linaro test repository to run.

See the test (in the Fuego 'next' branch as of this writing: 2019-04-04) for details of its operation.
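
Invoking it presumably looks something like the following (a sketch only; the board name is a placeholder, and how the test variable is supplied should be checked against the test's spec file):

    # Hypothetical invocation sketch. The mechanism for selecting which
    # Linaro test to run is an assumption; check the Functional.Linaro
    # test definition for the real interface.
    ftc run-test -b myboard -t Functional.Linaro -s default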

Fuego multi-node

Chase Qi has been working on a system to execute the Fuego docker container on the LAVA dispatcher node, and communicate with it as one node of a multi-node setup in LAVA. See this page for details: https://github.com/Linaro/test-definitions/blob/master/automated/linux/fuego-multinode/README.MD

Fuego native

In this approach, Fuego is installed "headless" and directly onto the board under test, and then executed from the command line.

This requires that the board under test have support for bash and python (the languages used by the Fuego core), and whatever else is needed for any individual test. This system treats the device under test as the host, and uses a 'local' board to execute tests. For tests that have a compiled test program, this means that the device under test also needs a toolchain installed that is capable of building the test programs.

Use the Fuego 'install-debian.sh' script to install Fuego directly onto a machine (that is, not into a docker container), and configure a LAVA job to call 'ftc run-test -b local -t <testname>' for the test that you want to run (which can be a batch test to run multiple tests in a single run).
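
A minimal sketch of that flow, assuming a Debian-based device (the repository URL, script location, and test name are illustrative):

    # On the device under test (assumes Debian-based, with bash, python,
    # git, and a toolchain already installed).
    git clone https://bitbucket.org/fuegotest/fuego.git
    cd fuego
    ./install-debian.sh    # native install: no Docker container (script
                           # location within the repository may differ)

    # Run a single test (or a batch test) against the 'local' board:
    ftc run-test -b local -t Functional.hello_world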

Obviously, this approach cannot run tests that reboot the board or re-provision it.

More ideas

From an exchange between Dan Rue and Tim Bird on the fuego mailing list:

Dan said:

> I'm wondering what it would take to make the fuego tests [1] compatible
> with LAVA test definitions, so that they can be run directly in any LAVA
> job. In my case, we can assume that the test files already exist on the
> target.
[1] https://bitbucket.org/fuegotest/fuego-core/src/master/tests/

This is one of the big differences in assumptions between LAVA and Fuego.

There is work afoot to address this, via two different methods:

I'm working on 1), and to my knowledge Fujitsu engineers are working on 2) and have submitted some patches to LAVA.

> I looked at Using Fuego with LAVA[2] and the slides
> but it seems LAVA was used to boot the board and then Fuego
> ssh'd in to run tests. It seems that 'Solution I' is what I'm asking
> about, but I can't tell how tests were actually run.

[2] http://fuegotest.org/wiki/Using_Fuego_with_LAVA

I'll just start shooting out some ideas and thoughts here.

The second major impedance mismatch between Fuego and LAVA is Fuego's architecture of driving the test from the host. Many Fuego tests have a "run" phase that consist of a single "report" line, which is the Fuego function that (from the host) starts a test program on the target, saves its output as the test log, and collects the test program return code. For such tests, one could imagine a method of converting these directly into LAVA jobs.
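
For concreteness, a single-line "run" phase in a test's fuego_test.sh typically looks about like this (following Fuego conventions; the test program name is an example):

    # Minimal "run" phase from a Fuego test's fuego_test.sh. 'report'
    # executes the command on the target (driven from the host), saves
    # the output as the test log, and records the return code.
    function test_run {
        report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello_world"
    }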

However, other Fuego tests do more processing or interaction on the host. For example, Functional.serial_rx sets up the host side of the serial port for the test. Functional.ospfd modifies the zebra configuration file prior to the test (from the host). Functional.JAVA runs a sequence of tests as separate java commands, and several of the filesystem benchmark tests (one example is Benchmark.fio) do mount point preparation and cleanup from the host.

With regard to directly executing Fuego tests in LAVA, I think it might be possible to do the following:

You would need a Fuego support library on the target to support the Fuego functions called by the instructions in "test_run", but you might get away with a significantly reduced subset of the entire Fuego function library. Only a few functions are called from 'test_run' in practice.
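
As a sketch of how small that subset might be (this is an assumption about scope, not an existing Fuego library), a target-side stand-in for 'report' could be as simple as:

    # Hypothetical target-side stand-in for Fuego's 'report' function.
    # Runs the command locally, tees output into the test log, and saves
    # the return code for later collection. TEST_LOG and TEST_RES_DIR
    # are invented names. Note that PIPESTATUS is a bash-ism, which is
    # exactly the portability concern raised below.
    report() {
        eval "$@" 2>&1 | tee -a "$TEST_LOG"
        echo "${PIPESTATUS[0]}" > "$TEST_RES_DIR/return_code"
    }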

One issue that might be a problem is that fuego_test.sh is allowed to use bash-isms. That is, Fuego uses a full-featured shell running on the host. Bash might not be available on the target.

There's also the issue of log parsing, which Fuego does on the host, and LAVA most often does on the target (but can do on the host). Fuego does this as a post-processing step, and LAVA (to my knowledge) does this synchronously at test program execution time. Tim Orling had an interesting session at ELC Europe talking about regularizing the output of tests, that, if adopted, would solve a lot of problems (for both LAVA and Fuego).

And finally, there's the issue of dependency checking. You are certainly free to take our needs_check.sh library and apply it to your systems. Or, you could take our test_pre_check functions and execute them on the target, as part of the test.
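
For example, a test's pre-check phase is often just a few assertions about variables and programs the test needs (a sketch following Fuego conventions; the variable name is illustrative):

    # Sketch of a Fuego test_pre_check. The variable name is an example;
    # see real tests in fuego-core for exact usage of assert_define.
    function test_pre_check {
        assert_define BENCHMARK_FIO_MOUNT_BLOCKDEV
    }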

One other approach would be to do pre-builds of the test programs, and modify Fuego to be able to run without its full Docker container or Jenkins (we're pretty close to this already). The dependencies would be python and bash. And then just run that as a test execution engine on the target.

As part of the test definition standards work that I took as an action item from the testing summit, I think it would be good to examine the set of operations that most tests need to perform, and standardize on them in a target-side library. Some of the things that Fuego does host-side could be moved to be target-side and put into a wrapper script, if such a library existed. It might be good to standardize the name and invocation rules for such a script. This would help both Fuego and LAVA.
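
A strawman for such a target-side wrapper (everything below is hypothetical; the value is in standardizing the invocation, not in these particular names):

    #!/bin/sh
    # Hypothetical standardized target-side test wrapper (strawman).
    # A harness-neutral entry point like this could let both Fuego and
    # LAVA drive the same test material on the target.
    # Usage: run-test.sh <test-directory> <log-file>
    test_dir="$1"; log="$2"
    cd "$test_dir" || exit 1
    [ -x ./pre_check ] && ./pre_check        # optional dependency checks
    ./run > "$log" 2>&1                      # execute the test, capture log
    echo "return_code: $?" >> "$log"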