Using Fuego with LAVA

{{TableOfContents}}
This page has notes about using Fuego with LAVA (Linaro Automated
Validation Architecture).
Some users have pre-existing testing labs that use LAVA, or want to use
LAVA as a board management layer or provisioning layer for their boards.
This page has information about different ways to use Fuego in a LAVA
environment.
Most of this is based on a presentation by Liu Wenlong, of Fujitsu.  A
link to this presentation is below.
There are five main approaches to integration with LAVA, which I will refer
to as:
 * "LAVA hacking session" approach - run Fuego jobs in a LAVA lab
 * "Functional.lava" approach - generate LAVA jobs from Fuego, and submit them with lava-tool
 * "Functional.Linaro" approach - run Linaro tests in Fuego
 * "Fuego multi-node" approach - run Fuego in a LAVA lab, as one node of a multi-node job
 * "Fuego native" approach - run the Fuego command line directly on the device under test

= Presentations =
See
 * [[file:Establish_an_automated_testing_lab_for_AGL_fujitsu_Liu_Wenlong.pdf|Establish an automated test lab for AGL]] (Presentation by Liu Wenlong, of Fujitsu)
 * [[file:How_to_integrate_Fuego_into_your_CI_loop-Toshiba-Daniel-Sangorrin.pdf|How to integrate Fuego automated testing tool in your CI loop]] (Presentation by Daniel Sangorrin)

= Different Approaches =
== LAVA hacking session approach ==
Fuego includes some scripts that you reference in a board description
to utilize LAVA as the board manager for that board.
To do this, you have to create two additional files for each board
that is under management by LAVA:
 * $BOARD.lava holds environment variables used by LAVA for job submission
 * $BOARD.lava.yaml holds a template for the LAVA test job
Then, you modify Fuego's board file to use special "setup" and "teardown"
routines, which will be called as part of a test execution for that board.

=== Add a $BOARD.lava file ===
The $BOARD.lava file resides in fuego-ro/boards (alongside the Fuego board
definition file).  It contains several shell environment variables that
are used for the LAVA integration.
Here is a list of the variables used:
Here is a list of the variables used:
 * LAVA_HOST="lava.myhost.org/RPC2"
 * LAVA_USER="myusername"
 * LAVA_TOKEN="mylavatoken"
 * DEFAULT_LAVA_TEMPLATE="raspberrypi3.lava.yaml"
 * DEFAULT_LAVA_BOOT_TYPE="bootm"
 * DEFAULT_LAVA_KERNEL="http://lab/rpi3/Image-raspberrypi3.bin"
 * DEFAULT_LAVA_DTB="http://lab/rpi3/Image-bcm2710-rpi-3-b.dtb"
 * DEFAULT_LAVA_NFSROOTFS="http://lab/rpi3/agl-image-ivi-qa-raspberrypi3.tar.xz"
 * DEFAULT_LAVA_ROOTFS_COMPRESSION="xz"
 * DEFAULT_LAVA_LOGIN_PROMT="login:"
 * DEFAULT_LAVA_LOGIN_USER="root"
 * DEFAULT_LAVA_LOGIN_PASS="mypassword"
 * DEFAULT_LAVA_PUBKEY=""
 * DEFAULT_LAVA_SCHEDULE_TIMEOUT_MINUTES=30
 * DEFAULT_LAVA_BOOT_TIMEOUT_MINUTES=15
 * SSH_PORT="2222"
 * IPADDR="10.0.0.235"
There is a sample $BOARD.lava file in the fuego repository, in the
fuego-ro/boards directory.  Most commonly, the way to create your
own $BOARD.lava file is to copy this existing example
into your own file, and edit the values to match those required
for your board and LAVA configuration.
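For example, the steps would look something like this (the sample file
name "raspberrypi3.lava" is an assumption; use whatever sample is
actually present in fuego-ro/boards):
{{{#!YellowBox
# copy the sample .lava file and adjust it for your board
cd fuego-ro/boards
cp raspberrypi3.lava myboard.lava
vi myboard.lava    # set LAVA_HOST, LAVA_USER, LAVA_TOKEN, URLs, etc.
}}}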

=== Add a $BOARD.lava.yaml file ===
The $BOARD.lava.yaml file is a job definition template for LAVA, which
Fuego populates with information from the $BOARD.lava file and from the
test being run.
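As a rough sketch of the idea only (the placeholder names here are
hypothetical, not Fuego's actual template syntax), populating such a
template amounts to substituting board-specific values into the job
definition:
{{{#!YellowBox
# illustrative only: substitute values from $BOARD.lava into the
# job template (the __PLACEHOLDER__ names are assumptions)
. fuego-ro/boards/myboard.lava
sed -e "s|__KERNEL__|$DEFAULT_LAVA_KERNEL|" \
    -e "s|__DTB__|$DEFAULT_LAVA_DTB|" \
    -e "s|__NFSROOTFS__|$DEFAULT_LAVA_NFSROOTFS|" \
    myboard.lava.yaml > job.yaml
}}}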

=== Use LAVA setup and teardown scripts ===
These scripts are called:
 * fuego-lava-target-setup
 * fuego-lava-target-teardown
To use these, add the following lines to your Fuego board file:
{{{#!YellowBox
TARGET_SETUP_LINK="fuego-lava-target-setup"
TARGET_TEARDOWN_LINK="fuego-lava-target-teardown"
}}}

== Functional.lava approach ==
... FIXTHIS - finish this page ...
This approach was proposed by Fujitsu (in the above presentation), but
as of this writing (December, 2018), it has not been integrated into Fuego.
It uses the "release engineering" scripts from AGL to create a set of
jobs for Fuego, using a new Fuego test called "Functional.lava".
Functional.lava will create a LAVA test job.
Then lava-tool is used to submit the job to LAVA for execution.
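For reference, a job submission with lava-tool looks something like the
following (the server URL, user name and job file name are placeholders):
{{{#!YellowBox
# authenticate once against the LAVA server
lava-tool auth-add http://myusername@lava.myhost.org/RPC2
# submit the job definition generated by Functional.lava
lava-tool submit-job http://myusername@lava.myhost.org/RPC2 job.yaml
}}}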

== Functional.Linaro test ==
This is a Fuego test that allows one to run a Linaro test in Fuego.
A test variable is used to indicate which test from the Linaro test
repository to run.
See the test (in the Fuego 'next' branch as of this writing: 2019-04-04)
for details of its operation.
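A hypothetical invocation might look like the following; both the
variable name and the dynamic-variable syntax here are assumptions, so
check the test's spec and the ftc documentation for the real ones:
{{{#!YellowBox
# run the Functional.Linaro test, selecting a Linaro test to execute
# (the variable name 'TEST' is an assumption; see the test's spec)
ftc run-test -b myboard -t Functional.Linaro --dynamic-vars "{'TEST': 'smoke'}"
}}}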

== Fuego multi-node ==
Chase Qi has been working on a system to execute the Fuego docker container
on the LAVA dispatcher node, and communicate with it as one node of a multi-node setup in LAVA.  See this page for details:
https://github.com/Linaro/test-definitions/blob/master/automated/linux/fuego-multinode/README.MD

== Fuego native ==
In this approach, Fuego is installed "headless" and directly onto the
board under test, and then executed from the command line.
This requires that the board under test have support for bash and python
(the languages used by Fuego core), and whatever else is needed for
any individual test.  This system treats the device under test as
the host, and uses a 'local' board to execute tests.  For tests that
have a compiled test program, this means that the device under test
needs to also have a toolchain installed, capable of building the
test programs.
Use the Fuego 'install-debian.sh' script to install Fuego directly
onto a machine (that is, not into a docker container), and configure
a LAVA job to call 'ftc run-test -b local -t <testname>' for the
test that you want to run (which can be a batch test to run multiple
tests in a single run).
Obviously, this system is not capable of doing tests that reboot
the board or re-provision the board.
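A minimal sketch of this flow, assuming a Debian-based target with
network access (the script location at the top of the fuego repository,
and the test name, are illustrative):
{{{#!YellowBox
# on the device under test: install Fuego natively (not in a
# docker container)
git clone https://bitbucket.org/fuegotest/fuego.git
cd fuego
./install-debian.sh

# then have the LAVA job invoke a test against the 'local' board
# (Functional.hello_world is just an example test name)
ftc run-test -b local -t Functional.hello_world
}}}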

= More ideas =
From an exchange between Dan Rue and Tim Bird on the fuego mailing list:
Dan said:
{{{
> I'm wondering what it would take to make the fuego tests [1] compatible
> with LAVA test definitions, so that they can be run directly in any LAVA
> job. In my case, we can assume that the test files already exist on the
> target. 
}}}
[1] https://bitbucket.org/fuegotest/fuego-core/src/master/tests/
This is one of the big assumption differences between LAVA and Fuego.
There is work afoot to address this, via 2 different methods:
 * 1) have Fuego create a package (build artifact) that could be included in the LAVA image (at image build time)
 * 2) have LAVA support deployment of test materials at test runtime.
To my knowledge, I'm working on 1) and Fujitsu engineers are working on 2)
and have some patches they've submitted to LAVA.
{{{
> I looked at Using Fuego with LAVA[2] and the slides 
> but it seems LAVA was used to boot the board and then Fuego
> ssh'd in to run tests. It seems that 'Solution I' is what I'm asking
> about, but I can't tell how tests were actually run.
}}}
[2] http://fuegotest.org/wiki/Using_Fuego_with_LAVA
I'll just start shooting out some ideas and thoughts here.
The second major impedance mismatch between Fuego and LAVA is
Fuego's architecture of driving the test from the host.  Many
Fuego tests have a "run" phase that consists of a single "report"
line, which is the Fuego function that (from the host) starts
a test program on the target, saves its output as the test
log, and collects the test program return code.  For such
tests, one could imagine a method of converting these directly
into LAVA jobs.
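For instance, a typical single-line run phase in a fuego_test.sh looks
something like this (the test program name is illustrative):
{{{#!YellowBox
# typical single-"report" run phase from a fuego_test.sh
function test_run {
    report "cd $BOARD_TESTDIR/fuego.$TESTDIR; ./hello_world"
}
}}}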
However, other Fuego tests do more processing or interaction on
the host. For example, Functional.serial_rx sets up the host side
of the serial port for the test. Functional.ospfd modifies the
zebra configuration file prior to the test (from the host).
Functional.JAVA runs a sequence of tests as separate java commands,
and several of the filesystem benchmark tests (one example is
Benchmark.fio) do mount point preparation and cleanup from the host.
With regard to directly executing Fuego tests in LAVA, I think it might
be possible to do the following (see the sketch after this list):
 - ignore the build and deploy phases of fuego_test.sh (since your test programs and any materials they need are already on the target)
 - take the test_run function from fuego_test.sh and convert it into a standalone bash script (which is what it is), and
 - run that from your LAVA job.
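Here is a minimal sketch of such a conversion, using a stand-in for
Fuego's "report" function (this is not Fuego's actual target-side
library, which does not exist yet; names and paths are illustrative):
{{{#!YellowBox
#!/bin/bash
# standalone version of a single-"report" test_run phase,
# meant to run directly on the target
LOGDIR=${LOGDIR:-/tmp/fuego-logs}
mkdir -p "$LOGDIR"

# minimal stand-in for Fuego's "report" function: run the test
# command, capture its output as the log, and save the return code
report() {
    eval "$@" > "$LOGDIR/testlog.txt" 2>&1
    echo $? > "$LOGDIR/testlog.rcode"
}

# body of the converted test_run
report "cd /home/fuego/hello_world; ./hello_world"
}}}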
You would need a Fuego support library on the target to support the
Fuego functions called by the instructions in "test_run", but you might get
away with a significantly reduced subset of the entire Fuego function library.
Only a few functions are called from 'test_run' in practice.
One issue that might be a problem is that fuego_test.sh is allowed to use
bash-isms.  That is, Fuego uses a full-featured shell running on the host.
Bash might not be available on the target.
There's also the issue of log parsing, which Fuego does on the host, and LAVA most
often does on the target (but can do on the host).  Fuego does this as a post-processing
step, and LAVA (to my knowledge) does this synchronously at test program execution time.
Tim Orling had an interesting session at ELC Europe talking about regularizing the output
of tests, that, if adopted, would solve a lot of problems (for both LAVA and Fuego).
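For comparison, LAVA's synchronous, target-side result reporting looks
something like this, using LAVA's lava-test-case helper (the test names
are illustrative):
{{{#!YellowBox
# run a command and record pass/fail from its exit code
lava-test-case hello_world --shell ./hello_world
# or report a result directly
lava-test-case some_check --result pass
}}}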
And finally, there's the issue of dependency checking.  You are certainly
free to take our needs_check.sh library, and apply those checks to your
systems.  Or, you could take our test_pre_check functions and execute
them on the target, as part of the test.
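For context, a test_pre_check function in a fuego_test.sh is ordinary
shell code like the following (the function body here is illustrative,
and the check-function name should be verified against Fuego's function
library):
{{{#!YellowBox
# illustrative test_pre_check from a fuego_test.sh;
# assert_has_program checks that a program exists on the target
function test_pre_check {
    assert_has_program fio
}
}}}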
One other approach would be to do pre-builds of the test programs, and
modify Fuego to be able to run without its full docker container or
Jenkins (we're pretty close to this already).  The dependencies would be
python and bash.  And then just run that as a test execution engine on
the target.
As part of the test definition standards work that I took as an action item from
the testing summit, I think it would be good to examine the set of operations
that most tests need to perform, and standardize on them in a target-side library.
Some of the things that Fuego does host-side could be moved to be target-side
and put into a wrapper script, if such a library existed.  It might be good to standardize
the name and invocation rules for such a script.  This would help both Fuego and
LAVA.