Fuego wiki


Test server system

This page describes a proposed "Test server system" for Fuego.

Introduction

One of the long-term goals of Fuego is to allow a network of boards to serve as a distributed board farm, where an engineer can design a test and schedule it to run on boards that have certain characteristics, in order to validate software on hardware that the engineer does not have in front of them.

One of the significant problems with automated test frameworks is that people don't look at the results. That is, when someone sets up a continuous integration test, it is quite common for the test to run on every iteration of the software (or every day), but for no one to be dedicated to examining the results and following up with bug fixes.

I envision the test server as a "test hub" (kind of like an app store), where people can publish new test packages, and individual sites can select the tests they want to use with their systems. There would be facilities for browsing tests, downloading and installing them, and possibly rating tests or reporting issues with them, to support the "app store"-like functionality.

A company could set up its own test hub, with which its own internal nodes interact, if it wants to do private testing. The goal, however, is to get thousands of nodes registered and interacting with the main open test hub for Fuego, to provide unrelated developers opportunities to test their software on a variety of hardware, platforms and toolchains.

Specification

I believe the following items are needed to create a test server system:
  • There needs to be a test package system (see Test package system)
  • There needs to be a test job system
    • this involves packaging the job itself, and having it execute on the target
    • test jobs should specify the criteria for the board/hardware/platform/kernel that they want to test
      • for example, maybe a particular version of the kernel must be used
        • e.g. top of tree for the linux-media 'next' branch, with a particular patch applied
      • it should be possible to require a particular platform, filesystem type, kind of network, or amount of RAM
      • It should be possible for the overall system to match the job to the board for testing, in an automated fashion (this part requires some thought)
      • maybe we could have a test's pre-test do a probe of the system, to determine if it is a candidate, and give a pass/fail result
        • maybe this could be structured as a dependent test
        • the pre-test could set a board-level variable, indicating support (or not) for some feature
          • e.g. BOARD_HAS_USB_STORAGE
      • could use NEED_KCONFIG, NEED_XXX system for this?
      • maybe download the test, and only run the pre-test to validate which boards locally support that operation, then send the board capability info to the central server (for future reference)
  • need a way to send board info to the central server
    • does scheduling occur on the requesting client, or on the server?
      • should support both
  • There needs to be a results sharing system
    • for now, this can just be putting the log file back onto the server, and making a command to retrieve the logs for the test
      • results are named: <node><board><platform><test> - and have logs for the test
    • long-term, it would be nice to display the log results using the server web interface
    • It may be useful to support encrypting the logs, so that only the publisher and consumer can see the results
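The automated job-to-board matching mentioned above could look like the following minimal sketch. The data layout, capability names, and function name here are hypothetical, not an existing Fuego API:

```python
# Hypothetical sketch: match a test job's requirements against the
# capability data registered for each board.
def match_boards(job_requirements, boards):
    """Return the boards whose capabilities satisfy every requirement."""
    candidates = []
    for board in boards:
        caps = board.get("capabilities", {})
        if all(caps.get(key) == value for key, value in job_requirements.items()):
            candidates.append(board)
    return candidates

boards = [
    {"name": "bbb-1", "capabilities": {"BOARD_HAS_USB_STORAGE": True, "arch": "arm"}},
    {"name": "x86-1", "capabilities": {"BOARD_HAS_USB_STORAGE": False, "arch": "x86_64"}},
]
job = {"BOARD_HAS_USB_STORAGE": True}
print([b["name"] for b in match_boards(job, boards)])  # → ['bbb-1']
```

A board's capability variables (such as BOARD_HAS_USB_STORAGE, set by a pre-test probe) would populate the capabilities dictionary, whether the matching runs on the server or on the requesting client.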

  • Write a program to download a test
  • add support to the server for required operations and features
    • provide a list of tests (to a program)
    • show a list of tests (in the HTML interface)
    • accept test package upload
    • accept test job request upload
    • assign test job request to clients
    • make a client area
      • list boards per client
      • have board data (such as BOARD_HAS_USB_STORAGE) or other parameters
  • add support to the client
    • support polling mode to grab assigned jobs, perform test, and respond with results
    • client get-var and set-var operations can work locally or with data on server
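The client polling mode described above could be structured as below. This is a proof-of-concept sketch only: the three helper functions are stubs, and none of these names are an existing Fuego API.

```python
# Hypothetical sketch of the client polling mode.  A real client would
# talk to the test server over HTTP and run each test on a local board.
def fetch_assigned_jobs(server_url):
    # Stub: pretend the server assigned us one job.
    return [{"id": 1, "test": "Functional.hello_world"}]

def run_test(job):
    # Stub: a real client would execute the test on the target board.
    return {"job_id": job["id"], "result": "PASS"}

def report_results(server_url, result):
    # Stub: a real client would upload a results bundle here.
    return result

def poll_once(server_url):
    """Grab assigned jobs, perform each test, and respond with results."""
    return [report_results(server_url, run_test(job))
            for job in fetch_assigned_jobs(server_url)]

print(poll_once("http://testserver.example.org"))
# → [{'job_id': 1, 'result': 'PASS'}]
```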

Prototype

Please see a prototype (proof of concept only) version of the server at:


Architecture

  • test server
    • has test packages
    • has test job requests
    • has client area, with boards
      • client data can be anonymized
    • has test results area
      • has a bundle for each test executed
        • bundle contains:
          • json run description
            • includes results (pass/fail for functional tests, metrics for benchmarks)
          • logs

  • test client - is a host/target combination
    • has local tests (installed)
    • has local boards (ready to test)
    • has global configuration
      • indicates upstream test server
    • has logs for jobs that were executed
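As a concrete illustration, the bundle's json run description might look like the sketch below. The field names are made up to mirror the notes above; they are not a fixed format:

```python
import json

# Illustrative run description for one executed test (hypothetical fields).
run_description = {
    "node": "lab-1",
    "board": "bbb",
    "platform": "poky-sdk",
    "test": "Functional.hello_world",
    "results": {"status": "PASS"},   # a benchmark would carry metrics here
    "logs": ["testlog.txt"],
}
print(json.dumps(run_description, indent=4))
```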

How much of this does Jenkins already have?

work in progress

Tim is working on the test server system, as of February 2017. Here are some notes about that:

to do

  • define list of items managed by server (done, for now)
    • tests, jobs (job requests), runs, boards, hosts
    • in jenkins (job=fuego test, build=fuego run, node=fuego board)
  • show a list of test packages on the host
    • could show a list of test pages, or test packages in the tests dir
      • which one to use?
  • create job format
    • look at jenkins job format
    • look at kernelci job format (see below)
  • write python library routines for:
    • send data to server
      • dictionary, filelist, directory
    • get data from server
      • url, place to unpack
    • get list of available items
      • url
  • create run-results package format
    • json file
    • logs
    • in a tar archive, gzipped
  • create board package format
  • upload logs to server
    • tar up logs directory, with json manifest
  • look at kernelci for JSON-based APIs


  • will certainly need test signing at some point, to avoid running unchecked 3rd-party code on 10,000 machines
    • we don't want to support the creation of a fuego botnet
    • details for test package authentication and test job request authentication will be figured out later
  • use the same set of routines for doing:
    • send data to server:
      • write a json file
      • tar gzip the file
      • upload to server
    • get data from server:
      • download the file
      • extract locally
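The send/get steps above amount to a pack/unpack pair. Here is a minimal sketch using only the Python standard library; the function names are hypothetical, and the actual upload/download to the server is omitted:

```python
import json
import tarfile
import tempfile
from pathlib import Path

# Hypothetical pack/unpack helpers: write a JSON manifest into the data
# directory, tar+gzip it, and extract it again on the other side.
def pack(data_dir, archive, manifest):
    (data_dir / "manifest.json").write_text(json.dumps(manifest))
    with tarfile.open(archive, "w:gz") as tar:
        for path in data_dir.iterdir():
            tar.add(path, arcname=path.name)

def unpack(archive, dest):
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
    return json.loads((dest / "manifest.json").read_text())

with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    logs = tmp / "logs"
    out = tmp / "out"
    logs.mkdir()
    out.mkdir()
    (logs / "testlog.txt").write_text("PASS\n")
    pack(logs, tmp / "run.tar.gz", {"test": "Functional.hello_world"})
    manifest = unpack(tmp / "run.tar.gz", out)
    print(manifest["test"])  # → Functional.hello_world
```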

questions

  • could we pay for individual nodes, to incentivize people to participate?
    • like - $100 per year for each participating node. If a person provides a 10-board farm, they get $1000 per year.
    • this works out to $1 million per year, for 10,000 nodes (or $100,000 for 1000 nodes)
    • does this discourage voluntary participation? (yes)
  • how to determine if a board supports running a top-of-tree kernel?
    • see if they have 'kinstall' support
      • if they do, try to run a top-of-tree kernel
  • can we use the server software for the client software local interface?
    • yes, but that's reproducing a lot of jenkins

existing test server analysis

For each of these, describe the:
  • job description
  • client/server architecture
  • results description
  • node description

The purpose is to evaluate fields used, format of each item, etc.

jenkins

  • does Jenkins scale to 10,000 nodes?
  • how does Jenkins control job scheduling to nodes?

Build results: Here are excerpts from a Jenkins build.xml file (for Functional.hello_world):

    <?xml version='1.0' encoding='UTF-8'?>
        <jp.ikedam.jenkins.plugins.groovy__label__assignment.GroovyLabelAssignmentAction plugin="groovy-label-assignment@1.0.0">
          <label class="hudson.model.labels.LabelAtom">bbb-poky-sdk</label>
              <description>If checked target device will be rebooted <u>before</u> running test.</description>
              <description>If checked all existing build instances of the test suite will be deleted and test suite will be rebuilt from tarball.</description>
        <hudson.plugins.descriptionsetter.DescriptionSetterAction plugin="description-setter@1.8">
          <description>Example hello-world test<br></description>
        <org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildSummaryAction plugin="groovy-postbuild@1.6-SNAPSHOT">
          <textBuilder><font color="black">Firmware revision 3.8.13-bone50</font></textBuilder>
        <org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildAction plugin="groovy-postbuild@1.6-SNAPSHOT">
          <text>bbb-poky-sdk / 3.8.13-bone50</text>
        <com.sonyericsson.rebuild.RebuildAction plugin="rebuild@1.9"/>
      <description>Example hello-world test<br></description>
      <scm class="hudson.scm.NullChangeLogParser"/>
      <culprits class="com.google.common.collect.EmptyImmutableSortedSet"/>

Simplifying all this junk, we would get the following attributes for a build (test run):

variable                           value                                               notes
Device                             bbb-poky-sdk
Reboot                             false
Rebuild                            false
Target_Cleanup                     true
TESTPLAN                           testplan_default
CauseAction                        UserIdCause
iconPath                           help.gif
textBuilder                        Firmware revision 3.8.13-bone50
GroovyPostbuildSummaryAction.text  bbb-poky-sdk / 3.8.13-bone50
number                             1
startTime                          1487118317469                                       milliseconds since the epoch
result                             SUCCESS
description                        Example hello-world test<br>
duration                           7702                                                milliseconds
charset                            US-ASCII
keepLog                            false
builtOn                            bbb-poky-sdk
workspace                          /home/jenkins/buildzone
hudsonVersion                      1.509.2
scm_class                          "hudson.scm.NullChangeLogParser"
culprits_class                     "com.google.common.collect.EmptyImmutableSortedSet"


lava

Here is a test definition in Lava:

        name: passfail
        format: "Lava-Test-Shell Test Definition 1.0"
        description: "Pass/Fail test."
        version: 1.0
        run:
            steps:
                - "lava-test-case passtest --result pass"
                - "lava-test-case failtest --result fail"

Here's a job description for LAVA, in JSON:

      "job_name": "kvm-test",
      "device_type": "kvm",
      "timeout": 1800,
      "actions": [
          "command": "deploy_linaro_image",
              "image": "http://images.validation.linaro.org/kvm-debian-wheezy.img.gz"
          "command": "boot_linaro_image"
          "command": "submit_results",
              "server": "http://<username>@validation.linaro.org/RPC2/",
              "stream": "/anonymous/test/"

Results are in the form of something called a Bundle Stream.

kernelci

server elements

The main components of the kernelci server are divided into two parts, the front end and the back end:
  • front end:
    • nginx web server
      • web app (flask and wsgi)
        • cache (redis)
  • back end:
    • nginx web server
      • web app (tornado framework)
        • database (mongodb)
        • task queue (celery)
    • task broker (redis)

Builds are done on a dedicated server, using Jenkins multi-configuration jobs to trigger the builds. Then the builds are distributed to labs (via Lava?), where the boot test is performed.

Build results and boot results are sent back to the server using the Web App API.

Uploads are stored on a storage server.

Here are the objects in the kernelci system:

  • boot
  • boot_regression
  • build
  • build_logs
  • build_logs_summary
  • job
  • lab

Here are some other things that have schemas in kernelci:

  • batch
  • compare
  • report
  • send
  • test
    • test_suite - a collection of test_sets and test_cases
    • test_set - a collection of test_cases
    • test_case - an individual test
  • token - authentication tokens to control access

Here is the test case schema:

        "$schema": "http://api.kernelci.org/json-schema/1.0/post_test_case.json",
        "id": "http://api.kernelci.org/json-schema/1.0/post_test_case.json",
        "title": "test_case",
        "description": "A test case JSON object",
        "type": "object",
        "properties": {
            "version": {
                "type": "string",
                "description": "The version number of this JSON schema",
                "enum": ["1.0"]
            "name": {
                "type": "string",
                "description": "The name given to this test case"
            "test_set_id": {
                "type": "string",
                "description": "The test set ID associated with this test case"
            "test_suite_id": {
                "type": "string",
                "description": "The test suite ID associated with this test case"
            "measurements": {
                "type": "array",
                "description": "Array of measurement objects registered by this test case",
                "items": {"$ref": "http://api.kernelci.org/json-schema/1.0/measurement.json"},
                "additionalItems": true
            "minimum": {
                "type": ["integer", "number"],
                "description": "The minimum measurement registered"
            "maximum": {
                "type": ["integer", "number"],
                "description": "The maximum measurement registered"
            "samples": {
                "type": "integer",
                "description": "Number of registered measurements"
            "samples_sum": {
                "type": ["integer", "number"],
                "description": "Sum of the registered measurements"
            "samples_sqr_sum": {
                "type": ["integer", "number"],
                "description": "Sum of the square of the registered measurements"
            "parameters": {
                "type": "object",
                "description": "Free form object to store key-value pairs describing the parameters used to run the test case"
            "status": {
                "type": "string",
                "description": "The status of the execution of this test case",
                "enum": ["PASS", "FAIL", "SKIP", "ERROR"],
                "default": "PASS"
            "time": {
                "type": "number",
                "description": "The number of seconds it took to execute this test case",
                "default": -1
            "definition_uri": {
                "type": "string",
                "description": "The URI where this test case definition is stored"
            "vcs_commit": {
                "type": "string",
                "description": "The VCS commit value if the $definition_uri field is a VCS URI"
            "attachments": {
                "type": "array",
                "description": "List of attachment objects produced by this test case",
                "items": {"$ref": "http://api.kernelci.org/json-schema/1.0/attachment.json"},
                "additionalItems": true
            "kvm_guest": {
                "type": "string",
                "description": "The name of the KVM guest this test case has been executed on"
            "metadata": {
                "type": "object",
                "description": "Free form object where to store accessory test case data"
        "required": ["name", "test_suite_id"]

powerci.org

avocado

TBWiki engine 1.8.2 by Tim Bird