Test server system

{{TableOfContents}}
This page describes a proposed "Test server system" for Fuego.

A prototype implementation of this system is the 'fserver' system.
See [[Using Fuego with fserver]] for information about the fserver
system and how to configure Fuego to interact with it.

The rest of this document is mostly brainstorming about the test server
system proposal from 2017.  When the test server is rolled out as a
fully supported feature of Fuego, this page should be updated, and the
old brainstorming material should be removed or stored in a legacy
information page.

= Introduction =
One of the long-term goals of Fuego is to allow a network of
boards to serve as a distributed board farm, where an engineer can design
a test and schedule it to run on boards that have certain characteristics,
in order to validate software on hardware that the engineer does not have
in front of them.

One of the significant problems with automated test frameworks is that
people don't look at the results.  That is, when someone sets up a continuous
integration test, it is quite common for the test to run on every
iteration of the software (or every day), but for no one to be dedicated to
examining the results and following up with bug fixes for the problems found.

I envision the test server as a "test hub" (something like an app store), where people can publish new test packages, and individual sites can select the tests they want to use with their systems.  There would be facilities for browsing the tests, downloading and installing them, and possibly rating them or reporting issues with them, to support the "app store"-like functionality.

A company could set up its own test hub, with which its own internal
nodes interact, if it wants to do private testing.  The goal, however, is
to get thousands of nodes registered and interacting with the main
open test hub for Fuego, to give unrelated developers opportunities to
test their software on a variety of hardware, platforms and toolchains.

= Specification =
I believe the following items are needed to create a test server system:
 * There needs to be a test package system (see [[Test package system]])
 * There needs to be a test request system
   * this involves packaging a request
   * test jobs should specify the criteria for the board/hardware/platform/kernel that they want to test
     * for example, maybe a particular version of the kernel must be used
       * e.g. top of tree for the linux-media 'next' branch, with a particular patch applied
     * it should be possible to require a particular platform, filesystem type, kind of network, or amount of RAM
     * It should be possible for the overall system to match the job to the board for testing, in an automated fashion (''this part requires some thought''; see the matching sketch after this list)
     * maybe we could have a test's pre-test do a probe of the system, to determine if it is a candidate, and give a pass/fail result
       * maybe this could be structured as a dependent test
       * the pre-test could set a board-level variable, indicating support (or not) for some feature
         * e.g. BOARD_HAS_USB_STORAGE
     * could use NEED_KCONFIG, NEED_XXX system for this?
     * maybe download the test, and only run the pre-test to validate which boards locally support that operation, then send the board capability info to the central server (for future reference)
   * need a way to send board info to the central server
     * does scheduling occur on the requesting client, or on the server?
       * should support both
 * There needs to be a results sharing system
   * for now, this can just be putting the log file back onto the server,
   and making a command to retrieve the logs for the test
     * results are named: <node><board><platform><test> - and have logs for the test
   * long-term, it would be nice to display the log results using the server
   web interface
   * It may be useful to support encrypting the logs, so that only the
   publisher and consumer can see the results
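
As a rough illustration of the board-matching idea above, here is a minimal
sketch in Python.  The request format, the board attribute names (including
BOARD_HAS_USB_STORAGE from the example above), and the function name are all
hypothetical; only the general approach (compare a job's criteria against
recorded board attributes) comes from this proposal.

{{{#!YellowBox
# Hypothetical sketch: match a test request's criteria against board data.
# Board attributes could be set by a pre-test probe (e.g. BOARD_HAS_USB_STORAGE).

def board_matches(criteria, board_attributes):
    """Return True if the board satisfies every criterion in the request."""
    for key, wanted in criteria.items():
        if board_attributes.get(key) != wanted:
            return False
    return True

# example data (entirely made up)
request = {"BOARD_HAS_USB_STORAGE": "yes", "ARCHITECTURE": "arm"}
boards = {
    "bbb":  {"BOARD_HAS_USB_STORAGE": "yes", "ARCHITECTURE": "arm"},
    "min1": {"BOARD_HAS_USB_STORAGE": "no",  "ARCHITECTURE": "x86_64"},
}

candidates = [name for name, attrs in boards.items()
              if board_matches(request, attrs)]
print(candidates)   # -> ['bbb']
}}}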

 * Write a program to download a test
 * add support to the server for required operations and features
    * provide a list of tests (to a program)
    * show a list of tests (in the HTML interface)
    * accept test package upload
    * accept test request upload
    * assign test request to clients
    * make a host area
      * list boards per client
      * have board data (such as BOARD_HAS_USB_STORAGE) or other parameters
 * add support to the client
    * support polling mode to grab assigned jobs, perform the test, and respond with results (see the polling sketch after this list)
    * client get-var and set-var operations can work locally or with data on server
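
A minimal sketch of the polling mode, using the python 'requests' module
(mentioned below in the to-do notes).  The URL parameters, action names, and
response format are assumptions for illustration, not the actual fserver API:

{{{#!YellowBox
# Hypothetical polling client: ask the server for an assigned job,
# run it, and upload the result.  All action/parameter names are made up.
import time
import requests

SERVER = "http://fuegotest.org/cgi-bin/fserver.py"   # prototype server URL

def run_test(job):
    # placeholder: a real client would invoke the Fuego test here
    return {"job_id": job["job_id"], "result": "PASS"}

while True:
    r = requests.get(SERVER, params={"action": "get_job", "host": "myhost"})
    if r.ok and r.content:
        job = r.json()            # assume the server returns the job as JSON
        result = run_test(job)
        requests.post(SERVER, params={"action": "put_result"}, json=result)
    time.sleep(60)                # poll once a minute
}}}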

== Other Resources ==
For rankings in the "test store", use the math described here (the lower bound of the Wilson score confidence interval):
https://www.evanmiller.org/how-not-to-sort-by-average-rating.html
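
For reference, here is that formula in Python (a direct transcription of the
article's method, not existing Fuego code):

{{{#!YellowBox
# Lower bound of the Wilson score confidence interval for a
# Bernoulli parameter (z=1.96 for 95% confidence), per the
# article linked above.
import math

def ci_lower_bound(positive, n, z=1.96):
    if n == 0:
        return 0.0
    phat = positive / n
    return ((phat + z*z/(2*n)
             - z * math.sqrt((phat*(1-phat) + z*z/(4*n)) / n))
            / (1 + z*z/n))

# sort tests by this score instead of by raw average rating
print(ci_lower_bound(90, 100))   # ~0.825
print(ci_lower_bound(9, 10))     # ~0.596 (fewer ratings -> lower bound)
}}}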


== Prototype ==
Please see a prototype (proof of concept only) version of the server at:

http://fuegotest.org/cgi-bin/fserver.py


== Architecture ==
 * test server
   * has test packages
   * has test requests
   * has client area, with boards
     * client data can be anonymized
   * has test results area
     * has a bundle for each test executed
       * bundle contains:
         * json run description
           * includes results (pass/fail for functional tests, metrics for benchmarks); see the example run description after this list
         * logs
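
A hypothetical example of such a run description, in JSON (the field names
are invented for illustration; this is not a settled format):

{{{#!YellowBox
{
  "test": "Functional.hello_world",
  "board": "bbb",
  "host": "myhost",
  "platform": "poky-sdk",
  "start_time": 1487118317,
  "duration": 7.7,
  "result": "PASS",
  "metrics": {},
  "log_files": ["testlog.txt", "consolelog.txt"]
}
}}}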

 * test client - is a host/target combination
   * has local tests (installed)
   * has local boards (ready to test)
   * has global configuration
     * indicates the upstream test server (see the example configuration after this list)
   * has logs for jobs that were executed
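
The global configuration might be as small as a file naming the upstream
server, for example (a hypothetical format; fserver's actual configuration
may differ):

{{{#!YellowBox
{
  "server_url": "http://fuegotest.org/cgi-bin/fserver.py",
  "host_name": "myhost"
}
}}}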

How much of this does Jenkins already have?

= work in progress =
Tim is working on the test server system, as of February 2017.
Here are some notes about that:

== to do ==
 * define list of items managed by server (done, for now)
   * tests, jobs (job requests), runs, boards, hosts
   * in jenkins (job=fuego test, build=fuego run, node=fuego board)
 * create board package format
 * upload logs to server
   * tar up logs directory, with json manifest
 * look at kernelci for JSON-based APIs
   * see [[http://docs.python-requests.org/en/latest/|python 'requests' module]]

Principles:
 * will certainly need test signing at some point, to avoid running unchecked 3rd-party code on 10,000 machines
   * we don't want to support the creation of a fuego botnet
   * details for test package authentication and test job request authentication will be figured out later
 * use the same set of routines (sketched after this list) for doing:
   * send data to server:
     * write a json file
     * tar gzip the file
     * upload to server
   * get data from server:
     * download the file
     * extract locally
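
A minimal sketch of those two routines in Python, using the standard json
and tarfile modules plus the 'requests' module mentioned above.  The server
URL parameters and the manifest layout are assumptions:

{{{#!YellowBox
# Hypothetical helpers: package data as json + tar.gz and move it
# to/from the server.  Action and parameter names are made up.
import json
import tarfile
import requests

SERVER = "http://fuegotest.org/cgi-bin/fserver.py"

def send_to_server(name, data, directory):
    # write a json manifest, then tar+gzip it together with the directory
    manifest = name + ".json"
    with open(manifest, "w") as f:
        json.dump(data, f)
    archive = name + ".tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(manifest)
        tar.add(directory)
    # upload the archive to the server
    with open(archive, "rb") as f:
        requests.post(SERVER, params={"action": "put_item"},
                      files={"file": f})

def get_from_server(name):
    # download the archive and extract it locally
    r = requests.get(SERVER, params={"action": "get_item", "name": name})
    archive = name + ".tar.gz"
    with open(archive, "wb") as f:
        f.write(r.content)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall()
}}}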

==== questions ====
 * could we pay for individual nodes, to incentivize people to participate?
   * like - $100 per year for each participating node.  If a person provides a 10-board farm, they get $1000 per year.
   * this works out to $1 million per year, for 10,000 nodes (or $100,000 for 1000 nodes)
   * does this discourage voluntary participation? (yes)
 * how to determine if a board supports running a top-of-tree kernel?
   * check the board entry for board attributes (duh!)
     * board attributes include installed (or installable) kernel versions
 * can we use the server software for the client software local interface?
   * yes, but that would reproduce a lot of Jenkins


== existing test server analysis ==
For each of these, describe the:
 * job description
 * client/server architecture
 * results description
 * node description

The purpose is to evaluate fields used, format of each item, etc.

=== jenkins ===
 * does Jenkins scale to 10,000 nodes?
 * how does Jenkins control job scheduling to nodes?

Build results:  Here is a Jenkins build.xml file (for Functional.hello_world)

{{{#!YellowBox
<xmp>
<?xml version='1.0' encoding='UTF-8'?>
<build>
  <actions>
    <jp.ikedam.jenkins.plugins.groovy__label__assignment.GroovyLabelAssignmentAction plugin="groovy-label-assignment@1.0.0">
      <label class="hudson.model.labels.LabelAtom">bbb-poky-sdk</label>
    </jp.ikedam.jenkins.plugins.groovy__label__assignment.GroovyLabelAssignmentAction>
    <hudson.model.ParametersAction>
      <parameters>
        <hudson.model.StringParameterValue>
          <name>Device</name>
          <description>(target)</description>
          <value>bbb-poky-sdk</value>
        </hudson.model.StringParameterValue>
        <hudson.model.BooleanParameterValue>
          <name>Reboot</name>
          <description>If checked target device will be rebooted <u>before</u> running test.</description>
          <value>false</value>
        </hudson.model.BooleanParameterValue>
        <hudson.model.BooleanParameterValue>
          <name>Rebuild</name>
          <description>If checked all existing build instances of the test suite will be deleted and test suite will be rebuilt from tarball.</description>
          <value>false</value>
        </hudson.model.BooleanParameterValue>
        <hudson.model.BooleanParameterValue>
          <name>Target_Cleanup</name>
          <description></description>
          <value>true</value>
        </hudson.model.BooleanParameterValue>
        <hudson.model.StringParameterValue>
          <name>TESTPLAN</name>
          <description></description>
          <value>testplan_default</value>
        </hudson.model.StringParameterValue>
      </parameters>
    </hudson.model.ParametersAction>
    <hudson.model.CauseAction>
      <causes>
        <hudson.model.Cause_-UserIdCause/>
      </causes>
    </hudson.model.CauseAction>
    <hudson.plugins.descriptionsetter.DescriptionSetterAction plugin="description-setter@1.8">
      <description>Example hello-world test<br></description>
    </hudson.plugins.descriptionsetter.DescriptionSetterAction>
    <org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildSummaryAction plugin="groovy-postbuild@1.6-SNAPSHOT">
      <iconPath>help.gif</iconPath>
      <textBuilder><font color="black">Firmware revision 3.8.13-bone50</font></textBuilder>
    </org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildSummaryAction>
    <org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildAction plugin="groovy-postbuild@1.6-SNAPSHOT">
      <text>bbb-poky-sdk / 3.8.13-bone50</text>
      <color>black</color>
      <background>#FFFFFF</background>
      <border>0px</border>
      <borderColor></borderColor>
    </org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildAction>
    <com.sonyericsson.rebuild.RebuildAction plugin="rebuild@1.9"/>
  </actions>
  <number>1</number>
  <startTime>1487118317469</startTime>
  <result>SUCCESS</result>
  <description>Example hello-world test<br></description>
  <duration>7702</duration>
  <charset>US-ASCII</charset>
  <keepLog>false</keepLog>
  <builtOn>bbb-poky-sdk</builtOn>
  <workspace>/home/jenkins/buildzone</workspace>
  <hudsonVersion>1.509.2</hudsonVersion>
  <scm class="hudson.scm.NullChangeLogParser"/>
  <culprits class="com.google.common.collect.EmptyImmutableSortedSet"/>
</build>
</xmp>
}}}

Simplifying all this junk, we would get the following attributes for a build (test run):

{{{#!Table:jenkins_build
show_edit_links=0
show_sort_links=0
||variable            ||value       ||notes||
||Device              ||bbb-poky-sdk||
||Reboot              ||false||
||Rebuild             ||false||
||Target_Cleanup      ||true||
||TESTPLAN            ||testplan_default||
||CauseAction         ||UserIdCause||
||iconPath            ||help.gif||
||textBuilder         ||Firmware revision 3.8.13-bone50||
||GroovyPostbuildSummaryAction.text||bbb-poky-sdk / 3.8.13-bone50||
||number              ||1||
||startTime           ||1487118317469||in milliseconds since the epoch||
||result              ||SUCCESS||
||description         ||Example hello-world test<br>||
||duration            ||7702||in milliseconds||
||charset             ||US-ASCII||
||keepLog             ||false||
||builtOn             ||bbb-poky-sdk||
||workspace           ||/home/jenkins/buildzone||
||hudsonVersion       ||1.509.2||
||scm_class           ||"hudson.scm.NullChangeLogParser"||
||culprits_class      ||"com.google.common.collect.EmptyImmutableSortedSet"||
}}}
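
Extracting these fields mechanically is straightforward; a sketch along
these lines (element names taken from the build.xml above, everything else
assumed) could pull out the parameters and top-level values:

{{{#!YellowBox
# Sketch: pull the interesting attributes out of a Jenkins build.xml,
# using only the element names visible in the example above.
import xml.etree.ElementTree as ET

tree = ET.parse("build.xml")
root = tree.getroot()

# top-level simple elements (number, result, duration, etc.)
for tag in ("number", "startTime", "result", "duration", "builtOn"):
    elem = root.find(tag)
    if elem is not None:
        print(tag, "=", elem.text)

# build parameters (Device, Reboot, TESTPLAN, ...)
for param in root.iter():
    if param.tag.endswith("ParameterValue"):
        print(param.findtext("name"), "=", param.findtext("value"))
}}}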


=== lava ===

Here is a test definition in Lava:
{{{#!YellowBox
metadata:
    name: passfail
    format: "Lava-Test-Shell Test Definition 1.0"
    description: "Pass/Fail test."
    version: 1.0

run:
    steps:
        - "lava-test-case passtest --result pass"
        - "lava-test-case failtest --result pass"
}}}

Here's a job description for lava (it's in JSON):

{{{#!YellowBox
{
  "job_name": "kvm-test",
  "device_type": "kvm",
  "timeout": 1800,
  "actions": [
    {
      "command": "deploy_linaro_image",
      "parameters":
        {
          "image": "http://images.validation.linaro.org/kvm-debian-wheezy.img.gz"
        }
    },
    {
      "command": "boot_linaro_image"
    },
    {
      "command": "submit_results",
      "parameters":
        {
          "server": "http://<username>@validation.linaro.org/RPC2/",
          "stream": "/anonymous/test/"
        }
    }
  ]
}
}}}
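
I believe jobs like this were submitted through the XML-RPC interface at the
RPC2 URL shown in the job above (scheduler.submit_job is the LAVA v1 API);
roughly like this, with the user name and token as placeholders:

{{{#!YellowBox
# Sketch: submit the JSON job above to a LAVA v1 server over XML-RPC.
# The <username> and <token> values are placeholders.
import xmlrpc.client

server = xmlrpc.client.ServerProxy(
    "https://<username>:<token>@validation.linaro.org/RPC2/")

with open("kvm-test.json") as f:
    job_id = server.scheduler.submit_job(f.read())
print("submitted job", job_id)
}}}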

Results are in the form of something called a Bundle Stream.

=== kernelci ===
==== server elements ====
The main components of the kernelci server are divided into two parts,
the front end and the back end:
 * front end:
   * nginx web server
     * web app (flask and wsgi)
       * cache (redis)
 * back end:
   * nginx web server
     * web app (tornado framework)
       * database (mongodb)
       * task queue (celery)
       * task broker (redis)
Builds are done on a dedicated server, using Jenkins multi-configuration
jobs to trigger the builds.  Then the builds are distributed
to labs (via Lava?), where the boot test is performed.

Build results and boot results are sent back to the server using
the Web App API.

Uploads are stored on a storage server.

Here are the objects in the kernelci system:
 * boot
 * boot_regression
 * build
 * build_logs
 * build_logs_summary
 * job
 * lab

Here are some other things that have schemas in kernelci:
 * batch
 * compare
 * report
 * send
 * test
   * test_suite - a collection of test_sets and test_cases
   * test_set - a collection of test_cases
   * test_case - an individual test
 * token - authentication tokens to control access

Here is the test case schema:
{{{#!YellowBox
{
    "$schema": "http://api.kernelci.org/json-schema/1.0/post_test_case.json",
    "id": "http://api.kernelci.org/json-schema/1.0/post_test_case.json",
    "title": "test_case",
    "description": "A test case JSON object",
    "type": "object",
    "properties": {
        "version": {
            "type": "string",
            "description": "The version number of this JSON schema",
            "enum": ["1.0"]
        },
        "name": {
            "type": "string",
            "description": "The name given to this test case"
        },
        "test_set_id": {
            "type": "string",
            "description": "The test set ID associated with this test case"
        },
        "test_suite_id": {
            "type": "string",
            "description": "The test suite ID associated with this test case"
        },
        "measurements": {
            "type": "array",
            "description": "Array of measurement objects registered by this test case",
            "items": {"$ref": "http://api.kernelci.org/json-schema/1.0/measurement.json"},
            "additionalItems": true
        },
        "minimum": {
            "type": ["integer", "number"],
            "description": "The minimum measurement registered"
        },
        "maximum": {
            "type": ["integer", "number"],
            "description": "The maximum measurement registered"
        },
        "samples": {
            "type": "integer",
            "description": "Number of registered measurements"
        },
        "samples_sum": {
            "type": ["integer", "number"],
            "description": "Sum of the registered measurements"
        },
        "samples_sqr_sum": {
            "type": ["integer", "number"],
            "description": "Sum of the square of the registered measurements"
        },
        "parameters": {
            "type": "object",
            "description": "Free form object to store key-value pairs describing the parameters used to run the test case"
        },
        "status": {
            "type": "string",
            "description": "The status of the execution of this test case",
            "enum": ["PASS", "FAIL", "SKIP", "ERROR"],
            "default": "PASS"
        },
        "time": {
            "type": "number",
            "description": "The number of seconds it took to execute this test case",
            "default": -1
        },
        "definition_uri": {
            "type": "string",
            "description": "The URI where this test case definition is stored"
        },
        "vcs_commit": {
            "type": "string",
            "description": "The VCS commit value if the $definition_uri field is a VCS URI"
        },
        "attachments": {
            "type": "array",
            "description": "List of attachment objects produced by this test case",
            "items": {"$ref": "http://api.kernelci.org/json-schema/1.0/attachment.json"},
            "additionalItems": true
        },
        "kvm_guest": {
            "type": "string",
            "description": "The name of the KVM guest this test case has been executed on"
        },
        "metadata": {
            "type": "object",
            "description": "Free form object where to store accessory test case data"
        }
    },
    "required": ["name", "test_suite_id"]
}
}}}
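
A rough sketch of posting a test_case object matching this schema, using the
python 'requests' module mentioned above.  The endpoint URL and the exact
authorization header are assumptions based on the schema's id field, not
verified against the kernelci API documentation:

{{{#!YellowBox
# Sketch: POST a minimal test_case object to the kernelci back end.
# Endpoint path and header name are assumptions; the token is a placeholder.
import requests

API = "https://api.kernelci.org/test/case"
TOKEN = "my-secret-token"

test_case = {
    "version": "1.0",
    "name": "hello_world",            # required by the schema
    "test_suite_id": "0123456789ab",  # required by the schema
    "status": "PASS",
    "time": 7.7,
    "measurements": [],
}

r = requests.post(API, headers={"Authorization": TOKEN}, json=test_case)
print(r.status_code, r.text)
}}}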


=== powerci.org ===
=== avocado ===