Fuego 1.0 wiki

Other test systems

{{TableOfContents}}

This page lists other test systems, with notes or links to their online resources.

= LAVA =
 * [[https://validation.linaro.org/|LAVA]] -- see [[Tims LAVA Notes]]

= KernelCI =
 * [[http://kernelci.org/|KernelCI]]

= AGL CIAT =
 * [[https://wiki.automotivelinux.org/eg-ciat|AGL's EGCIAT]] - Expert Group, Continuous Integration and Test
   * see https://wiki.automotivelinux.org/eg-ciat/meetings for what they're working on lately

= kselftest =

= 0-day =
 * https://01.org/lkp/documentation/0-day-test-service
 * https://github.com/fengguang/lkp-tests.git
For more details see [[Tims 0day notes]]
0-day puts each test (called a 'job') into a YAML file.
lkp is a command-line tool for executing a test.
Some command-line options are:
 * lkp install <test_package>
   * ls $LKP_SRC/jobs to see available jobs
 * lkp split-job <test_package>
 * lkp run
 * lkp result <testname>
Here's what a test looks like:
From the file tbench.yaml
{{{#!YellowBox
suite: tbench
testcase: tbench
category: benchmark
# upto 40% CPU cycles may be used by latency stats
disable_latency_stats: 1
nr_threads: 100%
cluster: cs-localhost
if role server:
  tbench-server:
if role client:
  tbench:
}}}
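The "if role" lines above mark settings that apply only to one host role; lkp split-job expands a multi-role job file like this into one job per role. As a rough illustration of that idea only (not lkp's actual parser -- the dict below is a hand-translated stand-in for the YAML), a Python sketch:

```python
# Conceptual sketch of per-role job splitting (not lkp's real implementation).
# A job is a dict; keys like "if role server" hold settings that apply only
# to hosts with that role.

def split_by_role(job):
    """Return {role: flattened_job} for a multi-role job dict."""
    common = {k: v for k, v in job.items() if not k.startswith("if role ")}
    roles = {k.split()[-1]: v for k, v in job.items() if k.startswith("if role ")}
    if not roles:
        return {"default": common}
    return {role: {**common, **(extra or {})} for role, extra in roles.items()}

# Hand-translated version of the tbench.yaml job shown above
tbench = {
    "suite": "tbench",
    "testcase": "tbench",
    "category": "benchmark",
    "disable_latency_stats": 1,
    "nr_threads": "100%",
    "cluster": "cs-localhost",
    "if role server": {"tbench-server": None},
    "if role client": {"tbench": None},
}

jobs = split_by_role(tbench)
print(sorted(jobs))             # ['client', 'server']
print(jobs["client"]["suite"])  # tbench
```

Each split job carries the common settings plus only its own role's section, which is what lets the server and client halves of the benchmark run on different hosts.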

= autotest =

= Avocado =
 * https://avocado-framework.github.io/ - main site
 * Presentation - https://www.youtube.com/watch?v=tdEg07BfdBw 
   * every job has a sha1
   * every job can output in multiple formats
     * including standalone HTML output
     * also xunit, others
   * is basically a test runner, with an API
     * simple test has pass or fail (0 or 1 result)
     * more complicated test can use testing API
       * can add a monitor (like perf)
   * has a test multiplexer (YAML-based) to create test matrix
   * full persistence of every test executed
   * very easy configuration of programs to run and files to save (e.g. from /proc) for every test executed
   * demo was in python (is API only in python?)
   * creates something similar to a shell script.
     * you can run it individually from the build directory using 'check'
   * big question about test naming (as of presentation, avocado names tests by number)
   * can run a test on a remote system
   * has plugins for different features and functionality of the system
     * including basic functionality, like list tests or list systems
   * ability to stop system with gdb and do some interaction.
   * avocado server
     * exposes results with HTTP REST API
   * avocado integration with Jenkins
     * run the avocado command as a build in a freestyle job
     * put output in xunit
     * get results from xunit
   * working on isolating the variable that caused the problem
     * diff 2 systems and all the files, messages, versions, etc., and identify the thing that has resulted in the error
     * used for bisection?
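The test multiplexer mentioned above is YAML-based in Avocado; the underlying test-matrix idea is a cartesian product over per-parameter variants. A toy sketch of just that idea (hypothetical parameter names, not Avocado's real API):

```python
# Toy illustration of the test-matrix idea behind a multiplexer (NOT
# Avocado's actual varianter API): each parameter lists its variants, and
# the matrix is the cartesian product of one value per parameter.
from itertools import product

def multiplex(variants):
    """variants: {param: [value, ...]} -> list of per-test parameter dicts."""
    names = sorted(variants)
    return [dict(zip(names, combo))
            for combo in product(*(variants[n] for n in names))]

matrix = multiplex({
    "arch": ["x86_64", "arm64"],
    "fs": ["ext4", "btrfs", "xfs"],
})
print(len(matrix))  # 6 combinations
print(matrix[0])    # {'arch': 'x86_64', 'fs': 'ext4'}
```

Each dict in the matrix becomes one test run, which is how two parameters with 2 and 3 variants fan out into 6 executions.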
TBWiki engine 1.8.3 by Tim Bird