{{TableOfContents}}
This page lists other test systems, with notes or links to their online resources.
See the related page: [[Test results formats]]
= LAVA =
* [[https://validation.linaro.org/|LAVA]] -- see [[Tims LAVA Notes]]
= KernelCI =
* [[http://kernelci.org/|KernelCI]]
= AGL CIAT =
* [[https://wiki.automotivelinux.org/eg-ciat|AGL's EGCIAT]] - Expert Group, Continuous Integration and Test
* see https://wiki.automotivelinux.org/eg-ciat/meetings for what they're working on lately
= kselftest =
* see https://kselftest.wiki.kernel.org/
= 0-day =
* https://01.org/lkp/documentation/0-day-test-service
* https://github.com/fengguang/lkp-tests.git
For more details see [[Tims 0day notes]]
0-day puts each test (called a 'job') into a YAML file.
lkp is a command-line tool for executing tests.
Some lkp command-line operations are (a sample session is sketched after this list):
* lkp install <test_package>
* ls $LKP_SRC/jobs (to see available jobs)
* lkp split-job <test_package>
* lkp run
* lkp result <testname>
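
Here is a sketch of a hypothetical lkp session using the tbench job shown
below, loosely following the lkp-tests README. The split-job output file name
is illustrative; the exact name depends on how the job matrix expands.
{{{#!YellowBox
# hypothetical lkp session (file names are illustrative)
git clone https://github.com/fengguang/lkp-tests.git
cd lkp-tests
make install                      # set up the lkp command
lkp install jobs/tbench.yaml      # install dependencies needed by the job
lkp split-job jobs/tbench.yaml    # expand the job matrix into atomic job files
lkp run ./tbench-100%.yaml        # run one of the generated job files
lkp result tbench                 # browse the results directory
}}}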
Here's what a test looks like:
From the file tbench.yaml
{{{#!YellowBox
suite: tbench
testcase: tbench
category: benchmark
# upto 40% CPU cycles may be used by latency stats
disable_latency_stats: 1
nr_threads: 100%
cluster: cs-localhost
if role server:
  tbench-server:
if role client:
  tbench:
}}}
= autotest =
= ctest =
= Avocado =
* https://avocado-framework.github.io/ - main site
* Presentation - https://www.youtube.com/watch?v=tdEg07BfdBw
* every job has a sha1
* every job can output in multiple formats
* including standalone HTML output
* also xunit, others
* is basically a test runner, with an API
* simple test has pass or fail (0 or 1 result) -- see the sketch after this list
* more complicated test can use testing API
* can add a monitor (like perf)
* has a test multiplexer (YAML-based) to create test matrix
* full persistence of every test executed
* very easy configuration of programs to run and files to save (e.g. from /proc) for every test executed
* demo was in Python (is the API only in Python?)
* creates something similar to a shell script.
* you can run it individually from the build directory using 'check'
* big question about test naming (as of presentation, avocado names tests by number)
* can run a test on a remote system
* has plugins for different features and functionality of the system
* including basic functionality, like list tests or list systems
* ability to stop system with gdb and do some interaction.
* Avocado server
* exposes results with HTTP REST API
* Avocado integration with Jenkins
* run the avocado command as a build step in a freestyle job
* put output in xunit
* get results from xunit
* working on isolating the variable that caused a problem
* diff 2 systems and all the files, messages, versions, etc., and identify the thing that has resulted in the error
* used for bisection?
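
A minimal sketch of the "simple test" idea, assuming avocado is installed and
using a hypothetical check_meminfo.sh script: any executable can serve as a
test, with its exit status mapped to pass (0) or fail (non-zero), and results
can be written in xunit format for consumption by Jenkins.
{{{#!YellowBox
# minimal sketch: a "simple" avocado test is just an executable;
# exit status 0 is PASS, anything else is FAIL
cat > check_meminfo.sh <<'EOF'
#!/bin/sh
grep -q MemTotal /proc/meminfo
EOF
chmod +x check_meminfo.sh

avocado run ./check_meminfo.sh                      # pass/fail from exit status
avocado run ./check_meminfo.sh --xunit results.xml  # xunit output, e.g. for Jenkins
}}}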
= OpenQA =
* OpenQA test visualization: https://openqa.opensuse.org/
* OpenQA project: http://open.qa/
= Bats: Bash Automated Testing System =
Bats is a TAP-compliant system for testing shell scripts or Unix commands; a minimal test file is sketched below.
* see https://github.com/sstephenson/bats
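
A minimal Bats test file, adapted from the project's README, looks like this:
{{{#!YellowBox
#!/usr/bin/env bats

@test "addition using bc" {
  result="$(echo 2+2 | bc)"
  [ "$result" -eq 4 ]
}
}}}
Running "bats <file>" executes each @test block and reports the results in TAP format; a test fails if any command in its block returns a non-zero status.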
= Yocto ptest =
= Intel GFX CI =
Intel does CI on the kernel driver for their graphics hardware. They gave
a report at the testing session at Plumbers in 2017, with some interesting
insights and comments on kernel CI testing.
* the project web page is at: https://intel-gfx-ci.01.org/
* LWN.net article about their talk at Plumbers: https://lwn.net/Articles/735468
== Key observations ==
They make a few key observations in their talk:
* pre-merge testing is key to get developers to respond to bugs
* once a patch is committed, interest in fixing regressions it caused goes down dramatically
* pre-merge testing of hardware is hard because other parts of the kernel break a lot
* it's very important to aggressively reduce false positives
* nothing turns off developers faster than false positives
* putting results as a response on the mailing list is good
* just posting a result on a web site is not very useful
From the comments:
* no one-size-fits-all CI is possible
* developers have much higher incentive to test their own stuff
* using resources on stuff no one cares about is a waste of time
* dogfooding is what kept the kernel half-decent in the absence of rigorous testing
* each subsystem and driver should have its own CI running, before stuff hits linux-next