Other test systems
See the related page: Test results formats
LAVA
- LAVA -- see Tim's LAVA Notes
KernelCI
AGL CIAT
- AGL's EG-CIAT - Expert Group on Continuous Integration and Testing
- see https://wiki.automotivelinux.org/eg-ciat/meetings for what they're working on lately
kselftest
0-day
For more details, see Tim's 0-day notes.
0-day puts each test (called a 'job') into a YAML file.
lkp is a command-line tool for executing a test.
Some common subcommands are:
- lkp install <test_package>
- ls $LKP_SRC/jobs to see available jobs
- lkp split-job <test_package>
- lkp run
- lkp result <testname>
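A minimal example session (a sketch only; it assumes lkp-tests is checked out with $LKP_SRC pointing at its source tree, and uses the tbench job shown below -- the split-job output file name is illustrative):

  # see which job definitions are available
  ls $LKP_SRC/jobs

  # split a multi-parameter job into individually runnable jobs
  lkp split-job $LKP_SRC/jobs/tbench.yaml

  # install the packages the job needs, then run it
  lkp install tbench-100%.yaml
  lkp run tbench-100%.yaml

  # inspect the results
  lkp result tbench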
Here's what a test looks like, from the file tbench.yaml:
  suite: tbench
  testcase: tbench
  category: benchmark

  # upto 40% CPU cycles may be used by latency stats
  disable_latency_stats: 1

  nr_threads: 100%

  cluster: cs-localhost

  if role server:
    tbench-server:

  if role client:
    tbench:
autotest
ctest
Avocado
- https://avocado-framework.github.io/ - main site
- Presentation - https://www.youtube.com/watch?v=tdEg07BfdBw
- every job has a sha1
- every job can output in multiple formats
- including standalone HTML output
- also xunit and other formats
- is basically a test runner, with an API
- a simple test just passes or fails (a 0 or 1 result); see the sketch after this list
- a more complicated test can use the testing API
- can add a monitor (like perf)
- has a test multiplexer (YAML-based) to create a test matrix
- full persistence of every test executed
- very easy configuration of programs to run and files to save (e.g. from /proc) for every test executed
- demo was in Python (is the API only in Python?)
- creates something similar to a shell script.
- you can run it individually from the build directory using 'check'
- big question about test naming (as of the presentation, Avocado names tests by number)
- can run a test on a remote system
- has plugins for different features and functionality of the system
- including basic functionality, like listing tests or listing systems
- ability to stop the system with gdb and do some interaction
- Avocado server
- exposes results with HTTP REST API
- Avocado integration with Jenkins
- run the avocado command as a build step in a freestyle job
- put output in xunit format
- get results from xunit
- working on isolating the variable that caused a problem
- diff 2 systems (all the files, messages, versions, etc.) and identify the thing that resulted in the error
- used for bisection?
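A minimal sketch of a "simple" Avocado test (the script name and contents are hypothetical; a simple test is any executable, where exit status 0 means pass and non-zero means fail -- check your Avocado version for exact option spellings):

  # create a trivial executable test (name is hypothetical)
  printf '#!/bin/sh\ngrep -q Linux /proc/version\n' > check_proc.sh
  chmod +x check_proc.sh

  # run it under Avocado, writing xunit output for Jenkins to consume
  avocado run ./check_proc.sh --xunit results.xml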
OpenQA
- OpenQA test visualization: https://openqa.opensuse.org/
- OpenQA project: http://open.qa/
Bats: Bash Automated Testing System
BATS is a TAP-compliant system for testing shell scripts or Unix commands.
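For example, a minimal Bats test file (essentially the canonical example from the Bats README; the file name is arbitrary):

  #!/usr/bin/env bats

  @test "addition using bc" {
    result="$(echo 2+2 | bc)"
    [ "$result" -eq 4 ]
  }

Each @test block passes if every command in it exits 0. Running 'bats addition.bats' prints TAP output: a plan line like '1..1' followed by 'ok 1 addition using bc'.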
Yocto ptest
Intel GFX CI
Intel does CI on the kernel driver for their graphics hardware. They gave a report at the testing session at Plumbers in 2017, with some interesting insights and comments on kernel CI testing.
- the project web page is at: https://intel-gfx-ci.01.org/
- LWN.net article about their talk at Plumbers: https://lwn.net/Articles/735468
Key observations
They make a few key observations in their talk:
- pre-merge testing is key to getting developers to respond to bugs
- once a patch is committed, interest in fixing regressions it caused goes down dramatically
- pre-merge testing of hardware is hard because other parts of the kernel break a lot
- it's very important to aggressively reduce false positives
- nothing turns off developers faster than false positives
- putting results as a response on the mailing list is good
- just posting a result on a web site is not very useful
From the comments:
- no one-size-fits-all CI is possible
- developers have much higher incentive to test their own stuff
- using resources on stuff no one cares about is a waste of time
- dogfooding is what kept the kernel half-decent in the absence of rigorous testing
- each subsystem and driver should have its own CI running, before stuff hits linux-next