New test ideas

{{TableOfContents}}
This page is a dumping ground for ideas for new tests that could be added to Fuego.

Try to keep the tests in categories as shown below.
= Desired system tests overview =
Fuego is a system test framework.

Here are some tests that are good for determining overall system health:
 * LTP system call test (already in Fuego)
 * LTP posix test suite (already in Fuego)
 * LSB-FHS - Linux Standard Base Filesystem Hierarchy Standard test
   * See https://www.opengroup.org/testing/lsb-fhs/

== Things to check in a newly created product ==
After a change to the kernel or software stack (Linux distribution) on
a product, the following items should be checked (a minimal sanity
script is sketched after this list):

 * interrupts are working (at correct or expected rate)
 * modules that were expected are loaded correctly
 * file systems were mounted with correct permissions and attributes
 * busybox or toybox has all needed sub-commands
 * shell has needed features (this should be part of the POSIX test suite, but I suspect it's not)
 * kselftest - for kernel features
 * libraries that were expected are present
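
Several of these checks can be scripted.  Here is a minimal sketch of
a board sanity script; the module names, mount point, and IRQ number
below are hypothetical placeholders that would need real per-product
values:

{{{
#!/bin/sh
# Board sanity sketch.  Module, mount, and IRQ names below are
# hypothetical placeholders; fill in real values per product.

rc=0

# 1. Expected kernel modules are loaded
for mod in g_ether my_wifi_driver; do
    grep -q "^$mod " /proc/modules || { echo "FAIL: module $mod not loaded"; rc=1; }
done

# 2. Filesystems mounted with expected options (here: /data mounted rw)
grep -q " /data .*rw" /proc/mounts || { echo "FAIL: /data not mounted rw"; rc=1; }

# 3. An interrupt is firing (IRQ 30's count should change in 1 second)
before=$(awk '$1 == "30:" { print $2 }' /proc/interrupts)
sleep 1
after=$(awk '$1 == "30:" { print $2 }' /proc/interrupts)
before=${before:-0}; after=${after:-0}
[ "$after" -gt "$before" ] || { echo "FAIL: IRQ 30 not incrementing"; rc=1; }

exit $rc
}}}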

= Tests from other distros or projects =
Some existing Linux distributions and projects have their own selections
of tests for the packages they include.  Here are some candidate
projects to get test ideas from (if not the tests themselves).

== Apertis tests ==
See https://gitlab.apertis.org/pkg/apertis-tests.git

== AGL tests ==
See https://git.automotivelinux.org/src/qa-testdefinitions/tree/common/scripts

=== busybox presence test ===
See https://git.automotivelinux.org/src/qa-testdefinitions/tree/common/scripts/busybox.sh
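
The general idea (sketched here; this is not AGL's actual script) is to
check that every applet the product needs is compiled into busybox.
The applet list below is a hypothetical example:

{{{
#!/bin/sh
# Verify that required applets are built into this busybox.
rc=0
for applet in ash awk grep mount sed tar; do
    busybox --list | grep -qx "$applet" || { echo "FAIL: missing applet $applet"; rc=1; }
done
exit $rc
}}}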

=== service availability test ===
See https://git.automotivelinux.org/src/qa-testdefinitions/tree/common/scripts/service-check.sh
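
A minimal version of this kind of check, assuming a systemd-based
image (the service names here are placeholders):

{{{
#!/bin/sh
# Verify that expected services are active (hypothetical service names).
rc=0
for svc in connman weston; do
    systemctl is-active --quiet "$svc" || { echo "FAIL: service $svc not active"; rc=1; }
done
exit $rc
}}}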

== CIP tests ==
For overall testing strategy, see https://wiki.linuxfoundation.org/civilinfrastructureplatform/ciptesting/centalisedtesting

Tests are at:
 * https://gitlab.com/cip-project/cip-testing/cip-kernel-tests
 * https://gitlab.com/cip-project/cip-testing/test-definitions/-/tree/master/automated/linux

== CKI (RedHat) tests ==

== Debian tests ==


== Yocto Project tests (ptests?) ==


= Tests by technology area or subsystem =
== Security tests ==
=== OpenVAS ===
 * check out OpenVAS - a tool that checks for CVEs (from OSSJ)
   * see http://www.openvas.org/

=== cvecheck ===
 * see https://github.com/clearlinux/cve-check-tool (last updated 5 years ago)
 * For yocto project, see https://git.yoctoproject.org/poky/tree/meta/classes/cve-check.bbclass

=== Lynis ===
Lynis is a security auditing and vulnerability scanning tool for Linux systems.
 * see https://github.com/CISOfy/lynis
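
For an automated run, something along these lines might work (assumes
lynis is installed on the target; the report path is lynis's default):

{{{
# Full system audit, non-interactive:
lynis audit system --quiet

# The machine-readable report includes an overall hardening index:
grep '^hardening_index' /var/log/lynis-report.dat
}}}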


== Memory ==
=== mmtests ===
mmtests by Mel Gorman
 * https://github.com/gormanm/mmtests


== Filesystem ==
=== xfstests ===
xfstests seems to be the de facto standard test suite for Linux file
systems.  We should include this test in Fuego.

See the following for more information:
 * https://git.kernel.org/cgit/fs/xfs/xfstests-dev.git/
   * clone with 'git clone git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git'
   * I think this is the main upstream repository for xfstests (the repository at http://oss.sgi.com/ has been deprecated)
 * [[https://lwn.net/Articles/592783/|An automated xfstests infrastructure using kvm]]
   * Ted Ts'o's work on automating xfstests
 * [[https://lwn.net/Articles/591985/|Toward better testing]]
   * Dave Chinner's report on the status of xfstests at an event in 2014

== Block layer performance measurement ==
Possibly something simple like 'time dd ...' is useful for catching
some things (and it's short).

Here is a post from Linus Walleij about using a simple dd to measure
block-layer performance.  Using this test, he found a performance
regression with the MQ block-layer scheduler.

{{{
I got blk-mq running for MMC/SD today and I see a gross performance
regression, from 37 MB/s to 27 MB/s on Ux500 7.38 GB eMMC
with a simple dd test:

BEFORE switching to MQ:

time dd if=/dev/mmcblk3 of=/dev/null bs=1M count=1024
1073741824 bytes (1.0GB) copied, 27.530335 seconds, 37.2MB/s
real    0m 27.54s
user    0m 0.02s
sys     0m 7.56s

AFTER switching to MQ:

time dd if=/dev/mmcblk3 of=/dev/null bs=1M count=1024
1073741824 bytes (1.0GB) copied, 37.170990 seconds, 27.5MB/s
real    0m 37.18s
user    0m 0.02s
sys     0m 7.32s

I will however post my hacky patch as a RFD to the blockdevs and
the block maintainers, along with the numbers and a speculation
about what may be causing it. asynchronous requests (request
pipelining) is one thing, another thing is front/back merge in
the block layer I guess.
}}}
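
To turn the dd above into a repeatable pass/fail check, one could time
the transfer and compare against a per-board baseline.  A sketch, where
the device and threshold are placeholders:

{{{
#!/bin/sh
# Sequential-read throughput check using dd.
# DEV and MIN_MB_S are hypothetical placeholders for a real board.
DEV=/dev/mmcblk3
MIN_MB_S=30

start=$(date +%s)
dd if=$DEV of=/dev/null bs=1M count=1024 2>/dev/null
end=$(date +%s)

elapsed=$((end - start))
[ "$elapsed" -gt 0 ] || elapsed=1
mb_s=$((1024 / elapsed))

echo "throughput: ${mb_s} MB/s"
[ "$mb_s" -ge "$MIN_MB_S" ] || { echo "FAIL: below ${MIN_MB_S} MB/s baseline"; exit 1; }
}}}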

== Bus testing ==
=== serial port ===
=== i2c ===
=== USB testing ===
=== CAN bus testing ===
From the AGL automotive-discussions list, Dec 13: https://lists.linuxfoundation.org/pipermail/automotive-discussions/2016-December/003056.html

{{{
I'm interested your benchmark amb can data.
http://docs.automotivelinux.org/docs/apis_services/en/dev/reference/iotbzh2016/signaling/AGL-AppFW-CAN-Signaling-Benchmark.pdf

I want to test amb d-bus can data benchmark,
So can you share your used "can data" and "amd configuration" ?
}}}

This test apparently has a CAN packet injector, written by Cogent.
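
Independent of the AGL benchmark, basic CAN send/receive can be
exercised with can-utils.  A minimal sketch, assuming a can0 interface
and can-utils on the target (the bitrate is a placeholder):

{{{
# Bring up the interface at 500 kbit/s:
ip link set can0 type can bitrate 500000
ip link set can0 up

# Receive one frame in the background, then inject a frame:
candump -n 1 can0 &
cansend can0 123#DEADBEEF
wait
}}}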

= Miscellaneous =
== Year 2038 test ==
Arnd Bergmann is a leading kernel expert on this topic.  He gave a talk
at Linaro Connect 2017 in Budapest.  See his session at:
http://connect.linaro.org/resource/bud17/bud17-512/
and an lwn.net report on it here: https://lwn.net/Articles/717076/

There's a page with some very small test snippets at:
http://maul.deepsky.com/~merovech/2038.html
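
One quick check along those lines is to round-trip a post-2038
timestamp through the filesystem; whether it passes depends on the C
library, the kernel, and the filesystem's on-disk timestamp range.  A
sketch, assuming GNU-style touch and stat:

{{{
#!/bin/sh
# Round-trip a timestamp past the 32-bit time_t limit
# (2038-01-19 03:14:07 UTC).
f=/tmp/y2038_probe
touch -d '2038-01-20 00:00:00 UTC' "$f" || { echo "FAIL: touch rejected date"; exit 1; }
mtime=$(stat -c %Y "$f")
rm -f "$f"
# 2147483647 is the largest 32-bit signed time_t value.
[ "$mtime" -gt 2147483647 ] && echo PASS || echo "FAIL: mtime=$mtime"
}}}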

== List of tests from presentations ==
=== ELCE 2022: linux-next testing ===
See https://elinux.org/images/5/50/OSS-EU22-MC-PPT-ELC-linux-next-testing.pdf

From slide 13:
 * modetest, ifconfig, ping, rtcwake (two of these are smoke-tested in the sketch after this list)
 * inspect /sys/kernel/debug/devices_deferred
 * From Marek's diagram:
   * boot, bash, defer, rtc0, rtc1, fbtest, ping, wifi, bt, alsa, cpu_hp, sleep, sleep2, ping2, reboot, off
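
A couple of the items above are easy to smoke-test from a shell.  A
minimal sketch; the gateway address and RTC device are hypothetical
placeholders:

{{{
#!/bin/sh
# Network and RTC wakeup smoke checks (placeholder address/device).

ping -c 3 192.168.1.1 || echo "FAIL: cannot ping gateway"

# Suspend to RAM and let rtc0 wake the board after 10 seconds:
rtcwake -d rtc0 -m mem -s 10 || echo "FAIL: rtcwake suspend/resume"
}}}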

=== ELCE 2022: Growing a lab for upstream testing ===
This talk was about Collabora's KernelCI testing lab.

See https://elinux.org/images/5/5c/ELCEU2022_Growing_a_Lab_for_Automated_Upstream_Testing.pdf

From slide 27:
 * Test suites:
   * dEQP, Khronos GL and VK CTS, Piglit, trace replaying for OpenGL, Vulkan and Direct3D, Skqp, va-utils


= Stress tests =
Should probably add stress-ng:
 * http://kernel.ubuntu.com/~cking/stress-ng/
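
A typical short invocation, using standard stress-ng options:

{{{
# Four CPU workers for 60 seconds, with brief per-stressor metrics:
stress-ng --cpu 4 --timeout 60s --metrics-brief
}}}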

= Kernel tests =
== kselftest ==
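kselftest ships with the kernel source.  A typical way to run it, from
the top of a kernel source tree (on a build host or self-hosted target):

{{{
# Run only one test group (here: timers):
make -C tools/testing/selftests TARGETS=timers run_tests

# Or build and run everything:
make kselftest
}}}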

== kernelci ==
 * build test
 * boot test

== Device driver tests ==
Texas Instruments has a project (under LTP?) called the "Device Driver
Tests", aka DDT.

See http://processors.wiki.ti.com/index.php/LTP-DDT


= Tests from other test frameworks =
== 0day ==
Figure out a way to run all existing 0day tests
(known at Intel as the Linux Kernel Performance (LKP) tests).

See https://github.com/intel/lkp-tests

== LKFT ==
For LKFT's tests, see https://lkft.linaro.org/tests/

== kernelci ==


== syzbot ==


== CompassCI ==
See https://static.linaro.org/connect/lvc21/presentations/lvc21-202.pdf

Source code at: https://gitee.com/openeuler/compass-ci

== Phoronix Test Suite ==
See https://openbenchmarking.org/