New test ideas

{{TableOfContents}}
This page is a dumping ground for ideas for new tests that could be added to Fuego.
Try to keep the tests in categories as shown below.
= general system sanity =
Here are some tests that are good for general system sanity (a shell sketch of a few of the simpler checks follows this list):
 * LTP system call test (already in Fuego)
 * LTP posix test suite (already in Fuego)
 * LSB-FHS - Linux Standard Base Filesystem Hierarchy Standard test
   * See https://www.opengroup.org/testing/lsb-fhs/
 * interrupts are working (at correct or expected rate)
 * modules that were expected are loaded correctly
 * file systems were mounted with correct permissions and attributes
 * busybox or toybox has all needed sub-commands
 * shell has needed features (should be part of posix test, but I suspect it's not)
 * kselftest - for kernel features
 * libraries that were expected are present
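
A few of the simpler checks above can be sketched directly as a shell script. This is only a rough outline: the module name, mount points, and library path below are arbitrary examples (not taken from any particular board), and a real Fuego test would take them as test parameters.
{{{
#!/bin/sh
# Rough sketch of a few of the sanity checks listed above.
# The names being checked for are examples only.
rc=0

# modules that were expected are loaded correctly
for mod in g_ether; do
    grep -q "^$mod " /proc/modules || { echo "FAIL: module $mod not loaded"; rc=1; }
done

# file systems were mounted (options/attributes could be checked the same way)
for mnt in /proc /tmp; do
    grep -q " $mnt " /proc/mounts || { echo "FAIL: $mnt not mounted"; rc=1; }
done

# libraries that were expected are present
for lib in /lib/libpthread.so.0; do
    [ -e "$lib" ] || { echo "FAIL: $lib missing"; rc=1; }
done

# interrupts are working: the total interrupt count should keep advancing
a=$(awk '/^intr/ {print $2}' /proc/stat)
sleep 1
b=$(awk '/^intr/ {print $2}' /proc/stat)
[ "$b" -gt "$a" ] || { echo "FAIL: interrupt count did not advance"; rc=1; }

exit $rc
}}}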

== AGL tests ==
=== busybox presence test ===
see https://git.automotivelinux.org/src/qa-testdefinitions/tree/common/scripts/busybox.sh
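
That AGL script checks that the busybox binary provides an expected set of applets. A rough equivalent looks like the following; the applet list here is illustrative, not AGL's actual list:
{{{
#!/bin/sh
# Check that busybox provides the applets we expect (example list only).
needed="sh ls cp mount umount grep sed awk"
applets=$(busybox --list)
rc=0
for a in $needed; do
    if echo "$applets" | grep -qx "$a"; then
        echo "PASS: busybox provides $a"
    else
        echo "FAIL: busybox is missing $a"
        rc=1
    fi
done
exit $rc
}}}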

=== service availability test ===
https://git.automotivelinux.org/src/qa-testdefinitions/tree/common/scripts/service-check.sh
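
The script above presumably checks that a set of expected services are up on the target. On a systemd-based target, a minimal version of that idea might look like this (the service names are placeholders):
{{{
#!/bin/sh
# Check that expected services are active; assumes a systemd-based target.
# The service names are examples only.
rc=0
for svc in dbus.service systemd-journald.service; do
    if systemctl is-active --quiet "$svc"; then
        echo "PASS: $svc is active"
    else
        echo "FAIL: $svc is not active"
        rc=1
    fi
done
exit $rc
}}}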

= Security tests =
 * check out OpenVAS - a tool that checks for CVEs (from OSSJ)
   * see http://www.openvas.org/

= memory =
mmtests by Mel Gorman
 * https://github.com/gormanm/mmtests

= filesystem =
== xfstests ==
xfstests seems to be the new standard for measuring Linux file system
performance.  We should include this test in Fuego (a sketch of a typical invocation follows the reference list below).
See the following for more information:
 * https://git.kernel.org/cgit/fs/xfs/xfstests-dev.git/
   * clone with 'git clone git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git'
   * I think this is the main upstream repository for xfstests (the repository at http://oss.sgi.com/ has been deprecated)
 * [[https://lwn.net/Articles/592783/|An automated xfstests infrastructure using kvm]]
   * Ted Ts'o's work on automating xfstests
 * [[https://lwn.net/Articles/591985/|Toward better testing]]
   * Dave Chinner's report on the status of xfstests at an event in 2014
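
For reference, xfstests is normally driven by its check script after building it and describing the test and scratch devices in a local.config file. A rough outline (the device and mount point names are examples and would be board-specific):
{{{
# Rough outline of building and running xfstests; device names are examples only.
git clone git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
cd xfstests-dev && make

cat > local.config <<EOF
export TEST_DEV=/dev/sdb1
export TEST_DIR=/mnt/test
export SCRATCH_DEV=/dev/sdb2
export SCRATCH_MNT=/mnt/scratch
EOF

# run the "quick" group of tests
./check -g quick
}}}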

== block layer performance measurement ==
Possibly something simple like 'time dd ...' is useful for catching 
some things (and it's short).
Here is a post from Linus Walleij about using a simple dd to measure block layer
performance.  He found a regression of performance using the MQ block layer
scheduler, using this.
{{{
I got blk-mq running for MMC/SD today and I see a gross performance
regression, from 37 MB/s to 27 MB/s on Ux500 7.38 GB eMMC
with a simple dd test:
BEFORE switching to MQ:
time dd if=/dev/mmcblk3 of=/dev/null bs=1M count=1024
1073741824 bytes (1.0GB) copied, 27.530335 seconds, 37.2MB/s
real    0m 27.54s
user    0m 0.02s
sys     0m 7.56s
AFTER switching to MQ:
time dd if=/dev/mmcblk3 of=/dev/null bs=1M count=1024
1073741824 bytes (1.0GB) copied, 37.170990 seconds, 27.5MB/s
real    0m 37.18s
user    0m 0.02s
sys     0m 7.32s
I will however post my hacky patch as a RFD to the blockdevs and
the block maintainers, along with the numbers and a speculation
about what may be causing it. asynchronous requests (request
pipelining) is one thing, another thing is front/back merge in
the block layer I guess.
}}}
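
To turn the simple dd measurement above into something Fuego can judge automatically, one option is to parse the MB/s figure out of dd's summary line and compare it with a minimum expected value. A rough sketch, assuming busybox/coreutils-style dd output like that shown above (the device and threshold are examples):
{{{
#!/bin/sh
# Read 1 GiB from a block device and fail if throughput drops below a threshold.
# Device and threshold are examples; adjust the sed pattern if your dd's
# summary line is formatted differently.
DEV=/dev/mmcblk3
MIN_MBS=30

speed=$(dd if=$DEV of=/dev/null bs=1M count=1024 2>&1 | \
        sed -n 's/.* \([0-9][0-9.]*\) *MB\/s.*/\1/p')
echo "throughput: $speed MB/s (minimum expected: $MIN_MBS)"

if awk -v s="$speed" -v m="$MIN_MBS" 'BEGIN { exit !(s >= m) }'; then
    echo "PASS"
else
    echo "FAIL: block read throughput below threshold"
    exit 1
fi
}}}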

= Bus testing =
== CAN bus testing ==
From the AGL automotive-discussions list, 13 December 2016: https://lists.linuxfoundation.org/pipermail/automotive-discussions/2016-December/003056.html
I'm interested in your AMB CAN data benchmark:
http://docs.automotivelinux.org/docs/apis_services/en/dev/reference/iotbzh2016/signaling/AGL-AppFW-CAN-Signaling-Benchmark.pdf
I want to test the AMB D-Bus CAN data benchmark,
so can you share the "can data" and "AMB configuration" you used?
This test apparently has a CAN packet injector, written by Cogent.
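
Even without that injector, the standard can-utils tools (cansend, cangen, candump) can generate and capture CAN traffic for basic plumbing tests. A minimal sketch using a virtual CAN interface (the interface name and frame contents are arbitrary examples):
{{{
#!/bin/sh
# Loop a single frame through a virtual CAN interface; assumes can-utils
# and the vcan kernel module are available on the target.
modprobe vcan
ip link add dev vcan0 type vcan
ip link set up vcan0

# capture one frame in the background, then inject one
candump -n 1 vcan0 > /tmp/candump.log &
sleep 1
cansend vcan0 123#DEADBEEF
wait

grep -q "DE AD BE EF" /tmp/candump.log && echo PASS || echo FAIL
}}}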

= year 2038 test =
Arnd Bergmann is a leading kernel expert on this topic.  He gave a talk
at Linaro Connect 2017 in Budapest.  See his session at:
http://connect.linaro.org/resource/bud17/bud17-512/
and an lwn.net report on it here: https://lwn.net/Articles/717076/
There's a page with some very small test snippets at:
http://maul.deepsky.com/~merovech/2038.html
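
In the same spirit as those snippets, one very small userspace smoke test is to ask date to render a time just past the 32-bit rollover. This assumes a GNU-style date that accepts @SECONDS, and it only exercises the date/libc path, not the whole system:
{{{
#!/bin/sh
# 2147483648 is one second past the largest signed 32-bit time_t value
# (which corresponds to 03:14:07 UTC on 19 January 2038).
if [ "$(date -u -d @2147483648 +%Y 2>/dev/null)" = "2038" ]; then
    echo "PASS: times past 19 Jan 2038 can be represented"
else
    echo "FAIL: times past 19 Jan 2038 cannot be represented"
fi
}}}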

= stress tests =
Should probably add stress-ng (a sample invocation is sketched below)
 * http://kernel.ubuntu.com/~cking/stress-ng/
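
As a starting point, stress-ng can exercise several subsystems at once and report its own metrics; for example (the stressor counts and duration are arbitrary):
{{{
# Run CPU, VM and I/O stressors for 60 seconds and print per-stressor metrics
stress-ng --cpu 4 --vm 2 --vm-bytes 128M --io 2 --timeout 60s --metrics-brief
}}}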

= kernel tests =
== kselftest ==
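
kselftest lives in the kernel source tree under tools/testing/selftests and is normally built and run with the kernel's own make targets, roughly as below (running it on an embedded target typically means cross-building and installing the tests first, which is not shown here):
{{{
# from the top of a configured kernel source tree: run all selftests
make -C tools/testing/selftests run_tests

# or run a single collection, e.g. the timer tests
make -C tools/testing/selftests TARGETS=timers run_tests
}}}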
== kernelci ==
 * build test
 * boot test

== device driver tests ==
Texas Instruments has a project (under LTP?) called the "Device Driver Tests"
aka DDT.
See http://processors.wiki.ti.com/index.php/LTP-DDT
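
If DDT follows the usual LTP conventions, it would be driven via runltp with a scenario (command) file; something along these lines (the install path and scenario name here are placeholders, not real DDT names):
{{{
# Hypothetical invocation, assuming the standard LTP runltp entry point;
# the scenario file name below is a placeholder.
cd /opt/ltp
./runltp -f ddt_scenario -l /tmp/ddt_results.log
}}}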

= all 0day tests =
Figure out a way to run all existing 0day tests.