|
{{TableOfContents}}
|
Here are some miscellaneous notes about LTP
|
See http://ltp.sourceforge.net/documentation/how-to/ltp.php
|
For latest results, see:
* [[LTP-results-analysis]]
|
|
= Fuego LTP execution =
== Sequence of events ==
- Jenkins launches the jobs (something like bbb.default.Functional.LTP)
- Jenkins starts fuego_test.sh
- fuego_test.sh reads the desired spec from Functional.LTP/spec.json
- the spec has variables which select the LTP sub-tests to run, and whether to do a buildonly or runonly of the tests
- the test names in the spec correspond to "scenario" filenames, located in LTP's 'runtest' directory (in the case of regular tests)
- Fuego does a build in /fuego-rw/buildzone/<test_name>-<platform>/
- materials for the target board are put in <LTP_DIR>/target_bin
- Fuego does a deploy
- it installs the materials to the board
- this is usually about 400M of binaries on the target
- in directory $TEST_HOME
- usually something like /home/a/fuego.Functional.LTP
- fuego_test.sh then runs (on the target board)
- ltp_target_run.sh
- which runs:
- for regular tests:
- runltp
- this wrapper processes the scenario file, and strips it of any items that are mentioned in the skiplist (converting it to /tmp/<tmpdir>/alltests)
- which runs:
- ltp-pan
- which runs the individual test programs that are listed in a command file
- e.g. abort01
- for posix tests:
- run-posix-option-group-test.sh
- for realtime tests:
- test_realtime.sh
|
- Fuego then fetches results from the target board
- it collects the testlog for the test
- it calls test_fetch_results in fuego_test.sh
- this gathers the results from the board and puts it into a 'result' directory in the log directory for the run
- Fuego then processes the results:
- it calls ltp_process.py to create the results.xlsx file and/or rt.log
- the parser.py is called to process regular test results output and create the run.json file
- the parser also creates the files used for charting (results.json, flat_plot_data.txt, flot_chart_data.json)
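As a rough illustration of what the parsing step consumes, here is a hedged sketch (not Fuego's actual parser.py) that tallies pass/fail counts from an ltp-pan result.log; the two sample log lines are fabricated for the demonstration:

```shell
# Hypothetical sketch (not parser.py): tally pass/fail from an ltp-pan
# result.log, whose per-test lines look like
#   tag=abort01 stime=... dur=... exit=exited stat=0 ...
printf 'tag=abort01 stime=1 dur=2 exit=exited stat=0 core=no\ntag=fork13 stime=3 dur=806 exit=exited stat=1 core=no\n' > result.log
summary=$(awk '/^tag=/ { if ($0 ~ / stat=0/) pass++; else fail++ }
               END { printf "pass=%d fail=%d", pass, fail }' result.log)
echo "$summary"
```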
|
|
== defining a new spec ==
Variables - the following variables can be defined for an LTP test in a test spec:
- tests - this is a space-separated list of LTP sub-tests to execute on the board
- tests come in 3 categories (regular, posix and realtime). The end user should not have to know about these categories, as fuego_test.sh will filter the entries in the 'tests' list into the right groups for execution on the board.
- a full list of all available tests is in fuego_test.sh in the variable ALL_TESTS
- phases - defines a list of test phases to execute for this spec.
- the allowed phases are: build deploy maketar run
- phases are space-separated
- the default list, if none is provided, is "build deploy run"
- extra_success_links
- extra_fail_links
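Since fuego_test.sh filters the 'tests' list into categories automatically, that filtering can be sketched roughly as follows; the group-membership lists here are illustrative stand-ins, not Fuego's real ones:

```shell
# Illustrative sketch of splitting a mixed 'tests' list into the three
# categories (regular, posix, realtime); the group lists are stand-ins.
POSIX_GROUPS="AIO MEM MSG SEM SIG THR TMR TPS"
RT_GROUPS="func perf stress"
TESTS=""; PTSTESTS=""; RTTESTS=""
for t in quickhit AIO func syscalls; do
    case " $POSIX_GROUPS " in *" $t "*) PTSTESTS="$PTSTESTS $t"; continue ;; esac
    case " $RT_GROUPS "    in *" $t "*) RTTESTS="$RTTESTS $t"; continue ;; esac
    TESTS="$TESTS $t"
done
echo "TESTS:$TESTS PTSTESTS:$PTSTESTS RTTESTS:$RTTESTS"
```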
|
Board variables:
* FUNCTIONAL_LTP_HOMEDIR - if specified, indicates the place where LTP system and binaries can be found (pre-installed)
* or, where they should be placed, if deploy is specified
|
|
== List of specs ==
* default
* install - special spec, to only build and install LTP
* make_pkg - special spec, to create a tarball that can be installed manually
* quickhit
* quickhitwithskips
* smoketest
* rtonly
* somefail
* selection
* selectionwithrt
* ptsonly
* ltplite
|
- buildonly - if defined and set to true, then the build is performed and the software deployed to the target, but no run phase is executed
- runonly - if defined and set to true, then LTP is not built, but only run on the target
- runfolder - if specified, this specifies the location on the target where LTP has been installed.
* currently, the deploy step does not use this - this is intended for people who manually install LTP to somewhere besides the default Fuego test directory on the target board.
- skiplist - contains a space-separated list of individual LTP test programs that should not be executed on the target
|
|
== 'core' invocation lines ==
Here are different invocation lines for these scripts and programs, in Fuego:
|
|
=== ltp_target_run.sh ===
* report 'cd /fuego-rw/tests/fuego.Functional.LTP; TESTS="quickhit "; PTSTESTS=""; RTTESTS=""; . ./ltp_target_run.sh'
|
ltp_target_run.sh looks for the environment variables TESTS, PTSTESTS and RTTESTS, and executes each test in each list.
* In the TESTS list, it executes runltp for each test
* results for each test go into $TEST_HOME/results/<test>/*.log
* in the PTSTESTS list, it executes run-posix-option-group-test.sh for each test
* pts.log is generated and put into $TEST_HOME/results/pts.log
* in the RTTESTS list, it executes test_realtime.sh for each
* log files under testcases/realtime/logs are combined into $TEST_HOME/results/rt.log
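The dispatch above can be sketched as follows; the function bodies are illustrative stand-ins that just echo the command each list maps to, not the real invocations:

```shell
# Hedged sketch of ltp_target_run.sh's per-list dispatch.
run_regular()  { echo "runltp -f $1"; }
run_posix()    { echo "run-posix-option-group-test.sh $1"; }
run_realtime() { echo "test_realtime.sh $1"; }

TESTS="quickhit"; PTSTESTS="AIO"; RTTESTS="func"
for t in $TESTS;    do run_regular  "$t"; done
for t in $PTSTESTS; do run_posix    "$t"; done
for t in $RTTESTS;  do run_realtime "$t"; done
```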
|
|
=== runltp ===
ltp_target_run.sh calls runltp as follows:
|
Note that OUTPUT_DIR is $(pwd)/result (usually $TEST_HOME/result).
|
|
{{{#!YellowBox
./runltp -C ${OUTPUT_DIR}/${i}/failed.log \
-l ${OUTPUT_DIR}/${i}/result.log \
-o ${OUTPUT_DIR}/${i}/output.log \
-d ${TMP_DIR} \
-S ./skiplist.txt \
-f $i > ${OUTPUT_DIR}/${i}/head.log 2>&1
}}}
|
This says to:
* put the list of commands that failed into failed.log (-C)
* put the machine-readable list of program executions and statuses into result.log (-l)
* this does not have full testcase status in it, just the exit code, duration, and utime of each test program run
* put the human-readable output to output.log (-o)
* use $TMP_DIR for temporary data
* use skiplist.txt to skip specific tests
* this code is used to process the skiplist file:
{{{#!YellowBox
if [ -n "${SKIPFILE}" ]; then
for test_name in $(awk '{print $1}' "${SKIPFILE}"); do
case "${test_name}" in \#*) continue;; esac
sed -i "/\<${test_name}\>/c\\${test_name} exit 32;" alltests
done
fi
}}}
* this has the effect of converting the command for the indicated testcase name to "exit 32;"; exit status 32 is LTP's TCONF value, so skipped tests are recorded as TCONF rather than as failures
* run the $i test (-f)
* put runltp output and all errors into head.log (>, 2>&1)
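The skiplist substitution can be seen in action on a tiny alltests file (the two entries below are representative scenario lines; this relies on GNU sed, as the original code does):

```shell
# Demonstrate the skiplist rewrite: the command for a skipped test is
# replaced with "exit 32;" in the generated alltests file.
printf 'abort01 abort01\nfork13 fork13 -i 1000000\n' > alltests
printf 'fork13\n' > skiplist.txt
SKIPFILE=skiplist.txt
if [ -n "${SKIPFILE}" ]; then
    for test_name in $(awk '{print $1}' "${SKIPFILE}"); do
        case "${test_name}" in \#*) continue;; esac
        sed -i "/\<${test_name}\>/c\\${test_name} exit 32;" alltests
    done
fi
cat alltests
```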
|
|
=== ltp-pan ===
From $TEST_HOME/result/quickhit/output.log (the output from runltp), here is a sample ltp-pan invocation:
|
|
{{{#!YellowBox
COMMAND: /fuego-rw/tests/fuego.Functional.LTP/bin/ltp-pan -e -S \
-a 16855 -n 16855 \
-f /fuego-rw/tests/fuego.Functional.LTP/tmp/ltp-Ua4obsCcQw/alltests \
-l /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/result.log \
-o /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/output.log \
-C /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/failed.log \
-T /fuego-rw/tests/fuego.Functional.LTP/output/LTP_RUN_ON-output.log.tconf
|
LOG File: /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/result.log
OUTPUT File: /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/output.log
FAILED COMMAND File: /fuego-rw/tests/fuego.Functional.LTP/result/quickhit/failed.log
TCONF COMMAND File: /fuego-rw/tests/fuego.Functional.LTP/output/LTP_RUN_ON-output.log.tconf
Running tests.......
INFO: ltp-pan reported all tests PASS
LTP Version: 20170116
}}}
|
Let's break this out:
* -e - exit non-zero if any command exits non-zero
* -S - run tests sequentially (not randomly), in order found in command file
* this implies "-s <number of commands listed in file>"
* -a 16855 - specify an active-file of 16855
* -n 16855 - specify a tag name of 16855
* -f <tmp>/alltests - specify a command file of 'alltests'
* -l - specify the result.log file (machine-readable)
* -o - specify the output.log file (human-readable test output)
* -C - specify the failed.log file (list of failed tests)
* -T - specify the TCONF file
- the source says that test cases that are not fully tested will be recorded in this file
|
The ltp man page says the usage is:
{{{#!YellowBox
ltp-pan -n tagname [-SyAehp] [-t #s|m|h|d time] [-s starts] [-x
nactive] [-l logfile] [-a active-file] [-f command-file] [-d debug-
level] [-o output-file] [-O buffer_directory] [-r report_type] [-C
fail-command-file] [cmd]
}}}
|
NOTES: ltp-pan can run commands multiple times or for a set duration (via its -s and -t options). Fuego
calls ltp-pan with the name of a scenario file (a group of tests with their command-line arguments). The scenario file comes from LTP's 'runtest' directory.
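For reference, scenario-file entries are plain "tag, then command line" text lines; the fork13 entry below is the one quoted later on this page, and abort01 is a typical one-word entry:

```shell
# Show two representative scenario-file lines (format: tag, then the
# command line to run for that tag).
scenario=$(cat <<'EOF'
abort01 abort01
fork13 fork13 -i 1000000
EOF
)
echo "$scenario"
```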
|
ltp-pan keeps a list of the tests that are currently executing in a 'zoo' file.
Each line in the zoo file has: pid, tag, cmdline
|
ltp-pan tries very hard to clean up after misbehaved tests. When it exits, there should be no residual processes or process groups running.
|
|
=== output logs ===
Output is placed in the following files, on target (for regular tests):
* $TEST_HOME/output/LTP_RUN_ON-output.log.tconf
* $TEST_HOME/result/syscalls/failed.log - list of failed tests
* $TEST_HOME/result/syscalls/head.log - meta-data about full run
* $TEST_HOME/result/syscalls/output.log - full test output and info
* $TEST_HOME/result/syscalls/result.log - summary in machine-readable format
|
==== output key ====
Certain keywords are used in test program output:
(copied from [[http://www.lineo.co.jp/ltp/linux-3.10.10-results/result.html|here]])
* TPASS - Indicates that the test case had the expected result and passed
* TFAIL - Indicates that the test case had an unexpected result and failed.
* TBROK - Indicates that the remaining test cases are broken and will not execute correctly, because some precondition was not met, such as a resource not being available.
* TCONF - Indicates that the test case was not written to run on the current hardware or software configuration, such as machine type or kernel version.
* TRETR - Indicates that the test case has been retired and should not be executed any longer.
* TWARN - Indicates that the test case experienced an unexpected or undesirable event that should not affect the test itself, such as being unable to clean up resources after the test finished.
* TINFO - Specifies useful information about the status of the test that does not affect the result and does not indicate a problem.
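A quick way to see how a run went is to count these keywords in output.log; here is a minimal sketch, using a fabricated three-line log:

```shell
# Tally result keywords in an output.log (the sample lines are made up).
printf 'abort01 1 TPASS : got signal as expected\nfork13 1 TFAIL : out of memory\nf00f 1 TCONF : only for i386\n' > output.log
for k in TPASS TFAIL TBROK TCONF TWARN; do
    printf '%s=%s\n' "$k" "$(grep -c "$k" output.log)"
done
```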
|
|
== 'posix' invocation lines ==
|
|
=== run-posix-option-group-test.sh ===
Is invoked with the name of the posix option group, which is one of:
* AIO MEM MSG SEM SIG THR TMR TPS
|
actual sub-tests are in $FUNCTIONAL_LTP_HOMEDIR/conformance/interfaces
with a directory for each test sub-group.
|
run-posix-option-group-test.sh knows which test directories belong to
which POSIX group, and runs the ones requested. Each test directory
is passed into run_option_group_test().
|
The function run_option_group_tests() inside this script appears
to process only the first argument passed in.
|
Each directory has a run.sh file (there are 208 of them).
|
|
=== run.sh ===
Each run script executes run-tests.sh with a list of specific tests in
the current directory. The tests may be compiled programs or shell
scripts. run.sh is executed with no arguments.
|
|
=== run-tests.sh ===
Takes a list of tests, and does the following:
|
It uses $LOGFILE as the output log for an individual test (this is the stdout
of the individual test program). If $LOGFILE is not defined, it uses "logfile"
(in the current directory).
|
It uses $TIMEOUT_VAL as the timeout for an individual test. If not defined,
a TIMEOUT_VAL of 300 seconds is used.
|
It uses the program $FUNCTIONAL_LTP_HOMEDIR/bin/t0 to run tests with a timeout.
This program uses t0.val to indicate the return code for a test that timed out.
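As a stand-in illustration of the timeout behavior (using coreutils `timeout` rather than LTP's t0 binary, whose timed-out return code comes from t0.val), and a short 1-second limit instead of the 300-second default:

```shell
# Run a command under a timeout; coreutils timeout returns 124 on expiry
# (t0 uses its own configured value instead).
TIMEOUT_VAL=${TIMEOUT_VAL:-1}
timeout "$TIMEOUT_VAL" sleep 3
rc=$?
echo "rc=$rc"
```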
|
TIMEOUT_RET is the return code of t0 when a program timed out (that is, it
overran its timeout).
|
If there is a file named ${testname}.args, then the contents of that file
are used as arguments to the program at runtime.
|
If there is a file named test_defs, then that file is sourced before the
tests are run.
|
It outputs on standard out the counts of PASS/FAIL and total tests executed.
|
Its exit code is the number of tests that failed. So, 0 = all passed.
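That counting and exit-code convention can be sketched as follows (a simplified stand-in, not the real run-tests.sh):

```shell
# Simplified stand-in for run-tests.sh's convention: print PASS/FAIL
# counts and return the number of failures (0 = all passed).
run_suite() {
    fails=0
    for t in "$@"; do
        if "$t"; then echo "PASS: $t"; else echo "FAIL: $t"; fails=$((fails+1)); fi
    done
    echo "total=$# failed=$fails"
    return "$fails"
}
run_suite true false true
rc=$?
echo "rc=$rc"
```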
|
|
=== posix conformance interface sub-directories ===
{{{#!Table
||posix group||sub-group||
||AIO ||aio_* (including aio_cancel, aio_error, aio_fsync, aio_read, aio_return, aio_suspend, aio_write)||
||AIO ||lio_listio||
||SIG ||sig* (including sigaction, sigemptyset, sigismember, sigprocmask, sigsuspend, sigaddset, sigfillset, signal, sigqueue, sigtimedwait, sigaltstack, sighold, sigpause, sigrelse, sigwait, sigdelset, sigignore, sigpending, sigset, sigwaitinfo)||
||SIG ||raise||
||SIG ||kill||
||SIG ||killpg||
||SIG ||pthread_kill||
||SIG ||pthread_sigmask||
||SEM ||sem* (including sem_close, sem_getvalue, sem_open, sem_timedwait, sem_wait, sem_destroy, sem_init, sem_post, sem_unlink)||
||THR ||pthread_* (including ''95 tests - too many to list'')||
||TMR ||time* (including time, timer_create, timer_delete, timer_getoverrun, timer_gettime, timer_settime)||
||TMR ||*time (including asctime, clock_gettime, clock_settime, ctime, difftime, gmtime, localtime, mktime, strftime, time, timer_gettime, timer_settime)||
||TMR ||clock* (including clock clock_getcpuclockid clock_getres clock_gettime clock_nanosleep clock_settime)||
||TMR ||nanosleep||
||MSG ||mq_* (including mq_close mq_getattr mq_notify mq_open mq_receive mq_send mq_setattr mq_timedreceive mq_timedsend mq_unlink)||
||TPS ||*sched* (including a bunch of test dirs)||
||MEM ||m*lock||
||MEM ||m*map||
||MEM ||shm_*||
}}}
|
FIXTHIS - when running with an existing LTP, we don't erase the pts.log file, or any of the existing logfiles.
|
FIXTHIS - it appears that we only run the first item in a wildcard list of items (but the data for multiple aio_* dirs is there, so I'm not reading something right)
|
FIXTHIS - it appears that we don't run the posix behavior tests
|
FIXTHIS - it appears that we don't run the posix definition tests
|
FIXTHIS - it appears we don't run the posix functional tests
|
FIXTHIS - it appears we don't run the posix stress tests
|
== posix parsing ==
See [[LTP posix parsing notes]]
|
|
= LTP status =
== in Tim's lab on August 3, 2017 ==
|
How long to execute:
* bbb.default.Functional.LTP - hangs in inotify06
* some long tests:
* fsync02
* fork13
* min1.default.Functional.LTP - 45 minutes with build
* docker.docker.Functional.LTP - 22 minutes (not sure if it includes the build)
|
|
= how to find the long-running tests =
* use: sed s/dur=// result.log | sort -k3 -n
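A worked example of that pipeline, with two fabricated result.log lines (durations modeled on the creat06 and fsync02 rows in the table below): stripping "dur=" lets the third field sort numerically, so the slowest tests end up last.

```shell
# Strip "dur=" so field 3 sorts numerically; slowest tests sort last.
printf 'tag=fsync02 stime=1 dur=292 exit=exited stat=0\ntag=creat06 stime=1 dur=30 exit=exited stat=0\n' > result.log
sorted=$(sed s/dur=// result.log | sort -k3 -n)
echo "$sorted"
```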
|
FIXTHIS - the column output list for this table is missing
|
{{{#!Table:ltp_duration
||board||test ||duration ||user||system||test date||
||bbb||creat06 ||30 ||5 ||21 ||2017-08-03||
||bbb||gettimeofday02 ||30 ||647 ||2324 ||2017-08-03||
||bbb||ftruncate04_64 ||31 ||5 ||22 ||2017-08-03||
||bbb||chown04 ||32 ||6 ||23 ||2017-08-03||
||bbb||fchown04_16 ||32 ||3 ||14 ||2017-08-03||
||bbb||fchown04 ||32 ||2 ||12 ||2017-08-03||
||bbb||access04 ||33 ||5 ||26 ||2017-08-03||
||bbb||ftruncate04 ||33 ||5 ||23 ||2017-08-03||
||bbb||fchmod06 ||34 ||5 ||20 ||2017-08-03||
||bbb||chown04_16 ||35 ||4 ||25 ||2017-08-03||
||bbb||chmod06 ||36 ||4 ||21 ||2017-08-03||
||bbb||acct01 ||37 ||4 ||29 ||2017-08-03||
||bbb||clock_nanosleep2_01||51 ||2 ||1 ||2017-08-03||
||bbb||fsync02 ||292 ||1 ||42 ||2017-08-03||
||bbb||fork13 ||806 ||946||24970 ||2017-08-03||
}}}
|
|
== inotify06 oops ==
inotify06 causes the kernel to oops, with the following report:
{{{#!YellowBox
[57540.087504] Kernel panic - not syncing: softlockup: hung tasks
[57540.093608] [<c00111f1>] (unwind_backtrace+0x1/0x9c) from [<c04c8955>] (panic+0x59/0x158)
[57540.102136] [<c04c8955>] (panic+0x59/0x158) from [<c00726d9>] (watchdog_timer_fn+0xe5/0xfc)
[57540.110862] [<c00726d9>] (watchdog_timer_fn+0xe5/0xfc) from [<c0047b4b>] (__run_hrtimer+0x4b/0x154)
[57540.120307] [<c0047b4b>] (__run_hrtimer+0x4b/0x154) from [<c00483f7>] (hrtimer_interrupt+0xcf/0x1fc)
[57540.129842] [<c00483f7>] (hrtimer_interrupt+0xcf/0x1fc) from [<c001e98b>] (omap2_gp_timer_interrupt+0x1f/0x24)
[57540.140275] [<c001e98b>] (omap2_gp_timer_interrupt+0x1f/0x24) from [<c0072dd3>] (handle_irq_event_percpu+0x3b/0x188)
[57540.151246] [<c0072dd3>] (handle_irq_event_percpu+0x3b/0x188) from [<c0072f49>] (handle_irq_event+0x29/0x3c)
[57540.161505] [<c0072f49>] (handle_irq_event+0x29/0x3c) from [<c007489b>] (handle_level_irq+0x53/0x8c)
[57540.171026] [<c007489b>] (handle_level_irq+0x53/0x8c) from [<c00729ff>] (generic_handle_irq+0x13/0x1c)
[57540.180728] [<c00729ff>] (generic_handle_irq+0x13/0x1c) from [<c000d0df>] (handle_IRQ+0x23/0x60)
[57540.189902] [<c000d0df>] (handle_IRQ+0x23/0x60) from [<c00085a9>] (omap3_intc_handle_irq+0x51/0x5c)
[57540.199339] [<c00085a9>] (omap3_intc_handle_irq+0x51/0x5c) from [<c04cea9b>] (__irq_svc+0x3b/0x5c)
[57540.208679] Exception stack(0xcb7e5e98 to 0xcb7e5ee0)
[57540.213943] 5e80: de03ef54 df34f180
[57540.222478] 5ea0: 68836882 00000000 de03ef54 cb7e4000 de03ef54 ffffffff df34f180 df34f1dc
[57540.231008] 5ec0: 00000010 df016450 cb7e5ee8 cb7e5ee0 c00e2e33 c025eb88 60000033 ffffffff
[57540.239544] [<c04cea9b>] (__irq_svc+0x3b/0x5c) from [<c025eb88>] (do_raw_spin_lock+0xa4/0x114)
[57540.248534] [<c025eb88>] (do_raw_spin_lock+0xa4/0x114) from [<c00e2e33>] (fsnotify_destroy_mark_locked+0x17/0xec)
[57540.259240] [<c00e2e33>] (fsnotify_destroy_mark_locked+0x17/0xec) from [<c00e316b>] (fsnotify_clear_marks_by_group_flags+0x5
7/0x74)
[57540.271577] [<c00e316b>] (fsnotify_clear_marks_by_group_flags+0x57/0x74) from [<c00e2869>] (fsnotify_destroy_group+0x9/0x24)
[57540.283284] [<c00e2869>] (fsnotify_destroy_group+0x9/0x24) from [<c00e3c91>] (inotify_release+0x1d/0x20)
[57540.293182] [<c00e3c91>] (inotify_release+0x1d/0x20) from [<c00bbb69>] (__fput+0x65/0x16c)
[57540.301808] [<c00bbb69>] (__fput+0x65/0x16c) from [<c004323d>] (task_work_run+0x6d/0xa4)
[57540.310238] [<c004323d>] (task_work_run+0x6d/0xa4) from [<c000f3eb>] (do_work_pending+0x6f/0x70)
[57540.319402] [<c000f3eb>] (do_work_pending+0x6f/0x70) from [<c000c893>] (work_pending+0x9/0x1a)
[57540.328394] drm_kms_helper: panic occurred, switching back to text console
[57572.431529] CAUTION: musb: Babble Interrupt Occurred
[57572.503646] CAUTION: musb: Babble Interrupt Occurred
[57572.590984] gadget: high-speed config #1: Multifunction with RNDIS
[57577.609329] CAUTION: musb: Babble Interrupt Occurred
[57577.683922] CAUTION: musb: Babble Interrupt Occurred
[57577.772461] gadget: high-speed config #1: Multifunction with RNDIS
}}}
|
|
{{{#!YellowBox
[ 3592.037885] BUG: soft lockup - CPU#0 stuck for 23s! [inotify06:1994]
[ 3592.045691] BUG: scheduling while atomic: inotify06/1994/0x40010000
[ 3592.069728] Kernel panic - not syncing: softlockup: hung tasks
[ 3592.076169] [<c00111f1>] (unwind_backtrace+0x1/0x9c) from [<c04c8955>] (panic+0x59/0x158)
[ 3592.085129] [<c04c8955>] (panic+0x59/0x158) from [<c00726d9>] (watchdog_timer_fn+0xe5/0xfc)
[ 3592.094308] [<c00726d9>] (watchdog_timer_fn+0xe5/0xfc) from [<c0047b4b>] (__run_hrtimer+0x4b/0x154)
[ 3592.104241] [<c0047b4b>] (__run_hrtimer+0x4b/0x154) from [<c00483f7>] (hrtimer_interrupt+0xcf/0x1fc)
[ 3592.114276] [<c00483f7>] (hrtimer_interrupt+0xcf/0x1fc) from [<c001e98b>] (omap2_gp_timer_interrupt+0x1f/0x24)
[ 3592.125246] [<c001e98b>] (omap2_gp_timer_interrupt+0x1f/0x24) from [<c0072dd3>] (handle_irq_event_percpu+0x3b/0x188)
[ 3592.136766] [<c0072dd3>] (handle_irq_event_percpu+0x3b/0x188) from [<c0072f49>] (handle_irq_event+0x29/0x3c)
[ 3592.147522] [<c0072f49>] (handle_irq_event+0x29/0x3c) from [<c007489b>] (handle_level_irq+0x53/0x8c)
[ 3592.157518] [<c007489b>] (handle_level_irq+0x53/0x8c) from [<c00729ff>] (generic_handle_irq+0x13/0x1c)
[ 3592.167710] [<c00729ff>] (generic_handle_irq+0x13/0x1c) from [<c000d0df>] (handle_IRQ+0x23/0x60)
[ 3592.177328] [<c000d0df>] (handle_IRQ+0x23/0x60) from [<c00085a9>] (omap3_intc_handle_irq+0x51/0x5c)
[ 3592.187235] [<c00085a9>] (omap3_intc_handle_irq+0x51/0x5c) from [<c04cea9b>] (__irq_svc+0x3b/0x5c)
[ 3592.197024] Exception stack(0xdf74dec8 to 0xdf74df10)
[ 3592.202568] dec0: de7f1e50 de3786c0 00000000 de7f1e54 de7f1e50 de7f1e50
[ 3592.211507] dee0: de378748 ffffffff de3786c0 de37871c 00000010 df016450 df74dee8 df74df10
[ 3592.220470] df00: c00e3171 c00e2d16 80000033 ffffffff
[ 3592.226025] [<c04cea9b>] (__irq_svc+0x3b/0x5c) from [<c00e2d16>] (fsnotify_put_mark+0xa/0x34)
[ 3592.235380] [<c00e2d16>] (fsnotify_put_mark+0xa/0x34) from [<c00e3171>] (fsnotify_clear_marks_by_group_flags+0x5d/0x74)
[ 3592.247195] [<c00e3171>] (fsnotify_clear_marks_by_group_flags+0x5d/0x74) from [<c00e2869>] (fsnotify_destroy_group+0x9/0x24)
[ 3592.259493] [<c00e2869>] (fsnotify_destroy_group+0x9/0x24) from [<c00e3c91>] (inotify_release+0x1d/0x20)
[ 3592.269880] [<c00e3c91>] (inotify_release+0x1d/0x20) from [<c00bbb69>] (__fput+0x65/0x16c)
[ 3592.278939] [<c00bbb69>] (__fput+0x65/0x16c) from [<c004323d>] (task_work_run+0x6d/0xa4)
[ 3592.287811] [<c004323d>] (task_work_run+0x6d/0xa4) from [<c000f3eb>] (do_work_pending+0x6f/0x70)
[ 3592.297458] [<c000f3eb>] (do_work_pending+0x6f/0x70) from [<c000c893>] (work_pending+0x9/0x1a)
[ 3592.306894] drm_kms_helper: panic occurred, switching back to text console
[ 3624.208005] CAUTION: musb: Babble Interrupt Occurred
[ 3624.280120] CAUTION: musb: Babble Interrupt Occurred
[ 3624.367738] gadget: high-speed config #1: Multifunction with RNDIS
[ 3629.389807] CAUTION: musb: Babble Interrupt Occurred
[ 3629.464524] CAUTION: musb: Babble Interrupt Occurred
[ 3629.552870] gadget: high-speed config #1: Multifunction with RNDIS
}}}
|
= Examples of LTP analysis =
== ARC LTP instructions ==
See https://github.com/foss-for-synopsys-dwc-arc-processors/ltp/blob/master/README.ARC
|
It lists things like:
* tests that don't build with the ARC toolchain
* tests that use BSD signal calls, which are not configured for ARC by default
* tests that require different default parameters because they take too much time on an embedded system
* tests that hang the system
* tests that are not applicable to ARC, but fail, so should be disabled
|
It has a section of notes indicating requirements for the tests, including:
* tests that require a loopback device
* kernel config options (that enable a loopback device)
* binaries required (util-linux, e2fsprogs, bash)
* the buildroot config options to enable those binaries
|
= Examples of LTP visualization =
== Lineo ==
Lineo has some nice color-coded tables with LTP results:
http://www.lineo.co.jp/ltp/linux-3.10.10-results/result.html
|
|
= Bugs or issues =
* rmdir05 - has some tests that are not implemented for Linux (TCONF)
* f00f - only applies to i386 (TCONF)
* select01,2,3,4 - on bbb, expects a different C library than is present
* expects GLIBC_2.15 but bbb has GLIBC_2.13
* access02, access04 - requires root
|
|
= Questions =
* Q: Where do parameters for test command line come from?
* A: LTP docs say it comes from the runtest directory (see runtest/syscalls)
* runtest/syscalls fork13 entry has: "fork13 fork13 -i 1000000" (which is too long on bbb)
* A: ltp_target_run.sh runs runltp, which runs ltp-pan with the name of the "scenario" file - scenario files provide functionality similar to Fuego's testplan files. The scenario files are found in the 'runtest' directory, and specify a 'tag' and a string of items to pass to a shell invocation.
* (usually this is the command program name and the arguments, but it can include multiple commands.)
|
|
= Environment variables used by runltp, ltp-pan, and tests =
Here is a list of environment variables that runltp exports:
* LTPROOT
* TMPBASE
* LTP_DEV_FS_TYPE
* INJECT_FAULT_LOOPS_PER_TEST
* INJECT_KERNEL_FAULT_PERCENTAGE
* TMPTEMPLATE
* TMP
* TMPDIR
* RHOST
* PASSWD
* LTP_BIG_DEV
* LTP_BIG_DEV_FS_TYPE
* LTP_DEV
* LTP_EXIT_VALUE
* TEST_START_TIME
* TEST_END_TIME
* TEST_OUTPUT_DIRECTORY
* TEST_LOGS_DIRECTORY
|