|
{{TableOfContents}}
|
Here is a collection of test result formats from different programs:
|
= kernelCI =
* kernelCI
* see https://api.kernelci.org/schema-test.html
* and https://api.kernelci.org/collection-test.html
|
* overview:
* test_suite:
* version
* _id:
* name:
* created_on: date
* lab_name:
* time: duration
* job:
* kernel:
* defconfig
* defconfig_full:
* arch:
* board:
* board_instance:
* job_id:
* build_id:
* boot_id:
* test_set: list of test sets (can be more than one)
* version:
* _id:
* name:
* created_on: date
* time: duration
* test_suite_id
* test_case - list of test cases
* definition_uri, vcs_commit
* metadata
* parameters: key-value pairs used during the test
* test_job_url:
* test_job_path:
* test_job_id:
* defects: the defects associated with the test set
* test_case: list of test cases (can be more than one)
* version:
* _id: oid
* name:
* created_on: date
* test_set_id:
* test_suite_id:
* parameters: key-value pairs used during the test
* status:
* time: duration
* definition_uri, vcs_commit
* metadata
* measurements: list of measurements
* name
* time
* unit
* measure
* minimum:
* maximum:
* samples:
* samples_sum:
* samples_sqr_sum:
* attachments:
* metadata:
* definition_uri:
* vcs_commit:
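|
To make the field list above more concrete, here is a small hand-written sketch of what a single test_case document might look like (the values are invented for illustration; the authoritative schema is at the links above):
{{{#!YellowBox
{
    "version": "1.0",
    "name": "example-test-case",
    "created_on": "2017-08-09T21:48:52",
    "test_set_id": "<oid of the parent test set>",
    "test_suite_id": "<oid of the parent test suite>",
    "parameters": {"iterations": "10"},
    "status": "PASS",
    "time": 0.35,
    "measurements": [
        {"name": "latency", "unit": "ms", "measure": 12.4}
    ],
    "metadata": {},
    "definition_uri": "<URI of the test definition>",
    "vcs_commit": "<commit id of the test definition>"
}
}}}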
|
|
------
|
= avocado =
* avocado
* see http://avocado-framework.readthedocs.io/en/53.0/ResultFormats.html
* by default, the machine-readable output format for avocado is xunit
* avocado can also output in its own JSON format (but it doesn't claim that it will be a standard)
|
Kinds of results:
* see http://avocado-framework.readthedocs.io/en/53.0/WritingTests.html
* PASS - test passed
* WARN - test passed, but something occurred which may need review
* SKIP - test was skipped
* CANCEL - test was canceled before execution
* FAIL - test failed
* ERROR - test encountered an unexpected error (not a failure)
* INTERRUPTED - test was interrupted (timeout or ctrl-C) (set by the framework, not by the test)
|
Nomenclature:
* whiteboard = place for a test to save information that might be used by other tests or the system
* job id = sha1 hash
* variant id = like a spec
* test id = like our test name
|
|
------
|
= xunit =
* xunit
* see http://help.catchsoftware.com/display/ET/JUnit+Format
* and a schema at: https://gist.github.com/erikd/4192748
|
|
------
|
= junit =
* junit
* see http://llg.cubic.org/docs/junit/
* overview:
* testsuites: summary of status from all testsuites
* disabled, errors, failures, tests: counts of tests with these conditions
* name:
* time: duration in seconds
* testsuite: one testsuite instance (can appear multiple times)
* disabled, errors, failures, tests, skipped: counts of these conditions
* name: name of the test suite
* hostname:
* id:
* package:
* time: duration
* timestamp: start time
* properties: list of properties
* property: name, value: values used during test
* testcase: can appear multiple times
* name
* assertions: count of assertions
* classname:
* status
* time
* skipped: message:
* error: message:
* failure: message:
* system-out: stdout text
* system-err: stderr text
* system-out: stdout text
* system-err: stderr text
|
Here's another presentation of the format:
{{{#!YellowBox
<testsuites> => the aggregated result of all junit testfiles
<testsuite> => the output from a single TestSuite
<properties> => the defined properties at test execution
<property> => name/value pair for a single property
...
</properties>
<error></error> => optional information, in place of a test case (e.g. if tests could not be found, etc.)
<testcase> => the results from executing a test method
<system-out> => data written to System.out during the test run
<system-err> => data written to System.err during the test run
<skipped/> => test was skipped
<failure> => test failed
<error> => test encountered an error
</testcase>
...
</testsuite>
...
</testsuites>
}}}
|
Kinds of results:
* error = unanticipated problem with test (different from failure)
* failure = test failed (condition that was tested resulted in wrong answer)
* skipped = test was skipped
* <nothing> = if a testcase is listed, but does not have a qualifying child element of error, failure or skipped, then it passed.
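|
For illustration, here is a small hand-written example document (not taken from any particular tool) showing a passed, a failed, and a skipped testcase; note that the passing testcase simply has no child element:
{{{#!YellowBox
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0">
  <testsuite name="example.suite" tests="3" failures="1" errors="0" skipped="1"
             hostname="myboard" timestamp="2017-08-09T21:48:52" time="0.12">
    <testcase name="test_one" classname="example.suite" time="0.04"/>
    <testcase name="test_two" classname="example.suite" time="0.05">
      <failure message="expected 0 but got 2">assertion details go here</failure>
    </testcase>
    <testcase name="test_three" classname="example.suite" time="0.03">
      <skipped message="not supported on this board"/>
    </testcase>
    <system-out>captured stdout for the whole suite</system-out>
    <system-err></system-err>
  </testsuite>
</testsuites>
}}}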
|
Nomenclature:
* assertions = number of conditions checked during the test
|
|
------
|
|
= LTP =
* Each of the tests in the "main" body of LTP produces a report that looks like the following:
{{{#!YellowBox
unlink07 1 TPASS : unlink(<non-existent file>) Failed, errno=2
unlink07 2 TPASS : unlink(<path is empty string>) Failed, errno=2
unlink07 3 TPASS : unlink(<path contains a non-existent file>) Failed, errno=2
unlink07 4 TPASS : unlink(<address beyond address space>) Failed, errno=14
unlink07 5 TPASS : unlink(<path contains a regular file>) Failed, errno=20
unlink07 6 TPASS : unlink(<address beyond address space>) Failed, errno=14
unlink07 7 TPASS : unlink(<pathname too long>) Failed, errno=36
unlink07 8 TPASS : unlink(<negative address>) Failed, errno=14
}}}
|
* ltprun produces more data about each test that is run, like so:
|
|
{{{#!YellowBox
<<<test_start>>>
tag=unlink07 stime=1502315351
cmdline="unlink07"
contacts=""
analysis=exit
<<<test_output>>>
unlink07 1 TPASS : unlink(<non-existent file>) Failed, errno=2
unlink07 2 TPASS : unlink(<path is empty string>) Failed, errno=2
unlink07 3 TPASS : unlink(<path contains a non-existent file>) Failed, errno=2
unlink07 4 TPASS : unlink(<address beyond address space>) Failed, errno=14
unlink07 5 TPASS : unlink(<path contains a regular file>) Failed, errno=20
unlink07 6 TPASS : unlink(<address beyond address space>) Failed, errno=14
unlink07 7 TPASS : unlink(<pathname too long>) Failed, errno=36
unlink07 8 TPASS : unlink(<negative address>) Failed, errno=14
<<<execution_status>>>
initiation_status="ok"
duration=0 termination_type=exited termination_id=0 corefile=no
cutime=0 cstime=0
<<<test_end>>>
}}}
|
The data from several different tests is consolidated into a single
file by ltprun. There is one line per test program:
{{{#!YellowBox
startup='Wed Aug 9 21:48:52 2017'
tag=access01 stime=1502315332 dur=0 exit=exited stat=32 core=no cu=0 cs=0
tag=access03 stime=1502315332 dur=0 exit=exited stat=32 core=no cu=0 cs=0
tag=alarm01 stime=1502315332 dur=0 exit=exited stat=0 core=no cu=0 cs=0
tag=alarm02 stime=1502315332 dur=0 exit=exited stat=0 core=no cu=0 cs=0
tag=alarm03 stime=1502315332 dur=0 exit=exited stat=0 core=no cu=0 cs=0
tag=asyncio02 stime=1502315332 dur=0 exit=exited stat=0 core=no cu=0 cs=0
tag=brk01 stime=1502315332 dur=0 exit=exited stat=0 core=no cu=0 cs=0
tag=chdir02 stime=1502315332 dur=0 exit=exited stat=0 core=no cu=0 cs=0
tag=chmod02 stime=1502315332 dur=0 exit=exited stat=0 core=no cu=0 cs=0
...
}}}
|
Key:
* tag = program name or test name
* stime = start time
* dur = test duration in seconds
* exit = how the program terminated (e.g. exited)
* stat = exit code of the program
* core = was a 'core' file produced (did the program fault)
* cu = user time of program
* cs = system time of program
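|
As a quick illustration of how this line-oriented format can be consumed, here is a small Python sketch (not part of LTP; the function name and file path are made up) that turns each tag= line of such a file into a dictionary:
{{{#!YellowBox
# Minimal sketch: parse ltprun summary lines like
#   tag=alarm01 stime=1502315332 dur=0 exit=exited stat=0 core=no cu=0 cs=0
# into dictionaries.  Lines that don't start with "tag=" (e.g. startup=...)
# are skipped.
def parse_ltp_summary(path):
    results = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line.startswith("tag="):
                continue
            # each field is a key=value pair separated by whitespace
            results.append(dict(item.split("=", 1) for item in line.split()))
    return results

# Example usage:
#   for r in parse_ltp_summary("results.log"):
#       print(r["tag"], r["exit"], r["stat"])
}}}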
|
LTP Kinds of results:
; TPASS: test passed
; TFAIL: test case had an unexpected result and failed.
; TBROK: remaining test cases are broken and will not execute correctly, because some precondition was not met, such as a resource not being available.
; TCONF: test case was not written to run on the current hardware or software configuration, such as machine type or kernel version.
; TRETR: test case is retired and should not be used
; TWARN: test case experienced an unexpected or undesirable event that should not affect the test itself, such as being unable to clean up resources after the test finished.
; TINFO: provides extra information about the test case - does not indicate a problem.
|
|
= TAP =
TAP stands for "Test Anything Protocol", and it defines a simple output format
for tests that can be parsed easily by lots of different tools.
|
The specification is here:
* see https://testanything.org/tap-version-13-specification.html
|
Other resources:
* Jenkins supports TAP format: https://wiki.jenkins-ci.org/display/JENKINS/TAP+Plugin
* Python has a TAP parser: https://pypi.python.org/pypi/tap.py
|
Kinds of results:
* ok - test passed (may have SKIP on line)
* not ok - test failed
|
Nomenclature:
* Bail out - the test run was aborted before finishing (could be due to missing prerequisites)
* SKIP - test was skipped due to missing prerequisites
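|
Putting the pieces above together, a minimal hand-written TAP stream (for illustration only) might look like:
{{{#!YellowBox
TAP version 13
1..4
ok 1 - feature A works
not ok 2 - feature B works
ok 3 - feature C works # SKIP missing dependency
Bail out! required hardware not present
}}}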
|
|
= autotest =
|
|
= ctest =
Someone commented that ctest output is good, but I can't find a specification
for the output format.
|