== Ideas from January 2017 ==
* require [[https://pypi.python.org/pypi/pep8|Pep8]] and [[https://pypi.python.org/pypi/pyflakes|pyflakes]] compliance for new Python code in Fuego (see the check sketch after this list)
* rename 'ftc' to 'fuego'?
* use [[http://docs.openstack.org/infra/jenkins-job-builder/|Jenkins Job Builder]] to populate Jenkins jobs for Fuego tests
* allow ftc to invoke a single job phase
* in particular, allow running the build phase, without running other phases
* also, maybe allow running the test_processing and post_test phases, separate from other phases
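|
A minimal sketch of such a compliance check (the paths here are assumptions, not the actual Fuego source layout):
{{{
#!/bin/sh
# hedged sketch: run style and static checks on new Python code
# before committing; adjust the paths to wherever ftc etc. live
pip install --user pep8 pyflakes
pep8 --max-line-length=100 scripts/ftc
pyflakes scripts/ftc
}}}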
|
* add ideas from avocado
* implement a server mode (like avocado server)
* provide list of targets, specs, and tests
* allow scheduling a job on the server
* votes against: Jenkins does this, LAVA does this - this should be lower priority for Fuego
* implement matrix/grid job creation from YAML files of job variations
* output results to multiple formats:
* a standalone HTML page
* xunit output (for consumption by Jenkins)
|
|
== Tim's ideas ==
=== Ideas from 2016-12 ===
* remove 'tarball template' in comments in scripts/functions.sh
* this name is confusing
* some should be $TESTDIR, and others should be something else
|
|
=== Ideas from 2016-11 ===
* make sure all tests use their correct name for the build directory
* I see LTP-poky-qemuarm and netperf-poky-qemuarm and OpenSSL-poky-qemuarm, instead of Functional.* for these tests
* remove need for default testplan and default spec
* edit overlay generator?
* (done) add option to ftc to delay test cleanup (or omit test cleanup), so that I can look at or copy the test directory on the target
* Daniel added Target_PostCleanup flag to system
* document the Target_cleanup flags
* why don't we build bzip2? There's a tar file in the test directory.
* (done) Remove {4} from test log names
* I don't think test logs should have {4} in their names
* this was a Cogent bug - it was supposed to be $4 (for 'y' or 'n')
* Daniel fixed this
* I suspect this is supposed to be a macro. But it's super annoying.
* This should be called a parsed log or something.
* that is, switch '{4}' to 'parsed'
* parsed log file ({4}) for Functional.bzip2 test is empty, but it should have lines for each test.
* need to figure out why the parsed log files are empty
* are all of them empty, or just bzip2s?
* hello_world's parsed log is OK
* benchmark plots get messed up when you have test runs with duplicate test numbers
* this happens when you use the same userdata/logs from one docker container to the next (where the test numbering restarts inside the new Jenkins)
* should find a way to make the logs available from the Jenkins user interface
* use Jenkins artifact system?
* Benchmark.hackbench has the description for ebizzy (copy and paste error)
* automate adding the bbb-poky-sdk board for release automation
* cd ~/work/fuego/release.../fuego/userdata/conf
* cp ~/work/fuego/fuego/userdata/conf/boards/bbb-poky-sdk.board boards
* cp ~/work/fuego/fuego/userdata/conf/config.xml .
* or patch -p1 add-bbb.patch config.xml
* or some command?
|
=== Ideas from 2016-07 ===
* improve the guide
* The guide is pretty sad - it needs a lot more detail
* what does it need? see [[Fuego guide notes]]
* the guide needs some kind of "hello test" instructions
* the guide needs detailed instructions for how to integrate yocto and buildroot sdks
|
----
* add support for serial interface to target boards - see [[Issue_0002]]
|
----
* add support for adb interface to target boards
* this should be trivial for the .board file - but possibly less so for the distro file?
|
----
* convert documentation to asciidoc - see [[Issue_0037]]
|
----
* pre-install test binaries, for multiple architectures
* this should lower the barrier to entry
* there's no need to install toolchains (toolchains are needed for new tests only)
* there's no need to rebuild the tests constantly
|
----
* support using existing test program binaries already on the target
* suggested by Yves Vandervennet
|
----
* make a wrapper script, to avoid duplicating lines in every test job command sequence
* right now, every test has shell lines to make the job directory, and to save last_used_testplan
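|
A minimal sketch of such a wrapper (every name here is an assumption - the real boilerplate lives in each job's command sequence):
{{{
#!/bin/bash
# hypothetical run-wrapper.sh: factor the per-job boilerplate out of
# every Jenkins job command sequence
JOB_DIR=$1; TESTPLAN=$2; shift 2
mkdir -p "$JOB_DIR"
echo "$TESTPLAN" > "$JOB_DIR/last_used_testplan"
exec "$@"    # run the actual test command
}}}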
|
----
* test using ttc
* add target_test as a test
|
----
* write hello test tutorial for qemuarm
|
----
* write hello test tutorial for qemux86 (this is even easier??)
|
----
* get rid of extraneous jenkins-isms in the interface
* e.g. "Build a Maven 2/3 project" on the New Job page - what does this mean? Do I care? It's needless junk in our interface.
|
----
* add test dependencies - see [[Issue 0026]]
|
----
* respond on the mailing list within 24 hours
* response times from Cogent are very bad (over a week, and sometimes not at all)
|
----
* move the project to github? (I don't like the bitbucket hosting)
|
----
* pre_test happens before test_build.
* It should happen after the build; some of the pre_test steps (like dropping caches) could go stale during the build on the host, which can be quite long.
|
----
* make a fuego command line tool
* ability to start a test, stop a test, look at test results
|
----
* ability to use existing tests on the target
* scan for tests and avoid build step in fuego
* see [[Issue
|
|
== Bugs? ==
* if you click on "graph" to see the results of the test, then go "back" in the browser, it re-runs the test again.
* I think this is just a display oddity. It doesn't seem to actually re-run it, but just shows the progress bar for a moment.
|
----
* When working with beaglebone, I had a test failure with the error:
|
{{{
+ sshpass -e ssh -o ServerAliveInterval=30 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=15 -p root@10.0.1.33 \
'mkdir -p /tmp/jta.Benchmark.bc && cd /tmp/jta.Benchmark.bc && \
cat /var/log/messages > bbb.2016-01-26_18-07-01.6.after'
Bad port 'root@10.0.1.33'
+ abort_job 'Error while ROOTFS_LOGREAD command execution on target'
}}}
|
* The error comes from a mis-configured SSH_PORT (if empty, the -p parameter uses the value from the next field, which is the user name and ip address)
* Should not use -p in sshpass command line if SSH_PORT is empty
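|
A hedged sketch of the fix (the ssh invocation is taken from the log above; where exactly it lives in the overlay scripts, and the $cmd stand-in for the remote command, are assumptions):
{{{
# only pass -p when SSH_PORT is non-empty, so an empty value cannot
# swallow the next argument (user@host) as the port number
if [ -n "$SSH_PORT" ]; then
    SSH_PORT_ARG="-p $SSH_PORT"
else
    SSH_PORT_ARG=""
fi
sshpass -e ssh -o ServerAliveInterval=30 -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null -o ConnectTimeout=15 \
    $SSH_PORT_ARG root@10.0.1.33 "$cmd"
}}}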
|
----
* I tried to enter a periodic line of "/5 * * * *" (without the quotes), and the system gave me an error:
* Invalid input: "/5 * * * *": line 1:1: unexpected token: /
* this should be an allowed syntax according to the online help (see the cron note after this list)
* I had to modify the test to cause it to be run on target 'bbb' as a periodic test.
* I set the periodic line to "5,10,15,20,25,30,35,40,45,50,55 * * * *"
* When the test ran, it ran on target "template-dev"
* In order to get it to run on "bbb", I modified the parameters
* I renamed "Device" to "Device_queried" - to remove the query operation
* This was a Dynamic Choice Parameter
* I added a new text Parameter, called "Device", with value "bbb" and description "target".
* It now ran the test every 5 minutes on target bbb
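|
For reference, standard cron syntax spells "every 5 minutes" with a leading '*', which the rejected entry above was missing; Jenkins additionally accepts an 'H' form that spreads load across the hour:
{{{
# every 5 minutes - note the '*/' prefix that "/5 * * * *" lacked
*/5 * * * *
# Jenkins 'H' variant: every 5 minutes, at a hashed offset
H/5 * * * *
}}}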
|
----
* the time and date in the docker container is UTC
* I have had the docker container running for at least overnight
* the current time on my desktop (and the real wall time) is: Tue Jan 26 15:37:58 PST 2016
* the current time in the docker container is: Tue Jan 26 23:38:16 UTC 2016
* This is throwing off my periodic tests, which I want to run in my time zone
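|
One hedged way to address this would be to give the container the desired timezone at startup (the timezone value and the rest of the run command are assumptions):
{{{
# set the container's timezone via the TZ environment variable
# (requires tzdata inside the image)...
docker run -e TZ=America/Los_Angeles ...
# ...or share the host's localtime file read-only
docker run -v /etc/localtime:/etc/localtime:ro ...
}}}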
|
----
* problem running jta container, if user runs another container
* do "docker run hello-world", then jta-host-scripts/docker-start-container.sh, and it will run the hello-world container
* this script runs the latest container, instead of the jta container (it uses 'docker ps -l -q' to get the container id)
* some other method should be used to set the CONTAINER_ID
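|
A hedged sketch of a more robust lookup (the 'jta' container name is an assumption; any fixed name chosen at 'docker run' time would do):
{{{
# give the container a fixed name when it is created...
docker run --name jta ...
# ...then look it up by name instead of taking the most recent container
CONTAINER_ID=$(docker ps -a -q --filter "name=jta")
}}}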
|
----
* Benchmark.lmbench2 fails to build - it is missing the 'cr' command, but this appears to be an artifact of the AR variable not being defined.
{{{
...
arm-linux-gnueabihf-gcc -O -DRUSAGE -DHAVE_uint=1 -DHAVE_int64_t=1 \
-DHAVE_DRAND48 -DHAVE_SCHED_SETAFFINITY=1 -c lib_sched.c \
-o ../bin/arm-linux-gnueabihf/lib_sched.o
/bin/rm -f ../bin/arm-linux-gnueabihf/lmbench.a
cr ../bin/arm-linux-gnueabihf/lmbench.a ../bin/arm-linux-gnueabihf/lib_tcp.o \
../bin/arm-linux-gnueabihf/lib_udp.o ../bin/arm-linux-gnueabihf/lib_unix.o \
../bin/arm-linux-gnueabihf/lib_timing.o ../bin/arm-linux-gnueabihf/lib_mem.o \
../bin/arm-linux-gnueabihf/lib_stats.o ../bin/arm-linux-gnueabihf/lib_debug.o \
../bin/arm-linux-gnueabihf/getopt.o ../bin/arm-linux-gnueabihf/lib_sched.o
make[2]: cr: Command not found
Makefile:237: recipe for target '../bin/arm-linux-gnueabihf/lmbench.a' failed
make[2]: *** [../bin/arm-linux-gnueabihf/lmbench.a] Error 127
make[2]: Leaving directory '/userdata/buildzone/Benchmark.lmbench2-qemu-armv7hf/src'
Makefile:114: recipe for target 'lmbench' failed
make[1]: *** [lmbench] Error 2
make[1]: Leaving directory '/userdata/buildzone/Benchmark.lmbench2-qemu-armv7hf/src'
Makefile:20: recipe for target 'build' failed
make: *** [build] Error 2
+++ build_error 'error while building test'
+++ touch build_failed
+++ abort_job 'Build failed: error while building test'
+++ set +x
|
*** ABORTED ***
|
JTA error reason: Build failed: error while building test
}}}
|
* the problem is a missing AR definition in tools.sh for the arm-linux-gnueabihf platform
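|
A hedged sketch of the likely fix (the exact shape of the tools.sh entries is an assumption, inferred from the build log above):
{{{
# in the arm-linux-gnueabihf section of tools.sh: define AR so that
# lmbench's "$(AR) cr lmbench.a ..." does not degenerate into a bare "cr"
export AR=arm-linux-gnueabihf-ar
}}}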
|
----
* abort of a job takes a long time
* I tried Functional.LTP.Open_Posix on bbb, and it aborted with SDKROOT not defined, but the test console spun for a very long time (it's still spinning as I write this)
* I don't recall test aborts taking this long - maybe this has to do with the URL change?
|
|
== Artemi's items ==
Here is a list that Artemi sent me in October 2015:
* Tim's hardware lab... integration/support of Sony debug board
* Consider synchronization/update to align with latest versions of Jenkins
* Update LTP test suite
* Consider update/synchronize with latest public versions of other test suites (e.g. iperf/netperf and other benchmarks)
* Split into separate repositories and make tests separate from the framework
* Qemu target support
* Consider adding a more elegant/straightforward way of running a separate test on a target (a command-line interface)
|
|
== Other Ideas ==
* Support native installation (deferred)
* this was discussed/proposed for JTA-AGL by Nuohan Qiao of Fujitsu.
* it appears that this was just desired for a single AGL server instance, and is not a generally desired feature
|
Jan-Simon Moller wrote:
{{{#!YellowBlock
Regarding docker installation vs. native - it really depends on your use-case
and existing environment:
* Yes, the docker variant gets you up to speed relatively pain-free and I used
it to create/run my own instance here. Docker/containers in general only
partly fix the problem of updates to the containerized system - apart from
rebuilding and stopping/restarting the container (hopefully with all needed
data still in place).
* Our servers run under CentOS and use puppet (and they might also be VMs
for the given use-case of a hosted AGL-JTA instance). So to keep things
straightforward, we'd like to stay on CentOS and puppet and don't do a
nested setup like CentOS&puppet->docker->debian->Jenkins .
We'd rather skip docker and the debian env.
We have templates that bring-up jenkins, so the modifications needed for
AGL-JTA can end up there.
* Docker and firewalls are a nice science of its own. If you install docker
and e.g. use UFW, then your dockerized hosts are wide-open even with
firewall active.
|
The docker approach is *the* way to go for tests, development and small
labs. I'd even go further and automate the container creation for AGL-JTA and
e.g. standardize on Debian jessie (or Ubuntu 16.04).
|
For production systems, we should be able to follow 'exact steps' to do it on
a physical or virtual host.
}}}
|
----
* support test categories
* proposed by Nuohan Qiao
|
|
== From LCJ LTSI meeting ==
Here is Tim's summary of the items desired in Fuego, from the July 2016
meetings at LinuxCon Japan.
There are two categories of material I would like to list:
The first is the list of ideas I presented at the LTSI workgroup meeting. The second is additional ideas that people discussed with me after the meeting, which I think are worth considering. I also have some short-term action items based on the discussions in Japan, which I'll put at the end.
|
|
=== Ideas Tim presented at LTSI meeting ===
Here is the list of ideas for future work on Fuego that I presented at the LTSI meeting. This is a short list of projects that could be done in the near term.
|
* 1. improve Fuego documentation
* some work has progressed on this since April (ELC), notably the documentation on the fuego wiki at http://fuegotest.org/fuego/FrontPage
* in particular, the shell functions available to the base script should be documented to make sure developers know how to use them properly
* 2. work on additional transports
* support for serial console - see [[Issue 0002]]
* upstreaming the 'ttc' patches (ttc is Sony's board management tool)
* examining the work done by AGL to integrate lava board management into AGL-JTA
* 3. use existing tests on target - see [[Issue 0009]]
* Siemens mentioned this at ELC, and AGL people mentioned this at LCJ
* need a mechanism (minimally) to skip build
* would be nice to add a mechanism to autodetect the test binary on the target
* 4. use latest Jenkins - see [[Issue 0005]]
* this is a high priority to heal the fork between AGL-JTA and Fuego
* 5. decouple the test abstraction from the CI user interface - see [[Issue 0006]]
* work is already in progress to create a command-line test launcher (ftc)
* 6. add more tests
* in particular, would like to add kselftest tests to Fuego - see [[Issue 0007]]
* would be very nice to have a 'kernelci in a box' feature.
* should add test to check for pre-requisites on the target for Fuego operation - see [[Issue 0008]]
|
|
=== Other issues from LCJ ===
Here are other issues raised in the Fuego presentation session, and in hallway conversations at LCJ:
|
* should consider the license for the test results
* there may be legal issues with publishing benchmark data
* would be nice to simplify pushing back changes to the fuego-core repository from inside the fuego docker container
* should have demos or demo videos of how to use the tool
* is it possible to record the bootloader log also?
* would be nice to have dependency checking, so that a test which needs a particular kernel config, or some other feature on the target, can detect whether those are present before trying to run. Debugging tests that fail due to missing dependencies is difficult.
|
|
=== More CEWG issues ===
Here are some other CEWG issues discussed at the LTSI meeting:
- Cogent is not very responsive - should we fund (or find) a dedicated maintainer?
- Does the Linux Foundation (CE group) have funding for Fuego projects?
|
|
=== Action Items from July 2016 ===
- I want to get more information about how AGL-JTA is using lava to manage target boards
- determine status of CELP funding
- make dedicated mail list for fuego
- take over maintainer job for fuego
- get list of people working on fuego/jta
- prioritize and start working on the roadmap items
- determine next outreach opportunity
- I have a CELP booth reserved for ELCE, and plan to add Fuego to the technical showcase
|
The 'list of people' action item is based on my losing track of the various people at LCJ who talked to me about Fuego.
I'd like to get a list of the people actively working on it. Possibly, creating the mailing list will help.
|
|
== Resolved ==
* Benchmark.uptime doesn't show the plot on the test page (it says "NO DATA")
* I've done something wrong, but I don't know what it is. Its config.xml is pretty close to that of Benchmark.Dhrystone, which works.
* this was my error. I didn't modify /home/jenkins/logs/tests.info with a line for my benchmark
|
----
* when creating a new board, the environment variable "DISTR" was set to distribs/nologger.dist, which caused a missing "DISTRIB" message
* it appears that this environment variable name should be DISTRIB
* in the release I'm testing in January 2016 (jta-core last commit Nov 30 2015, and jta-public last commit Nov 30 2015)
* I was making userdata/conf/boards/bbb.board (copied from template-dev.board) and neither has either DISTR or DISTRIB.
* adding DISTRIB="distribs/nologger.dist" to bbb.board still resulted in the error
* This doesn't go in the board file, it goes in the jenkins target configuration (/userdata/conf/config.xml)
* need to send a patch for template-dev in this file (does it come from jta-core or jta-public?)
|
----
* in the file /home/jenkins/tests/Benchmark.bc/bc-scripts/bc-device.sh, it sets BC_EXPR1=$1 and also!! BC_EXPR2=$1
* Shouldn't that be BC_EXPR2=$2 ??
* (RESOLUTION) These were renamed to FUNCTIONAL_BC_EXPR and FUNCTIONAL_BC_RESULT
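|
For the record, the suspected one-character fix (before the rename) would have been:
{{{
# bc-device.sh: take the expression from $1 and the expected
# result from the *second* positional parameter
BC_EXPR1=$1
BC_EXPR2=$2
}}}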
|
----
* in the file /home/jenkins/tests/Benchmark.bc/bc-script.sh, the function test_run has the line: report "cd $JTA_HOME/jta.$TESTDIR; ./bc-device.sh $BENCHMARK_BC_EXPR1 $BENCHMARK_BC_EXPR1"
* Shouldn't that second parameter be $BENCHMARK_BC_EXPR2 (!!)
* in my log I was seeing both arguments written as "2+2", although I was using the default bc spec.
|
----
* the sample specs for the BC example seem to have poor names
* instead of the specs being named bc_exp1 and bc_exp2, they should be bc_mult and bc_add, to describe what they do.
* this is especially true since exp1 and exp2 are used inside the scripts themselves to indicate the first and second expressions to evaluate
* (RESOLUTION) The spec names are now bc-mult and bc-add, and the testplans are: testplan_bc_add and testplan_bc_mult.
|
----
* get a new name - JTA is OK, but how about:
* JELTS - Jenkins-based Embedded Linux Test System
* FELT - Framework for Embedded Linux Testing ("It's smooth")
* Fuego - no meaning, just a cool name (reference to Tierra del Fuego, a penguin habitat) (Spanish for "fire")
* This is my favorite.
* It looks like there's already a "fuego" package for linux (C++ libraries for 'Go' programs??)
* testbook - combination of "test" and "facebook" - already used in usb-testbook - see http://testbook.net/usb-trial.html
* testbox - combination of "test" and "box" (which is popular for embedded linux tools - busybox, toybox, etc.)
|