Fuego guide notes in split format

Here is feedback on the Fuego users guide.
The guide has an overall good structure, but it is missing important details that explain the system and help people do basic tasks.
The user's guide is at:

== Tasks overview ==
 * install the system
  * connect to the system interface
 * architecture overview
 * goals overview
 * review of all screens and options
  * see list of targets
  * see list of tests
 * run a simple test (to test the system and see how it operates)
  * should support some qemu target (qemuarm?) for examples
  * see results from a test
  * see results from a series of tests
  * see test results visualization
 * add their own hardware target
 * add their own test
  * adding a new simple test
  * adding an existing test program
   * adding a program that requires compilation
    * adding a toolchain and cross-build environment
 * share their test configuration
 * share their new test
 * share their test results

== Issues/Feedback ==
Here are some issues with the guide:
 * I don't like the fonts.  Using Acroread on Ubuntu 12.04, the font was very thin and I had a hard time reading it.  Is it possible to use a heavier-weight font?  Or is this an Acroread problem? (Tim: it appears to be an Acroread problem; in Document Viewer, the fonts are a bit better, but it's still a bit hard to read.)
 * The use of dark red boxes for inter-document links makes the table of contents hard to read.  Can this be changed to something more subtle? (like a different color or just an underline?)
 * Throughout the document, there are subtle misuses of English, and a number of typos. Sometimes, articles (like 'a', or 'the') are missing.  I will try to send patches to fix the ones I found.
Some basic architecture and design information is missing from the document.
Here are some questions that should be answered:
 * what are the steps performed during a test?
 * for each step, is it performed on host or on target?
  * there seem to be the steps pre_test, build, deploy, test_run, and test_processing (from test.sh)
  * versions of these steps for a specific test appear to be defined in the <test>-script.sh file.
 * what does "assert_define" mean?
  * how are the variable definitions transferred from the host to the target?
 * what does "report" mean in the test_run function?
  * how are "cmd", "put" and "get" translated into the ov_transport_* functions?
 * what is the full list of commands you can put in your test_run function?
  * apparently you can put "target_reboot" in there (see tests/Benchmark.reboot/)
 * what is the language or conventions used in the reference.log file?
  * is this a JTA-ism or a Jenkins-ism?
  * Comment in Benchmark.reboot/reference.log seems to indicate that the threshold values can be generated automatically - is this true?
   * if so, please document how to do this
  * what operators are allowed in the second field?  (it looks like at least ge, and le, for "greater than or equal" or "less than or equal")
  * can any variable name be used in the first field?
 * the overall directory structure should be documented.
  * some parts of the bc test are in tests/Benchmark.bc, while the specs are in overlays/test_specs
  * what are buildzone, work, etc. used for?
 * is there a way to have the system retain intermediary files (to turn off test cleanup, so you can look at the files on target to see if you got what you were expecting there)?
  * Yes - set the value of Target_PostCleanup to 'false'
 * if people are going to write their own parser.py programs, they need to know what process_data() does.
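Based on the step names collected above, a hypothetical <test>-script.sh might look like the sketch below. The `put` and `report` helpers are assumptions taken straight from the questions in this list, not verified against the Fuego source; the real function names and signatures may differ.

```shell
# Hypothetical sketch of a <test>-script.sh, using the step names from
# these notes: build, deploy, test_run, test_processing.  The 'put' and
# 'report' helpers are assumptions from the questions above.

build() {
    # cross-compile the test program on the host
    echo "building test program"
}

deploy() {
    # copy the test binary from host to target
    # ('put' is assumed to be a host-to-target copy)
    put mytest /tmp/fuego.mytest/
}

test_run() {
    # run the test on the target; 'report' is assumed to capture
    # the command's output into the test log
    report "cd /tmp/fuego.mytest && ./mytest"
}

test_processing() {
    # parse the captured log for a pass string
    grep -q "OK" test.log && echo "PASS" || echo "FAIL"
}
```

If the guide documented a skeleton like this, most of the questions above (which step runs where, what `report` does, which helpers exist) would answer themselves.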

=== intro ===
 * What does "it does not impose any demands on board or distributions" mean?

=== install (chapter 2) ===
 * why should the user build the container?
  * Shouldn't we do that, and just let the user customize it??
  * answer: when docker builds a container, it records information about the host it was built on (such as network interfaces, proxies, etc.)
 * 2.2 should specifically give the URL for the local web interface: http://localhost:8080/
 * I don't understand this fuego-ro,fuego-rw,fuego-core stuff.  I assume these are inside the container.
  * where is this stuff outside the container? - in ~/work/fuego/fuego-ro?
 * when you start the container (when you run docker-start-container.sh) you are put inside the container at a shell prompt, running as the root user (for the container)
 * The README mentions connecting to the container with an ssh server running on port 222.  What is the command to connect to the container?
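Presumably something like the following, assuming the container's sshd is forwarded to port 222 on the host; the port number is from the README, but the user name and host are guesses (the notes say you run as root inside the container):

```shell
# Unverified guess at the connection command: port 222 comes from the
# README; 'root@localhost' is an assumption.
FUEGO_SSH="ssh -p 222 root@localhost"
echo "connect with: $FUEGO_SSH"
```

Whatever the correct invocation is, the guide should state it explicitly.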
 * should we document the default FUEGO_ENGINE_PATH?
  * can the end-user change this, or is this something that the Debian Jenkins package expects?
 * Are the 'fuego-*' directories for putting stuff into the container or getting stuff out of the container?
 * Do I need to restart the container if I change userdata/conf/config.xml (the Jenkins configuration script?)?
 * 2.3.1 - OK, there's a yocto layer for building toolchains for the test programs.  How do I build the tools and install them into the container?

=== board_config (chapter 3) ===
 * I don't understand what a front-end Jenkins entity is.  Is this a process on the host?
  * So, to add a new target, you define a .board file, and then create a Target in the Jenkins interface?
 * the board config overlay is confusing.  I haven't been introduced to the notion of overlays yet.
  * this needs more introduction or explanation
 * template-dev should be called "template-board" or "board-template" or "board-skeleton", IMHO
 * I created a Jenkins target, and re-wrote the BOARD_OVERLAY environment variable to be boards/beaglebone-black.board - is this right?
  * Jenkins didn't create the .board file for me?  Do I do this manually?  If so, this should be described in the doc.
  * I manually copied template-dev.board to beaglebone-black.board in /home/jenkins/boards - is this right?
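The manual steps described above could be captured in a small helper like this sketch; the boards directory path and the BOARD_OVERLAY value come from these notes, and whether Jenkins expects exactly this layout is unverified:

```shell
# Sketch of the manual board-setup steps described above; paths and the
# BOARD_OVERLAY value are taken from these notes, not from the guide.
new_board() {
    # usage: new_board <boards-dir> <board-name>
    local dir=$1 name=$2
    # copy the template board file to a new name
    cp "$dir/template-dev.board" "$dir/$name.board"
    # remind the user of the Jenkins-side step
    echo "set BOARD_OVERLAY=boards/$name.board in the Jenkins target"
}

# example: new_board /home/jenkins/boards beaglebone-black
```

If this is the intended workflow, the guide should spell out both halves: the file copy and the corresponding Jenkins target configuration.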

=== running_tests (chapter 4) ===
=== adding_test (chapter 5) ===
=== overlays (chapter 6) (Base classes and overlays) ===
=== testplans (chapter 7) ===
 * section 7.3 says that specs are read from the overlays/specs directory - but I think this should be overlays/test_specs

=== reports (chapter 8) ===
 * this chapter is empty

=== listings (chapter 9) ===