Continuous Integration Notes

Fuego is intended to be used for continuous integration of a Linux system.

This page documents continuous integration options for testing Fuego itself.

Alternatives

There are several approaches available for implementing CI for Fuego itself:
  • Fuego testplan approach
  • Fuego test approach
  • Jenkins pipeline approach

pros, cons, issues

Fuego testplan

  • requires editing a json file
  • is executed by Jenkins without indicating pass or fail
    • there's no visualization of sub-results
  • already have full management of this on the command line
    • can create Jenkins batch job from a plan
  • is assigned to a single node
    • I want to test on multiple nodes, with different tests per node
      • e.g. fuego-test has fuego tests, and bbb has nightly tests
      • disable some tests on particular nodes
  • baseline is missing for Functional.fuego_compare_reports

Questions:

  • can I control the order of test execution?
    • I think so.
  • can I repeat a test in the list?
  • can I stop when a test fails?
    • no - the jobs are started as post-build actions
  • can I use a job in more than one plan?
    • not with different test settings (reboot, timeouts, cleanup flags, etc.)
    • you can use different specs, though (see the plan sketch below)

Notes:

  • a Jenkins job can start on the completion of another build, so these could be expressed as job dependencies (with upstream and downstream jobs)
    • however, I can't see a way to use a job in more than one plan this way
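
Here's roughly what a plan file looks like (a minimal sketch - the top-level field names are from memory of testplan_default.json, and the per-test flags are my guess at the spelling, so check the actual schema):

     {
         "testPlanName": "testplan_fuego_nightly1",
         "tests": [
             { "testName": "Benchmark.reboot", "spec": "default" },
             { "testName": "Functional.hello_world",
               "spec": "default",
               "timeout": "10m",
               "reboot": "false" }
         ]
     }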

Fuego test

  • is very flexible
    • maybe too flexible - it's imperative rather than declarative, so the test itself can't easily be parsed for meta-analysis
  • language (shell scripting) is already used in Fuego
  • doesn't have visualization?
    • depends on parser and Jenkins visualization
  • already have full management of this on the command line
    • treat it like a regular Fuego test (see the sketch below)
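
A sketch of what such a driver test might look like (hypothetical - it assumes ftc is callable from inside the test, and that $NODE_NAME holds the board name as it does in other Jenkins-launched jobs):

     # hypothetical fuego_test.sh for a Functional.fuego_nightly driver test
     function test_run {
         # run the sub-tests on the board this job is assigned to
         ftc run-test -b $NODE_NAME -t Functional.hello_world -s default
         # unlike a testplan, the order is explicit and we can stop on failure:
         ftc run-test -b $NODE_NAME -t Functional.fuego_test_variables || return 1
     }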

Jenkins pipeline

  • still dependent on Jenkins
  • requires learning yet-another-domain-specific-language
  • has visualization inside Jenkins? (see the sketch below)
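
A sketch of what this might look like as a declarative pipeline (illustrative only - the stage contents assume ftc can be run from the Jenkins workspace):

     pipeline {
         agent any
         stages {
             stage('smoke') {
                 steps {
                     sh 'ftc run-test -b bbb -t Functional.hello_world'
                 }
             }
             stage('nightly') {
                 steps {
                     sh 'ftc run-test -b bbb -t Functional.LTP'
                 }
             }
         }
     }

The Jenkins stage view would then show per-stage pass/fail, which may answer the visualization question above.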

Plan

  • Make one of each and compare difficulty and features
  • DEFER Need to have bbb change systems (kernel?)
  • DEFER Need to have min1 change systems (kernel?)

things that were painful when building release table

See Release_test_results

  • finding the thing that failed for each board
    • not too bad in Jenkins - used Jenkins history and sorted by failure
    • not too bad on the command line - used ftc list-runs and gen-report (see the example after this list)
      • commands are difficult for average user
      • results can't be browsed in the Jenkins interface
    • can't tell aborted test from not-run test
  • finding the reason for the failure for each board
    • it takes a lot of work to drill down and see the reason for a failure
      • have to look at console log and diagnose test failures
        • could be build failure, runtime failure, dependency abort, etc.
  • comparing the failure for this release cycle with the failure for last release cycle, and seeing if the failure is new
    • that's what the CI cycle is supposed to check
    • Functional.fuego_compare_reports is supposed to be useful for that, but there was no baseline to compare with
  • can't see summary table with multiple different boards
  • have to click through to see test history
  • ftc gen-report doesn't scale well - on 50 tests or more, it starts to get really slow
    • no listing of "expected failures" that I can ignore unless I'm interested.
      • could take the test out of the plan, but then can't see if its status changes
      • would be good if 'xfail' had a different color
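
For reference, the command-line flow was roughly:

     $ ftc list-runs          # find the runs for this release
     $ ftc gen-report         # summary report (gets slow past ~50 tests)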

What do I want to see?

Notes on making and using the different CI approaches

fuego testplan approach

Here's what I did to create fuego_nightly.batch:

I created testplan_fuego_nightly1.json and testplan_fuego_nightly2.json by copying from testplan_default.json and testplan_fuego_test.json:

     $ cd fuego-core/engine/overlays/testplans
     $ cp testplan_default.json testplan_fuego_nightly1.json
     $ cp testplan_fuego_test.json testplan_fuego_nightly2.json

Note that I added "reboot":"true" to the Functional.OpenSSL entry, because it follows LTP, and I want the board restarted after that test. I'm not sure this will work, because <board>.default.Functional.OpenSSL likely already exists, and won't be re-created with that flag set on the job.
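
That is, the entry ends up looking something like this (assuming per-test flags go directly on the test entry):

     { "testName": "Functional.OpenSSL",
       "spec": "default",
       "reboot": "true" }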

Then I added jobs for these tests:

     $ ftc add-jobs -b bbb,min1,ren1 -p testplan_fuego_nightly1
     $ ftc add-jobs -b fuego-test -p testplan_fuego_nightly2

Note that this creates the sub-jobs, as well as the batch jobs for fuego_nightly* operations.

Then I created a single "cover" job called fuego_nightly.batch. This job had a post-build action of:

  • bbb.testplan_fuego_nightly1.batch,min1.testplan_fuego_nightly1.batch,ren1.testplan_fuego_nightly1.batch,fuego-test.testplan_fuego_nightly2.batch

I added a build trigger of "Build periodically", with a schedule line of:

  • 0 0 * * *

Another option would have been to schedule each of the <board>.testplan_fuego_nightly* jobs with the schedule "H H(0-7) * * *" (the H token lets Jenkins spread the start times over the given range), instead of making a cover job that runs at midnight.

I changed the schedule to "0 7 * * *", to be 7 am UTC, which corresponds to 11:00 pm on my host machine (Pacific Standard Time).

I added, at the front of testplan_fuego_nightly1, a reference to Benchmark.reboot.

I created a view called 'nightly' so I'd have easy access to the batch jobs.

ideas to do

  • add one-off criteria definition (e.g. xfail for a test or testcase) as a board variable
    • e.g. ftc add-criteria -b bbb "Functional.fuego_test_variables.precedence_of_class_vars_over_env_vars=FAIL"
    • e.g. ftc set-var -b bbb "FUNCTIONS_FUEGO_TEST_VARIABLES_CRITERIA1=Functional.fuego_test_variables.precedence_of_class_vars_over_env_vars=FAIL"
      • not as good - requires the user to know the criteria number
    • see Update Criteria for a possible 'ftc set-criteria' command (in progress)
  • we should record the phase and reason string for an abort in the run.json file (see the sketch after this list)
  • we should allow running a step to translate a regular expression into a short description
    • I understand the Jenkins description setter plugin better now
    • allows you to associate a short string with a particular pattern found in the log
    • is useful for abbreviating the reason for a failing result
    • could collect such patterns from users, and use them to build a database for "reason" parsing
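
For the run.json idea, the recorded abort info might look like this (purely a proposal - the field names are hypothetical):

     {
         "status": "ABORT",
         "abort_phase": "build",
         "abort_reason": "target did not respond after reboot"
     }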
