LLVM Test Results for $DateString

Sections:
Overview
Changes
Trends
Programs
Feature
Regression
Dejagnu Tests

Previous:
$PrevDaysList

Back to:
Test Results
LLVM Page

Today's Test Results Overview

Lines Of Code over Time

Nightly Test Overview:

  • Start: $TestStartTime GMT
  • Finish: $TestFinishTime GMT
  • Platform: $TestPlatform

CVS Tree Overview:

  • CVS Checkout Log
      $NumDirsInCVS dirs, $NumFilesInCVS files, $LOC lines of code, checked out in $CVSCheckoutTime seconds
  • Compilation Log
    Item                    CPU Time           Wall Clock
    Configure CVS Tree      $ConfigTime        $ConfigWallTime
    Build CVS Tree          $BuildTime         $BuildWallTime
    Run Feature Tests       $FeatureTime       $FeatureWallTime
    Run Regression Tests    $RegressionTime    $RegressionWallTime
    Run Dejagnu Tests       $DejagnuTime       $DejagnuWallTime
  • Number of object files compiled: $NumObjects
  • Number of libraries linked: $NumLibraries
  • Number of executables linked: $NumExecutables
  • Build Error: $BuildError

Warnings during the build:

    $WarningsList



Changes from Yesterday

Changes to CVS:

  • Users who committed to CVS: $UserCommitList
  • Users who updated from CVS: $UserUpdateList
  • Added Files: $AddedFilesList
  • Modified Files: $ModifiedFilesList
  • Removed Files: $RemovedFilesList

Changes to Warnings:

  • Warnings Added: $WarningsAdded
  • Warnings Removed: $WarningsRemoved

Changes in the test suite:

  • New Tests: $TestsAdded
  • Removed Tests: $TestsRemoved
  • Newly passing tests: $TestsFixed
  • Newly failing tests: $TestsBroken
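
The added/removed and newly passing/failing lists above, like the warnings added/removed lists in the previous subsection, are simply differences between yesterday's results and today's. Below is a minimal sketch of that comparison in Python, assuming a hypothetical one-test-per-line results format; the file names, the format, and the load_results helper are illustrative assumptions, not part of the actual nightly tester.

    # Illustrative sketch only: the file names and the "name: PASS|FAIL" line
    # format are assumptions, not the nightly tester's real data files.

    def load_results(path):
        """Map each test name to its PASS/FAIL status."""
        results = {}
        with open(path) as f:
            for line in f:
                name, sep, status = line.strip().rpartition(": ")
                if sep:
                    results[name] = status
        return results

    yesterday = load_results("results-yesterday.txt")  # assumed file name
    today     = load_results("results-today.txt")      # assumed file name

    tests_added   = sorted(set(today) - set(yesterday))
    tests_removed = sorted(set(yesterday) - set(today))
    common        = set(today) & set(yesterday)
    tests_fixed   = sorted(t for t in common
                           if yesterday[t] == "FAIL" and today[t] == "PASS")
    tests_broken  = sorted(t for t in common
                           if yesterday[t] == "PASS" and today[t] == "FAIL")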


Changes Over Time

Here are some charts showing how the LLVM optimizer and code generators are changing over time. For now we use the Olden benchmark suite to measure this, but eventually we will switch to using SPEC CPU2000. All programs are run with "LARGE_PROBLEM_SIZE" enabled. Click on any of the charts to get a larger version.
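
As an illustration of how such a trend chart could be assembled, the sketch below appends one aggregate data point per night to a history file and redraws a plot from it. This is only a hedged example: the CSV history file, the matplotlib plot, and the function names are assumptions, not the tooling actually used to generate these charts.

    # Illustrative sketch only: the CSV history file and matplotlib chart are
    # assumptions about one way a per-night trend chart could be produced.
    import csv
    from datetime import date

    import matplotlib.pyplot as plt

    HISTORY = "olden-bytecode-size.csv"  # assumed: one "date,bytes" row per night

    def append_today(total_bytecode_bytes):
        """Record tonight's aggregate measurement."""
        with open(HISTORY, "a", newline="") as f:
            csv.writer(f).writerow([date.today().isoformat(), total_bytecode_bytes])

    def plot_history(png_path="bytecode-over-time.png"):
        """Redraw the trend chart from the accumulated history."""
        with open(HISTORY) as f:
            rows = [(row[0], int(row[1])) for row in csv.reader(f) if row]
        dates, sizes = zip(*rows)
        plt.plot(dates, sizes, marker="o")
        plt.xticks(rotation=45)
        plt.ylabel("Total Olden bytecode size (bytes)")
        plt.title("Size of LLVM bytecode files over time")
        plt.tight_layout()
        plt.savefig(png_path)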

Compilation Measurements:


Size of LLVM bytecode files

Size of native machine code for each program (generated by the JIT)

Time to run the LLVM optimizer on each program

Program Execution Measurements:


Execution time for CBE generated executable

Execution time for the LLC generated executable

Execution time for program in the JIT


Program Tests

This section tests LLVM on a variety of programs in the test suite. This includes benchmark suites such as Olden, McCat, Ptrdist, and SPEC, as well as a few other programs with test inputs. This section is meant to track how stable LLVM is as a whole. A failure in the execution of any test is marked with an asterisk (*). The columns of the tables are:

  1. Program - The name of the program for that row.
  2. GCCAS - The time taken to run the LLVM optimizers on the program.
  3. Bytecode - The size of the bytecode for the program.
  4. Instrs - The number of LLVM instructions in the compiled bytecode.
  5. LLC compile - The time taken to compile the program with LLC (the static backend).
  6. JIT codegen - The amount of time spent in the JIT itself, as opposed to executing the program.
  7. Machine code - The number of bytes of machine code generated by the JIT.
  8. GCC - The time taken to execute the program when compiled with GCC -O2.
  9. CBE - The time taken to execute the program after compilation through the C backend, compiled with -O2.
  10. LLC - The time taken to execute the program generated by the static backend (LLC).
  11. JIT - The amount of time spent running the program with the JIT; this includes the code generation phase (listed above) as well as actually running the program.
  12. GCC/LLC - The speed-up of the LLC output relative to the native GCC output: greater than 1 is a speedup, less than 1 is a slowdown (see the sketch after this list).
  13. GCC/CBE - The speed-up of the CBE output relative to the native GCC output: greater than 1 is a speedup, less than 1 is a slowdown.
  14. LLC-BETA - The time taken to execute the program generated by LLC with new experimental features enabled. This column is temporary, for tuning.
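
The two ratio columns are straightforward to compute from the raw times. A minimal sketch follows, using made-up example values; only the "greater than 1 is a speedup" convention comes from the descriptions above.

    # Illustrative sketch: the times below are example values, not real measurements.

    def ratio(gcc_time, other_time):
        """GCC execution time divided by the other executable's time:
        > 1 means the other output is faster than native GCC output,
        < 1 means it is slower."""
        return gcc_time / other_time

    gcc_time = 10.0   # seconds (example)
    llc_time = 8.0    # seconds (example)
    cbe_time = 12.5   # seconds (example)

    print(f"GCC/LLC = {ratio(gcc_time, llc_time):.2f}")  # 1.25 -> LLC output is a speedup
    print(f"GCC/CBE = {ratio(gcc_time, cbe_time):.2f}")  # 0.80 -> CBE output is a slowdown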

A complete log of testing the SingleSource, MultiSource, and External programs is available for further analysis.

Programs/External

$ExternalProgramsTable

Programs/MultiSource

$MultiSourceProgramsTable

Programs/SingleSource

$SingleSourceProgramsTable


Feature Test Results

$FeatureTestResults

Regression Test Results

$RegressionTestResults

Dejagnu Test Results

$DejagnuTestResults

A complete log of the Feature and Regression tests is available for further analysis.