Hello everybody,
With the ideas and suggestions I received from you all, I have come up with a proposal for the project of improving the llvm-test testsuite. I am posting it here; if you have any feedback (anything to be added or anything to be removed), please let me know so that I can improve the proposal before I upload it to the Google web app. Thanks in advance!
Proposal for Summer of Code 2008, Rajika Kumarasiri
1. Project title:
Extend the llvm-test testsuite to include new programs and benchmarks, and provide a web service interface to run a test build remotely.
2. Synopsis:
LLVM is a compiler infrastructure consisting of a collection of libraries and tools for a virtual machine, with a GCC front end through which C/C++ programs can be compiled into LLVM bitcode.
The aim of this project is to improve the testsuite of LLVM, which is an extremely useful task. The testsuite will be extended with new programs and benchmarks. The new test programs will become part of the existing testsuite, with the ability to check new features and correctness as well as to serve as benchmarks. The test cases will reproduce any failures that occur when LLVM fails to build third-party code.
Test programs will be either small, independent ones (located in llvm/test) that check correctness and features, or large whole programs (located in llvm/project/llvm-test) that serve as benchmarks. In both cases the tests will be CPU intensive (to get the most out of machine time) and will have few library dependencies (to avoid version-mismatch problems and to run on a wide range of platforms).
At the other end, the project includes a web service interface through which a user can submit a test build for a particular target and get the resulting output.
3. Deliverables:
1.Individual test programs in source code format.
2.A small HOWTO associated with each and every test program (covering how to use it, which input parameters to supply, and the expected outputs). This is the documentation associated with each test case.
3.A web service interface in which a user can submit a test and receive the result.
4.A small HOWTO which describes the inputs, the outputs, and how to use the web service interface.
5.Documentation to support the continuation of the project (if required).
4. Benefits to the LLVM community
After successful completion of the project, the following will be available to the LLVM community.
1.An improved test suite with new benchmark programs.
2.A web service interface through which a user can submit a test build and get the result.
5. Project Description:
My work on this project will cover two parts.
The first part is the improvement to be added to the LLVM test suite. The LLVM test suite comes in two forms: a set of independent code fragments (under llvm/test) and whole programs (under llvm/project/llvm-test). Having a good test suite for LLVM is extremely important, since it gives broad coverage of programs and makes it possible to spot and fix problems in the compiler. The implementation details, in abstract form, are as follows.
1.Compile the benchmark programs with the LLVM compiler.
2.If this succeeds, update the relevant makefiles in llvm/project/llvm-test to include the new benchmark in the LLVM test suite.
3.If this fails, derive a small test case program (rewrite one or modify an existing one; this depends entirely on the benchmark program used) that reproduces the failure, and add it to the test suite. Extend the build system (modify the makefiles in llvm/test and llvm/project/llvm-test) to include the new test cases in the LLVM testsuite.
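The makefile update in step 2 usually amounts to adding a short per-program makefile that names the program and points back at the shared build rules. Something along these lines; the exact LEVEL depth, variable names, and included makefile vary by directory and LLVM version, so treat this as a hypothetical template rather than a literal file from the tree:

```make
# Hypothetical makefile for a new whole-program benchmark directory
# under llvm/project/llvm-test.  LEVEL points back at the testsuite
# root; PROG names the program to build; RUN_OPTIONS supplies the
# (appropriately reduced) input data set.
LEVEL = ../../..
PROG = example-benchmark
RUN_OPTIONS = small-input.dat
include $(LEVEL)/Makefile.programs
```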
For example, consider BioPerf. I would take the existing benchmarks and update their makefiles appropriately so that they build as part of the whole-program testsuite (in llvm/project/llvm-test). If BioPerf fails to build with LLVM, I would need to derive a small test case (modify an existing one or write a new one) to reproduce the failure. These independent tests are then added to llvm/test together with the relevant updates to the makefiles of the build system.
When developing benchmarks, the input data set must be adjusted appropriately so that the benchmark does not take hours to run, yet runs long enough for performance changes to be noticeable.
My second piece of work on the project involves developing a web service interface so that a user can submit a test build and get the result remotely. Once a user submits a test build for a specific target, the build result (including any deviation from that target) will be returned.
Having this kind of web service interface is very useful, since it avoids the need to maintain two trees of the code base for testing purposes. In addition, if this web service is exposed to the llvm-user community, users will also get the benefit of running a test build remotely for a particular target.
Implementation of the web service is straightforward: the user of the web service will be given a client (which carries the payload to the service), developed using the client API, and it will communicate with the service (which actually runs the test suite), developed with the service API of the Apache Axis2/C web services framework. Axis2/C is written in C (giving us the ability to embed it into LLVM easily) and is released under the Apache License (giving us the ability to ship it with LLVM).
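To make the interface concrete, the request and response payloads could be simple XML documents. The element names below are my own invention, purely to illustrate the shape of the exchange; the real schema would be designed in Step 1:

```xml
<!-- Hypothetical request: which target and revision to build and test. -->
<testBuildRequest>
  <target>x86_64-unknown-linux-gnu</target>
  <revision>HEAD</revision>
</testBuildRequest>

<!-- Hypothetical response: overall status plus any deviations found. -->
<testBuildResult>
  <status>FAIL</status>
  <deviation test="MultiSource/Benchmarks/example">build error</deviation>
</testBuildResult>
```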
6. Project Plan:
I have broken the project down into four steps, as follows. Weekly status updates will be posted to the llvm-dev list.
Step 1: Initial Planning and Design
In this step I will look into the build system of the LLVM testsuite and how a simple test can be written for it. I will also look into how a C/C++ program can be compiled with the LLVM compiler, and I will design the web service interface (including its inputs, outputs, etc.).
Deliverable: Prototype test case.
Estimated Completion: 21st May 2008
Step 2: Implementation of Initial Test Suite Programs
I will focus on implementing the testsuite programs in this step. The priority given to each benchmark program will depend on factors such as whether it is pointer intensive, memory intensive, etc. I hope to add as many programs from the list to the test suite as I can within this time.
Deliverable(s): Completed test cases from the list and the documentation for the mid-term evaluation.
Estimated Completion: 2nd July 2008
Step 3: Improvements, the Web Service Interface, and the Remaining Benchmark Programs
Modifications or improvements suggested at the mid-term evaluation will be completed in this step. The web service will also be developed, and the remaining benchmarks from the list will be completed.
Deliverable(s): Web service and the test programs for the rest of the list.
Estimated Completion: 3rd August 2008
Step 4: Final Product and Documentation
This step completes the project. The necessary documents will be delivered together with the final product.
Deliverable(s): Final product and documentation.
Estimated Completion: 11th August 2008
Other information about me, including my project work, can be found in my resume.
 - http://nondot.org/sabre/LLVMNotes/#benchmarks
 - http://llvm.org/
 - http://llvm.org/docs/TestingGuide.html
 - http://ws.apache.org/axis2/c/
 - http://www.apache.org/licenses/LICENSE-2.0
 - http://www.cse.mrt.ac.lk/
 - http://rajikacc.googlepages.com/mail_trasnport.tar.gz
 - http://rajikacc.googlepages.com/scanner.tar.gz
 - http://rajikacc.googlepages.com/pretty_printer.tar.gz
 - http://rajikacc.googlepages.com/resume_rajika.pdf