[Proposal] GSoC 2008 project proposal for improving the llvm-test testsuite

Hello everybody,
With the ideas and suggestions I received from you all, I have come up with a proposal for the project of improving the llvm-test testsuite. I am posting it here; if you have any feedback (anything to be added or removed), please let me know so that I can improve the proposal before I upload it to the Google web app. Thanks in advance!

Proposal for Summer of Code 2008, Rajika Kumarasiri

1. Project title:
Extend the llvm-test testsuite to include new programs and benchmarks[1], and provide a web service interface to run a test build remotely.

2. Abstract:
LLVM[2] is a collection of libraries and tools for code optimization, including a virtual machine and a GCC front end that can compile C/C++ programs into LLVM bitcode.
The aim of this project is to improve the LLVM testsuite[3]. This would be very useful because broader program coverage makes it easier to spot correctness and performance problems in the compiler. The testsuite will therefore be extended with new programs and benchmarks[1]. The new tests will become part of the existing testsuite, checking new features and correctness, and also serving as benchmarks. Where LLVM fails to build the third-party code[1], the test cases will reproduce the failures.
Test programs will be either small, independent ones (located in llvm/test) that check correctness and features, or large whole programs (located in llvm/projects/llvm-test) that serve as benchmarks. In both cases the tests will be CPU-intensive (to get the most out of machine time) and will have few library dependencies (to avoid version-mismatch problems and to run on a wide range of platforms).
At the other end, the project includes a web service interface through which a user can submit a test build for a particular target and receive the resulting output.

3. Deliverables:
1. Individual test programs in source code format.
2. A small HOWTO associated with each test program (covering how to use it, which input parameters to supply, and the expected outputs). This is the documentation for each test case.
3. A web service interface through which a user can submit a test and receive the result.
4. A small HOWTO describing the inputs, outputs, and usage of the web service interface.
5. Documentation to support the continuation of the project (if required).

4. Benefits to the LLVM community:
After successful completion of the project, the following will be available to the LLVM community.
1. An improved test suite with new benchmark programs.
2. A web service interface through which a user can submit a test build and get the result.

5. Overview:
My work in this project will cover two parts.
The first part is the improvement to the LLVM test suite. The LLVM test suite comes in two forms: a set of independent code fragments (under llvm/test) and whole programs (under llvm/projects/llvm-test). Having a good test suite is important for LLVM because it gives wide coverage of programs and makes it possible to spot and fix problems in the compiler. The implementation steps are as follows (in abstract form).
1. Compile the benchmark programs[1] with the LLVM compiler.
2. If this succeeds, update the relevant makefiles in llvm/projects/llvm-test to include the new benchmark in the LLVM test suite.
3. If this fails, derive (rewrite, or modify an existing one) a small test case program (this depends entirely on the benchmark program used[1]) and add it to the test suite. Extend the build system (modify the makefiles in llvm/test and llvm/projects/llvm-test) to include the new test cases in the LLVM testsuite.
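The makefile side of step 2 could be sketched as follows. The fragment assumes the conventions used by the existing program makefiles in the whole-program testsuite (a LEVEL variable pointing back to the testsuite root, a PROG name, and a shared include); the directory, program name, flags, and input file below are purely illustrative:

```make
# Hypothetical Makefile for a new benchmark directory under
# MultiSource/Benchmarks in the whole-program testsuite.
LEVEL = ../../..                 # path back to the llvm-test root
PROG  = newbench                 # illustrative program name
LDFLAGS = -lm                    # libraries the benchmark needs, if any
RUN_OPTIONS = small-input.dat    # reduced input set for reasonable run time
include $(LEVEL)/MultiSource/Makefile.multisrc
```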
For example, consider BioPerf[1]. I would take the existing benchmarks and update their makefiles appropriately so that they build in the whole-program testsuite (in llvm/projects/llvm-test). If BioPerf fails to build with LLVM, I would derive (modify an existing one or write a new one) a small test case to reproduce the failure. These independent tests are then added to llvm/test, together with the relevant updates to the makefiles of the build system.
When developing benchmarks, the input data set must be adjusted appropriately so that a benchmark does not take hours to run, yet takes enough time for performance changes to be noticeable.

My second task on the project involves developing a web service interface so that a user can submit a test build and get the result remotely. Once a user submits a test build for a specific target, the build result (including any deviation from the expected behaviour on that target) will be returned.
Having this kind of web service interface can be very useful, since it avoids the need to maintain two trees of the code base for testing purposes. In addition, if the web service is exposed to the llvm-user community, its members will also be able to run a test build remotely for a particular target.
The implementation of the web service is straightforward: the user will be given a client (which carries the payload to the service), developed using the client API of the Apache Axis2/C web services framework[4], and it will communicate with the service (which actually runs the test suite), developed with the framework's service API. Axis2/C is written in C (so it can be embedded in LLVM easily) and comes under the Apache license[5] (so it can be shipped with LLVM).
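To make the interface concrete, the request and response payloads exchanged between the client and the service might look something like the sketch below. Every element name here is hypothetical; the actual schema would be fixed during the design work in Step 1.

```xml
<!-- Hypothetical request payload (all element names illustrative only) -->
<testBuildRequest>
  <target>x86_64-unknown-linux-gnu</target>
  <suite>MultiSource/Benchmarks/BioPerf</suite>
</testBuildRequest>

<!-- Hypothetical response payload -->
<testBuildResult>
  <status>FAIL</status>
  <deviations>...</deviations>
</testBuildResult>
```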

6. Project Plan:
I have broken the project down into four steps, as follows. Weekly status updates will be posted to the llvm-dev list.
Step1: Initial Planning and Designing
In this step I will study the build system of the LLVM testsuite and how a simple test can be written for it. I will also look into how a C/C++ program is compiled with the LLVM compiler, and I will design the web service interface (its inputs, outputs, etc.).
Deliverable: Prototype test case.
Estimated Completion: 21st May 2008
Step2: Implementation of initial test suite programs
In this step I will focus on implementing the testsuite programs. The priority of the benchmark programs[1] will depend on factors such as whether a program is pointer-intensive, memory-intensive, etc. I hope to add as many programs from the list[1] as I can within this time.
Deliverable(s): Completed test cases from the list[1] and the documentation for the mid-term evaluation.
Estimated Completion: 2nd July 2008
Step3: Improvements and Implementing web service interface and the rest of the benchmarks programs
Modifications or improvements suggested at the mid-term evaluation will be completed in this step. The web service will also be developed, and the rest of the benchmarks from the list will be completed.
Deliverable(s): The web service and the test programs for the rest of the list.
Estimated Completion: 3rd August 2008
Step4: Final Product and Documents
This step completes the project. The necessary documents will accompany the final product.
Deliverable(s): Final product and documentation.
Estimated Completion: 11th August 2008

7. Biography:
I am a final-year computer science and engineering undergraduate at the Department of Computer Science and Engineering, University of Moratuwa, Sri Lanka[6]. I have been involved in open source development around web services: I developed a Firefox extension that introduces a new JavaScript object, WSRequest, which can be used to consume web services; it is free and open source, and the code is available here[7]. I also participated in the GSoC 2007 program and developed part of the mail transport for Apache Axis2/C[8].
I have a great interest in compiler technology and experience in C/C++ programming. I took a one-semester course on compiler theory at the university, and developed a lexical analyzer in C for a subset of the JavaScript language[9] and a pretty printer for a subset of JavaScript in Python[10]. I was looking for a compiler project to work on and found LLVM, and I will keep working on it to contribute to the LLVM community.
Other information about me, including my project work, can be found in my resume[11].

[1] - http://nondot.org/sabre/LLVMNotes/#benchmarks
[2] - http://llvm.org/
[3] - http://llvm.org/docs/TestingGuide.html
[4] - http://ws.apache.org/axis2/c/
[5] - http://www.apache.org/licenses/LICENSE-2.0
[6] - http://www.cse.mrt.ac.lk/
[7] - https://wso2.org/repos/wso2/trunk/wsf/javascript/xpi/
[8] - http://rajikacc.googlepages.com/mail_trasnport.tar.gz
[9] - http://rajikacc.googlepages.com/scanner.tar.gz
[10] - http://rajikacc.googlepages.com/pretty_printer.tar.gz
[11] - http://rajikacc.googlepages.com/resume_rajika.pdf

Hi, sounds like a nice project. Some comments below.

*1. Project title:*
Extend the llvm-test testsuite to include new programs and benchmaraks[1],

benchmaraks -> benchmarks

and give a web service interface to run a test build remotely.

Also, this is not a title, it is a summary. How about calling it:
"Improve the llvm testsuite"

*2. Abstract:*
LLVM[2] is a collection of libraries and header files of a virtual machine
and has a GCC-front end in which C/C++ programs can be compiled into LLVM

How about: LLVM[2] is a set of libraries and tools for code optimization,
including a virtual machine and a GCC-front end capable of compiling
C, C++, ObjectiveC, Ada and Fortran programs.

The aim of this project is to improve the testsuite[3] of LLVM, which is an

improve the testsuite[3] of LLVM -> improve the LLVM testsuite[3]

extremely useful task.

In general it is a bad idea to make this kind of unsupported claim: saying
that something is "extremely useful" without explaining *why* it is extremely
useful. You would do better to say: This would be extremely useful because
X, Y and Z.

So the tesuite of the llvm is extended by including
this new programs and benchmarks[1].

-> Firstly, by expanding the coverage of the testsuite by adding new programs
and benchmarks[1].

At this point I had to go and do something else - I may be able to comment more

Best wishes,


Thanks for the input; I will make the necessary arrangements.