Running a Local Buildbot

We're thinking about running our own buildbot against the upstream llvm
and clang sources. I'm talking to our build folks and seeing what they
can allow. Maybe we can contribute some build slaves to the osuosl
buildbot but I'm not sure.

If we do end up having to run our own buildbot, what's the best way to
integrate that into the upstream web page and status notifier?
Basically, we want the buildbot to monitor the activity upstream and
look for things that break certain build configs (-Werror is one of
them), letting people know when there is a problem.

Our machine might not be able to keep up with a build for every commit
but could do regular timed builds. The blame list won't be as precise
but at least we'll all know if something goes wrong.

Would we create a directory in zorg along the lines of the osuosl,
smooshlab, etc. directories under zorg/buildbot?

Hopefully we can just integrate into osuosl, but we might not be able to
due to security considerations, among other things.

                           -David

We're thinking about running our own buildbot against the upstream llvm
and clang sources. I'm talking to our build folks and seeing what they
can allow. Maybe we can contribute some build slaves to the osuosl
buildbot but I'm not sure.

If we do end up having to run our own buildbot, what's the best way to
integrate that into the upstream web page and status notifier?

You can't, really. The buildmaster contains the web page, the source
control change detection, and the slave management/scheduling. If you can
have your slaves connect to that buildmaster, you can integrate with
those features. If you can't, then you'd be running your own
buildmaster, which would have its own status page, etc.
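
For what it's worth, hooking contributed slaves into an existing master is
only a few lines in the master's configuration. A rough sketch against the
buildbot 0.8-era API; the slave name, password, builder name and factory
below are placeholders, not real lab.llvm.org values:

    # master.cfg fragment: register a contributed slave and a builder that uses it.
    # The slave name, password, builder name and factory are placeholders.
    from buildbot.buildslave import BuildSlave
    from buildbot.config import BuilderConfig
    from buildbot.process.factory import BuildFactory

    c = BuildmasterConfig = {}
    c['slaves'] = [BuildSlave("our-werror-slave", "not-the-real-password")]

    werror_factory = BuildFactory()   # checkout/configure/make steps would be added here
    c['builders'] = [BuilderConfig(name="clang-x86_64-werror",
                                   slavenames=["our-werror-slave"],
                                   factory=werror_factory)]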

The "integration" at that point would probably amount to the nag
emails you see from the current buildmaster - though I'm not sure what
problems you might run into emailing from other hosts, etc (people
might miss your automated emails/dismiss them as non-authoritative,
accidentally classify them as spam, etc).
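
For reference, the nag mails are just buildbot's MailNotifier; on a separate
master you'd configure something along these lines (the addresses are
placeholders):

    # master.cfg fragment: mail a fixed list when a build breaks, instead of
    # nagging upstream committers from an unofficial master.
    # Assumes the usual c['status'] list from master.cfg; addresses are placeholders.
    from buildbot.status.mail import MailNotifier

    c['status'].append(MailNotifier(
        fromaddr="buildbot@example.org",
        mode="problem",                   # only mail when a build goes from passing to failing
        sendToInterestedUsers=False,      # don't mail the blamelist directly
        extraRecipients=["our-toolchain-team@example.org"]))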

Basically, we want the buildbot to monitor the activity upstream and
look for things that break certain build configs (-Werror is one of
them), letting people know when there is a problem.

Our machine might not be able to keep up with a build for every commit
but could do regular timed builds. The blame list won't be as precise
but at least we'll all know if something goes wrong.

Minor point: every commit or timed builds aren't the only options. Usually
the way this works (and the way the zorg builders are set up) is that
they try to build on every commit; if commits come in while a build is
running, those are rolled up into a build that starts immediately
after the current one finishes.

Obviously, the fewer commits that get rolled into each build, the more
precise the blame list will be.
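
In buildbot terms that is just scheduler configuration; a sketch of both
options with the 0.8-era API (the builder name is a placeholder):

    # master.cfg fragment: trigger a build for every change; changes that land while
    # a build is running end up in the next pending request, which buildbot merges
    # by default.  The builder name is a placeholder.
    from buildbot.schedulers.basic import SingleBranchScheduler

    c['schedulers'] = [
        SingleBranchScheduler(name="on-commit",
                              branch=None,           # trunk, as reported by the poller
                              treeStableTimer=None,  # don't wait for the tree to go quiet
                              builderNames=["clang-x86_64-werror"]),
        # Timed alternative if the machine really can't keep up:
        # from buildbot.schedulers.timed import Periodic
        # Periodic(name="every-4h", periodicBuildTimer=4*60*60,
        #          builderNames=["clang-x86_64-werror"]),
    ]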

Would we create a directory in zorg along the lines of the osuosl,
smooshlab, etc. directories under zorg/buildbot?

I just check out the zorg repository & run my local buildmaster out of
that. I don't contribute this code upstream - there doesn't seem to be
much point in doing so.
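
For example, a local master.cfg can reuse the builder factories from the
zorg checkout directly. A sketch, assuming zorg is importable and that
ClangBuilder.getClangBuildFactory is still the helper's name in your
checkout:

    # master.cfg fragment: reuse an upstream build factory straight from a zorg checkout.
    # Assumes zorg is on PYTHONPATH and that ClangBuilder.getClangBuildFactory exists
    # with usable defaults in your revision of zorg -- check before relying on it.
    from zorg.buildbot.builders import ClangBuilder
    from buildbot.config import BuilderConfig

    c['builders'].append(
        BuilderConfig(name="local-clang-builder",            # placeholder name
                      slavenames=["our-werror-slave"],
                      factory=ClangBuilder.getClangBuildFactory()))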

Hopefully we can just integrate into osuosl, but we might not be able to
due to security considerations, among other things.

The best way is to integrate into the regular lab.llvm.org. I don't
think there's a way to re-export results from another buildmaster.

Our machine might not be able to keep up with a build for every commit
but could do regular timed builds. The blame list won't be as precise
but at least we'll all know if something goes wrong.

Honestly, running a fast buildbot is not rocket science; it is just a
matter of doing things right. The clang-x86_64-debian-fast buildbot is
just an Intel(R) Core(TM) i7-2700K CPU @ 3.50GHz with 8 GB of RAM and a
"green" series HDD -- regular off-the-shelf hardware. Without any tweaks
it does a build in 9 min. With ccache (nothing fancy -- apt-get install
ccache, set the CC/CXX variables), the time drops to 1:50--3:00 min on
average.
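
On the bot side that amounts to nothing more than setting CC/CXX in the
environment of the configure and compile steps; a rough sketch with the
0.8-era steps (the -j value and paths are placeholders):

    # Build factory fragment: an autoconf-style LLVM build going through ccache.
    # The -j value is a placeholder; the SVN URL is upstream trunk of that era.
    from buildbot.process.factory import BuildFactory
    from buildbot.steps.source import SVN
    from buildbot.steps.shell import Configure, Compile

    ccache_env = {"CC": "ccache gcc", "CXX": "ccache g++"}

    f = BuildFactory()
    f.addStep(SVN(mode="update", workdir="llvm",
                  svnurl="http://llvm.org/svn/llvm-project/llvm/trunk"))
    f.addStep(Configure(command=["../llvm/configure", "--enable-optimized"],
                        workdir="build", env=ccache_env))
    f.addStep(Compile(command=["make", "-j8"], workdir="build", env=ccache_env))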

Also, you don't need to do a 2-step bootstrap to catch warnings from
gcc. You also don't need a debug build, just an optimized one --
since it is faster to link, and gcc emits more warnings when
optimization is on.

Dmitri

ARM buildbots do that. Since each build takes about 4h to run, we end up
testing 10-20 patches per build during peak time.

Not ideal, but that'll have to do for now. We're working on ideas to
improve that, and they all involve tweaking, ccache, distcc, etc. None
without risks...

--renato

does a build in 9 min. With ccache (nothing fancy -- apt-get install
ccache, set the CC/CXX variables), the time drops to 1:50--3:00 min on
average.

Depending on the hardware (development boards tend to be very sensitive and
the kernel very raw), things that you take for granted are not that simple.

For example, ccache raised some I/O problems on some ARM dev boards, and
since we're always dealing with fresh hardware (sometimes pre-release), a
multitude of things will fail even before you get to LLVM.

Also, you don't need to do a 2-step bootstrap to catch warnings from
gcc. You also don't need a debug build, just an optimized one --
since it is faster to link

Yes, this does make a big difference. Keep the assertions, leave the debug
symbols out. I believe this is the default CMake build type, but not the
configure one, so be careful.

Also, I got different failures with CMake and configure, so try both.
Again, development kernels/hardware, so not worth reporting to the list
just yet.

cheers,
--renato

I understand that. But IIRC, David is interested in an IA-32 or amd64
buildbot with a particular gcc version, hence my suggestion.

Dmitri

Sorry, it wasn't clear from the original message that this was the case.

cheers,
--renato

Dmitri Gribenko <gribozavr@gmail.com> writes:

Also, you don't need to do a 2-step bootstrap to catch warnings from
gcc. You also don't need a debug build, just an optimized one --
since it is faster to link, and gcc emits more warnings when
optimization is on.

Actually, there are cases where a debug build catches problems that an
optimized build doesn't, because different code is built. I have seen
this happen many times.

                            -David

We don't #ifdef in LLVM and Clang outside portability headers, so this
should not be a big issue.

Dmitri

Dmitri Gribenko <gribozavr@gmail.com> writes:

Actually, there are cases where a debug build catches problems that an
optimized build doesn't, because different code is built. I have seen
this happen many times.

We don't #ifdef in LLVM and Clang outside portability headers, so this
should not be a big issue.

Well, it is, because I see it. :-)

                         -David

Theoretically this should be the case, but in practice it is not. For
example, there is the "enable dump" stuff, and platform-specific
#ifdef's in quite a few places outside of the System library. There
are also plenty of #ifndef NDEBUG for places where assert() (just one
expression) is too limiting.

-- Sean Silva

Yes, this does make a big difference. Keep the assertions, leave the debug
symbols out. I believe this is the default CMake build type, but not the
configure one, so be careful.

No. By default, CMake generates minimal flags, without -g, -On, or
-DNDEBUG: assertions are not disabled, and the build is neither debug
nor optimized.

CMAKE_CXX_FLAGS is used, but the CMAKE_CXX_FLAGS_${CMAKE_BUILD_TYPE}
variants are not.

With CMake, CMAKE_BUILD_TYPE should be chosen explicitly.
(Could we fill it with "Debug" by default?)
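
So on a bot it is safest to spell the build type and the assertion setting
out explicitly when invoking CMake, e.g. as a build step (the paths and -j
value are placeholders):

    # Build factory fragment: spell out an optimized, assertion-enabled build
    # instead of relying on CMake's empty default build type.
    # Assumes an existing llvm checkout next to the build directory.
    from buildbot.process.factory import BuildFactory
    from buildbot.steps.shell import ShellCommand, Compile

    f = BuildFactory()
    f.addStep(ShellCommand(name="cmake",
                           command=["cmake", "-DCMAKE_BUILD_TYPE=Release",
                                    "-DLLVM_ENABLE_ASSERTIONS=ON",
                                    "../llvm"],
                           workdir="build"))
    f.addStep(Compile(command=["make", "-j8"], workdir="build"))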

Also, I got different failures with CMake and configure, so try both. Again,
development kernels/hardware, so not worth reporting to the list just yet.

That is something we should investigate and improve, anyway.
I dare say it might give good coverage to build the tree with both
autoconf and cmake. ;)

...Takumi

Hello

We are migrating from 2.9 to 3.2

Here is some code that does not compile any more

     llvm::PassManager *pm;
     llvm::FunctionPassManager *fpm;

     module = llvm::ParseBitcodeFile(mb,context,&err_str);
     if (!module) {
       error() <<"Failed to load module from bitcode file: " <<err_str <<endl;
       exit(1);
     }
     pm = new PassManager();
     pm->add(createAlwaysInlinerPass());

     fpm = new FunctionPassManager(module);
     fpm->add(new TargetData(module));
     fpm->add(createGVNPass());
     fpm->add(createInstructionCombiningPass());
     fpm->add(createDeadCodeEliminationPass());
     fpm->add(createCFGSimplificationPass());
     fpm->add(createDeadStoreEliminationPass());

     InitializeNativeTarget();
     atexit(llvm_shutdown);
     EngineBuilder builder(module);

We have problems because TargetData does not seem to exist anymore.
Where is the documentation describing the changes regarding TargetData
and what we should do now?

I did not see anything in the 3.0 Release Notes in that regard.

Thanks for your help
Vania

We have problems because TargetData does not seem to exist anymore.
Where is the documentation describing the changes regarding TargetData
and what we should do now?

I did not see anything in the 3.0 Release Notes in that regard.

Did you check the other 2 sets of release notes?

https://www.google.com/search?q=llvm+3.2+release+notes+TargetData

Seems to show the relevant section...