lldb python test run on 64-bit Fedora Core 20

Folks,

I've managed to tweak things (i.e. get the llvm/lldb .so files and Python site-packages under build/lib) so that I can run dotest.py as follows:

$ cd /mydir/llvm/tools/lldb/test

LD_LIBRARY_PATH=/mydir/build/lib/ PYTHONPATH=/mydir/build/lib/python2.7/site-packages/ python dotest.py --executable=/mydir/build/bin/lldb -v -l --compiler=gcc -q .

I note that the results say:

Ran 1083 tests in 633.125s

FAILED (failures=1, skipped=619, expected failures=54, unexpected successes=15)

Since I've only just managed to get the tests working, are the above results reasonable?

That is, are expected failures=54 and unexpected successes=15 OK?

The only actual failure I saw was:

FAIL: test_stdcxx_disasm (TestStdCXXDisassembly.StdCXXDisassembleTestCase)
       Do 'disassemble' on each and every 'Code' symbol entry from the std c++ lib.

I guess as I run the tests more often, I'll get more of a feel for it, but I just wondered if the above was a reasonable baseline.

All tips/feedback welcome,
Matt

Member of the CSR plc group of companies. CSR plc registered in England and Wales, registered number 4187346, registered office Churchill House, Cambridge Business Park, Cowley Road, Cambridge, CB4 0WZ, United Kingdom
More information can be found at www.csr.com. Keep up to date with CSR on our technical blog, www.csr.com/blog, CSR people blog, www.csr.com/people, YouTube, www.youtube.com/user/CSRplc, Facebook, www.facebook.com/pages/CSR/191038434253534, or follow us on Twitter at www.twitter.com/CSR_plc.
New for 2014, you can now access the wide range of products powered by aptX at www.aptx.com.

Folks,

I've managed to tweak things (i.e. get the llvm/lldb .so files and Python
site-packages under build/lib) so that I can run dotest.py as follows:

Great!

$ cd /mydir/llvm/tools/lldb/test

LD_LIBRARY_PATH=/mydir/build/lib/ PYTHONPATH=/mydir/build/lib/python2.7/site-packages/
python dotest.py --executable=/mydir/build/bin/lldb -v -l --compiler=gcc
-q .

Okay, glad you found a command line that worked. I'll get a FC VM up and
work on a fix for that environment.

I note that the results say:

Ran 1083 tests in 633.125s

Reasonable - how many cores are you using? (This was a VM, right?)

FAILED (failures=1, skipped=619, expected failures=54, unexpected
successes=15)

Since I've only just managed to get the tests working, are the above
results reasonable?

Yes, that's reasonable for Linux. The skipped tests are generally Darwin/MacOSX
tests --- there are nearly two tests in total for every one that can run on
Linux. The others generally cover a variant of debuginfo packaging that is
only available on MacOSX.
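A minimal sketch of how those platform-dependent skips produce the large "skipped" count (the test name here is hypothetical; dotest.py layers its own decorators on top of plain unittest):

```python
import sys
import unittest

class PlatformTests(unittest.TestCase):
    # Tests that only make sense on Darwin are skipped everywhere
    # else -- this is where most of the "skipped=619" on a Linux
    # run comes from.
    @unittest.skipUnless(sys.platform == "darwin", "requires Darwin/MacOSX")
    def test_dsym_variant(self):
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(PlatformTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("skipped:", len(result.skipped))
```

On a Linux box this reports one skipped test; on Darwin it actually runs.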

The expected failures represent the tests that we don't have working on
Linux (often paired with FreeBSD) that are code and/or test bugs that need
to be addressed. (If you're ever just feeling like doing some LLDB
spelunking, these are great learning opportunities for one to pick up!)

That is, are expected failures=54 and unexpected successes=15 OK?

The unexpected successes represent one of two things:

1. Tests marked XFAIL that are intermittent, and so sometimes pass, falling
into this bucket. This is the best we can do with these for now, until we get
rid of the intermittent nature of the test. Note that the multi-core test
running done by the build systems stresses the tests more heavily than running
them individually.

2. Tests marked XFAIL that always pass now, which should no longer be marked
XFAIL. The majority do not fall into this category, but it does represent a
state that can occur once we fix the underlying race and/or timing issue that
made the test intermittent in the first place.

The only actual failure I saw was:

FAIL: test_stdcxx_disasm (TestStdCXXDisassembly.StdCXXDisassembleTestCase)
      Do 'disassemble' on each and every 'Code' symbol entry from the std
c++ lib.

This is really the nugget of result your test run is showing. I'm not
entirely sure why that one is failing. It could be a legitimate failure
with changes in your code, or it could be something that surfaces in FC 20
that doesn't elsewhere. The test run should have made a directory called
"lldb-test-traces". They go in different places depending on ninja vs.
make builds. In ninja builds, it will be in your root build dir. In make
builds it will be in the {my-build-dir}/tools/lldb/test dir. In that
directory, you get a trace log file (*) for every test run that did not
succeed - either because it was skipped, it failed (i.e. test assertion
failed), it had an error (i.e. it failed but not because of an assert -
something happened that was entirely unexpected like an i/o issue, seg
fault, etc.), or it unexpectedly passed - marked xfail but succeeded. So -
you should have a file called something like
"Failed*test_stdcxx_disasm*.log" in that directory. You could look at the
contents of that and see what failed.
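One way to hunt for that log file programmatically (a sketch; both candidate paths below are assumptions based on the description above -- adjust them to your own build tree):

```python
import glob
import os

# Candidate trace directories for ninja vs. make builds; both
# paths are illustrative, not authoritative.
candidates = [
    os.path.expanduser("~/src/staging/build/lldb-test-traces"),
    os.path.expanduser("~/src/staging/build/tools/lldb/test/lldb-test-traces"),
]

for d in candidates:
    # Every non-succeeding test leaves a log named after its outcome,
    # e.g. Failed*, Error*, UnexpectedSuccess*.
    for log in glob.glob(os.path.join(d, "Failed*test_stdcxx_disasm*.log")):
        print(log)
```

If neither directory exists in your layout, a plain `find` from the build root for `Failed*.log` gets you to the same place.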

I guess as I run the tests more often, I'll get more of a feel for it, but
I just wondered if the above was a reasonable baseline.

Generally the tests are in a state where a failure represents a real issue.
I've spent quite a bit of time getting the test suite into that state, so
that an issue represents a real problem. In your case, it could be an FC
environment issue, where the test - for that environment - is just never
going to pass; in that case we need to either fix it or annotate it as a
known issue and file a bug for it. For your particular case, the way to
figure that out is to do a build and a test run against a clean-slate top-
of-tree sync (essentially shelve any changes you have locally) and see what
a clean-slate test run produces. If you always see that error, it's a
tip-off that the test is broken in your environment.

All tips/feedback welcome,
Matt

Happy testing!

-Todd

Okay, glad you found a command line that worked. I'll get a FC VM up and work on a fix for that environment.

Yes. Regarding the lib/lib64 thing, I googled "fedora python lib lib64" and came across "python - fedora /usr/lib vs /usr/lib64 - Server Fault", amongst many other stories. It seems that Fedora applies a patch to vanilla Python, and other distros (e.g. Arch Linux?) do similar things (I think I read that Ubuntu symlinks lib and lib64 together). The nub of it is that pure-Python packages (no C extensions) go in lib, whereas anything that depends on one's bitness/platform architecture should go in lib64.
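You can see the two install locations Python itself is configured with (a sketch; on a patched Fedora Python the platform-specific path ends in lib64, while on many other distros the two paths coincide):

```python
import sysconfig

# Pure-Python packages (no C extensions) install here...
print("purelib:", sysconfig.get_path("purelib"))

# ...while architecture-dependent extension modules install here.
# On Fedora's patched Python this is a .../lib64/... path, hence
# the split between _lldb.so and the pure-Python bits.
print("platlib:", sysconfig.get_path("platlib"))
```

Comparing the two outputs on a given box tells you immediately whether that distro separates lib and lib64 for Python packages.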

So as far as Fedora's rationale is concerned, my initial/quick fix is not great (though it works):

Index: scripts/Python/finish-swig-Python-LLDB.sh

Todd Fiala wrote:

    The only actual failure I saw was:

    FAIL: test_stdcxx_disasm
    (TestStdCXXDisassembly.StdCXXDisassembleTestCase)
          Do 'disassemble' on each and every 'Code' symbol entry from
    the std c++ lib.

    This is really the nugget of result your test run is showing. I'm not
    entirely sure why that one is failing. It could be a legitimate failure
    with changes in your code, or it could be something that surfaces in FC
    20 that doesn't elsewhere. [...] You should have a file called something
    like "Failed*test_stdcxx_disasm*.log" in that directory. You could look
    at the contents of that and see what failed.

Happily, now, building/testing against the fresh sync I got before I went home last night, this failure has now gone.

Matt


Todd Fiala wrote:

    The test run should have made a directory called "lldb-test-traces".
    They go in different places depending on ninja vs. make builds. In ninja
    builds, it will be in your root build dir. In make builds it will be in
    the {my-build-dir}/tools/lldb/test dir. [...] You should have a file
    called something like "Failed*test_stdcxx_disasm*.log" in that
    directory.

I don't get this directory called "lldb-test-traces" in my root build dir (with a ninja build). My build directory is ~/src/staging/build/ and I just ran some tests as follows:

~/src/staging/llvm/tools/lldb/test
$ LD_LIBRARY_PATH=/home/mg11/src/staging/build/lib/ PYTHONPATH=/home/mg11/src/staging/build/lib/python2.7/site-packages/ python dotest.py --executable=/home/mg11/src/staging/build/bin/lldb -v --compiler=gcc -q .

Instead, my log files, it seems, are under:

~/src/staging/llvm/tools/lldb/test/2014-07-23-09_19_57

Perhaps the destination of the log files is affected by certain dotest.py arguments and environment variables. I do see some *LOG* environment variables inspected in the Python code, but I've not pursued this in anger.

Matt


I don't get this directory called "lldb-test-traces" in my root build dir (with a ninja build). My build directory is ~/src/staging/build/ and I just ran some tests as follows:

Ah, OK - the lldb-test-traces directory is really a manifestation of a flag added by the 'ninja check-lldb' target or the 'make -C tools/lldb/test' target. If you don't use those, and you don't pass the corresponding flag to dotest.py (I think it's -s, off the top of my head), then it will default to a date/timestamp-based directory. You are looking in the right place for the way you ran it (although those dirs will add up unless you do a 'rm -rf 2014-*' occasionally).
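The timestamp-based default matches a straightforward strftime pattern (a sketch of the naming scheme, not dotest.py's actual code):

```python
from datetime import datetime

def session_dir_name(now):
    # Matches the shape of the directory observed above:
    # 2014-07-23-09_19_57
    return now.strftime("%Y-%m-%d-%H_%M_%S")

print(session_dir_name(datetime(2014, 7, 23, 9, 19, 57)))
# -> 2014-07-23-09_19_57
```

That also explains why 'rm -rf 2014-*' is a safe cleanup: every default session directory starts with the year.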

Glad the error went away after a fresh sync!