test results look typical?

Hi all,

I’m just trying to get a handle on current lldb test failures across different platforms.

On Linux on non-virtualized hardware, I currently see the failures below on Ubuntu 14.04.2 using a setup like this:

  • stock linker (ld.bfd),
  • g++ 4.9.2
  • cmake
  • ninja
  • libstdc++
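
For reference, a configure/run sequence matching that setup might look like the following (the directory layout and exact flags are illustrative guesses, not the commands actually used):

```shell
# Assumed layout: lldb checked out under llvm/tools, building in a
# sibling build directory. Compiler paths/flags are illustrative only.
cmake -G Ninja \
      -DCMAKE_C_COMPILER=gcc-4.9 \
      -DCMAKE_CXX_COMPILER=g++-4.9 \
      ../llvm
ninja check-lldb
```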

ninja check-lldb output:

Ran 394 test suites (15 failed) (3.807107%)
Ran 474 test cases (17 failed) (3.586498%)
Failing Tests (15)
FAIL: LLDB (suite) :: TestCPPThis.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibccMultiMap.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibcxxMultiSet.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibcxxSet.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibcxxString.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterSkipSummary.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterUnordered.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestMiGdbSetShowPrint.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestRegisterVariables.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestStaticVariables.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestStepNoDebug.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestTypedefArray.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestVectorTypesFormatting.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)

Unexpected Successes (10)
UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestExitDuringStep.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestFdLeak.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestInferiorAssert.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestRaise.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestStubSetSID.py (Linux rad 3.13.0-57-generic #95-Ubuntu SMP Fri Jun 19 09:28:15 UTC 2015 x86_64 x86_64)
ninja: build stopped: subcommand failed.

On a similar setup, bumped up to Ubuntu 14.04.3 and now running in a VMware VM but otherwise the same, I see a similar report from ‘ninja check-lldb’:

Ran 394 test suites (17 failed) (4.314721%)
Ran 474 test cases (19 failed) (4.008439%)
Failing Tests (17)
FAIL: LLDB (suite) :: TestAttachResume.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestCPPThis.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibccIterator.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibccMap.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibccMultiMap.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibcxxMultiSet.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibcxxSet.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterLibcxxString.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterSkipSummary.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestDataFormatterUnordered.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestMiGdbSetShowPrint.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestRegisterVariables.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestStaticVariables.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestStepNoDebug.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestStepOverWatchpoint.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestTypedefArray.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
FAIL: LLDB (suite) :: TestVectorTypesFormatting.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)

Unexpected Successes (11)
UNEXPECTED SUCCESS: LLDB (suite) :: TestBatchMode.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestEvents.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestExitDuringStep.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestFdLeak.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestInferiorAssert.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiGdbSetShow.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiInterpreterExec.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestMiSyntax.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestRaise.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestStubSetSID.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
UNEXPECTED SUCCESS: LLDB (suite) :: TestWatchedVarHitWhenInScope.py (Linux lldb 3.19.0-26-generic #28~14.04.1-Ubuntu SMP Wed Aug 12 14:09:17 UTC 2015 x86_64 x86_64)
ninja: build stopped: subcommand failed.

Do these more or less match the current state of affairs for Linux tests? If not, are there any suggestions on best practices for upgrades? A while back I tried things like clang and libc++, but I arrived at the above setup (albeit typically with the gold linker) simply based on what worked, what was easy to debug, and what built fast.

Thanks for any confirmation on that!

Can’t comment on the failures for Linux, but I don’t think we have a good handle on the unexpected successes. I only added that information to the output about a week ago; before that, unexpected successes were actually going unnoticed.

It’s likely that someone could just go in there and remove the XFAIL from those tests.

Can't comment on the failures for Linux, but I don't think we have a good
handle on the unexpected successes. I only added that information to the
output about a week ago; before that, unexpected successes were actually
going unnoticed.

Okay, thanks Zachary. A while back we had some flapping tests that would
oscillate between unexpected success and failure on Linux. Some of those
might still be in that state but maybe (!) are fixed.

If anyone on the Linux end happens to know whether these failures in
particular look normal, that'd be good to know.

Thanks!

The TestDataFormatterLibcc* tests require libc++-dev:

$ sudo apt-get install libc++-dev

We should have a decorator like:

@skipLinuxUnlessInstalled("/usr/lib/libc++.so")

or something that tells us to install this library and fails the test suite before you run anything.
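
A minimal sketch of what such a decorator could look like, built on unittest's existing skip machinery (the name and behavior here are illustrative; the proposed skipLinuxUnlessInstalled would presumably also check the platform, and failing the whole suite up front would need hooks in the runner instead):

```python
import os
import unittest

def skipUnlessInstalled(path):
    # Hypothetical helper: skip the decorated test unless `path`
    # (e.g. a shared library such as /usr/lib/libc++.so) exists,
    # with a message telling the user what to install.
    return unittest.skipUnless(
        os.path.exists(path),
        "required dependency not installed: %s" % path)
```

Used as `@skipUnlessInstalled("/usr/lib/libc++.so")` on a test method, this turns a guaranteed failure into an actionable skip.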

The TestDataFormatterLibcc* tests require libc++-dev:

$ sudo apt-get install libc++-dev

Ah okay, so we are working with libc++ on Ubuntu, that's good to hear.
Pre-14.04 I gave up on it.

Will cmake automatically choose libc++ if it is present? Or do I need to
pass something to cmake to use libc++?

Thanks, Chaoren!

-Todd

We should have a decorator like:

@skipLinuxUnlessInstalled("/usr/lib/libc++.so")

or something that tells us to install this library and fails the test
suite before you run anything.

Yeah, I like that idea, Greg. We shouldn't report failures on scenarios
that we know cannot work due to missing dependencies.

I'll give them a run with libc++ to see how that goes.

Ah okay, so we are working with libc++ on Ubuntu, that’s good to hear. Pre-14.04 I gave up on it.

We’re still using libstdc++ for lldb itself. libc++ is used to compile inferiors for the TestDataFormatterLibcc* tests. I don’t actually know if libc++ works with lldb. Sorry to get your hopes up. :(

Ah drats! Okay. Baby steps :D

> On Linux on non-virtualized hardware, I currently see the failures below on
> Ubuntu 14.04.2 using a setup like this:
> [...]
>
> ninja check-lldb output:

FYI, ninja check-lldb actually calls dosep.

> Ran 394 test suites (15 failed) (3.807107%)
> Ran 474 test cases (17 failed) (3.586498%)

I don't think you can trust the reporting of dosep.py's "Ran N test
cases", as it fails to count about 500 test cases. The only way I've
found to get an accurate count is to add up all the Ns from "Ran N tests
in" as follows:

./dosep.py -s --options "-v --executable $BLDDIR/bin/lldb" 2>&1 | tee test_out.log
export total=`grep -E "^Ran [0-9]+ tests? in" test_out.log | awk '{count+=$2} END {print count}'`

Of course, these commands assume you're running the tests from the lldb/test directory.

(See comments in http://reviews.llvm.org/rL238467.)
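
For anyone who prefers not to shell out to grep/awk, the same tally can be sketched in a few lines of Python (the function name is mine, not part of the test suite):

```python
import re

def count_ran_tests(log_text):
    """Sum the N's from lines like 'Ran 6 tests in 7.049s'.

    Mirrors the grep/awk pipeline above: only 'Ran N test(s) in'
    lines are counted, not the 'Ran N test cases' summary line.
    """
    total = 0
    for line in log_text.splitlines():
        m = re.match(r"Ran (\d+) tests? in", line)
        if m:
            total += int(m.group(1))
    return total
```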

I've pasted (and tweaked) the relevant comments from that review here, where I describe a narrowed case showing how dosep fails to count all the test cases from one test suite in test/types. Note that the tests were run on OSX, so your counts may vary.

The final count from:
    Ran N test cases .*
is wrong, as I'll explain below. I've done a comparison between dosep and dotest on a narrowed subset of tests to show how dosep can omit the test cases from a test suite in its count.

Tested on subset of lldb/test with just the following directories/files (i.e. all others directories/files were removed):
    test/make
    test/pexpect-2.4
    test/plugins
    test/types
    test/unittest2
# The .py files kept in test/types are as follows (so test/types/TestIntegerTypes.py* was removed):
    test/types/AbstractBase.py
    test/types/HideTestFailures.py
    test/types/TestFloatTypes.py
    test/types/TestFloatTypesExpr.py
    test/types/TestIntegerTypesExpr.py
    test/types/TestRecursiveTypes.py

Tests were run in the lldb/test directory using the following commands:
    dotest:
  ./dotest.py -v
    dosep:
  ./dosep.py -s --options "-v"

Comparing the test case totals, dotest correctly counts 46, but dosep counts only 16:
    dotest:
  Ran 46 tests in 75.934s
    dosep:
  Testing: 23 tests, 4 threads ## note: this number changes randomly
  Ran 6 tests in 7.049s
  [PASSED TestFloatTypes.py] - 1 out of 23 test suites processed
  Ran 6 tests in 11.165s
  [PASSED TestFloatTypesExpr.py] - 2 out of 23 test suites processed
  Ran 30 tests in 54.581s ## FIXME: not counted?
  [PASSED TestIntegerTypesExpr.py] - 3 out of 23 test suites processed
  Ran 4 tests in 3.212s
  [PASSED TestRecursiveTypes.py] - 4 out of 23 test suites processed
  Ran 4 test suites (0 failed) (0.000000%)
  Ran 16 test cases (0 failed) (0.000000%)

With test/types/TestIntegerTypesExpr.py* removed, both correctly count 16 test cases:
    dosep:
  Testing: 16 tests, 4 threads
  Ran 6 tests in 7.059s
  Ran 6 tests in 11.186s
  Ran 4 tests in 3.241s
  Ran 3 test suites (0 failed) (0.000000%)
  Ran 16 test cases (0 failed) (0.000000%)

Note: I couldn't compare the test counts on all the tests because of the concern raised in http://reviews.llvm.org/rL237053. That is, dotest can no longer complete the tests on OSX, as all test suites after test case 898 (test_disassemble_invalid_vst_1_64_raw_data) get ERRORs. I don't think that issue is related to problems in dosep.

Thanks,
-Dawn

Going back to the original question, I think you have more test failures than expected. As Chaoren mentioned, all TestDataFormatterLibc* tests are failing because of a missing dependency, but I think the rest of the tests should pass (I wouldn’t expect them to depend on libc++-dev).

You can see the up to date list of failures on the Linux buildbot here:
http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake

The buildbot is running in “Google Compute Engine” with Linux version: “Linux buildbot-master-ubuntu-1404 3.16.0-31-generic #43~14.04.1-Ubuntu SMP Tue Mar 10 20:13:38 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux”

LLDB is compiled by Clang (not sure about which version, but I can find out if somebody thinks it matters) and the inferiors are compiled with clang-3.5, clang-tot, and gcc-4.9.2. In all tested configurations there should be no failures (all failing tests should be XFAIL-ed).

For the flaky tests we introduced an “expectedFlaky” decorator that executes the test twice and expects it to pass at least once, but it hasn’t been applied to all flaky tests yet. The plan for the tests currently passing with “unexpected success” is to gather statistics about them and, based on those, either mark them “expected flaky” or remove the “expected failure”, depending on the number of failures we’ve seen in the last few hundred runs.
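
The run-twice idea could be sketched like this (a simplification for illustration; the real expectedFlaky decorator in the LLDB test suite integrates with the unittest result machinery rather than just retrying):

```python
import functools

def expectedFlaky(func):
    # Sketch only: run the test up to twice and treat it as passing
    # if either attempt succeeds, per the description above.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            # First attempt failed; the test passes iff the retry does.
            return func(*args, **kwargs)
    return wrapper
```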

Tamas

Thanks for the details on dosep.py, Dawn.

For counting I will probably go back to my old method of parsing the output of a serial dotest run, since IIRC I can get skip counts accurately there as well. (Or perhaps that should be added to dosep.py; it’s been a while since I last heavily modified that script.)

-Todd

Going back to the original question, I think you have more test failures
than expected. As Chaoren mentioned, all TestDataFormatterLibc* tests are
failing because of a missing dependency,

Thanks, Tamas. I'm going to be testing again today with libc++ installed.

but I think the rest of the tests should pass (I wouldn't expect them to
depend on libc++-dev).

I'll get a better handle on what's failing once I get rid of that first
batch.

You can see the up to date list of failures on the Linux buildbot here:
http://lab.llvm.org:8011/builders/lldb-x86_64-ubuntu-14.04-cmake

Ah yes, that'll be good to cross-reference.

The buildbot is running in "Google Compute Engine" with Linux version:
"Linux buildbot-master-ubuntu-1404 3.16.0-31-generic #43~14.04.1-Ubuntu SMP
Tue Mar 10 20:13:38 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux"

LLDB is compiled by Clang (not sure about which version, but I can find
out if somebody thinks it matters) and the inferiors are compiled with
clang-3.5, clang-tot, and gcc-4.9.2. In all tested configurations there
should be no failures (all failing tests should be XFAIL-ed).

Ah okay, good to know. In the past IIRC I did get different failures using
clang-built vs. gcc-built lldb on Ubuntu 14.04. The clang-built lldbs at
the time were harder to debug on Linux for one reason or another (I think
particularly if any optimizations were enabled, due to loss of debuginfo,
but there might have been more). Are you using a clang-built lldb and
debugging it reasonably well on Linux? If so, I'd just as soon move over
to using clang so there's one less difference when I'm looking across
platforms.

For the flaky tests we introduced an "expectedFlaky" decorator that
executes the test twice and expects it to pass at least once,

Ah that's a good addition. We had talked about doing something to watch
tests over time to see when it might be good to promote an XFAIL test that
is consistently passing to a static "expect success" test. The flaky flag
sounds handy for those that flap.

but it hasn't been applied to all flaky tests yet. The plan for the tests
currently passing with "unexpected success" is to gather statistics about
them and, based on those, either mark them "expected flaky" or remove the
"expected failure", depending on the number of failures we've seen in the
last few hundred runs.

Ah yes, that :) Love it.

Thanks, Tamas!

The TestDataFormatterLibcc* tests require libc++-dev:

$ sudo apt-get install libc++-dev

Ah okay, so we are working with libc++ on Ubuntu, that's good to hear.
Pre-14.04 I gave up on it.

Will cmake automatically choose libc++ if it is present? Or do I need to
pass something to cmake to use libc++?

Hmm, it appears I need to do more than just install libc++-dev. I did a
clean build with that installed, then ran the tests, and I still have the
Libcc/Libcxx tests failing. Is there some flag I'm expected to pass along
in the compile options to dotest.py to override/specify which C++ library
it uses?

There is no separate option; it should just work. :)

I'm betting you are still missing some package there (we should
document the prerequisites better). Could you send the error message
you are getting so we can have a look?

cheers,
pl

Hi Todd,

I am using a clang-3.5-built release LLDB to debug another clang-3.5-built debug LLDB on Linux x86_64, and it works pretty well for me (better than using GDB). The main issue I am hitting is around expression evaluation, where I can’t execute very small functions in std:: objects, but I can get around it by accessing the internal data representation (primarily for shared_ptr, unique_ptr and vector). We are still using gcc for compiling lldb-server for Android because the Android clang has some issues (atomics not supported), but I don’t know of anybody testing a gcc-built LLDB on Linux.

Tamas

Thanks, Pavel! I’ll dig that up and get back.

There's no need to do anything fancy (yet :) ). For initial diagnosis
the output of `./dotest.py $your_usual_options -p SomeLibcxxTest.py
-t` should suffice.

pl

Here are a couple of the failures that came up (the log output from the full dosep.py run).

Let me know if that is not sufficient!

Failure-TestDataFormatterLibcxxSet.LibcxxSetDataFormatterTestCase.test_with_dwarf_and_run_command-x86_64-_usr_bin_cc.log (4.91 KB)

Failure-TestDataFormatterLibcxxString.LibcxxStringDataFormatterTestCase.test_with_dwarf_and_run_command-x86_64-_usr_bin_cc.log (11.1 KB)