Problem running the test suite on Linux.

On Linux I get the following test results:

UNEXPECTED SUCCESS: test_and_run_command_dwarf (lang/c/const_variables/
UNEXPECTED SUCCESS: test_and_run_command_dwo (lang/c/const_variables/
UNEXPECTED SUCCESS: test_command_script_immediate_output_dwarf (functionalities/command_script_immediate_output/
UNEXPECTED SUCCESS: test_command_script_immediate_output_dwo (functionalities/command_script_immediate_output/
UNEXPECTED SUCCESS: test_fd_leak_basic_dwarf (functionalities/avoids-fd-leak/
UNEXPECTED SUCCESS: test_fd_leak_basic_dwo (functionalities/avoids-fd-leak/
UNEXPECTED SUCCESS: test_fd_leak_log_dwarf (functionalities/avoids-fd-leak/
UNEXPECTED SUCCESS: test_fd_leak_log_dwo (functionalities/avoids-fd-leak/
UNEXPECTED SUCCESS: test_fd_leak_multitarget_dwarf (functionalities/avoids-fd-leak/
UNEXPECTED SUCCESS: test_fd_leak_multitarget_dwo (functionalities/avoids-fd-leak/
UNEXPECTED SUCCESS: test_file_scope_lookup_with_run_command_dwarf (lang/cpp/namespace/
UNEXPECTED SUCCESS: test_file_scope_lookup_with_run_command_dwo (lang/cpp/namespace/
UNEXPECTED SUCCESS: test_lldbmi_gdb_set_target_async_off (tools/lldb-mi/
UNEXPECTED SUCCESS: test_lldbmi_process_output (tools/lldb-mi/syntax/
UNEXPECTED SUCCESS: test_lldbmi_settings_set_target_run_args_after (tools/lldb-mi/interpreter/
UNEXPECTED SUCCESS: test_lldbmi_settings_set_target_run_args_before (tools/lldb-mi/interpreter/
UNEXPECTED SUCCESS: test_restart_bug_dwarf (functionalities/signal/raise/
UNEXPECTED SUCCESS: test_restart_bug_dwo (functionalities/signal/raise/
UNEXPECTED SUCCESS: test_scope_lookup_before_using_with_run_command_dwo (lang/cpp/namespace/
TIMEOUT: test_qThreadInfo_matches_qC_attach_llgs_dwo (tools/lldb-server/
TIMEOUT: test_watchpoint_delay_watchpoint_one_breakpoint_dwarf (functionalities/thread/concurrent_events/

This is a ton of unexpected successes. Does anyone regularly run the test suite on Linux? Is this normal? I also notice that it takes almost 30 minutes to complete, and I get these timeouts:

TIMEOUT: test_qThreadInfo_matches_qC_attach_llgs_dwo (tools/lldb-server/
TIMEOUT: test_watchpoint_delay_watchpoint_one_breakpoint_dwarf (functionalities/thread/concurrent_events/

Are these known issues? I’m using Ubuntu 14.04 and building tests with Clang 3.6.

Our bot is running on Ubuntu 14.04 and is green:

One thing though: the bot does not run the test suite with clang-3.6.
As for the unexpected successes, they are very likely tests which were
found to be flaky and marked as expectedFailure (or something similar)
to keep the bot green. Even the bot logs show these unexpected
successes.

We’ve occasionally discussed whether unexpected successes are a problem. Last time we landed on this being the best we can do for now, since we want the code to be executed even if it fails occasionally and/or under load. The alternative (aside from fixing the test) is to skip it, which then prevents us from seeing whether something horrible, like a crash, occurs when going down that code path.

In my logs I’m seeing this:

UNSUPPORTED: LLDB (/usr/local/google_ssd/src/llvm/build/ninja_release/bin/clang-3.9-x86_64) :: test_inferior_print_exit_debugserver_dwo (TestLldbGdbServer.LldbGdbServerTestCase) (debugserver tests)
File "/usr/local/google/home/zturner/ssd/src/llvm/tools/lldb/test/", line 7, in
File "/usr/local/google_ssd/src/llvm/tools/lldb/packages/Python/lldbsuite/test/", line 1089, in run_suite
File "/usr/local/google_ssd/src/llvm/tools/lldb/third_party/Python/module/unittest2/unittest2/", line 162, in run
File "/usr/local/google_ssd/src/llvm/tools/lldb/third_party/Python/module/unittest2/unittest2/", line 65, in call
return*args, **kwds)
File "/usr/local/google_ssd/src/llvm/tools/lldb/third_party/Python/module/unittest2/unittest2/", line 85, in run
File "/usr/local/google_ssd/src/llvm/tools/lldb/third_party/Python/module/unittest2/unittest2/", line 115, in _wrapped_run
test._wrapped_run(result, debug)
File "/usr/local/google_ssd/src/llvm/tools/lldb/third_party/Python/module/unittest2/unittest2/", line 117, in _wrapped_run
File "/usr/local/google_ssd/src/llvm/tools/lldb/third_party/Python/module/unittest2/unittest2/", line 433, in call
return*args, **kwds)
File "/usr/local/google_ssd/src/llvm/tools/lldb/third_party/Python/module/unittest2/unittest2/", line 361, in run
success = self.runMethod(testMethod, result)
File "/usr/local/google_ssd/src/llvm/tools/lldb/third_party/Python/module/unittest2/unittest2/", line 391, in runMethod
File "/usr/local/google_ssd/src/llvm/tools/lldb/packages/Python/lldbsuite/test/", line 1900, in dwarf_test_method
return attrvalue(self)
File "/usr/local/google_ssd/src/llvm/tools/lldb/packages/Python/lldbsuite/test/", line 112, in wrapper
func(*args, **kwargs)
File "/usr/local/google_ssd/src/llvm/tools/lldb/packages/Python/lldbsuite/test/tools/lldb-server/", line 250, in test_inferior_print_exit_llgs
File "/usr/local/google_ssd/src/llvm/tools/lldb/packages/Python/lldbsuite/test/tools/lldb-server/", line 237, in inferior_print_exit
context = self.expect_gdbremote_sequence()
File "/usr/local/google_ssd/src/llvm/tools/lldb/packages/Python/lldbsuite/test/tools/lldb-server/", line 549, in expect_gdbremote_sequence
return expect_lldb_gdbserver_replay(self, self.sock, self.test_sequence, timeout_seconds, self.logger)
File "/usr/local/google_ssd/src/llvm/tools/lldb/packages/Python/lldbsuite/test/tools/lldb-server/", line 252, in expect_lldb_gdbserver_replay
context["O_content"] = pump.get_accumulated_output()
File "/usr/local/google_ssd/src/llvm/tools/lldb/packages/Python/lldbsuite/test/tools/lldb-server/", line 81, in exit
lldb-server exiting…

Could this be related to the timeout I’m seeing? Has anyone seen this before? It doesn’t appear to be flaky; it happens every time.

Hmm, I wonder if your lldb-server is able to attach to processes? Siva, we used to have some kind of kernel flag or something that would allow attaching to a process that was launched by something else. I don’t recall exactly what it was off the top of my head, but I wonder if Zachary needs that?


(Security around ptrace).

Yes, there is something like that, but I am unable to recall the
details. However, I do not think Zach's problem is that; he is able
to get all but two of the tests passing.

Zach, is it possible for you to run with clang-3.5?

Hey Zachary,

The test listed above is failing while trying to match output from the inferior process being debugged by lldb-server. First, it tries to get a hello, world string printed. Then, it expects the process to exit without failure.

If you go into that directory and make/run the a.out program, it should print hello world and exit with an exit value of 0. You may find that it doesn’t print, perhaps? Or maybe your terminal is set up differently, so that the text isn’t matching as expected? (Although I would expect to have heard from others with this issue.)

Pavel just added some gdb-remote logging that is easier to access than the way I had it rigged up before. If you end up getting stuck, grabbing the output log from the lldb-server side would probably help figure out where things hang. But I wouldn’t bother with that until you can first rule out that something is going wrong with the a.out itself.


You also need to pass “hello, world” as a launch arg (in quotes). That is what will make it get echoed back.
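Greg's manual check can be approximated without the real a.out. This pure-Python stand-in (hypothetical, just to show the behavior the test matches) echoes its first launch argument and exits 0, which is exactly what the lldb-server test expects to observe:

```python
import subprocess
import sys

# Stand-in for the test inferior (the real one is a C program built in
# the lldb-server test directory): echo the first launch argument back
# and exit without failure.
inferior = 'import sys; print(sys.argv[1])'
proc = subprocess.run(
    [sys.executable, "-c", inferior, "hello, world"],
    capture_output=True, text=True)
print(proc.stdout.strip())       # -> hello, world
print("exit:", proc.returncode)  # -> exit: 0
```

If either the echoed string or the zero exit status doesn't show up when you run the real a.out by hand, the problem is in the inferior side rather than in lldb-server.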

I don’t recall exactly what it was off the top of my head, but I wonder if Zachary needs that?

That is the lldb_enable_attach() call that I make at the beginning of the inferior test driver, defined in packages/Python/lldbsuite/test/make/test_common.h. This is already called, so that shouldn’t be the issue.
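For reference, the effect of such a call on Linux can be sketched from Python with ctypes. This is an assumption based on the Yama ptrace restriction Greg alluded to (the real helper is C code in test_common.h); the constants come from linux/prctl.h, and the prctl opts the calling process out of Yama's default descendants-only attach rule so an unrelated debugger like lldb-server may attach:

```python
import ctypes

PR_SET_PTRACER = 0x59616d61          # 'Yama' in ASCII, from linux/prctl.h
PR_SET_PTRACER_ANY = (1 << 64) - 1   # i.e. (unsigned long)-1: allow anyone

libc = ctypes.CDLL(None, use_errno=True)
# Returns 0 on success, -1 (EINVAL) if the kernel doesn't have Yama enabled.
ret = libc.prctl(PR_SET_PTRACER, ctypes.c_ulong(PR_SET_PTRACER_ANY), 0, 0, 0)
print("prctl returned:", ret)
```

The system-wide equivalent is the /proc/sys/kernel/yama/ptrace_scope sysctl, which is probably the kernel flag being half-remembered above.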

In addition to flaky tests, I think some of these are just decorated
too broadly (e.g. it's marked expectedFailureLinux, but fails only on
i386 with gcc). I occasionally enable tests that I see are passing
consistently, but I am currently more worried about tests failing
unexpectedly than succeeding.

The 30 minutes for running the test seems very long, something must
have gone wrong there. If you do a "ps" after 5 minutes, which
processes do you still see running? What about after 15? What are the
specs of the machine you are running this on? What is the exact
command line you are launching the tests with?

I wouldn't be too worried about the timeouts: these are two of our
longest-running tests, so I think they are just getting killed for
running too slowly. We need to figure out what is causing the whole
suite to run so slowly. (Unless you see them consistently timing out
on the exact same test, in which case it could be interesting.)