unit testing C++ code in LLDB

Hey guys,

Sometimes I have smaller bits of code I’d like to test in LLDB as I’m developing them (i.e. TDD-style) that are C++ and won’t be exposed directly via Python. I’m not sure I’ve seen any facilities in the LLDB tests for adding such tests. Essentially I’d want to do something like a gtest or cppunit test.

Do we have any mechanism for doing that currently? If we do, what is it? If we don’t, how about adding some mechanism to do it after we figure out how we’d like to approach it? Or, if you have thoughts on a good, simple way to do it from Python that doesn’t require extra Python bindings just to do it, that’d be fine by me as well.

If we want to take a concrete example, here is one: I’m adding a NamedPipe class under the host dir. I’d like to make some simple tests for it, and test it under Linux, Windows and MacOSX. In the case of Windows, it would be the only real way for me to test that it’s behaving exactly as I want at this point. This isn’t the only time I’ve wanted C++-level tests at a fairly fine granularity, but it’s a good example of it.

Any thoughts?

We should probably use gtest for this. LLVM already has lots of examples of how to do this. Check the llvm\unittests folder.

Ah cool. Yeah I like those.

I’ll check out how llvm is doing it.

I'm a little leery about this. We don't test at the lldb_private layer because those tests are likely to be fragile and easily broken. For utility classes like NamedPipe I guess I don't so much mind, but I'm not sure it's a great idea to do this more generally.


I suspect there are only a handful of cases where I’d like to do this. Frankly I can even do it outside the source base proper, I suppose, but I figured I’d throw the question out here since maybe this is something we care about longer term. I’m not really interested in hitting the app at multiple layers for everything — the existing python framework is great for most things. It’s just for some niggly details on lower level pieces of code that I care about it.

I may fiddle with it a bit. If it looks gnarly in the end, I won’t push it further. If it looks reasonable, I may throw up a patch to look at.


If it makes you feel any better LLVM is leery of it too, and it’s only used, as you said, in specialized circumstances. It’s especially useful for testing abstract data types, where you just want to test the interface to a self-contained, reusable class.

We could expose a new static function in SBDebugger:

class SBDebugger {
    ...
    static void
    UnitTest (const char *args);
    ...
};

Then internally it can parse any arguments and run tests as needed. Each class that wants to register unit tests would do so during SBDebugger::Initialize(), which must be called prior to doing anything with the LLDB API:


  NamedPipe::Initialize();

Then in the static NamedPipe::Initialize() you could register your unit tests:

    Debugger::RegisterUnitTest(NamedPipe::InitTest1, "NamedPipe::InitTest1");

Then you could just run:

SBDebugger::UnitTest(NULL); // Run all tests

Or run individual ones:

SBDebugger::UnitTest("--test NamedPipe::InitTest1"); // Run just "NamedPipe::InitTest1"

Of course then the LLDB test suite could run these unit tests first to make sure everything is good.

Hmm interesting, I’ll have to play around with that.

FWIW I’m kind of in favor of bringing in gtest with limited use. When I first started digging into LLDB, probably the first 3-4 bugs I fixed were all in IRMemoryMap, and they would all have been caught if the class were unit tested properly.

At risk of being burnt at the stake for practicing the necromantic arts:
Jim’s point that internal APIs are somewhat fragile is well taken, and I don’t think there’s much value in using internal tests as “more convenient” substitutes for external tests.

That said, I think internal unit tests add several benefits that improve code:

  • Modular design. If clients of an internal object have to do a lot of extra work to make the object work properly, this may indicate a design issue.
  • State encapsulation. If an object changes behavior depending on outside state, it becomes much harder to use.
  • Code readability. Test cases can demonstrate how an object is intended to be used, and act as compelling witnesses that the use case works.

I would be interested in using the expression parser as a guinea pig to introduce test-driven methods.
Todd/Zach, did you ever get a gtest-based unit test system working?


I gave it a brief look, but I didn’t get it working. Clang and LLD both already do this, so I figured it would be as simple as copying their CMake for the gtest stuff and fixing it up with LLDB paths. Unfortunately, I was getting tons of linker errors, and I’m not really sure why, as there didn’t appear to be much magic there. I didn’t investigate it further after that, but it would definitely be a good thing to tackle.


What I might try to do is get a basic test runner in the context of the Xcode build, and then send my patch over to you to see if we can get that same runner building with Makefiles. Then, if you have linker problems, we can puzzle over them together.

What do you think?


Waaaaitasecond, I’m seeing the gtest subproject…
it’s a bit embarrassing that I haven’t checked out top of tree in so long!
I’m going to take a look around. Still interested in collaborating on this.


Cool! What you might try doing is either looking at the CMake that clang uses, or just generate an Xcode build of clang with CMake, open up the generated Xcode project, and see if you can figure out from tinkering around in Xcode how it’s bringing in gtest.

Hey Sean!

Yeah I’ve got a gtest subproject in lldb now. It’s not wired into any main test process (i.e. check-lldb or test targets don’t run it, but if you run the “gtest” project in the lldb workspace in Xcode, or go to the gtest dir and run “do-gtest.py”, you’ll get it running).

I’ve got it rigged up to put Xcode error markers at the point in gtests that fail at the failing assertion point after doing some output munging to make it work.

The actual gtest build/runner works on MacOSX and Linux. It should work on all platforms since I’m actually using the llvm-embedded gtest framework, so technically if llvm builds, so should the gtest setup.

However, I do some weak (and in need of improvement) inference of the directory/build structure to find some necessary files in gtest/make/Makefile.rules. The locations can be overridden by environment or make variable settings, but the defaults are definitely not going to work for everyone, and there are probably more robust methods for figuring out certain directory locations.

I run it fine on MacOSX with the canonical Xcode directory layout, using the Xcode Preferences | Locations setting to put project output in a project-relative DerivedData directory.

On Linux, I assume the build directory is a sibling of an llvm, llvm/tools/clang, llvm/tools/lldb directory structure and (here’s the not-reasonable default) I assume the build directory is called build-debug. (So, I have a build-debug directory alongside the top-level llvm directory.) Again, all of these can be overridden by environment/make variable settings, and I think the Makefile emits an error if it can’t resolve the directories it uses, saying which variable to override to fix it. But it likely needs some love.

For me, the source/Plugins/Process/Linux/ThreadStateCoordinator.{h/cpp} was my guinea pig. I’m using that to replace some guts in the llgs signal state transition logic that requires putting threads through some ptrace state magic before it should tell lldb about some thread stops. And so I’m testing that all the deferred callbacks (i.e. essentially GCD-like constructs) fire off in the right circumstances. That’s an example of a small piece of code in C++ that I just need to know is working right and so I introduced the gtest framework around it.

Hope that gets you started! Feel free to improve and/or offer suggestions.

For now, we’ve settled on using gtest/unittest as the root of a directory structure that mirrors the source tree 1-1 for testing single classes.

If we want to do collaboration tests (integration tests, etc.), we’re probably into the “should be in python category”, but we might have a few low-level multi-class testing scenarios where we might want a different gtest/functional, gtest/integration or something similar directory structure to handle those. Would be good to have discussion around that if we find a valid use for it.


The other bit to know about it is my first test subject (ThreadStateCoordinator) is entirely not dependent on anything other than standard C++, so I didn’t need to pull in a ton of lldb code to make that work. In all likelihood (e.g. with expression parser) we’ll need to grab something like the lldb-core output or something, so a bit more make logic will need to be worked out.

The do-gtest.py script right now just scans for directories with a Makefile and executes make in each of them, always running a make clean after the make. If it is called with a ‘clean’ command-line arg, it will just do the clean step and skip the make. The script suppresses output from the clean step, but reflects stdout/stderr from the normal make step. If any of the makes fail, do-gtest.py reports the exit failure code from the most recent failing make execution, but continues until all of them have been run.


Does CMake build it?

Not at this point. I went for proof of concept first, and since I primarily develop on MacOSX/Xcode and rsync/remote build to Linux & FreeBSD, starting with cmake would have prevented me from working in my primary environment.

I’m all for making it robust now that I’ve proven out the workflow. Please have at it if you want to get it working with cmake - I think that’s totally valid. And with Dawn doing some work on getting cmake building better on MacOSX (my my team interested in it as well for matrix testing), I think that’d be a great improvement.

At the very least, the existing Makefile-based system has to be extended to be smarter about figuring out where the llvm source, llvm build and lldb source directories are located, and once we need to include the whole shebang of lldb with it, it’ll need to become a bit smarter for that as well.

Net: have at it and make it better if you see ways to do so. We’re definitely going to want to hook it into our Linux build, which will mean cmake as a preference.


Also - I’m fully supportive if it’s useful to add separate modes to the output filtering of go-gtest.py to adjust the output parsing/emitting to match what would be needed to get it to plug into other IDEs as well - I just did what I needed to get it working for Xcode native handling and it does that unconditionally. We could easily add a switch to do-gtest.py to be smarter and do other transformations as needed.

Right now the Xcode integration essentially does the following:

  • Adjusts file paths for failures to be relative to the {lldb-root}/gtest directory, which is the Xcode project base dir. From that, Xcode is able to map the adjusted error-message file names to project files.

  • Coalesces up to the next 3 lines after a failure onto one line, separated by commas. This lets Xcode’s single-line scoop-up behavior for this style of failure display the actual assertion/expectation failure on the in-line error flag within the gtest test file.

If we need more IDE integration modes than that, we can just abstract those transformations and support multiple ones as needed.

Hey Sean!

Thanks for the introduction! It looks like this is definitely in the direction of what I want.

If we want to do collaboration tests (integration tests, etc.), we're probably into the "should be in python category", but we might have a few low-level multi-class testing scenarios where we might want a different gtest/functional, gtest/integration or something similar directory structure to handle those. Would be good to have discussion around that if we find a valid use for it.

One thing I would like to be able to do for the expression parser is unit test in the context of a stopped process.
I’m thinking of scenarios where I’d like to test e.g. the Materializer’s ability to read in variable data and make correct ValueObjects.

One way to achieve this that comes to mind is to have a hook into the unit tests from the Python test suite, so we can set up the program’s state appropriately using our normal Python apparatus and then exercise exactly the functionality we want.

Once we’ve got that kind of hook, we could just run all unit tests right from the Python test suite and avoid having another entry point.

If you want IDE-friendly output, you could have an IDE-level target that runs test/dotest.py but singles out the unit tests.

What do you think?