unit testing C++ code in LLDB

I don’t think the unit tests should depend on the python tests. They should be self-contained. In other words, the unit tests must be useful to someone who is compiling without support for embedded python. I wouldn’t want to have a unit test which is only useful if it’s called from Python which has already done some initial setup. Still, if you want to avoid having another entry point for convenience, you could expose something from the public API that allows you to just say “run all the unit tests”. But there shouldn’t be any setup in the python. All the setup necessary to run a given test should happen in C++.
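Concretely, I’d expect something shaped like the sketch below, where the fixture does all the setup in C++ (the NameLookupCache class is made up purely for illustration, not a real LLDB type):

    #include "gtest/gtest.h"
    #include <map>
    #include <string>

    // Purely illustrative stand-in for some lldb_private class under test.
    class NameLookupCache {
    public:
      void Insert(const std::string &key, const std::string &value) {
        m_map[key] = value;
      }
      const std::string *Lookup(const std::string &key) const {
        auto pos = m_map.find(key);
        return pos == m_map.end() ? nullptr : &pos->second;
      }

    private:
      std::map<std::string, std::string> m_map;
    };

    // All setup happens here, in C++ -- no Python involved.
    class NameLookupCacheTest : public ::testing::Test {
    protected:
      void SetUp() override { m_cache.Insert("main", "_main"); }
      NameLookupCache m_cache;
    };

    TEST_F(NameLookupCacheTest, FindsInsertedName) {
      const std::string *value = m_cache.Lookup("main");
      ASSERT_NE(nullptr, value);
      EXPECT_EQ("_main", *value);
    }

    TEST_F(NameLookupCacheTest, MissingNameReturnsNull) {
      EXPECT_EQ(nullptr, m_cache.Lookup("printf"));
    }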

Zach,

I can live with two entry points – one without the Python dependency, one accessible through Python. As you (and Greg, in the past) suggest, we can have a special public API for running unit tests – probably only in debug builds – and use that API from Python.
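As a straw man (none of this exists today; the names are invented), that entry point could be a thin, debug-only wrapper over gtest, which the SWIG-generated bindings would then expose to Python:

    // Hypothetical sketch: a debug-only entry point that both a standalone
    // runner and the Python bindings could call.  Nothing here exists in
    // LLDB as written.
    #ifndef NDEBUG
    #include "gtest/gtest.h"

    namespace lldb_testing {

    // Run the unit tests whose names match `filter` ("*" runs them all).
    // Returns 0 if every selected test passed, non-zero otherwise.  Note
    // that gtest expects to be initialized only once per process.
    inline int RunUnitTests(const char *filter = "*") {
      int argc = 1;
      char prog[] = "lldb-unittests";
      char *argv[] = {prog, nullptr};
      ::testing::InitGoogleTest(&argc, argv);
      ::testing::GTEST_FLAG(filter) = filter;
      return RUN_ALL_TESTS();
    }

    } // namespace lldb_testing
    #endif // NDEBUG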

I’m not sure that all internal unit tests should do their setup in C++. I think it makes the test more fragile – and wastes a lot of the machinery we already have – to write a bunch of process-control logic in C++ when what I actually want to test is something specific in an unrelated class. LLDB is pretty closely tied to Python – for the test cases I write for the expression parser, I think I’d be willing to mandate that Python be available rather than make setup more challenging.

So that both use cases can coexist, we can just make sure that both the gtest runner and the SB API have the ability to run a subset of the unit tests; the gtest runner runs all those that don’t require external setup, and the SB API can select the tests that need to run with a specific initial setup.

Is that something that gtest would support?

Sean

Is that something that gtest would support?

I think gtest is engineered to support having the test-runner piece kick off in multiple scenarios. I’m using it right now in the mode where gtest itself provides the “main” and kicks off all the test runs, but I suspect we can get our hands dirty, specify which test cases to run, and control the run loop ourselves. If that’s true (which just takes some digging, or more knowledge of gtest than my standard C++-only, use-the-gtest-provided-main-loop approach has required), it seems like something we could get going.
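Something like the following is what I have in mind: a hand-rolled driver instead of linking against gtest_main, with gtest’s filter (the suite name here is illustrative) restricting which cases run:

    #include "gtest/gtest.h"

    int main(int argc, char **argv) {
      // Consumes any --gtest_* flags passed on the command line.
      ::testing::InitGoogleTest(&argc, argv);
      // Programmatic equivalent of --gtest_filter=ExpressionParserTest.*
      // (the suite name is illustrative).
      ::testing::GTEST_FLAG(filter) = "ExpressionParserTest.*";
      return RUN_ALL_TESTS();
    }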

I am in favor of allowing both of those modes, FWIW, if we’re doing more collaboration-style tests, as is indicated by the “let python set it up” idea. For straight unit tests I wouldn’t want to do that if the setup isn’t complicated (canonical example: testing a single class in isolation), but for configuring a set of classes the way lldb is actually going to be using them, definitely so. It’s too easy to misconfigure the state, or to get false positives on tests passing, when the state is hand-configured, and it also makes those setups more brittle to adjust when we really change the way something is configured in lldb proper.

-Todd

I think it diminishes their usefulness if they’re only available to people willing to run them a specific way. The python support on Windows isn’t as rosy as it is on other platforms, and it’s still very difficult to build LLDB with python support on Windows. I might be the only person doing it. I’m trying to improve it, but I don’t see it being in the same place as it is on other platforms for a while.

Even ignoring that though, I think if your test needs to do setup in python, it should just be a regular python test of the public API like everything else. Regardless, the functionality available to you from C++ is a superset of that available to you from python. You can even use the actual public API from C++, which is the same as what you’d be doing in python. If you actually need to piggyback off of lots of already-written python code, then I’m wondering why this particular test is better suited for a gtest. Why not just make it a python test?
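To make that concrete, here’s a sketch; the SB calls in it are the real public API, and only the idea of wrapping them in a gtest is mine:

    #include "gtest/gtest.h"
    #include "lldb/API/SBDebugger.h"

    // Exercises the same public API a python test would, but from C++.
    TEST(SBAPIFromCpp, CreateAndDestroyDebugger) {
      lldb::SBDebugger::Initialize();
      lldb::SBDebugger debugger = lldb::SBDebugger::Create();
      EXPECT_TRUE(debugger.IsValid());
      lldb::SBDebugger::Destroy(debugger);
      lldb::SBDebugger::Terminate();
    }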

Why not just make it a python test?

I think I see the usefulness of it. You really want to test a C++ class at a low level and make sure it’s working right. But the state machine needed to feed it inputs and outputs is complex enough that it would take a lot of code to set that up right. And you want it to always reflect what lldb is doing, not some non-real-world static test environment that can get out of sync with the real lldb code.

-Todd

But we’re not talking about only one or only the other.

As much as possible, I’m going to use gtests only when I want to verify that a class does what I want, typically testing it in isolation from everything else.

If/when I need to deal with some real-world lldb class configuration doing something complex, I might be interested in the python-setup, gtest-test-case approach. I’m not entirely sure how we’d wire that all up, but that’s something we can investigate.

If that’s the case, I would argue that maybe the code needs to be written to be more easily testable. If you can’t test a class without writing a ton of setup code first, then it’s a high level test, not a low level test.

It’s hard to talk in the abstract though, maybe a concrete example would help, so we can see the code being tested, and the setup required.

The answer might be simply that what I’m thinking of isn’t so much a “unit test” as a fancier kind of assertion – one that requires a significant amount of extra computation.
Such an assertion might be enabled by a setting, and then run in situ whenever LLDB is in the right state.
E.g., when we happen to be dematerializing an expression result, run a bunch of extra tests to make sure the variable is in the state we expect it to be in.

Sean

Any chance you could whip up an actual example? Like the setup necessary, the exact expression to test, a few operations that change the state of the expression, and the expected result after each operation?

then it’s a high level test, not a low level test.

Totally. I wouldn’t expect us to be calling that a unit test. That’s why I keep calling it a collaboration test (where multiple live components are being tested together, either as part of setup or actually being tested). Definitely doesn’t qualify as a simple unit test.

The answer might be simply that what I’m thinking of isn’t so much a “unit test” as a fancier kind of assertion…

That’s kinda how I was viewing it.

If that’s the case then I’m leaning even more away from using gtest for this. gtest is just for producing a standalone executable that can be run in isolation to check that your classes behave the way you expect.

How about just adding a setting to LLDB:

settings set verify-expression-dematerialization true

Or, if it’s something you want to explicitly run and say “verify this now for this specific expression”, then make it a subcommand or an option of the “expr” command?
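Shape-wise I’d picture something like this (every name in it is made up; it just shows where a setting-gated check would sit):

    #include <cassert>

    // Hypothetical sketch of an in-situ, setting-gated verification pass.
    struct ExpressionResult {
      bool materialized = false;
    };

    // Stands in for the proposed verify-expression-dematerialization
    // setting; in real LLDB this would come from the settings machinery.
    static bool g_verify_dematerialization = false;

    // The "fancier assertion": extra computation that only runs when the
    // verification setting is on.
    static void VerifyDematerializedResult(const ExpressionResult &result) {
      assert(!result.materialized && "result variable left materialized");
    }

    void Dematerialize(ExpressionResult &result) {
      result.materialized = false; // the normal, always-on work
      if (g_verify_dematerialization)
        VerifyDematerializedResult(result);
    }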

Zach,

That’s what I’m thinking. Then we can turn that checking on for specific expressions in the Python test suite.
If there are simple, class-level tests I can run without any setup, though, I’ll try putting them into gtest.

Sean

BTW, another way to do “gtest”-style things in the lldb Python test suite is to make a python module that you SWIG against the lldb_private APIs, one that is JUST for the internal test suite but which pokes at the internal details of whatever objects you want to poke at. That way you can make APIs that poke at the internals of the system, but still have the convenience of running and analyzing the test results in the context of the larger Python test suite.
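The C++ half of that can stay tiny; all the names below are made up, just to show the shape:

    // Hypothetical sketch of the C++ side of an internal-testing module.
    // A test-only SWIG interface file would wrap it, roughly:
    //   %module lldb_internal_testing
    //   %{ #include "InternalTesting.h" %}
    //   %include "InternalTesting.h"
    namespace lldb_private_testing {

    // Pokes at lldb_private internals and reports whether some invariant
    // holds; the Python side just sees a bool plus a reason string.
    bool VerifyExpressionResultState(const char *expr_name,
                                     const char **reason_out);

    } // namespace lldb_private_testing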

Jim