Currently our folder structure looks like this:

lldbsuite
  test
    dotest.py
    dosep.py
    lldbtest.py
    …
    functionalities
    lang
    expression_command
    … etc.
I’ve been thinking about organizing it like this instead:

lldbsuite
  test
    functionalities
    lang
    expression_command
    …
  engine
    dotest.py
    dosep.py
    lldbtest.py
    …
Anybody have any thoughts on this? Good idea or bad idea?

The main reason I want to do this is that as we start breaking up some of the code, it makes sense to start having some subpackages under the engine folder (or the test folder in our current world). For example, Todd and I have discussed the idea of putting formatter-related stuff under a formatters subpackage.

Also, in the current world there’s no way to differentiate between folders that contain tests and folders that contain test infrastructure, so when we walk the directory tree looking for tests we end up walking a bunch of directories that hold infrastructure code rather than actual tests. So I like the logical separation this provides: having the tests themselves all under a single subpackage.
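To make that concrete: with the split, test discovery could simply walk everything under the test subpackage, with no need to skip infrastructure folders by name. A rough sketch of the idea (a hypothetical helper, not the actual dotest.py discovery code):

    import os

    def iter_test_dirs(suite_root):
        # With the proposed layout, everything under lldbsuite/test is
        # test content, so no infrastructure folders need special-casing.
        for dirpath, dirnames, filenames in os.walk(os.path.join(suite_root, "test")):
            if any(f.startswith("Test") and f.endswith(".py") for f in filenames):
                yield dirpath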
Thoughts?
Sounds like a reasonable thing to do. A couple of tiny remarks:
- when you do the move, you might as well rename dotest into something
else, just to avoid the "which dotest should I run" type of
questions...
- there is nothing that makes it obvious that "engine" is actually a
"test running engine", as it sits in a sibling folder. OTOH,
"test_engine" might be too verbose, and messes up tab completion, so
that might not be a good idea either...
pl
I’m fine with the idea.
FWIW, the test events model will likely shift a bit: it is currently a single sink, and I am likely to turn it into a test event filter chain shortly. Formatters still make sense, as they’ll be the things at the end of the chain.
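In sketch form, the shift is from a single sink to something like the following (illustrative shape only; these class names are made up, not the actual lldbsuite test event classes):

    class TestEventFilter(object):
        # One link in the chain: may transform, drop, or forward an event.
        def __init__(self, next_filter=None):
            self.next_filter = next_filter

        def handle_event(self, event):
            if self.next_filter is not None:
                self.next_filter.handle_event(event)

    class ResultsFormatterSink(TestEventFilter):
        # End of the chain: turns the event stream into reported output.
        def handle_event(self, event):
            print("result: {}".format(event))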
Minor detail: result_formatter.py should be results_formatter.py, since they are ResultsFormatter instances (plural on Results, because a formatter transforms a series of results into coherent reported output). I’ll rename it at some point in the near future, but if you’re shifting a number of things around, you can do that as part of the move.
I’m just about done with the multi-pass running. I expect to get an opt-in version of it running by end of day today, or worst case on Sunday. It would be awesome if you could hold off on any significant change like this until that bit is done, as I’m sure we’d collide, particularly since it hits dosep.py pretty significantly.
Thanks!
-Todd
Yea, I wasn’t planning on doing this today; just throwing the idea out there.
One thing I want to make sure we can do is have a sane way of storing and running tests that exercise the test execution engine itself. Those should not run as part of an “lldb test run”; they’re tests that maintainers of the test system run to make sure we’re not breaking stuff when we touch it.
I would be writing more of those if I had a semi-sane way of doing it. (Part of the reason I broke out the python-based timeout logic the way I did, before the major packaging changes, was so I had an obvious spot to add tests for the process runner logic).
Presumably those tests use an entirely different, hand-rolled test running infrastructure?
It just requires running the test file as a Python script. The runner is fired off like this:

    if __name__ == "__main__":
        unittest.main()

which is typically added to the bottom of each test file so you can call it directly.
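For instance, a minimal self-running infrastructure test file could look like the sketch below (illustrative only: compute_timeout is a made-up stand-in for whatever piece of runner logic is under test, not an actual lldbsuite function). Running the file directly with python then executes just that file’s tests, with no dependency on dotest.py.

    import unittest

    # Hypothetical stand-in for the infrastructure code under test,
    # e.g. the process runner / timeout logic.
    def compute_timeout(base_seconds, multiplier):
        return base_seconds * multiplier

    class TimeoutLogicTestCase(unittest.TestCase):
        def test_timeout_scales_with_multiplier(self):
            self.assertEqual(compute_timeout(10, 3), 30)

    if __name__ == "__main__":
        unittest.main()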
-Todd
The tests end up looking substantially similar to our lldb test suite tests, as they were based on unittest2, which is/was a relative of the unittest that now lives in Python. The docs for unittest in Python 2.x have generally been accurate for the unittest2 lib we use, at least for the areas I use.
Not sure I follow. Are you trying to test the execution engine itself (dotest.py, lldbtest.py, etc.) or are you trying to have another alternative for running individual tests? The

    if __name__ == "__main__":
        unittest.main()

stuff was deleted from all tests a few months ago as part of the package re-organization, and I thought I had general consensus at the time that that was ok to do.

Also, at some point I will probably want to kill unittest2 and move to the upstream unittest. AFAICT we only use unittest2 because it works on 2.6 and unittest doesn’t. But now that we’re ok with saying 2.6 is unsupported, we can in theory move to the upstream unittest.
> Not sure I follow. Are you trying to test the execution engine itself
> (dotest.py, lldbtest.py, etc.)

This. Test lldb’s highly specialized test runner internals.

> or are you trying to have another alternative for running individual tests?

Not this.
> The
>
>     if __name__ == "__main__":
>         unittest.main()
>
> stuff was deleted from all tests a few months ago as part of the package
> re-organization, and I thought I had general consensus at the time that
> that was ok to do.
Totally fine for lldb test suite tests, which is almost everything.
I think we can do this, and I’d like us to do it unless it’s proven to break something we’re not aware of. I think you did some research on this after we last discussed it, but something (maybe in the decorators) didn’t just work. Is that right?
If I remember correctly, it was in the way we had implemented one of the expected-failure decorators. We were manually throwing some kind of exception to indicate an xfail or a skip, and that exception doesn’t exist in the upstream unittest. Basically, we were relying on an implementation detail of unittest2.
Okay. Sounds like something we can work around one way or another, either by introducing the correct exception name for unittest, or introducing our own if we need to do so.
I think the correct fix is to just not throw the exception and do it the right way. unittest doesn’t even use an exception for that anymore; it has some kind of method to tag the test as xfail or skip, whereas our unittest2 uses an exception for the same purpose.
Basically, we need to look at the docs for unittest, and update all of our code to only use methods / classes / etc that are publicly documented.
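For reference, the publicly documented unittest API for skips and expected failures looks like this (stock Python standard library usage, shown as a sketch of the direction to migrate toward, not a patch to our decorators):

    import unittest

    class ExampleTestCase(unittest.TestCase):
        @unittest.skip("documented skip decorator")
        def test_skipped(self):
            pass

        @unittest.expectedFailure
        def test_known_bug(self):
            self.assertEqual(1, 2)  # reported as an expected failure

        def test_conditional_skip(self):
            # Imperative, documented way to skip from inside a test.
            self.skipTest("skipping based on a runtime condition")

    if __name__ == "__main__":
        unittest.main()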