Process spawning and shutdown

The last major piece of the Host layer I’d like to address is process launching and cleanup. From looking over the code it seems we have the following different ways to launch a process:

MacOSX:

  • AppleScript
  • XPC
  • posix_spawn

Other posix variants:

  • posix_spawn

Windows:

  • Native Windows launcher

Among these, there are a couple of different ways to reap processes on exit and/or “join” on them. These are:

MacOSX:

  • AppleScript [ No process reaping or monitoring occurs. ]
  • XPC [ Uses MacOSX-specific dispatch library for monitoring ]
  • posix_spawn [ Uses MacOSX-specific dispatch library for monitoring ]

Other posix variants:

  • posix_spawn [ Launches a background thread to monitor for process exit, join on the thread to join on the process ]

Windows:

  • Native Windows launcher [ WaitForSingleObject ]

A few questions:

  1. Is Join() on a process a useful operation that people would be interested in seeing implemented for all platforms?

  2. On Linux at least, if you don’t waitpid() on a process you’ll end up with zombies. It seems this is true on MacOSX as well, because I see waitpid() in StartMonitoringChildProcess. Is this not true for the AppleScript launcher? Why doesn’t the AppleScript launching code path call StartMonitoringChildProcess() anywhere?

  3. Speaking of the AppleScript launcher, what is it and what is it used for?

I’ll probably have more questions as I wrap my head around this a little more. Thanks

Don’t forget we also have launching via Platform classes, which can end up kicking off a gdb-remote request to a lldb-platform or gdb-remote stub (llgs/debugserver).

Also, ProcessGDBRemote is capable of kicking those off if it’s using gdb-remote locally.

So, if nothing else, there’s the concept of a “launch via another mechanism.”

Yeah, I saw that as well. For now, the abstraction I’m creating deals only with local processes. However, it has an interface that would in theory allow a RemoteProcess to derive from it, so that local and remote processes could be managed transparently.

Just a follow-up. It seems like there are three use cases for the StartMonitoringChildProcess code path (poorly named, since the FreeBSD and Linux process plugins also have a ProcessMonitor class, leading to great confusion).

  1. Some places want to Join() on a process exiting. They currently do this by calling Join() on a HostThread returned by StartMonitoringChildProcess, but the important thing is just that they want to join, not how it’s implemented.

  2. Processes need to be reaped when they exit so as not to leave zombie processes.

  3. Some places want to have a user-specified callback executed asynchronously in an arbitrary thread context when a process exits.

Does this cover everything? There are currently a lot of built-in assumptions about which platforms want what subset of the above functionality, but I think I’m starting to get a pretty good handle on this code and have a good idea of how to restructure it to be more platform-agnostic. I want to make sure I understand all the primary use cases first, though.

For now, the abstraction I’m creating deals only with local processes.

For MacOSX, the local process uses debugserver for debugging (i.e. it is always remote w/r/t process startup — lldb will launch and debugserver will attach).

Linux is moving this way as well once we (1) get llgs passing the local test suite in place of ProcessLinux/ProcessMonitor, and (2) get llgs supporting the existing set of ProcessLinux/ProcessMonitor CPU architectures. Until then it will be behind a PlatformLinux switch that defaults to current TOT behavior. (See http://github.com/tfiala/lldb, dev-llgs-local-launch branch, for current state.)

Not sure that affects anything but wanted to be clear on it.

Sure, but debugserver still has to launch processes locally w/r/t itself, so this code would just be used there instead of in lldb. Also, it’s not clear what the timeline is and waiting would just be lost time, so even if some of what I do now has to be redone in the debugserver world, it’s still better than being blocked now and not being able to do anything :)

I’d like to add another use case. On Hexagon, we talk to a simulator using gdb-remote. We hacked gdb to have “run” launch and connect to the simulator, and “start” do a “run”, then break at main and continue. I’ve implemented this in our lldb solution using python.

I’d like to be able to have “run” be able to launch a supplied executable, then connect to it via gdb-remote (or another protocol).

Just a follow-up. It seems like there are three use cases for the StartMonitoringChildProcess code path (poorly named, since the FreeBSD and Linux process plugins also have a ProcessMonitor class, leading to great confusion).

1) Some places want to Join() on a process exiting. They currently do this by calling Join() on a HostThread returned by StartMonitoringChildProcess, but the important thing is just that they want to join, not how it's implemented.

Join is not the right word. Reap() is the correct word. Join is just seen because you might spawn a thread whose sole purpose in life is to reap a child process. When you launch a shell command, you want to spawn the process and wait for it to get reaped. If you debug a process, you need to reap your child process if you launch it, but you won't ever call join on a thread that is waiting for it.

2) Processes need to be reaped when they exit so as not to leave zombie processes.

Yes.

3) Some places want to have a user-specified callback executed asynchronously in an arbitrary thread context when a process exits.

Yes. This is true for processes we debug. When we launch llgs or debugserver we want to know if the process ever dies so we can kill our debug session in case it does. These are both user-specified callbacks.

Does this cover everything? There are currently a lot of built-in assumptions about which platforms want what subset of the above functionality, but I think I'm starting to get a pretty good handle on this code and have a good idea of how to restructure it to be more platform-agnostic. I want to make sure I understand all the primary use cases first, though.

We need to:
1 - always reap child processes we spawn
2 - allow a specific function to optionally get called when a process does get reaped so the exit status of the process can be passed along

The AppleScript launcher exists so we can implement:

(lldb) process launch --tty -- /bin/ls ....

The "--tty" option launches the process in its own terminal window. The only way to do this right now is via AppleScript on MacOSX. Since it runs in a terminal, it will get reaped by the terminal when it exits, thus the "no need to reap these processes".

We currently launch processes via the host layer:

    static Error
    LaunchProcess (ProcessLaunchInfo &launch_info);

The launch_info contains the ability to provide a reap callback with SetMonitorProcessCallback().

Launching might allow for a specified launch method (fork/exec, posix_spawn, LaunchServices (MacOSX), backboard (iOS-specific), Windows launching, etc.), so the launch methods should be pluggable, and a default method should be used on a host when no options are specified.

I would really like to see Host::LaunchProcess fixed for Windows, and the reaping fixed to just work for Windows. I would like to avoid large changes that aren't needed. All of the AppleScript stuff and other details are fine to stay hidden within the MacOSX-specific version of Host::LaunchProcess() as long as the contents of the ProcessLaunchInfo are obeyed.

Greg

I know Join isn’t really the right word, but there’s no concept of reaping a process on Windows. My understanding is that on posix, “reaping” specifically refers to ensuring that zombie processes don’t linger and waste system resources which is a logically different operation than “wait for this process to exit”, even though certain operations that reap also wait for the process to exit as a means to an end. So I’m using join for lack of a better term to refer specifically to “wait for this process to exit”.

Sorry, hit send too soon.

Exposing native OS primitive types like lldb::thread_t, lldb::process_t, and raw file descriptors to generic code means that people will use them in ways that aren’t actually generic, and this can already be seen in many places. The biggest example is currently llgs, which has posix-y stuff all over and which will be quite difficult to port to Windows as a result, if and when we get there. There are other examples too though. Platform-specific ideas have made it into the public API, such as SBHostOS::ThreadDetach, and are used in other places as well, such as a reliance on TLS destructors (Timer) and thread cancellation. There’s also select() being used on file descriptors, and probably many things I haven’t even found yet.

I can go fix all of these things on a case-by-case basis, and originally that was my strategy. But as I found more and more examples of it, I started thinking that I want to prevent this type of code from showing up in the future. I searched for the thread where Jim (apologies if I’m misquoting or misremembering) said that when the Host layer was originally written there was not sufficient time to sit down and design something future-proof, and that you guys just had to get it working, but I was unable to find it. What I was (and have been) trying to accomplish is exactly that. Obviously, such large changes are not without risk, and can create headaches and introduce bugs, although I think that once the bugs are resolved the entire codebase and all platforms will benefit from improved code health as a result.

Ultimately if you don’t think these type of changes add value, or you don’t think it’s an improvement, then that’s that. It’s much easier for me to write code just for Windows and not have to worry about getting stuff working on 3 different platforms that I have varying levels of familiarity with. I don’t think it’s necessarily easier in the long term though, as there will still be no clear separation between generic and platform specific code, and new things will continue to turn up where an assumption was made that something was generic when it wasn’t.

I’d like to be able to have “run” be able to launch a supplied executable, then connect to it via gdb-remote (or another protocol).

That sounds a lot like running an lldb-platform for a platform, and having the lldb-platform launch the stub. That scenario sounds like it would map well to the ‘platform select remote-{something}’, ‘platform connect connect://{remote-address}:{remote-port}’ paradigm?

The biggest example is currently llgs, which has posix-y stuff all over and which will be quite difficult to port to Windows as a result

I think at least some of that issue is going to revolve around its adherence to the gdb-remote RSP protocol. That has a number of elements in it (particularly stop notifications) that are inherently POSIX-focused.

-Todd

Elaborating a bit more (and brainstorming at the same time):

I think if Windows wants to consider using llgs for remote debugging a Windows target, it would be valuable to look at the gdb-remote protocol (+ our extensions) and figure out how that might map to Windows. My guess is you may find the need to add extra protocol messages (implementation aside) to cover all the concepts you want to cover. That kind of higher-level analysis might help you figure out which bits you would want to flow through existing messages (possibly with a POSIX-y flavor) and which would need new messages and code. That might give you a better idea of how you’d attack the implementation side of it.

Yes, except if the target is set up to use a platform, “run” should launch the platform with some supplied args, then connect to it. Perhaps have platform args and target args; launch the platform with platform args, and pass target args to it to use when launching the inferior.

+Jim, since he looked at my previous HostThread patch and some others, and may have additional context.

Just to be explicit, are you saying that I should not refactor any of this code, or that it’s fine as long as the Apple stuff remains functionally equivalent?

Zach

+Jim, since he looked at my previous HostThread patch and some others, and may have additional context.

Just to be explicit, are you saying that I should not refactor any of this code, or that it's fine as long as the Apple stuff remains functionally equivalent?

I was out of the office during the last patch for the threads. I know Jim did work with you on that one. A few things from the last patch made me worry about a Process equivalent:

class HostThread
{
  public:
    HostThread();
    HostThread(lldb::thread_t thread);

    Error Join(lldb::thread_result_t *result);
    Error Cancel();
    void Reset();
    lldb::thread_t Release();

    void SetState(ThreadState state);
    ThreadState GetState() const;
    HostNativeThreadBase &GetNativeThread();
    const HostNativeThreadBase &GetNativeThread() const;
    lldb::thread_result_t GetResult() const;

    bool EqualsThread(lldb::thread_t thread) const;

  private:
    std::shared_ptr<HostNativeThreadBase> m_native_thread;
};

Why is there a separate HostNativeThreadBase class? Why not just have each host implement the calls required in HostThread in its own respective HostThread.cpp file? Now we just indirect all calls through another class when there really isn't a valid reason to do so IMHO.

Why is there a SetState() and GetState() here? We set this to running when the thread handle is valid, even though it has no bearing on whether the thread is actually running or suspended. Can we remove this?

Everything else is ok.

The fallout from this is that code that used to just check whether a thread was valid now does:

bool
Communication::JoinReadThread (Error *error_ptr)
{
    if (m_read_thread.GetState() != eThreadStateRunning)
        return true;

    Error error = m_read_thread.Join(nullptr);
    m_read_thread.Reset();
    return error.Success();
}

Since this is now a class, we should probably just do:

bool
Communication::JoinReadThread (Error *error_ptr)
{
    Error error = m_read_thread.Join(nullptr);
    m_read_thread.Reset();
    return error.Success();
}

And have join return success if the thread handle wasn't valid. The other reason I don't like the state in the HostThread is that it seems to indicate that the thread is actually running. It could have already exited, but if you ask the HostThread its state it will tell you "running" when it really isn't. Tracking when a thread goes away can get tricky, so it would be hard to keep this state up to date. Can we change it back to a "bool IsValid() const" that returns true if the thread handle is valid? I am not sure if you were thinking of adding more to this class. But this class isn't really designed to represent a thread from another process, so it isn't very useful in that respect. I seem to remember Jim saying that you thought this class might be reused for threads in other processes, and then backed off to your design, so there might just be stuff left over?

This is the main reason for my concern with the next patch.

Greg

>
> +Jim, since he looked at my previous HostThread patch and some others,
and may have additional context.
>
> Just to be explicit, are you saying that I should not refactor any of
this code, or that it's fine as long as the Apple stuff remains
functionally equivalent?

I was out of the office during the last patch for the threads. I know
Jim did work with you on that one. A few things from the last patch that
made me worry about a Process equivalent:

class HostThread
{
  public:
    HostThread();
    HostThread(lldb::thread_t thread);

    Error Join(lldb::thread_result_t *result);
    Error Cancel();
    void Reset();
    lldb::thread_t Release();

    void SetState(ThreadState state);
    ThreadState GetState() const;
    HostNativeThreadBase &GetNativeThread();
    const HostNativeThreadBase &GetNativeThread() const;
    lldb::thread_result_t GetResult() const;

    bool EqualsThread(lldb::thread_t thread) const;

  private:
    std::shared_ptr<HostNativeThreadBase> m_native_thread;
};

Why is there a separate HostNativeThreadBase class? Why not just have
each host implement the calls required in HostThread in their own
respective HostThread.cpp class? Now we just indirect all calls through
another class when there really isn't a valid reason to do so IMHO.

Sorry, long story incoming.

Something like that was actually my first choice, and how I originally
implemented it. There was HostThreadWindows, HostThreadPosix,
HostThreadMac, etc. And then HostThread was typedefed to the appropriate
implementation. We actually debated at length about whether it was the
best approach, because people didn't like the typedef. Two alternative
approaches were:

1) Use inheritance, make HostThreadBase provide only the generic
operations, derived implementations can provide additional methods if they
like
2) Make a single HostThread.h file with the interface, make multiple
HostThread.cpp files each which implement the methods in HostThread.h
differently.

I argued against 1 because I think it leads to some ugly code (for example,
copy becomes clone, requires storing by pointer, requires null checking,
requires factory-based creation, etc). We never really discussed 2, but I
think the same arguments that people had against my original option with
the typedef would have applied to #2 as well.

But it probably helps to take a step back. From a high level perspective
and ignoring any implementation details, what I'm trying to accomplish with
this and other refactors is to create a strong separation between generic
code and platform specific code. I want it to be *actually difficult* to
call into platform-specific code from generic code. With the current
design of Host, it's extremely easy. If you need a method, you add it to
Host, and do something like this:

#if defined(MY_PLATFORM)
void Host::MyMethod()
{
    // Implementation
}
#else
void Host::MyMethod()
{
    // Nothing
}
#endif

This creates a problem in that a platform-specific operation is exposed
through a supposedly generic interface.

The main argument against my original implementation with the typedef was
that while it solves the problem somewhat (If you have
HostThreadMacOSX::FooBar() but don't have HostThreadWindows::FooBar(), then
writing HostThread::FooBar() on MacOSX will compile locally but break the
buildbots, instead of silently passing and potentially introducing a bug in
other platforms where the method was stubbed out), it could be made better
by just never, for any platform, exposing a method that is not generic
enough to actually work on every platform. This was the basis for Jim
arguing for #1, and have everything use the base interface. This way it
becomes very difficult to call a platform specific method. You have to
include a platform specific header and cast to a platform specific type.

We arrived at the existing solution as sort of a compromise. The syntax is
still nice, it's easily copyable just like an lldb::thread_t, and it
doesn't expose any method unless that method can be implemented on every
platform.

Why is there a SetState() and GetState() here? We set this to running when
the thread handle is valid, even though it has no bearing on if the thread
is actually running or suspended? Can we remove this?

I can probably remove it. I provided it because it is possible to
construct a HostThread with an arbitrary lldb::thread_t, and while in most
cases it's safe to assume that the initial state is running, you could do
some platform specific stuff with the handle before constructing it, and
the class would be confused about the state.

Everything else is ok.

The fallout from this is that code that used to just check if a thread
was valid now does:

bool
Communication::JoinReadThread (Error *error_ptr)
{
    if (m_read_thread.GetState() != eThreadStateRunning)
        return true;

    Error error = m_read_thread.Join(nullptr);
    m_read_thread.Reset();
    return error.Success();
}

Since this is now a class, we should probably just do:

bool
Communication::JoinReadThread (Error *error_ptr)
{
    Error error = m_read_thread.Join(nullptr);
    m_read_thread.Reset();
    return error.Success();
}

And have join return success if the thread handle wasn't valid. The other
reason I don't like the state in the HostThread is that it seems to
indicate that this thread is actually running. It could have already exited,
but if you ask the HostThread its state it will tell you "running" when it
really isn't. Tracking when a thread goes away can get tricky so it would
be hard to keep this state up to date. Can we change it back to a "bool
IsValid() const" that returns true if the thread handle is valid?

Thinking about this, I kind of agree with you. In fixing up the code to
use HostThread, I actually didn't notice anywhere where we care about
anything other than whether or not the thread is running. So the other
states don't seem to be that useful in practice. But is IsValid() really
what we want to check? We really want to check if the thread is running.
We own the thread routine because it's the ThreadCreateTrampoline. Can't
we detect this by just having the HostThread look at the value of a shared
variable?

I am not sure if you were thinking of adding more to this class. But this
class isn't really designed to represent a thread from another process, so
it isn't very useful in that respect. I seem to remember Jim saying that
you thought that this class might be reused for threads
in other processes, and then backed off to your design, so there might
just be stuff left over?

I was convinced the other way on that point, and now think that this class
should not be used to represent a thread running in another process. The
requirements and interface are too different.

This is the main reason for my concern with the next patch.

After we reach an agreement about the HostThread patch, I can go into more
detail about my plan for HostProcess and how it would be integrated.

Thanks!
Zach

>
> +Jim, since he looked at my previous HostThread patch and some others, and may have additional context.
>
> Just to be explicit, are you saying that I should not refactor any of this code, or that it's fine as long as the Apple stuff remains functionally equivalent?

I was out of the office during the last patch for the threads. I know Jim did work with you on that one. A few things from the last patch that made me worry about a Process equivalent:

class HostThread
{
  public:
    HostThread();
    HostThread(lldb::thread_t thread);

    Error Join(lldb::thread_result_t *result);
    Error Cancel();
    void Reset();
    lldb::thread_t Release();

    void SetState(ThreadState state);
    ThreadState GetState() const;
    HostNativeThreadBase &GetNativeThread();
    const HostNativeThreadBase &GetNativeThread() const;
    lldb::thread_result_t GetResult() const;

    bool EqualsThread(lldb::thread_t thread) const;

  private:
    std::shared_ptr<HostNativeThreadBase> m_native_thread;
};

Why is there a separate HostNativeThreadBase class? Why not just have each host implement the calls required in HostThread in its own respective HostThread.cpp file? Now we just indirect all calls through another class when there really isn't a valid reason to do so IMHO.

Sorry, long story incoming.

Something like that was actually my first choice, and how I originally implemented it. There was HostThreadWindows, HostThreadPosix, HostThreadMac, etc. And then HostThread was typedefed to the appropriate implementation. We actually debated at length about whether it was the best approach, because people didn't like the typedef. Two alternative approaches were:

1) Use inheritance, make HostThreadBase provide only the generic operations, derived implementations can provide additional methods if they like
2) Make a single HostThread.h file with the interface, make multiple HostThread.cpp files each which implement the methods in HostThread.h differently.

I argued against 1 because I think it leads to some ugly code (for example, copy becomes clone, requires storing by pointer, requires null checking, requires factory-based creation, etc). We never really discussed 2, but I think the same arguments that people had against my original option with the typedef would have applied to #2 as well.

I can see your point. If we think about option 2, where we use a different HostThread.cpp for each platform and only one is compiled on each host, this would require that all typedefs for lldb::host::thread_t and such be done correctly; each platform would then be required to use only those typedefs, and all systems would need to use the same instance variables. This may or may not work (MacOSX has 3 kinds of thread IDs we could actually cache for a thread). So the current approach allows for this and is probably the cleanest. I withdraw my reservations about the current approach.

But it probably helps to take a step back. From a high level perspective and ignoring any implementation details, what I'm trying to accomplish with this and other refactors is to create a strong separation between generic code and platform specific code. I want it to be actually difficult to call into platform-specific code from generic code. With the current design of Host, it's extremely easy. If you need a method, you add it to Host, and do something like this:

#if defined(MY_PLATFORM)
void Host::MyMethod()
{
    // Implementation
}
#else
void Host::MyMethod()
{
    // Nothing
}
#endif

This creates a problem in that a platform-specific operation is exposed through a supposedly generic interface.

The main argument against my original implementation with the typedef was that while it solves the problem somewhat (If you have HostThreadMacOSX::FooBar() but don't have HostThreadWindows::FooBar(), then writing HostThread::FooBar() on MacOSX will compile locally but break the buildbots, instead of silently passing and potentially introducing a bug in other platforms where the method was stubbed out), it could be made better by just never, for any platform, exposing a method that is not generic enough to actually work on every platform. This was the basis for Jim arguing for #1, and have everything use the base interface. This way it becomes very difficult to call a platform specific method. You have to include a platform specific header and cast to a platform specific type.

We arrived at the existing solution as sort of a compromise. The syntax is still nice, it's easily copyable just like an lldb::thread_t, and it doesn't expose any method unless that method can be implemented on every platform.

I agree and understand the logic and organization now. Thanks for the info and background.

Why is there a SetState() and GetState() here? We set this to running when the thread handle is valid, even though it has no bearing on whether the thread is actually running or suspended. Can we remove this?
I can probably remove it. I provided it because it is possible to construct a HostThread with an arbitrary lldb::thread_t, and while in most cases it's safe to assume that the initial state is running, you could do some platform specific stuff with the handle before constructing it, and the class would be confused about the state.

I would like to see the state removed and have some accessors that can determine if a thread handle is valid. Maybe removing Get/SetState() and replacing with "bool IsValid() const" or "bool HandleIsValid() const" or adding a test operator to class (operator bool() const).

Everything else is ok.

The fallout from this is that code that used to just check if a thread was valid now does:

bool
Communication::JoinReadThread (Error *error_ptr)
{
    if (m_read_thread.GetState() != eThreadStateRunning)
        return true;

    Error error = m_read_thread.Join(nullptr);
    m_read_thread.Reset();
    return error.Success();
}

Since this is now a class, we should probably just do:

bool
Communication::JoinReadThread (Error *error_ptr)
{
    Error error = m_read_thread.Join(nullptr);
    m_read_thread.Reset();
    return error.Success();
}

And have join return success if the thread handle wasn't valid. The other reason I don't like the state in the HostThread is that it seems to indicate that the thread is actually running. It could have already exited, but if you ask the HostThread its state it will tell you "running" when it really isn't. Tracking when a thread goes away can get tricky, so it would be hard to keep this state up to date. Can we change it back to a "bool IsValid() const" that returns true if the thread handle is valid?

Thinking about this, I kind of agree with you. In fixing up the code to use HostThread, I actually didn't notice anywhere where we care about anything other than whether or not the thread is running. So the other states don't seem to be that useful in practice. But is IsValid() really what we want to check? We really want to check if the thread is running. We own the thread routine because it's the ThreadCreateTrampoline. Can't we detect this by just having the HostThread look at the value of a shared variable?

We can add support to HostThread for threads we start via the HostThread interface. For others, we might still need to be able to interact with the thread, but it might not have the abilities that threads we launched do. If this is the case we might want:

bool HandleIsValid() const;

and:

bool ThreadIsRunning() const;

We might not be able to tell if a thread is running if we didn't launch the thread through the host layer though, so I am not sure if we would return true or false for that...

I am not sure if you were thinking of adding more to this class. But this class isn't really designed to represent a thread from another process, so it isn't very useful in that respect. I seem to remember Jim saying that you thought this class might be reused for threads in other processes, and then backed off to your design, so there might just be stuff left over?

I was convinced the other way on that point, and now think that this class should not be used to represent a thread running in another process. The requirements and interface are too different.

Agreed.

This is the main reason for my concern with the next patch.

After we reach an agreement about the HostThread patch, I can go into more detail about my plan for HostProcess and how it would be integrated.

Sounds good. I propose:

1 - remove Get/SetState()
2 - add "bool ThreadIsRunning()" to the interface and replace previous uses of "host_thread.GetState() == eThreadStateRunning" or "host_thread.GetState() == eThreadStateStopped" with the new call where needed.
3 - add "bool HandleIsValid() const;" only if needed by someone that is checking if a thread handle is valid where they don't care if the thread is running, or if we have cases where we give a thread handle to a HostThread class but didn't launch it...

Let me know what you think. Thanks for the response and let me know when you have an approach for HostProcess.

Greg

I almost think we shouldn't support threads that we didn't create
ourselves. This models the way std::thread behaves, and also covers, I
believe, 100% of LLDB's existing use cases (I can't speak for any
out-of-tree code you might have, though, so let me know if that
assumption is wrong). The only exception is operations on
pthread_self(), which are covered by a separate class called ThisThread,
submitted as part of the same refactor.