Static Analyzer Rocks Hard

Thanks for the awesome work put into the static analyzer. With it I was able to find a pretty decent number of memory leaks and dead stores that made me think, "why the hell did we think that was a good idea in the first place?"
I've not run it against everything I can, but I am very impressed with it so far.

I have found one false positive in my code that might be a good candidate to ignore, although I might consider changing my code anyway.
I wrote a test case based on some of the ideas behind it (attached).
It can be tested with `scan-build -o . ccc-analyzer -c -o test.o test.m`

Thanks!
-Matthew Jimenez

test.m (1.01 KB)

Thanks for the awesome work put into the static analyzer. With it I was able to find a pretty decent number of memory leaks and dead stores that made me think, "why the hell did we think that was a good idea in the first place?"
I've not run it against everything I can, but I am very impressed with it so far.

Hi Matthew,

It's great to hear that you are already getting some mileage out of the tool. It's still an early-stage tool, and we'll continue to work aggressively on making it better (and catch more bugs!).

I have found one false positive in my code that might be a good candidate to ignore, although I might consider changing my code anyway.
I wrote a test case based on some of the ideas behind it (attached).
It can be tested with `scan-build -o . ccc-analyzer -c -o test.o test.m`

This is an interesting test case. I'm not certain what you thought were the salient points I was supposed to notice, but here are my observations:

1) The branch condition:

         if ([self isAllowed:rid]) {

should always be true because:

   - (BOOL) isAllowed: (id) obj
   {
      return YES;
   }

In my output from scan-build, however, the reported path to the memory leak indicates a case where the false branch is taken (which is not possible).

The analyzer reports this infeasible path because right now the analysis doesn't do any inter-procedural analysis. This is a known limitation. We're planning on implementing inter-procedural analysis in stages; in general, inter-procedural analysis requires a fair amount of boilerplate infrastructure so that we can reason about the definitions of functions/methods across translation units.

I should mention that the reported leak manifests even if you replace the condition with "1" (making the branch unconditional):

    if (1) {

So, while the analyzer does indeed report a false path, the falseness of the path has nothing to do with the warning itself.

2) The memory leak has to do with the following snippet of code:

             if (!sql) {
                 sql = [[NSMutableString alloc] initWithString:@"select * from random_table where rid in (?"];
                 args = [[NSMutableArray alloc] init];
             } else {
                 [sql appendString:@", ?"];
             }
             [args addObject:rid];

What happens is that "sql" is assigned an object (which the analyzer assumes could be NULL if "alloc" failed), and then "args" is assigned an object (which also could be NULL). When we do another iteration of the loop, the new value of "sql" is compared against null:

   if (!sql) { ...

The analyzer, believing that the value could be NULL or a valid object, believes that the true branch is feasible (which it technically is). We then allocate a new value to store to sql, and then allocate a new value to store to args. At this point "args" is overwritten, which the analyzer believes could be a valid object that was allocated on the previous loop iteration. So, technically there is a leak here.

In reality, this leak probably wouldn't happen. If "sql" was assigned a NULL value because "alloc" failed, chances are that the allocation to "args" also failed because memory allocation failures are correlated. Thus at the end of the basic block for the true branch of "if (!sql)" we really should have the following possibilities:

   (a) both sql and args are null (allocation failed first for sql, then args)
   (b) sql is not null, but args is null (allocation succeeded for sql, but failed for args)
   (c) neither sql nor args is null (allocation succeeded in both cases)

If we assume that these are the only possibilities (and not a fourth possibility, where allocation failed for sql but succeeded for args), then there is no leak.

Is this the false positive that you were thinking about? If so, this extra reasoning about correlated allocation failures is something that could be implemented fairly easily in the static analyzer (and I would be happy to do it).

Thanks for the feedback!

Cheers,
Ted

Thanks for the awesome work put into the static analyzer. With it I was able to find a pretty decent number of memory leaks and dead stores that made me think, "why the hell did we think that was a good idea in the first place?"
I've not run it against everything I can, but I am very impressed with it so far.

Hi Matthew,

It's great to hear that you are already getting some mileage out of the tool. It's still an early-stage tool, and we'll continue to work aggressively on making it better (and catch more bugs!).

I have found one false positive in my code that might be a good candidate to ignore, although I might consider changing my code anyway.
I wrote a test case based on some of the ideas behind it (attached).
It can be tested with `scan-build -o . ccc-analyzer -c -o test.o test.m`

This is an interesting test case. I'm not certain what you thought were the salient points I was supposed to notice, but here are my observations:

1) The branch condition:

       if ([self isAllowed:rid]) {

should always be true because:

   - (BOOL) isAllowed: (id) obj
   {
      return YES;
   }

In my output from scan-build, however, the reported path to the memory leak indicates a case where the false branch is taken (which is not possible).

The analyzer reports this infeasible path because right now the analysis doesn't do any inter-procedural analysis. This is a known limitation. We're planning on implementing inter-procedural analysis in stages; in general, inter-procedural analysis requires a fair amount of boilerplate infrastructure so that we can reason about the definitions of functions/methods across translation units.

I should mention that the reported leak manifests even if you replace the condition with "1" (making the branch unconditional):

  if (1) {

So, while the analyzer does indeed report a false path, the falseness of the path has nothing to do with the warning itself.

Ah, sorry about that. This first case was unintended. The thought was to show a reason the array may need filtering.
I probably should have put a comment into isAllowed: indicating it might not always be the case; perhaps the method might be implemented differently in a subclass.

2) The memory leak has to do with the following snippet of code:

           if (!sql) {
               sql = [[NSMutableString alloc] initWithString:@"select * from random_table where rid in (?"];
               args = [[NSMutableArray alloc] init];
           } else {
               [sql appendString:@", ?"];
           }
           [args addObject:rid];

What happens is that "sql" is assigned an object (which the analyzer assumes could be NULL if "alloc" failed), and then "args" is assigned an object (which also could be NULL). When we do another iteration of the loop, the new value of "sql" is compared against null:

if (!sql) { ...

The analyzer, believing that the value could be NULL or a valid object, believes that the true branch is feasible (which it technically is). We then allocate a new value to store to sql, and then allocate a new value to store to args. At this point "args" is overwritten, which the analyzer believes could be a valid object that was allocated on the previous loop iteration. So, technically there is a leak here.

In reality, this leak probably wouldn't happen. If "sql" was assigned a NULL value because "alloc" failed, chances are that the allocation to "args" also failed because memory allocation failures are correlated. Thus at the end of the basic block for the true branch of "if (!sql)" we really should have the following possibilities:

(a) both sql and args are null (allocation failed first for sql, then args)
(b) sql is not null, but args is null (allocation succeeded for sql, but failed for args)
(c) neither sql nor args is null (allocation succeeded in both cases)

If we assume that these are the only possibilities (and not a fourth possibility, where allocation failed for sql but succeeded for args), then there is no leak.

Is this the false positive that you were thinking about? If so, this extra reasoning about correlated allocation failures is something that could be implemented fairly easily in the static analyzer (and I would be happy to do it).

Yes, this was what I was thinking; however, I am unsure how I feel about it.
Obviously any case where the allocation or initialization fails here would cause the code to misbehave, and this is something I had not taken into full consideration.
Asserting the values after allocating them causes the analyzer to not report any problems, but that might be overkill.

Perhaps what I'm more interested in are flags for the analyzer for assumptions it may be allowed to make.
The normal behavior would assume allocation always works, with a flag to assume allocation could fail (or perhaps the reverse).
Although, too many such flags might also be quite bad.

Thoughts?
-Matthew

In reality, this leak probably wouldn't happen. If "sql" was assigned a NULL value because "alloc" failed, chances are that the allocation to "args" also failed because memory allocation failures are correlated. Thus at the end of the basic block for the true branch of "if (!sql)" we really should have the following possibilities:

(a) both sql and args are null (allocation failed first for sql, then args)
(b) sql is not null, but args is null (allocation succeeded for sql, but failed for args)
(c) neither sql or args are null (allocation succeeded in both cases)

If we assume that these are the only possibilities (and not a fourth possibility, where allocation failed for sql but succeeded for args), then there is no leak.

Is this the false positive that you were thinking about? If so, this extra reasoning about correlated allocation failures is something that could be implemented fairly easily in the static analyzer (and I would be happy to do it).

Yes, this was what I was thinking; however, I am unsure how I feel about it.
Obviously any case where the allocation or initialization fails here would cause the code to misbehave, and this is something I had not taken into full consideration.
Asserting the values after allocating them causes the analyzer to not report any problems, but that might be overkill.

Assertions are funny things. If you know that the allocation could never fail then the assertion properly silences the warning. If the allocation *could* fail, it merely turns an error that could be caught with static analysis into a run-time error (i.e., the assertion fires, causing the program to halt).

Perhaps what I'm more interested in are flags for the analyzer for assumptions it may be allowed to make.
The normal behavior would assume allocation always works, with a flag to assume allocation could fail (or perhaps the reverse).
Although, too many such flags might also be quite bad.

Most code is not engineered to handle the case when memory allocation fails. When memory allocation starts to fail, programs tend to do weird things since they don't know how to recover. For such programs, flagged leaks (or any other kind of bug) that occur when memory allocation fails are likely to be deemed false positives.

That said, there is code out there that does care when allocations fail. Such code should be designed to handle and recover from failure. For such programs, the most robust test of the code is to allow any arbitrary allocation to fail. For such code, users can silence particular warnings that they are 100% certain wouldn't fail using assertions (which would result in run-time errors if they actually did occur). Writing code, however, that assumes that any arbitrary allocation can fail is difficult at best.

In order to be pragmatic, the analyzer assumes that an allocation has succeeded *unless* you check for failure (i.e., a pointer is tested for null). This is exactly what happens in the test case; since "sql" is tested against null on every iteration of the loop, the analyzer assumes that the allocation could fail. I believe that your test case, however, intended the "if (!sql)" to really test to see if "sql" had not been assigned a new value after its initial declaration. We could potentially engineer the analyzer to handle such cases. For example, sql is initialized with the value 0, so we can prove that the branch would unconditionally be taken on the first iteration of the loop, and thus we shouldn't treat that particular branch like it was testing for the success of the allocation. Such heuristics go into the realm of black art, which is fine if they are the right thing to do in 99% of the cases, but in this case it may silence more real bugs than intended.

So, I really see two options. We can add a flag indicating that allocations are assumed not to fail, or we just keep the current behavior. Assuming the checker is run using the current behavior, it looks like it is doing exactly the right thing. From my perspective it probably makes the most sense to add an assertion after the allocation to "sql" since that really is the assumption you are making. In this way the code itself documents your assumptions. Alternatively, the test case could be restructured, etc., but an assertion seems like an easy answer and would make your code better documented.

What do you think? We're trying to make the tool as useful as possible. Different people will want different operating models for the tool, but another benefit of the tool is that it makes one think about their assumptions and how their code could be improved, especially since some assumptions are not always correct.

Ted

Yes, this was what I was thinking; however, I am unsure how I feel about it.
Obviously any case where the allocation or initialization fails here would cause the code to misbehave, and this is something I had not taken into full consideration.
Asserting the values after allocating them causes the analyzer to not report any problems, but that might be overkill.

Assertions are funny things.

I think that's my new favorite quote ;-)

If you know that the allocation could never fail then the assertion properly silences the warning. If the allocation *could* fail, it merely turns an error that could be caught with static analysis into a run-time error (i.e., the assertion fires, causing the program to halt).

Perhaps what I'm more interested in are flags for the analyzer for assumptions it may be allowed to make.
The normal behavior would assume allocation always works, with a flag to assume allocation could fail (or perhaps the reverse).
Although, too many such flags might also be quite bad.

Most code is not engineered to handle the case when memory allocation fails. When memory allocation starts to fail, programs tend to do weird things since they don't know how to recover. For such programs, flagged leaks (or any other kind of bug) that occur when memory allocation fails are likely to be deemed false positives.

That said, there is code out there that does care when allocations fail. Such code should be designed to handle and recover from failure. For such programs, the most robust test of the code is to allow any arbitrary allocation to fail. For such code, users can silence particular warnings that they are 100% certain wouldn't fail using assertions (which would result in run-time errors if they actually did occur). Writing code, however, that assumes that any arbitrary allocation can fail is difficult at best.

In order to be pragmatic, the analyzer assumes that an allocation has succeeded *unless* you check for failure (i.e., a pointer is tested for null). This is exactly what happens in the test case; since "sql" is tested against null on every iteration of the loop, the analyzer assumes that the allocation could fail.

This behavior does make sense, but should it really extend past the first use of the variable or after the first pass of a loop? There may be cases where extending it might be a good idea, but I'd imagine those cases to be very rare in code.

I believe that your test case, however, intended the "if (!sql)" to really test to see if "sql" had not been assigned a new value after its initial declaration. We could potentially engineer the analyzer to handle such cases. For example, sql is initialized with the value 0, so we can prove that the branch would unconditionally be taken on the first iteration of the loop, and thus we shouldn't treat that particular branch like it was testing for the success of the allocation. Such heuristics go into the realm of black art, which is fine if they are the right thing to do in 99% of the cases, but in this case it may silence more real bugs than intended.

I'm inclined to agree with the above - it is tricky and probably not needed at this point.
At the moment, I can't think of additional real bugs that could be silenced, but I'm probably being too short-sighted.

So, I really see two options. We can add a flag indicating that allocations are assumed not to fail, or we just keep the current behavior.

After reading the description of the behavior above, I'm thinking a flag would not be a good idea.

Assuming the checker is run using the current behavior, it looks like it is doing exactly the right thing. From my perspective it probably makes the most sense to add an assertion after the allocation to "sql" since that really is the assumption you are making. In this way the code itself documents your assumptions. Alternatively, the test case could be restructured, etc., but an assertion seems like an easy answer and would make your code better documented.

Adding the asserts would probably be the best case in the projects I work on, but there are going to be many cases of this, and the original case was a little confusing at first glance.

What do you think? We're trying to make the tool as useful as possible. Different people will want different operating models for the tool, but another benefit of the tool is that it makes one think about their assumptions and how their code could be improved, especially since some assumptions are not always correct.

It's definitely made me think about the way I code, but it makes me think the best option would be to always assert against all allocations - which would be madness in a large pre-existing code base :-)
Seriously though, my case may still be rare enough outside of the code I work on that it could be successfully ignored. I'll gladly ignore the bug if it comes to that. I'm already ignoring some cases of dead stores and memory leaks that are in code that was not relevant (obvious because the original author named the function "never_called_but_needed"), and I'm not aiming for complete coverage... at least not yet.

Thanks,
-Matthew

Yes, this was what I was thinking; however, I am unsure how I feel
about it.
Obviously any case where the allocation or initialization fails here
would cause the code to misbehave, and this is something I had not
taken into full consideration.
Asserting the values after allocating them causes the analyzer to
not report any problems, but that might be overkill.

Assertions are funny things.

I think that's my new favorite quote ;-)

If you know that the allocation could never fail then the assertion
properly silences the warning. If the allocation *could* fail, it
merely turns an error that could be caught with static analysis into
a run-time error (i.e., the assertion fires, causing the program to
halt).

Perhaps what I'm more interested in are flags for the analyzer for
assumptions it may be allowed to make.
The normal behavior would assume allocation always works, with a flag
to assume allocation could fail (or perhaps the reverse).
Although, too many such flags might also be quite bad.

Most code is not engineered to handle the case when memory
allocation fails. When memory allocation starts to fail, programs
tend to do weird things since they don't know how to recover. For
such programs, flagged leaks (or any other kind of bug) that occur
when memory allocation fails are likely to be deemed false
positives.

That said, there is code out there that does care when allocations
fail. Such code should be designed to handle and recover from
failure. For such programs, the most robust test of the code is to
allow any arbitrary allocation to fail. For such code, users can
silence particular warnings that they are 100% certain wouldn't fail
using assertions (which would result in run-time errors if they
actually did occur). Writing code, however, that assumes that any
arbitrary allocation can fail is difficult at best.

In order to be pragmatic, the analyzer assumes that an allocation
has succeeded *unless* you check for failure (i.e., a pointer is
tested for null). This is exactly what happens in the test case;
since "sql" is tested against null on every iteration of the loop,
the analyzer assumes that the allocation could fail.

This behavior does make sense, but should it really extend past the
first use of the variable or after the first pass of a loop?
There may be cases where extending it might be a good idea, but I'd
imagine those cases to be very rare in code.

I myself am often surprised how logic bugs can cross loop iterations. Programmers sometimes make assumptions that hold on the first loop iteration and not the second.

I believe that your test case, however, intended the "if (!sql)" to
really test to see if "sql" had not been assigned a new value after
its initial declaration. We could potentially engineer the analyzer
to handle such cases. For example, sql is initialized with the
value 0, so we can prove that the branch would unconditionally be
taken on the first iteration of the loop, and thus we shouldn't treat
that particular branch like it was testing for the success of the
allocation. Such heuristics go into the realm of black art, which
is fine if they are the right thing to do in 99% of the cases, but
in this case it may silence more real bugs than intended.

I'm inclined to agree with the above - it is tricky and probably not
needed at this point.
At the moment, I can't think of additional real bugs that could be
silenced, but I'm probably being too short-sighted.

So, I really see two options. We can add a flag indicating that
allocations are assumed not to fail, or we just keep the current
behavior.

After reading the description of the behavior above, I'm thinking a
flag would not be a good idea.

Assuming the checker is run using the current behavior, it looks
like it is doing exactly the right thing. From my perspective it
probably makes the most sense to add an assertion after the
allocation to "sql" since that really is the assumption you are
making. In this way the code itself documents your assumptions.
Alternatively, the test case could be restructured, etc., but an
assertion seems like an easy answer and would make your code better
documented.

Adding the asserts would probably be the best case in the projects I
work on, but there are going to be many cases of this, and the
original case was a little confusing at first glance.

Just a thought: In a way, adding a flag to the static analyzer to assume that all allocations succeed is almost the same as adding an assertion after every allocation. The only difference is that there is no run-time check that an assertion provides. The question is whether or not observing the assertion actually firing would be useful, since it is often obvious that a program is running out of memory. This of course depends on the application and the preferences of the developer.

While I don't like the idea of having a proliferation of options to the checker, which could overly complicate its interface, it does seem to me that "assuming allocations don't fail" would be a very common escape hatch when suppressing false positives for memory leaks and other bugs. It's an assumption that holds 99% of the time (except when it doesn't), in which case most programs are screwed because they aren't written to handle such failure (this isn't really a bug, but a design decision). Having a flag for this to the static analyzer allows one to assert this across an entire project. In many ways it's a personal decision about how software should be written, and whether or not a bug-finding tool should always scold you about it.

Just a thought.

What do you think? We're trying to make the tool as useful as
possible. Different people will want different operating models for
the tool, but another benefit of the tool is that it makes one think
about their assumptions and how their code could be improved,
especially since some assumptions are not always correct.

It's definitely made me think about the way I code, but it would make
me think the best option would be to always assert against all
allocations - which would be madness in a large pre-existing code
base :-)

I think that's awesome if you (and others) want to document your code so thoroughly with assertions. The problem I see is that you may need a lot of these assertions since allocations are frequent. I'm concerned that some programmers may feel that it is necessary to add an assertion after every allocation just to reduce the noise from the checker. That's an extra line of code for every allocation; that's cumbersome for both pre-existing and new code.

I'm curious about the following: if I add an option to have the static analyzer assume that all allocations succeed, what portion of your false warnings disappear? If it is a significant number, I think this option is worth including (or even including it as the default behavior). Would you be interested in running this experiment?

Seriously though, my case may still be rare enough outside of the code
I work on that it could be successfully ignored. I'll gladly ignore
the bug if it comes to that. I'm already ignoring some cases of dead
stores and memory leaks that are in code that was not relevant
(obvious because the original author named the function
"never_called_but_needed"), and I'm not aiming for complete
coverage... at least not yet.

I really appreciate your perspective. As you continue to use the static analyzer (especially as it evolves) I'm very interested to hear any feedback on how the workflow of the tool can be improved to fit into your model of finding and fixing bugs.

Ted

<snip />

This behavior does make sense, but should it really extend past the
first use of the variable or after the first pass of a loop?
There may be cases where extending it might be a good idea, but I'd
imagine those cases to be very rare in code.

I myself am often surprised how logic bugs can cross loop iterations. Programmers sometimes make assumptions that hold on the first loop iteration and not the second.

Ok, fair enough.

<snip />

Adding the asserts would probably be the best case in the projects I
work on, but there are going to be many cases of this, and the
original case was a little confusing at first glance.

Just a thought: In a way, adding a flag to the static analyzer to assume that all allocations succeed is almost the same as adding an assertion after every allocation. The only difference is that there is no run-time check that an assertion provides. The question is whether or not observing the assertion actually firing would be useful, since it is often obvious that a program is running out of memory. This of course depends on the application and the preferences of the developer.

While I don't like the idea of having a proliferation of options to the checker, which could overly complicate its interface, it does seem to me that "assuming allocations don't fail" would be a very common escape hatch when suppressing false positives for memory leaks and other bugs. It's an assumption that holds 99% of the time (except when it doesn't), in which case most programs are screwed because they aren't written to handle such failure (this isn't really a bug, but a design decision). Having a flag for this to the static analyzer allows one to assert this across an entire project. In many ways it's a personal decision about how software should be written, and whether or not a bug-finding tool should always scold you about it.

Just a thought.

It's been my experience that developers assume success without much thought as to the result of failure until they actually witness it. Allocation failure is especially nasty since the chances of reproducing the error are so slim, and Objective-C might be even nastier as nil is a valid receiver. So the need for a tool to check this exists, but for better or worse, I still wouldn't imagine developers to be proactive about handling such situations.

Strangely enough, I have seen many developers be much more paranoid when allocating memory directly rather than through objects.

It's definitely made me think about the way I code, but it would make
me think the best option would be to always assert against all
allocations - which would be madness in a large pre-existing code
base :-)

I think that's awesome if you (and others) want to document your code so thoroughly with assertions. The problem I see is that you may need a lot of these assertions since allocations are frequent. I'm concerned that some programmers may feel that it is necessary to add an assertion after every allocation just to reduce the noise from the checker. That's an extra line of code for every allocation; that's cumbersome for both pre-existing and new code.

I'm curious about the following: if I add an option to have the static analyzer assume that all allocations succeed, what portion of your false warnings disappear? If it is a significant number, I think this option is worth including (or even including it as the default behavior). Would you be interested in running this experiment?

I wish I could give good numbers on this at the moment, but I have only had the chance to check the parts that compile on my Mac so far (Objective-C and Windows only - it's rather silly).
However, from what I did manage to compile, I only saw this once out of about fifty other reported problems. I figure this accounts for about a tenth of the total code, so I might see this ten more times, maybe twenty.

I plan on testing the code base thoroughly at some point in the next month or so, and I am definitely interested in helping if I can.

Seriously though, my case may still be rare enough outside of the code
I work on that it could be successfully ignored. I'll gladly ignore
the bug if it comes to that. I'm already ignoring some cases of dead
stores and memory leaks that are in code that was not relevant
(obvious because the original author named the function
"never_called_but_needed"), and I'm not aiming for complete
coverage... at least not yet.

I really appreciate your perspective. As you continue to use the static analyzer (especially as it evolves) I'm very interested to hear any feedback on how the workflow of the tool can be improved to fit into your model of finding and fixing bugs.

Sure thing.

Thanks,
-Matthew

Matthew Jimenez wrote:

It's been my experience that developers assume success without much
thought as to the result of failure until they actually witness it.
Allocation failure is especially nasty since the chances of
reproducing the error are so slim, and Objective-C might be even
nastier as nil is a valid receiver. So the need for a tool to check
this exists, but for better or worse, I still wouldn't imagine
developers to be proactive about handling such situations.

Strangely enough, I have seen many developers be much more paranoid
when allocating memory directly rather than through objects.
  
Not handling allocation failures can lead to security holes if the
parameter to malloc
can somehow be controlled by an attacker:
it could request a huge amount of memory that malloc would deny -> NULL
pointer.
You don't necessarily have to be genuinely out of memory to get an
allocation failure.

DoS: NULL dereference -> app crash
Remote code execution hole: NULL pointer -> pointer arithmetic involving
another attacker controlled value
    -> valid *attacker controlled* address -> overwrite stack -> shellcode

However, in practice most allocations are unchecked, leading to a huge
number of warnings, so I think it would be best
to have an option to toggle checking for malloc failures.
Or classify them differently so you can filter the errors in the HTML
report:
    null dereference on allocation failure, uninitialized value on
allocation failure, ...
   
Best regards,
--Edwin

It's been my experience that developers assume success without much
thought as to the result of failure until they actually witness it.
Allocation failure is especially nasty since the chances of
reproducing the error are so slim, and Objective-C might be even
nastier as nil is a valid receiver. So the need for a tool to check
this exists, but for better or worse, I still wouldn't imagine
developers to be proactive about handling such situations.

Strangely enough, I have seen many developers be much more paranoid
when allocating memory directly rather than through objects.

Straying widely off topic, and so not sending this to the list, but the reason a lot of people never bother to check the return from malloc() is that, on quite a few operating systems (e.g. Linux, and Darwin in some cases), malloc() will not fail if you run out of memory, only if you run out of virtual address space. If you have no memory, it will create a page table entry with the present bit cleared and return a pointer to it. The first time you try to access it, there will be a page fault and the kernel will try to give you some memory. On Linux, if that fails, it will kill random processes until it can satisfy the allocation. On OS X it will (now) pause the application that asked for the allocation and pop up a dialog box asking you to quit some applications.

Because of this behaviour (which violates the spec, but hey), checking the return value for malloc() does nothing other than slow your code down.

Implementations of malloc and virtual memory are platform specific, and failure can occur under different circumstances.

The only time they will be hit is when you run out of virtual address space, which is a lot more rare than running out of physical memory these days (especially on a 64-bit system, where you typically have 48 bits of virtual address space and under 40 bits of physical address space).

This is not true. For example, the iPhone has no virtual memory. When you run out of memory you actually are running out of memory. Checking for memory allocation failure is especially important on an embedded device, particularly if there is state you want to save.

Further, not all OSes overcommit virtual memory. In such cases, if the available swap is less than the requested amount of virtual address space being allocated, memory allocation will fail (even if you have plenty of available address space for the allocation).

In general, if your interest is in writing portable code, making assumptions about when allocation can fail is dangerous.

Not handling allocation failures can lead to security holes if the
parameter to malloc
can somehow be controlled by an attacker.

I'm inclined to believe that this is a slightly orthogonal issue, although your point is well-taken. The security bug comes from using an untrusted value as the size argument to malloc. While checking for malloc failure might indirectly handle some of these issues, it doesn't really address the original security problem, as the untrusted data could be used to wreak havoc in other ways.

The more complete way to catch these bugs (and potentially verify their absence) is to flag dangerous uses of untrusted data: using it as a size parameter to malloc, using it as an array index, and so on.

My point here is that not checking for malloc failure is not inherently a security bug.

However, in practice most allocations are unchecked, leading to a huge
number of warnings, so I think it would be best
to have an option to toggle checking for malloc failures.
Or classify them differently so you can filter the errors in the HTML
report:
   null dereference on allocation failure, uninitialized value on
allocation failure, ...

I agree. I think it all depends on the particular goals of the programmer for the application they are writing.

The reality is that most code doesn't check to see if allocation fails. In such cases, I don't believe that programmers are interested in seeing warnings from a tool that involve allocation failure. When some warnings systematically become noise to the user, they really do compromise the value of the bug-finding tool.

For those who want to be particularly vigilant about their code (at least as far as memory allocation failure is concerned), there certainly should be an option to either have the analyzer perform more aggressive checking (i.e., assume malloc can fail) or have the interface for inspecting bugs allow users to filter which bug reports they see based on whether or not the bug assumes malloc fails, etc.

Ted Kremenek wrote:

Not handling allocation failures can lead to security holes if the
parameter to malloc
can somehow be controlled by an attacker.

I'm inclined to believe that this is a slightly orthogonal issue,
although your point is well-taken. The security bug comes from using
an untrusted value as the size argument to malloc. While checking for
malloc failure might indirectly handle some of these issues, it
doesn't really address the original security problem, as the untrusted
data could be used to wreak havoc in other ways.
The more complete way to catch these bugs (and potentially verify
their absence) is to flag dangerous uses of untrusted data: using it
as a size parameter to malloc, using it as an array index, and so on.

My point here is that not checking for malloc failure is not
inherently a security bug.

Agreed, it is not a security bug by itself; as you explained, if
malloc can be made to return NULL, there is a security bug elsewhere too.
However, an application that cares about security should check the return
values of system/libc calls.

However, in practice most allocations are unchecked, leading to a huge
number of warnings, so I think it would be best
to have an option to toggle checking for malloc failures.
Or classify them differently so you can filter the errors in the HTML
report:
   null dereference on allocation failure, uninitialized value on
allocation failure, ...

I agree. I think it all depends on the particular goals of the
programmer for the application they are writing.

The reality is that most code doesn't check to see if allocation
fails. In such cases, I don't believe that programmers are interested
in seeing warnings from a tool that involve allocation failure. When
some warnings systematically become noise to the user, they really do
compromise the value of the bug-finding tool.

For those who want to be particularly vigilant about their code (at
least as far as memory allocation failure is concerned), there
certainly should be an option to either have the analyzer perform more
aggressive checking (i.e., assume malloc can fail) or have the
interface for inspecting bugs allow users to filter which bug reports
they see based on whether or not the bug assumes malloc fails, etc.

Exactly :-)

Best regards,
--Edwin

>
>> It's been my experience that developers assume success without much
>> thought as to the result of failure until they actually witness it.
>> Allocation failure is especially nasty since the chances of
>> reproducing the error are so slim, and Objective-C might be even
>> nastier as nil is a valid receiver. So the need for a tool to check
>> this exists, but for better or worse, I still wouldn't imagine
>> developers to be proactive about handling such situations.
>>
>> Strangely enough, I have seen many developers be much more paranoid
>> when allocating memory directly rather than through objects.
>
> Straying widely off topic, and so not sending this to the list, but
> the reason a lot of people never bother to check the return from
> malloc() is that, on quite a few operating systems (e.g. Linux, and
> Darwin in some cases), malloc() will not fail if you run out of
> memory, only if you run out of virtual address space. If you have
> no memory, it will create a page table entry with the present bit
> cleared and return a pointer to it. The first time you try to
> access it, there will be a page fault and the kernel will try to
> give you some memory. On Linux, if that fails, it will kill
> random processes until it can satisfy the allocation. On OS X it
> will (now) pause the application that asked for the allocation and
> pop up a dialog box asking you to quit some applications.
>
> Because of this behaviour (which violates the spec, but hey),
> checking the return value for malloc() does nothing other than slow
> your code down.

Implementations of malloc and virtual memory are platform specific,
and failure can occur under different circumstances.

> The only time they will be hit is when you run out of virtual
> address space, which is a lot more rare than running out of physical
> memory these days (especially on a 64-bit system, where you
> typically have 48 bits of virtual address space and under 40 bits of
> physical address space).

This is not true. For example, the iPhone has no virtual memory.
When you run out of memory you actually are running out of memory.
Checking for memory allocation failure is especially important on an
embedded device, particularly if there is state you want to save.

It's also the case that kernels often don't use swappable memory and thus
know how much they have. Even many 64-bit kernels limit the amount of
available kernel virtual address space. Being able to check that failures
are handled will be useful to FreeBSD once we get to the
point that we can run the static analysis tools on the kernel.

-- Brooks

The more complete way to catch these bugs (and potentially
verify their absence) is to flag dangerous uses of untrusted
data: using it as a size parameter to malloc, using it as an
array index, and so on.

It would be cool if, e.g. at the checker level, a variable or
memory object could have something like the Perl "taint" bit.

http://www.webreference.com/programming/perl/taint/

In Perl, you untaint via a regexp. In the checker, you might untaint
by checking a variable, e.g. against upper/lower bounds (signed) or
an upper bound only (unsigned variable).

If you then pass the tainted variable to a system function (how do
we define this?), you could get a taint warning from the
checker.

This indeed would be a useful check, and it is something I would like to have implemented one day as part of the static analyzer.

There has been a variety of work on doing taint analysis on C programs, and there are different kinds of taint properties to check. The kind of checking you mentioned has been done before for C (in a research tool) and was demonstrated to be very useful:

   Using Programmer-Written Compiler Extensions to Catch Security Holes
   http://www.stanford.edu/~engler/sp-ieee-02.pdf

Another kind of "taint property" is tracking the use of kernel/user pointers in kernel space; this is more of an address-space qualifier problem, but it can also be viewed as a form of taint propagation.

There have been a variety of proposals of how to define sources of tainted data, and what sinks (functions) cannot take tainted data. One standard approach is to use annotations on function prototypes, which we could do in the form of attributes. This approach has actually been used in the Linux kernel to annotate user vs. kernel pointers. Of course, simply having an external list of well-known sources of tainted data that could be fed to the static analyzer would also be useful.

Eventually, once a framework for doing inter-procedural analysis is in place in clang, we could potentially relax taint attributes across procedure boundaries. A good example of this is MECA (another research tool):

   MECA: an Extensible, Expressive System and Language for Statically Checking Security Properties
   http://www.stanford.edu/~engler/ccs03-meca.pdf

There are of course many examples of other systems that do taint propagation (with potentially more analysis sophistication), but these are a couple of good examples.

AFAIK this is done e.g. by the sparse tool when you compile Linux
with "make C=1".

In include/linux/compiler.h there are all the things defined that
sparse supports:

#ifdef __CHECKER__
# define __user __attribute__((noderef, address_space(1)))
# define __kernel /* default address space */
# define __safe __attribute__((safe))
# define __force __attribute__((force))
# define __nocast __attribute__((nocast))
# define __iomem __attribute__((noderef, address_space(2)))
# define __acquires(x) __attribute__((context(x,0,1)))
# define __releases(x) __attribute__((context(x,1,0)))
# define __acquire(x) __context__(x,1)
# define __release(x) __context__(x,-1)
# define __cond_lock(x,c) ((c) ? ({ __acquire(x); 1; }) : 0)
#else
# define __user
# define __kernel
# define __safe
# define __force
# define __nocast
# define __iomem
# define __acquires(x)
# define __releases(x)
# define __acquire(x) (void)0
# define __release(x) (void)0
# define __cond_lock(x,c) (c)
#endif

Yep, Sparse has been a great tool for the Linux kernel folks. It doesn't have a full-fledged C parser/semantic analyzer, so it would be interesting to see what would happen if the same checks were implemented in Clang (more code coverage? more bugs caught?). These checks could be implemented as an ASTConsumer in Clang, potentially built in the Analysis library. Adding Sparse's annotation support to clang would also not be difficult.

Microsoft also implemented some great annotations for doing modular buffer overflow checking:

   http://msdn.microsoft.com/en-us/library/ms235402(VS.80).aspx
   http://blogs.msdn.com/michael_howard/archive/2006/05/19/602077.aspx

I would love to see something like SAL (and other great, well-scoped use of annotations) implemented in Clang as well.