Static Code Analysis: Inherently Labour-Intensive, Little Gain


I was reading back through the Symas Blog, where they respond to Veracode in “To Veracode: Thanks, but no thanks…”. The quick summary is that the static code analysis tool lists false-positives that it should know are not flaws, and lists them multiple times to inflate its error count.

I would add that these tools not only add labour once, but incur considerable additional effort to re-check after any code maintenance.

In the past, I used a variant of Klocwork. Apparently our version needed a lot of customization to be usable, since we had a guy dedicated to Klocwork: 0.14% of the product’s staffing was drained solely by Klocwork. But that’s not the big drain; I’m actually a fan of one guy making a tool easier so that it saves more than 0.15% of every developer’s time.

Developers edit code. That code editing often inserts or removes lines.

Klocwork accepts markers for “yeah, it looks like a flaw, but we know it’s OK, we checked”. This is done by marking the line to ignore in a cross-reference list. I’m told this isn’t actually a feature of Klocwork but a bolt-on we built; I mention it here just in case.
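
For illustration only, suppose the bolt-on kept a flat file keyed by file and line (this format is my invention, not Klocwork’s); it might have looked something like:

    # klocwork-exceptions.list (hypothetical format)
    # file:line             reviewer  rationale
    src/server/conn.c:412   jsmith    "fd already checked for -1 above"
    src/server/conn.c:518   jsmith    "length guaranteed by caller's contract"
    src/util/str.c:97       agarcia   "target buffer is NUL-padded below"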

Klocwork requires a very powerful machine to run: roughly six times the hardware we needed simply to build the code.

Code edits mean that all the Klocwork exceptions have to be re-audited: every exception on a line after the edit has to be re-checked.
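
With the hypothetical list above, the failure mode is mechanical: insert one line near the top of conn.c, and every entry below it now points at the wrong code.

    before edit:  src/server/conn.c:412  ->  the line the reviewers audited
    after edit:   src/server/conn.c:412  ->  an unrelated line; the audited
                                             code now sits at line 413

Nothing flags the stale entries; a human has to walk the list again.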

Requiring a clean Klocwork Report blocked code merges.

Blocked code-merges further blocked our development progress.

Most — almost all — Klocwork flaws were false-positives.

We repeatedly spun our wheels in code inspections, check-ins, and audits over flaws that were known-OK. That was wasted effort across all teams, by every engineer, every single time. Much more than 0.15% was wasted, and the few gains we got (the engineers were mostly skilled, or well-mentored) were overshadowed by entire days spent on the administrative burden of re-registering, re-inspecting, and re-approving Klocwork exceptions.

The obvious answer is a #pragma KLOCWORK_EXCEPTION that marks the following code block as “OK”. This is similar to the #pragma warn -NNN used in EDG compilers to silence a warning. The pragma’s argument could refer to a bug report or authorization code (of which we might count the total permitted, to detect re-use), or to an opaque Klocwork signature of the block itself, allowing edits to the block to be detected while allowing cut-n-paste of that block (including blocks defined in headers, which appear in multiple places) to be equally excepted/ignored.
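
A minimal sketch of what I mean, in C. Neither this pragma nor its arguments exist in Klocwork so far as I know; the names are mine, and a compiler simply ignores (at most warns about) pragmas it does not recognize:

    #include <stdio.h>
    #include <string.h>

    static void greet(const char *name)
    {
        char buf[32];

        /* Hypothetical: the analyzer (not the compiler) reads this pragma
         * and suppresses the named checker for the block that follows.
         * ref= ties the exception to the bug report where reviewers
         * approved it; it could instead hold an opaque signature of the
         * block, so that edits invalidate the exception but cut-n-paste
         * carries it along. */
    #pragma KLOCWORK_EXCEPTION(checker = "SV.STRCPY", ref = "BUG-1234")
        {
            strcpy(buf, name);   /* callers guarantee name fits in buf */
            printf("hello, %s\n", buf);
        }
    }

    int main(void)
    {
        greet("world");          /* 6 bytes: within the documented contract */
        return 0;
    }

The point of anchoring the exception to the block rather than to a line number is that the audit survives edits elsewhere in the file: only a change inside the excepted block would force a re-review.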

If the tool were open-source, we could give a patch to the Klocwork representative in our company and change the tool to make it easier ourselves.

Alas, I agree with the Symas Blog: it’s proprietary, opaque, a stovepipe. All we can do is point a finger and say “FAIL”.

2 Responses to “Static Code Analysis: Inherently Labour-Intensive, Little Gain”

  1. Andy Says:

    Thanks for pointing out the issues! One of the hidden costs of static analysis is the cost of inspection. It’s a common mistake to think that static analysis tools are going to spit out only killer bugs. It shouldn’t ever be expected from an analysis tool that doesn’t even run the code. Just like bugs found by other methods, a static analysis tool is going to report medium, low, don’t care (e.g. bugs in 3rd party or test code) and false positive results where the tool just got it plain wrong. Because these tools can report a lot of results, the challenge is in knowing WHICH are the problems you should fix and that’s where inspection comes in to take a chunk of time. Configuring the tool to understand the codebase better, using filters and several other strategies can help make the tool more cost effective, but they don’t come for free – some time/effort/resource/help may be needed.

  2. allanc Says:

    Thanks, Andy. The re-birth of my blogging might have only one or two readers yet. I notice (in the VOIP testimonial) that your company offers Coverity tuning, Coverity being the tool preferred in the original post that triggered my opinions.

    How do the tools you see get around the fact that edited code tends to require re-validation of false-positives that were already discussed and inspected? (The thing for which I suggest #pragmas above.)
