You're correct, I haven't worked in defense. I've worked in software quality tooling and test instrumentation - static analysis, test coverage analysis, stuff along those lines. Those incentives you're talking about are a big part of why I refuse to work in that industry anymore. Despite your skepticism I do have specific reasons for my opinion (I just can't talk about them).
This metric is not meaningful. How much of it is safety critical? What's the state size? What's the cyclomatic complexity?
A tersely written RTOS has far less code than an Android/Java entertainment system, but the RTOS is both far more complicated and far more likely to kill people when it fails.
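To make that concrete, here's a toy sketch (entirely hypothetical functions): two routines of roughly the same length, where the second carries far more behavior to verify because of its branching and hidden state. Line counts tell you almost nothing about the difference.

```c
/* Hypothetical example: similar line counts, very different verification burden. */

/* One path, no state: cyclomatic complexity of 1. */
int scale(int x)
{
    return (x * 3) + 1;
}

/* Several decisions plus persistent state: cyclomatic complexity of
 * roughly 5 (more if the short-circuit && is counted separately), and
 * the static `mode` means past calls change what future calls do. */
int step(int x, int cmd)
{
    static int mode = 0;

    if (cmd == 1)      { mode = 1; }
    else if (cmd == 2) { mode = 2; }

    if ((mode == 1) && (x > 0)) { x = -x; }
    else if (mode == 2)         { x = 0; }

    return x;
}
```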
Also meaningless. Digging a trench with a teaspoon is a "lot more effort" than digging it with a backhoe. And, sure, you're "dealing" with the trench problem, but that doesn't mean you're effective, that you chose a good approach, or that you'll be done any time soon.
It's not even a problem of motives; it's a problem of combinatorics. There is no known way to write quality software. Formal methods come closest, but they only prove that your code does what you've proven it to do. They're no help with specification bugs, which are responsible for most deadly software failures. Anything else we want is either undecidable or requires factorial time and space.

There isn't even a good way of measuring software quality. Test suite adequacy (100% coverage), for example, does not mean your code is tested well. I happen to know for a fact that irrelevant tests (i.e. folks testing "for coverage" but not actually testing anything) are a big nuisance for defense companies; if anybody can build a viable mutation testing product they're going to make a fortune from defense, but for now it's not possible. And neither is writing good quality software.
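To illustrate the coverage point with a minimal, hypothetical sketch (the function and test names are mine): the first "test" below gives a coverage tool 100% statement and branch coverage of clamp() while checking nothing. A mutation testing tool would catch it, because you can mangle clamp() arbitrarily and the test still passes.

```c
#include <assert.h>

/* Hypothetical production code. */
static int clamp(int x, int lo, int hi)
{
    if (x < lo) { return lo; }
    if (x > hi) { return hi; }
    return x;
}

/* "Test" written for coverage: every statement and branch of clamp()
 * executes, so coverage tools report 100%, yet nothing is asserted.
 * Replace the body of clamp() with `return 0;` and this still passes. */
static void test_clamp_for_coverage(void)
{
    clamp(-5, 0, 10);   /* x < lo branch */
    clamp(50, 0, 10);   /* x > hi branch */
    clamp(5, 0, 10);    /* in-range branch */
}

/* What a meaningful test looks like: mutants of clamp() now die. */
static void test_clamp_properly(void)
{
    assert(clamp(-5, 0, 10) == 0);
    assert(clamp(50, 0, 10) == 10);
    assert(clamp(5, 0, 10) == 5);
}

int main(void)
{
    test_clamp_for_coverage();
    test_clamp_properly();
    return 0;
}
```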
If LM, Raytheon, or Boeing had some software quality secret sauce, they wouldn't have needed to buy the stuff I worked on/designed. There are a lot of companies out there who think the stuff I worked on is their software quality silver bullet. But it's not; there are no silver bullets.
Software is a machine with 20 billion moving parts, in a universe with no Pauli exclusion principle. The mechanical side is complicated, and I'm not trying to minimize that fact. But you're both grossly underestimating the complexity of software and overestimating our ability to deal with it.
Mechanical engineers can take a single part out of a plane and understand it. They understand what shape that part needs to be, and they can make positive statements like "there's a 5% chance this part will fail within 10,000 hours of operation". You can use assessments like that to calculate an aggregate risk for the mechanical parts. But software doesn't work like that. You can't measure it, and you can't ever truly understand the risk. How can you say anything about your software when you've tested approximately 0% of the states and 0% of the possible inputs? It's the problem of induction on steroids.
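Just to put a rough number on "approximately 0%" (my own back-of-the-envelope figures, not anyone's real test rig): even a trivial pure function of two 32-bit inputs has 2^64 input combinations, and exhausting them at a billion tests per second takes centuries, before you account for any internal state at all.

```c
#include <stdio.h>

int main(void)
{
    /* Back-of-the-envelope: a pure function of two 32-bit inputs has
     * 2^64 distinct input combinations. Assume an (optimistic) rig
     * running a billion tests per second. */
    const double combinations     = 18446744073709551616.0;  /* 2^64 */
    const double tests_per_sec    = 1.0e9;
    const double seconds_per_year = 60.0 * 60.0 * 24.0 * 365.0;

    printf("Years to exhaust the input space: %.0f\n",
           combinations / tests_per_sec / seconds_per_year);
    /* Roughly 585 years, and that's with no internal state and no
     * timing or concurrency effects in the picture. */
    return 0;
}
```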
I am a beneficiary of the amount of time and money they spend dealing with these problems.
A "good system design", "tested to a properly anal degree", would be one where your entire software system is simple enough that you can exhaustively simulate every possible state and input. If you tell me that's how the F-35's mission-critical components were verified, I'll be suitably impressed. I'd also be very surprised.
Anything else isn't good enough. You really need to keep in mind that software isn't a good thing by itself. Software is foremost a cost-saving measure: it replaces labor and custom parts, it makes maintenance and retasking easier and cheaper. Sometimes it legitimately can do things that have no direct substitutes. But always, there's an engineering trade-off. Every time you replace some mechanism with a computer, you're adding complexity. You're increasing your electronic attack surface. Companies need to start making those decisions ****ing critically, and that's not going to happen as long as you have executives who don't understand the trade-off, who consider "lines of code" an asset instead of a new kind of 21st century toxic waste.
Most of MISRA C is statically verifiable. That means software tools can automatically verify whether code is compliant with MISRA C just by looking at the source code. IIRC there is nothing equivalent for defense. MISRA also references HIC++ now, so maybe defense or aerospace has adopted it too.
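As a flavor of what "statically verifiable" means in practice, here's a small hypothetical sketch; the rule descriptions are paraphrased from memory of MISRA C:2012, so treat them as approximate. A checker can flag every finding below from the source alone, without running anything.

```c
#include <stdlib.h>

int lookup(int key)
{
    int *buf = malloc(16U * sizeof *buf);  /* flaggable: MISRA bans the
                                              <stdlib.h> heap allocators
                                              in most profiles */
    if (buf == NULL)
    {
        goto fail;                         /* flaggable: goto use is
                                              discouraged (advisory) */
    }

    buf[0] = key;
    free(buf);
    return 1;

fail:
    return 0;
}

void caller(void)
{
    lookup(42);  /* flaggable: return value of a non-void function is
                    silently discarded */
}
```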
You need to write highly testable code, sure, but I'm not aware of any standard or recommended ways of doing so. If you're required to follow specific conventions to facilitate MC/DC adequacy, they are probably specific to your company.
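For what it's worth, here's a textbook-style illustration (hypothetical function and signal names, not any company's convention) of why people end up inventing such conventions: MC/DC needs at least N + 1 test cases for a decision with N conditions, so the shape of your decisions directly drives your test burden.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical decision with three conditions. MC/DC requires showing
 * that each condition independently affects the outcome, which for N
 * conditions takes at least N + 1 test cases. */
static bool release_ok(bool armed, bool in_envelope, bool weight_on_wheels)
{
    return armed && in_envelope && !weight_on_wheels;
}

int main(void)
{
    /* Baseline: every condition in its permissive state -> true. */
    assert(release_ok(true, true, false) == true);

    /* Flip exactly one condition per case; each flip changes the
     * outcome, demonstrating that condition's independent effect. */
    assert(release_ok(false, true, false) == false);  /* armed            */
    assert(release_ok(true, false, false) == false);  /* in_envelope      */
    assert(release_ok(true, true, true)   == false);  /* weight_on_wheels */

    return 0;
}
```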
like DO-178C