
By James Bach
A press release by Toyota recently stated:
Toyota’s electronic systems have multiple fail-safe mechanisms to shut off or reduce engine power in the event of a system failure. Extensive testing of this system by Toyota has not found any sign of a malfunction that could lead to unintended acceleration.
Here are some notes for the lawyers suing Toyota. Here is what your testing experts should be telling you:

Whoever wrote this, even if he is being perfectly honest, is not in a position to know the status of the testing of Toyota’s acceleration, braking, or fault handling systems. The press release was certainly not written by the lead tester on the project. Toyota would be crazy to let the lead tester anywhere near a keyboard or a microphone.
Complete testing of complex hardware/software systems is not possible. But it is possible to do a thorough and responsible job of testing, in conjunction with hazard analysis, risk mitigation, and post-market surveillance. It is also quite expensive, difficult, and time-consuming. So it is normal for management in large companies to put terrible pressure on the technical staff to cut corners. The more management levels between the testers and the CEO, the more likely this is to occur.
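To make "complete testing is not possible" concrete, here is a back-of-the-envelope sketch in Python. Every number in it is invented for illustration; a real engine controller is far more complicated than this, which only strengthens the point:

```python
# Back-of-the-envelope: why exhaustive testing of even a modest embedded
# controller is infeasible. All figures below are illustrative assumptions.

sensor_inputs = 10        # e.g. pedal position, throttle position, RPM, speed...
values_per_input = 256    # one byte of resolution per input
internal_states = 1000    # controller modes, timers, fault flags, etc.

combinations = (values_per_input ** sensor_inputs) * internal_states

tests_per_second = 1000   # optimistic rate for automated hardware-in-the-loop tests
seconds_per_year = 365 * 24 * 3600
years_to_exhaust = combinations / (tests_per_second * seconds_per_year)

print(f"Input/state combinations: {combinations:.2e}")   # ~1.2e27
print(f"Years to test them all:   {years_to_exhaust:.2e}")  # ~3.8e16
# And this still ignores timing, sequencing, and environmental factors,
# each of which multiplies the space further.
```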
“Extensive testing” has no fixed meaning. To management, and to anyone not versed in testing, ALL testing LOOKS extensive. This is because testing bores the hell out of most people, and even a little of it seems like a lot. You need to find out exactly what the testing was. Look at the production-time testing but focus on the design-time testing. That’s where you’ll be most likely to find the trouble.
Even if testing is extensive in general, you need to find out the design history of the software and hardware, because the testing that was done may have been limited to older versions of the product. Inadequate retesting is a common problem in the industry.
If Toyota is found to have used an automated “regression suite” of tests, then you need to look for the problem of inadequate sampling. What happens is that the tests are only covering a tiny fraction of the operational space of the product (a fraction of the states it can be in), and then they just run those over and over. It looks like a lot of testing, but it’s really just the same test again and again. Excellent testing requires active inquiry at all times, not just recycling old actions.
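Here is a toy illustration of that sampling problem. The suite size and state-space size are made-up numbers, not anything from Toyota, but the arithmetic is the point:

```python
# Illustration: a fixed regression suite re-runs the same small sample of the
# operational space forever. The numbers below are hypothetical.

import random

STATE_SPACE = 10**12          # distinct operational conditions the product can be in
REGRESSION_SUITE = 5000       # fixed scripted test cases, run on every build
BUILDS = 200                  # number of builds the suite is run against

# Total executions look impressive...
executions = REGRESSION_SUITE * BUILDS
print(f"Test executions logged: {executions:,}")           # 1,000,000

# ...but the distinct conditions exercised never grow past the suite itself.
print(f"Distinct conditions covered: {REGRESSION_SUITE:,}")
print(f"Fraction of operational space: {REGRESSION_SUITE / STATE_SPACE:.1e}")  # 5.0e-09

# Contrast: even naive random sampling explores new territory on every build.
explored = set()
for build in range(BUILDS):
    explored.update(random.randrange(STATE_SPACE) for _ in range(REGRESSION_SUITE))
print(f"Distinct conditions under randomized testing: {len(explored):,}")  # ~1,000,000
```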
If Toyota is found not to have used test automation at all, look for a different kind of sampling problem: limited human resources not being able to retest very extensively.
Most testers are not very ambitious and not well trained in testing. No university teaches a comprehensive testing curriculum. Testing is an intellectually demanding craft. In some respects it is an art. Examine the training and background of the testing staff.
Examine the culture of testing, too. If the corporate environment is one in which initiative is discouraged or all actions are expected to be explicitly justified (especially using metrics such as test case counts, pass/fail rates, cyclomatic complexity, or anything numerical), then testing will suffer. During discovery, subpoena the actual test reports and test documentation and evaluate that.
Any argument Toyota makes about extensiveness of testing that is based on numbers can be easily refuted. Numbers are a smoke-screen.
Examine the internal defect tracking systems and specifically look to see how intermittent bugs were handled. A lack of intermittent bug reports would certainly indicate that something fishy is going on.
Examine how the design team handled reports from the field of unintended acceleration. Were they systematically reviewed and researched?
Depositions of the testers will be critical (especially testers who left the company). It is typical in large organizations for testers to feel intimidated into silence on critical quality matters. It is typical for them to be cut off from the development team. You want to specifically look for the “normalization of deviance” problem that was identified in both the Columbia and Challenger shuttle disasters.
If the depositions or documentation show that no one raised any concerns about the acceleration or braking systems, that is a potential smoking gun. What you expect in a healthy organization is a lot of concerns being raised and then dealt with forthrightly.
Find out what specific organizational mechanisms were used for “bug triage”, which is the process of examining reported problems and deciding what to do about them. If there was no triage process, that is either a lie or a gross form of negligence.
If Toyota claims to have used “proofs of correctness” in their development of the software controllers, that means nothing. First, obviously they would have to have correctly used proofs of correctness. But secondly, proofs of correctness are simply the modern Maginot line of software safety: defects drive right around them. Imagine that the makers of the Titanic provided “proof” that water cannot penetrate steel plates, and therefore the Titanic cannot sink. Yes, steel isn’t porous, but so what? It’s the same with proofs of correctness. They rely on confusing a very specific kind of correctness with the general notion of “things done right.”
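To see how a perfectly valid proof can coexist with an unsafe product, consider this contrived sketch. The function and its “spec” are invented for illustration; nothing here is Toyota’s code:

```python
# A toy throttle-mapping function that is trivially "correct" with respect to
# its written specification, yet unsafe for inputs the spec never mentions.

def throttle_command(pedal_percent: float) -> float:
    """Spec: for pedal_percent in [0, 100], return a throttle command in [0, 100].
    One could formally 'prove' this holds for every input in the specified range."""
    return pedal_percent  # satisfies the spec exactly

# The proof says nothing about inputs the spec excludes, e.g. a faulty sensor
# reporting an out-of-range or undefined value:
print(throttle_command(250.0))         # 250.0 -- commands more than full throttle
print(throttle_command(float("nan")))  # nan   -- undefined behavior downstream

# The proof was valid; the hazard lives in the assumptions the proof rests on.
```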
The anecdotal evidence surrounding unintended acceleration is that it involves not only acceleration but also a failure of braking. Furthermore, it’s a very rare occurrence, so it’s probably a combination of factors working together that causes the problem. It’s not surprising at ALL that internal testing under controlled conditions would not reproduce the problem. Look at the history of the crash of USAir Flight 427, which went unsolved for years until the transient mechanism of thermal shock was discovered.
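A little arithmetic shows why rare combinations hide from the lab but not from the fleet. The probabilities below are invented; the shape of the result is what matters:

```python
# If a failure only occurs when several independent, individually rare
# conditions coincide, controlled lab testing is very unlikely to hit it,
# while a fleet of millions of vehicles almost certainly will.
# All probabilities and trial counts are purely illustrative.

p_conditions = [0.01, 0.005, 0.02]   # e.g. a thermal state, a timing window, a voltage dip

p_failure_per_trial = 1.0
for p in p_conditions:
    p_failure_per_trial *= p
print(f"Chance per trial: {p_failure_per_trial:.1e}")        # 1.0e-06

lab_trials = 10_000
p_seen_in_lab = 1 - (1 - p_failure_per_trial) ** lab_trials
print(f"Chance the lab ever sees it in {lab_trials:,} trials: {p_seen_in_lab:.1%}")   # ~1%

fleet_trips = 100_000_000   # trips across millions of vehicles on the road
p_seen_in_field = 1 - (1 - p_failure_per_trial) ** fleet_trips
print(f"Chance it happens somewhere in the field: {p_seen_in_field:.1%}")             # ~100%
```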
You need to get hold of their code and have it independently inspected. Look at the comments in the code, and examine any associated design documentation.
Look at how the engineering team was constituted. Were there dedicated full-time testers? Were they co-located with the development team or shunted off to another location? How often did the testers and developers speak?
What were the change control and configuration management processes? How were the code and design modified over time? Were components of the system outsourced? Is it possible that no one was responsible for testing all the systems as a whole?
What about testability? Was the system designed with testing in mind? If it wasn’t, the expense and difficulty of comprehensive testing would have been much, much higher. Ask whether simulators, log files, or any other testability interfaces were used.
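By “testability interfaces” I mean seams such as injectable sensor inputs, simulators, and structured logs that let a tester drive the system and observe every decision it makes. A minimal hypothetical sketch of the idea (none of this is Toyota’s actual design):

```python
# Hypothetical testability seam: the controller reads its sensors through an
# injectable interface and writes a structured log, so a test harness can feed
# it simulated inputs and inspect its decisions without a vehicle present.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LogEntry:
    t_ms: int
    pedal: float
    brake: bool
    throttle_out: float

class ThrottleController:
    def __init__(self, read_pedal: Callable[[], float], read_brake: Callable[[], bool]):
        self.read_pedal = read_pedal      # injected sensor readers (real or simulated)
        self.read_brake = read_brake
        self.log: List[LogEntry] = []     # structured log a tester can inspect

    def step(self, t_ms: int) -> float:
        pedal, brake = self.read_pedal(), self.read_brake()
        throttle = 0.0 if brake else max(0.0, min(pedal, 100.0))  # brake overrides throttle
        self.log.append(LogEntry(t_ms, pedal, brake, throttle))
        return throttle

# A harness can now simulate "pedal stuck high while braking" on a desk:
ctrl = ThrottleController(read_pedal=lambda: 100.0, read_brake=lambda: True)
assert ctrl.step(t_ms=0) == 0.0, "fail-safe: brake must win over a stuck pedal"
print(ctrl.log)
```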
How did their testing process relate to applicable standards? Was the technical team aware of any such standards?
In medical device development, manufacturers are required to do “single-fault condition” testing, where specific individual faults are introduced into the product, and then the product is tested. Did Toyota do this?
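The pattern looks something like this hypothetical sketch: introduce one fault at a time and check that the documented fail-safe engages. The sensor layout, tolerance, and fail-safe behavior are invented, not Toyota’s design:

```python
# Hypothetical single-fault-condition test: inject one sensor fault per test
# and verify that the fail-safe (cut engine power) engages.

def safe_throttle(sensor_a: float, sensor_b: float) -> float:
    """Two redundant pedal sensors; if they disagree beyond a tolerance,
    assume a fault and fall back to idle."""
    if abs(sensor_a - sensor_b) > 5.0:
        return 0.0                        # fail-safe: reduce engine power
    return max(0.0, min((sensor_a + sensor_b) / 2.0, 100.0))

SINGLE_FAULTS = {
    "sensor A stuck high":             (100.0, 30.0),
    "sensor A stuck low":              (0.0, 30.0),
    "sensor B open circuit (reads 0)": (30.0, 0.0),
}

for name, (a, b) in SINGLE_FAULTS.items():
    out = safe_throttle(a, b)
    status = "PASS" if out == 0.0 else "FAIL"
    print(f"{status}: {name} -> throttle {out}")

# The point is not this particular logic, but the discipline: for every single
# fault you can enumerate, there is a test that introduces it and checks the
# documented fail-safe response.
```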
What specific test techniques and tools did Toyota employ? Compare that to the corpus of commonly known techniques.
Toyota cars have “black box” logs that record crucial information. Find out what those logs contain, how to read them, and then subpoena the logs from all cars that may have experienced this problem. Compare with logs from similar unaffected cars.
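Once you have the records, the comparison itself is straightforward. A hypothetical sketch; the field names and values are invented, since the real format and contents would come out of discovery and the manufacturer’s own readout tools:

```python
# Hypothetical comparison of event-data-recorder ("black box") records from
# incident vehicles versus similar unaffected vehicles. All data is invented.

import statistics

incident_logs = [
    {"throttle_pct": 98, "brake_switch": 1, "speed_mph": 72},
    {"throttle_pct": 95, "brake_switch": 1, "speed_mph": 64},
]
baseline_logs = [
    {"throttle_pct": 12, "brake_switch": 1, "speed_mph": 31},
    {"throttle_pct": 8,  "brake_switch": 1, "speed_mph": 28},
]

def summarize(logs, label):
    mean_throttle = statistics.mean(r["throttle_pct"] for r in logs)
    braking = all(r["brake_switch"] == 1 for r in logs)
    print(f"{label}: mean throttle {mean_throttle:.0f}% with brake switch on = {braking}")

summarize(incident_logs, "Incident vehicles")
summarize(baseline_logs, "Similar unaffected vehicles")
# A pattern of near-full throttle recorded while the brake switch is active is
# exactly the kind of anomaly to look for across the subpoenaed logs.
```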

The best thing would be to reproduce the problem in an unmodified Toyota vehicle, of course. In order to do that, you not only need an automotive engineer and an electrical engineer and a software engineer, you need someone who thinks like a tester.
The unfortunate fact of technological progress is that companies are gleefully plunging ahead with technologies that they can’t possibly understand or fully control. They hope they understand them, of course, but only a few people in the whole company are even competent to decide if that understanding is adequate for the task at hand. Look at the crash of Swissair Flight 111, for instance: a modern aircraft brought down by its onboard entertainment system, killing all aboard. The pilots had no idea it was even possible for an electrical fire to occur in the entertainment system. Nothing on their checklists warned them of it, and they had no way in the cockpit to disable it even if they’d had the notion to. This was a failure of design; a failure of imagination.
Toyota’s future depends on how seriously they take the possibility of novel, multivariate failure modes, and on how aggressively they update their ideas of safe design and good testing. Sue them. Sue their pants off. This is how they will take these problems seriously. Let’s hope other companies learn from no-pants Toyota.

Source: http://www.satisfice.com/blog/archives/420

Category: Buggy Products, Critique, Test Strategy
