By: Jennifer Bails
Two commercial jetliners flying at high altitude above a major American city—both carrying hundreds of passengers—are heading toward each other. There is little time to spare before they collide. Somehow, the impending crash goes undetected by the air traffic controllers at the nearby Air Route Traffic Control Center, whose job it is to route traffic through the city's airspace. And somehow, neither pilot notices the oncoming aircraft.
With the planes hurtling toward each other at speeds upward of 500 miles per hour, the on-board traffic alert system sounds an alarm. The pilots must act, and act immediately. In these last-resort situations, pilots are taught to rely on a standard collision-avoidance protocol designed to help prevent tragedies like the 2002 midair collision over Überlingen, Germany, that killed all 71 people aboard the two aircraft.
There is one roundabout procedure in particular, not yet implemented, which calls upon planes with intersecting flight paths to bank right and then curve back smoothly to the left until it is safe to resume their original flight paths, almost as if the pilots were following a rotary in the sky in opposite directions. The proposed plan was first introduced a decade ago, and three papers have been published demonstrating that the concept is fail-safe. The Federal Aviation Administration is considering recommending this course of action in its Next Generation Air Transportation System, an ongoing overhaul of the nation's airspace system that aims to make plane travel safer and more efficient.
That's why André Platzer was baffled by the results on his screen.
It was the summer of 2006, and the computer science PhD student was visiting Carnegie Mellon from the University of Oldenburg in Germany. In Oldenburg, he had developed a new computer tool that can verify how a system will react under any given set of circumstances. Because the roundabout tactic had been documented as foolproof, he decided to use it as a simple sanity check: if his program was glitch-free, it would find no problems in the collision-avoidance sequence.
Yet, time and again, his program told him that if two aircraft were heading toward each other at 45-degree angles, the bank-right action meant to save lives actually would put the planes on a different crash course that pilots might not have time to recognize. "I thought I must have done something wrong," Platzer recalls. "I thought maybe I should go back to the drawing table and try to fix the mistakes in my program."
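The kind of question Platzer's verifier answers can be imitated, very crudely, by brute-force simulation. The sketch below is purely illustrative and is not his tool (his approach proves properties for all parameter values, where a simulation only samples a few): it integrates two aircraft that bank right once they close within a trigger distance, models only that entry turn (not the curve back left), and reports the closest approach, so different entry geometries can be compared. All speeds, rates, and distances here are arbitrary illustrative values.

```python
import math

def min_separation(h1, h2, p1, p2, speed=0.14, omega=0.05,
                   trigger=5.0, t_max=200.0, dt=0.1):
    """Integrate two aircraft in the plane. Both fly straight until they
    close within `trigger` distance, then both bank right (turn clockwise
    at rate `omega`). Returns the minimum separation seen over the run.
    Units and parameter values are arbitrary, chosen for illustration."""
    p1, p2 = list(p1), list(p2)
    turning = False
    best = math.dist(p1, p2)
    t = 0.0
    while t < t_max:
        d = math.dist(p1, p2)
        best = min(best, d)
        if d < trigger:
            turning = True
        if turning:                  # bank right: heading decreases
            h1 -= omega * dt
            h2 -= omega * dt
        p1[0] += speed * dt * math.cos(h1)
        p1[1] += speed * dt * math.sin(h1)
        p2[0] += speed * dt * math.cos(h2)
        p2[1] += speed * dt * math.sin(h2)
        t += dt
    return best

# Head-on encounter, with and without the bank-right maneuver.
no_turn = min_separation(0.0, math.pi, [0.0, 0.0], [20.0, 0.0], omega=0.0)
head_on = min_separation(0.0, math.pi, [0.0, 0.0], [20.0, 0.0])
# Flight paths crossing at a 45-degree angle toward the same point.
crossing = min_separation(0.0, math.radians(135.0),
                          [0.0, 0.0], [17.07, -7.07])
```

A run like this can only spot-check one set of numbers for speed, turn rate, and trigger distance. Platzer's method instead proves or refutes the maneuver over all such values at once, which is how it could surface a flaw in a geometry the published proofs had missed.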
Platzer had come to Pittsburgh to work with world-renowned FORE Systems Professor of Computer Science Edmund M. Clarke, who holds a joint appointment in electrical and computer engineering. The following year, Clarke won the A.M. Turing Award from the Association for Computing Machinery—widely considered the Nobel Prize of computing—for his pioneering role in developing an automated method called model checking that finds design errors in computer hardware and software.
Model checking has transformed how the computer industry tests chips, systems, and networks, and it is now being used to debug applications, too. Computer scientists used to pore over lines of code manually, often in vain, to hunt for costly bugs. For instance, in 1994, a minor arithmetic glitch forced Intel Corp. to recall thousands of Pentium chips—a $500 million mistake.
Today, companies rely on model checking to catch such errors in hardware for everything from cell phones to external hard drives. Fast and automatic, model checking can verify highly sophisticated circuits with as many as 10¹²⁰ configurations, a mind-boggling number equal to the numeral 1 followed by 120 zeroes. If there is an error, model checking will produce what's called a counterexample to reveal the source of the problem and provide important clues for how to repair it.
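Clarke's model checkers are symbolic and handle rich temporal-logic specifications, but the core loop they refine can be sketched in a few lines: exhaustively explore every reachable state of a finite system and, if a bad state turns up, report the trace that leads to it. Everything below is a toy invented for illustration, including the 3-bit counter used as the system under test.

```python
from collections import deque

def model_check(init, transitions, is_bad):
    """Breadth-first reachability check: explore every state reachable
    from `init`. If a state violating the safety property is found,
    return the path to it as a counterexample; otherwise return None."""
    parent = {init: None}
    queue = deque([init])
    while queue:
        s = queue.popleft()
        if is_bad(s):
            path = []           # rebuild the trace init -> ... -> bad
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in transitions(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None                 # property holds in every reachable state

# Toy system: a 3-bit counter that steps by 1 or 2 and must never hit 7.
cex = model_check(0,
                  lambda s: [(s + 1) % 8, (s + 2) % 8],
                  lambda s: s == 7)
# cex == [0, 1, 3, 5, 7]: a shortest run reaching the forbidden state.
```

Because the search is breadth-first, the counterexample it returns is a shortest offending run, which is exactly the kind of concrete clue that makes the bug easy to diagnose. Industrial tools replace this explicit enumeration with symbolic representations so that astronomically large state spaces stay tractable.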
For all of its extraordinary power, model checking works well only with discrete systems—those with a finite number of possible configurations, such as a new circuit design for a computer microprocessor. It is more difficult to get the method right when analyzing what are called hybrid, or cyber-physical, systems. (Continued …)