NASA was having some difficulties with its projects. To begin with, test cases were developed manually, only some of the test executions were automated, and it was difficult for the organization to summarize what was actually tested. Within an organization like NASA, missed software bugs can readily lead to (at best) financial loss and (at worst) death. The authors of this presentation worked from the premise that automating testing would allow for more thorough, consistent testing.
In the presentation, Dharma Ganesan, Mikael Lindvall, Charles Song, and Christoph Schulze explain how three projects were targeted for model-based testing (MBT). The presentation then shares some of the processes and findings from implementing MBT, as well as the results achieved so far, which include:
An end-to-end approach for test automation
An easy-to-infuse testing process
More defects found earlier
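To give a flavor of the core idea behind model-based testing, here is a minimal sketch (a hypothetical turnstile example, not code from the NASA presentation): a small state-machine model both generates test sequences automatically and serves as the oracle when those sequences are replayed against the implementation under test.

```python
# Minimal model-based testing sketch (hypothetical example).
# The "model": expected transitions as (state, event) -> next state.
MODEL = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

class Turnstile:
    """The implementation under test."""
    def __init__(self):
        self.state = "locked"

    def handle(self, event):
        if event == "coin":
            self.state = "unlocked"
        elif event == "push":
            self.state = "locked"
        return self.state

def generate_tests(model, start="locked", depth=3):
    """Walk the model to enumerate every event sequence up to `depth`."""
    tests = []
    def walk(state, path):
        if len(path) == depth:
            tests.append(path)
            return
        for (s, event), nxt in model.items():
            if s == state:
                walk(nxt, path + [event])
    walk(start, [])
    return tests

def run_tests(tests):
    """Replay each generated sequence, using the model as the oracle."""
    failures = 0
    for seq in tests:
        impl, expected = Turnstile(), "locked"
        for event in seq:
            expected = MODEL[(expected, event)]  # model's expectation
            actual = impl.handle(event)          # implementation's behavior
            if actual != expected:
                failures += 1
                break
    return len(tests), failures

if __name__ == "__main__":
    total, failed = run_tests(generate_tests(MODEL))
    print(f"{total} generated tests, {failed} failures")
```

Because the test sequences are derived from the model rather than written by hand, adding a state or event to the model automatically expands the generated suite, which is the kind of end-to-end automation the presentation describes.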
It isn’t all positive, however: developers had to learn modeling and abstraction, and they found it difficult to document individual test cases. View the full presentation here: http://www.nasa.gov/sites/default/files/03-04_model-based_testing_of_nasa_systems.pdf