It is not a stretch to say that there are plausible and implausible tests that can be performed on software. For instance, a plausible test could be one where the power is switched off, testing whether a piece of software notes an irregular end to an operation. An implausible test might be to throw that computer into the ocean, drag it back out, and determine if your piece of software still works.
As Eric Jacobson explains, the list of implausible tests is far longer than the list of plausible ones; the trick is knowing where a particular test falls between those two lists, and running the plausible tests first. The reason is clear: more value is gained by checking plausible scenarios than implausible ones:
Basically, I start with the most plausible tests, then shift my focus to the stuff that will rarely happen. These rare scenarios at the bottom of the chart above can continue forever as you move toward 0% plausibility, so I generally use the “Time's Up” stopping heuristic. One can better tackle testing challenges with this model if one makes an effort to determine how users normally use the product.
While Jacobson doesn't suggest completely ignoring a test if it seems implausible that a user will perform that action (removing all fonts from their computer, for example), he does suggest that more value will be gained by testing the most likely situations a user might find themselves in.
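The approach Jacobson describes can be sketched as a simple scheduling loop: order the candidate tests by an estimated plausibility, run them from most to least plausible, and stop when the time budget is exhausted. The function and its parameters below are hypothetical illustrations, not Jacobson's own tooling, and the plausibility scores are rough human estimates rather than anything the code computes.

```python
import time

def run_by_plausibility(tests, budget_seconds):
    """Run tests from most to least plausible until time runs out.

    `tests` is a list of (plausibility, test_fn) pairs, where
    plausibility is a rough 0.0-1.0 estimate of how likely a user
    is to hit that scenario. (Hypothetical sketch of the idea.)
    """
    deadline = time.monotonic() + budget_seconds
    executed = []
    # Most plausible scenarios first, per Jacobson's ordering.
    for plausibility, test_fn in sorted(tests, key=lambda t: t[0], reverse=True):
        if time.monotonic() >= deadline:
            break  # the "Time's Up" stopping heuristic
        test_fn()
        executed.append(test_fn.__name__)
    return executed

def check_power_loss_recovery():
    pass  # placeholder: verify the app notes an irregular shutdown

def check_all_fonts_removed():
    pass  # placeholder: the implausible "no fonts installed" case

ran = run_by_plausibility(
    [(0.05, check_all_fonts_removed), (0.9, check_power_loss_recovery)],
    budget_seconds=60,
)
```

With a generous budget both tests run, but the power-loss check runs first; with a tight budget the font-removal case is the one dropped, which is exactly the trade-off the article argues for.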