Generally, if something is wrong with a real-time system, it'll go wrong when a trigger event occurs. For example, Knight Capital ran an automated trading system that responded to particular market conditions by placing orders - in 2012, a faulty deployment left dormant test code active in production, and within about 45 minutes the system had fired off millions of unintended orders, losing the firm roughly $440 million, whipsawing prices across scores of stocks, and very nearly bankrupting the company.
As a result, correctness testing is really important. With a non-real-time system, there are often ways to compensate for a mistake after the fact (perhaps you can restore the database and replay transactions once you spot the error), but that escape hatch isn't always available with real-time systems, especially those that act on the real world.
Consider systems that control traffic lights, or fly-by-wire implementations in aeroplanes. In these systems, a flaw which isn't picked up in advance can cause injury or death, and, worse, can be triggered by an unforeseen situation - can your fly-by-wire system handle failure of the GPS satellite constellation? What about a flood of spoofed GPS signals injected by a malicious actor?
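To make the GPS example a little more concrete, here's a minimal sketch (Python, purely illustrative, with made-up names and thresholds - not any real avionics API) of the kind of plausibility check a navigation component might run before trusting a new fix. Anything that implies a physically impossible jump gets rejected, and the wider system would then fall back to something like inertial dead reckoning:

```python
import math
from dataclasses import dataclass

@dataclass
class Fix:
    lat: float   # degrees
    lon: float   # degrees
    t: float     # seconds since some epoch

# Generous upper bound on speed for this hypothetical aircraft.
MAX_PLAUSIBLE_SPEED_MPS = 300.0

def approx_distance_m(a: Fix, b: Fix) -> float:
    """Rough equirectangular distance in metres; good enough for a sanity check."""
    r = 6_371_000.0
    dlat = math.radians(b.lat - a.lat)
    dlon = math.radians(b.lon - a.lon) * math.cos(math.radians((a.lat + b.lat) / 2))
    return r * math.hypot(dlat, dlon)

def accept_gps_fix(prev: Fix, new: Fix) -> bool:
    """Reject any fix implying an impossible jump since the last accepted one."""
    dt = new.t - prev.t
    if dt <= 0:
        return False  # stale or out-of-order timestamp
    return approx_distance_m(prev, new) / dt <= MAX_PLAUSIBLE_SPEED_MPS
```

A real system would do far more (cross-check against inertial sensors, multiple receivers, signal-quality metrics), but the shape of the idea is the same: never let a single untrusted input drive the control loop unchecked.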
This means that development and testing of real-time systems tends to be much more stringent than for other software. It's not uncommon for vast swathes of safety case data to be produced, demonstrating that any potential failure leaves the system in a safe state. What counts as "safe" depends on the system in question: for Knight, it might have been a hard cap on spending; for Boeing, it might be that the plane alerts the human pilots that they need to take over; for a nuclear power plant, it might involve a controlled shut-down.
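As a toy illustration of the "maximum spend" idea, here's a minimal, hypothetical sketch (Python, with invented names and numbers - not how Knight's system actually worked) of an order gateway that latches into a halted state once cumulative exposure would exceed a pre-agreed cap. The point is that the failure mode becomes "refuse to trade", not "spend without bound":

```python
from dataclasses import dataclass

@dataclass
class OrderGateway:
    """Wraps an order sender and enforces a hard exposure cap (hypothetical design)."""
    max_exposure: float        # the pre-agreed safe-state limit, in currency units
    exposure: float = 0.0
    halted: bool = False

    def submit(self, symbol: str, quantity: int, price: float) -> bool:
        cost = quantity * price
        if self.halted or self.exposure + cost > self.max_exposure:
            # Refuse the order and latch into the halted state rather than keep trading.
            self.halted = True
            return False
        self.exposure += cost
        # ... forward the order to the real execution venue here ...
        return True

gateway = OrderGateway(max_exposure=1_000_000.0)
assert gateway.submit("ACME", 100, 50.0)         # accepted: exposure now 5,000
assert not gateway.submit("ACME", 30_000, 50.0)  # refused: would blow through the cap
assert gateway.halted                            # system latches into its safe state
```

The interesting design choice is the latch: once the cap is hit, the gateway stays halted until a human intervenes, because an automated "recovery" is exactly the kind of unforeseen behaviour the safe state exists to prevent.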