In theory, what your professor is proposing sounds reasonable. But there's a problem in practice. You never get to a stable configuration. Ever.
There is no non-trivial software out there that doesn't have unknown bugs. None. And some percentage of those bugs are security vulnerabilities.
What does the professor propose to do if someone finds a remote code execution vulnerability in the system after certification is complete?
Does he really believe it's safer to keep a system deployed with a known remote code execution vulnerability than to take on the risk of applying a patch?
The answer (surprisingly) might be "yes". And that goes to the heart of real-world security: security is really about risk management. Every deployed piece of software carries some amount of risk, and so does every change to it. A newly applied patch might break an existing line-of-business application, or it might introduce new vulnerabilities of its own.
There's basically no way of knowing in advance which of those things will happen, so you need to decide what your tolerance for risk is.
For some systems (often ones where someone's life depends on them), it's actually better to leave the system unchanged, known vulnerabilities and all, and to mitigate those vulnerabilities from outside the system, with a firewall for example (see the sketch below). For other systems, all you need to do is ensure that your line-of-business applications continue to work.
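As a concrete illustration of the "mitigate outside the system" approach, here's a minimal sketch of perimeter firewall rules. Every detail is hypothetical: it assumes the vulnerable service listens on TCP port 8443, the certified host sits at 10.0.9.20, only the management subnet 10.0.5.0/24 legitimately needs access, and the perimeter firewall is a Linux box running iptables.

```
# Hypothetical mitigation at a Linux perimeter firewall; the certified
# system itself is never touched or re-certified.

# Allow the trusted management subnet to reach the vulnerable port...
iptables -A FORWARD -p tcp -d 10.0.9.20 --dport 8443 -s 10.0.5.0/24 -j ACCEPT

# ...and drop all other traffic headed for that port on that host.
iptables -A FORWARD -p tcp -d 10.0.9.20 --dport 8443 -j DROP
```

The trade-off is exactly the one described above: the vulnerability still exists, but its reachable attack surface shrinks dramatically, and nothing on the certified system changes.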
There is no one-size-fits-all solution here. Every enterprise needs to make its own decision about the risk/reward trade-offs associated with a security patch. Some will accept the risk; others won't.