The assumption that there is a fixed number of bugs which must be found, and whose count can therefore only decrease (making the system safer), may be true in theory, but it is deeply flawed in practice. That's the basic problem.
What we care about
A modern OS has tens of millions of lines of code, in some cases hundreds of millions. We don't really care about bugs; it's more helpful and instructive to care about exploitable vulnerabilities - which can include deliberate design choices, dependencies, and many other means by which a system can be compromised.
For a non-OS example of the difference, consider the ways two-factor authentication gets defeated:

(1) Socially engineer the person's mobile phone provider into issuing a replacement SIM (or changing the account's registered details), then click "2FA login" and use the "stolen" SIM to receive the login code.

(2) Find their password in a breach of some third-party site. It's the same as their email password, so issue a password reset ("Forgot your password?"), take over the email account, and have replacement credentials issued (a defensive sketch for this scenario follows the list).

(3) They lose a laptop with an SSH login certificate on it, and it takes them a day to notice.

(4) The encryption scheme used for something was genuinely secure at the time, but evolving research means it no longer is.

(5) Hardware vulnerabilities exist too (DMA over FireWire, BadUSB, "evil maid" attacks, you name it).
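As an aside on scenario (2), this reuse-of-breached-passwords problem is why many services now check candidate passwords against public breach corpora. Here is a minimal sketch, assuming Python 3 and the publicly documented Pwned Passwords k-anonymity range API; the breach_count helper name is my own.

```python
# Minimal sketch (assumption: Python 3, standard library only) of checking a
# candidate password against the public Pwned Passwords breach corpus using
# its k-anonymity range API. Only the first 5 hex characters of the SHA-1
# hash are ever sent to the service; the full hash stays on the client.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how many times the password appears in known breach data."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        # Each response line is "<hash suffix>:<occurrence count>".
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if __name__ == "__main__":
    # A widely quoted example passphrase; expect a non-zero count.
    print(breach_count("correct horse battery staple"))
```

A check like this at password-set time closes off the most mechanical form of scenario (2), but notice that it does nothing about the SIM-swap, stolen-laptop, or hardware routes - which is exactly the point.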
The point is that the 2FA, password reset, or SSH login feature may itself be bug-free, yet a third-party weakness still let the attacker in. There is no "fixed number" of issues, and not all exploitable weaknesses are due to OS bugs. Who's to say what else could have been used?
So we can't consider just the OS, or a static count of bugs. We have to consider the universe of things it depends on, and evolving outside capabilities - things perhaps never considered until many years later. After all, SMS hijacking wasn't a concern until long after SMS existed. We have to consider the evolving landscape of exploits and vulnerabilities around the OS.
We also need to distinguish between bugs, which are fixed, and new threats, which are countered. As some of these examples show, a threat can arise that simply didn't exist before. It often isn't fair to classify it as a bug in the system to be fixed, so much as a new opening created by changes in the external context, which must be countered.
Theoretical vs practical risk
We also have to consider practical risk. Security is all about raising the barrier to misuse; there are rarely, if ever, perfectly secure systems. It's always a matter of degree: "safer", not "absolutely safe".
Only in theory can this be disregarded. In every practical sense, we need to consider how much attention and use the OS will get, and what it's used for, because more use means more interest in hacking it, and more attention means more probing for new ways in. Even an obscure OS may become of great interest if it turns out to be used for government servers, nuclear or military control, manufacturing, energy, space, banking and finance, or R&D - or for their back-end systems, to take some examples.
If a system is of great interest, then a lot of attention may go into studying the other systems connected to it, and their vulnerabilities, as stepping stones.
Your answer
For these reasons, you can rarely treat an OS, even one that's feature-frozen except for bug fixes, as having a fixed number of bugs. It just doesn't work that way, and thinking of it that way won't help.
The OS is a dynamic system that interacts with its environment. So, for all these reasons, you can't evaluate the scale or seriousness of exploits without fixing a specific point in time, a specific level of outside attention, specific external levers that can be exploited, specified hardware and hardware access, the criteria by which you consider a system "secure" (the barrier height), and so on.
Therefore an OS that was secure (either in fact, or for your practical purposes) is quite capable of becoming insecure - and may well do so - not because of an undiscovered bug that needs fixing, but because of some external factor that needs countering.
And that spells the end for your argument.
"it would not kill the machine but refuse to run past its extended support expiration date"
Emphatically no. That would be a gross violation of the rights of the users.
– preferred_anon Feb 02 '22 at 11:57