A car and an airplane do not compare
Contemplate the difficulty of manual control, purely as a baseline. We're talking about the scenario where the automation has failed: the rider did not expect to be hand-driving/flying this vehicle today, and their task is to reach a place of safety, or perhaps "limp home" via wisely chosen routes (e.g. minor back roads).
A car is an on/off button (these days), a forward/reverse selector, a big wheel to aim it, and two pedals for slow and fast. And that's it. That's a pretty obvious control scheme. An untrained person can sort it out fast. Then, all you need is a strong aversion to hitting things, and a willingness to observe and learn, especially what other cars do, and there are plenty around to observe.
OK, so you figured out the "start" and "go" levers in your airplane. Where's "stop"? Found it. Is it all right to just alternate between "go" and "stop", nothing's going to overheat, right? Now we're toddling around the taxiways; anything special we need to do around that BIG taxiway there?
OK, we're taking off. When do we rotate? How hard do we pull back? Is there a "too hard"? Now we just use the stick to aim the nose where we want to go, right, and the airplane sorts out the rest?
See, I'm pointing out here that the baseline tasks of manual flying are much, much more intricate than the baseline tasks of manual driving. And that rears its ugly head as a BIG problem when the automation fails and a non-qualified person needs to get the vehicle to a safe stopping point. It's not really a problem for a non-qualified driver to limp a car to a shoulder. An airplane, not so much - too many fatal mistakes are possible, and they can happen very quickly, too quickly to call 911 and ask how to fly a plane.
"But what about all those videos showing non-pilots getting talked down by ATC?" Caution: Survivor bias. The savior is already in the copilot's seat, has been watching the pilot's moves, may have been given a few quick lessons or a few minutes hand flying (the pilot choosing them for that seat, and educating them, for just this situation)... already has a headset, correct frequency is already tuned in, and from watching the pilot they can find "Push to Talk". And they are flying in a world sparse enough that human ATC helping human pilots is the norm. Now, OP's self-flying airplane won't be the only one. Put 50,000 self-flying planes in that airspace and think about how ATC works now. Any humans are saturated by complaints about their FlyUber having no Diet Coke in the minibar. Or you could have a common-mode failure where thousands of self-fliers revert to manual at the same time and ATC is simply overwhelmed. That "talk them down" thing isn't gonna happen.
In fairness, automating flight is actually easier than automating driving in some ways. Look at what is holding back "self-driving" from being properly called that: it's all the weird, chaotic stuff that comes from sharing uncontrolled space. Airports are largely relieved of that because of access control. Airports don't have pedestrian crosswalks. A beach ball isn't going to suddenly bounce across the taxiway with an inevitable small child in hot pursuit. So for that part, at least, we just need a special bot-plane-only airport. Unfortunately, real estate developers are not begging the aviation community to take subdivision projects off their hands and turn them into conveniently located airports! So your new robo-airport will probably be 50 miles out unless you luck onto a long, narrow Superfund site - not convenient at all; ask Montreal if that spells success.
But just because the automation is easier doesn't mean it's easy.
First self-driving "car" wreck. Computer "driver" but human "watcher": after thousands of hours of watching the computer never make a mistake, the human was complacent, or stunned to see the impossible. E-devices were stowed; every indication is that the human was as attentive as humans get. Source: NTSB
Humans are good doers, but bad watchers.
So of course you're dying to know: what went wrong in the wreck described above? Yes, train control lets you overrun the distance you can see, but in this case the sightlines were good enough that an e-stop on sighting would've sufficed. This wound up being a harbinger. The problem is those pesky human factors... artfully described in the movie Sully (setting aside its accuracy, if any).
Remember - the same tech that is "improving" aviation is improving cars too - we've been working on self-driving cars for some years now, and how's that going?
The pattern seems to be that computer mistakes are different - they have no sense of scale, and so their mistakes can be whoppers. They will drive into the side of a turning semi, or disregard a pedestrian, because the thing is below detection threshold or not expected. Attentive humans do make mistakes, but they don't make large blunders like this as a rule. And so the "solution" is thought to be human watchers...
But it turns out, humans are good "doers" but really bad "watchers". The task simply does not suit our brains. The Washington Metro rail accident taught us this, but we seem doomed to re-learn it. Many self-driving regimes give humans the job of watching for those "once every 100 hours" edge conditions and jumping in immediately. This does not work. What actually happens is a cognitive crash: it takes time to shift out of complacency, realize an edge condition is happening, realize there's a call for action right now, and switch from passive "watcher" to active "doer". And why stop at civil transport? Take the Moskva: its clunky Soviet systems needed an "unblinking eye" from its operators, and didn't provide the strong automation assistance that Patriot does. Aside from organic human complacency, you also have the pull of electronic distraction - present in several "self-driving" car wrecks and, allegedly, in one commercial flight that overflew its destination by 20 minutes.
On the other hand, when you flip the script and let the human be the "doer" and the computer be the "unblinking eye" of a watcher, that combination works out pretty well. It works better for the computer to say "terrain, gear" (not knowing whether you're trying to land) than for the human to notice that the computer has latched onto too low a glideslope (à la Die Hard II, but that actually happened).
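To make that division of labor concrete, here's a minimal sketch (made-up thresholds and field names, nothing like real GPWS logic) of the "computer as watcher" arrangement: the monitor only raises call-outs, it never takes the controls, and the human "doer" decides what the call-out means in context.

```python
# Minimal sketch of "computer as watcher": the monitor raises advisory
# call-outs (like a GPWS saying "terrain, gear") but never takes control.
# Thresholds and field names are illustrative assumptions, not real GPWS logic.

from dataclasses import dataclass

@dataclass
class State:
    radio_altitude_ft: float   # height above terrain
    descent_rate_fpm: float    # positive = descending
    gear_down: bool

def watcher_alerts(s: State) -> list[str]:
    """Return advisory call-outs only; the human remains the 'doer'."""
    alerts = []
    if s.radio_altitude_ft < 500 and not s.gear_down:
        alerts.append("TOO LOW, GEAR")        # maybe the human is landing, maybe not
    if s.descent_rate_fpm > 2000 and s.radio_altitude_ft < 1000:
        alerts.append("SINK RATE, PULL UP")
    return alerts

# The computer watches and nags; the human decides and acts.
print(watcher_alerts(State(radio_altitude_ft=400, descent_rate_fpm=800, gear_down=False)))
```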
The failure modes are much, much worse.
We already have some data on what "self-flying airplane" failures look like. Like the odd turn of weather that Air France 447 flew into, which froze its pitot tubes and called for a response that is very easy by aviation standards: "just leave throttle and trim where they are, since they are correct, and don't do anything crazy with the stick". Or the "intermittent runaway trim" problem on several 737s that flummoxed four trained pilots, pilots who practice runaway-trim scenarios in the simulator.
Of course there are countless stories of the automation doing something wacky and the pilots saving the plane just fine. But I simply do not see how an untrained non-pilot could possibly sort out this kind of computer failure that already does happen to trained pilots. What happens then, when all pitot tubes freeze up due to odd weather, or whatever other automation failure renders the autopilot unable to fly? Do we just accept that the plane crashes?
Or do we try to get the poor passenger to save the day -- to which I say: how? I mean, normal airplanes have all the kit in place - stick, instruments, throttle. There's a radio system and headsets already tuned to talk to ATC. There are many cases of "pilot incapacitated" situations where passengers were able to don headsets, talk to ATC and get the plane down intact. I'd expect all of that kit to be eliminated for cost in a self-flyer. Now what if there's no ATC to talk to? What if there's no headset? What if there's no stick?
OK, so you provide all those things; how do you keep the untrained passenger from using them inappropriately? They have to understand the fundamentals of flight, such as the fact that there is a speed called "too slow", and they need to know what a "stall" is and how to get out of it. Wind shear is a thing. If they don't have stall-spin recovery training, they will probably have a bad day, especially if they are in IMC.
You could set it up "fly by wire" so the computer locks out the manual controls unless it's in trouble, but can we count on the computer to know it's in trouble? In the WMATA crash and the self-driving accidents I mentioned, the computer thought everything was hunky-dory.
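To spell out why that's shaky, here's a toy sketch (hypothetical names, not any real fly-by-wire law) of the "computer gates the human" arrangement. The lockout keys off the automation's own self-assessment, and that's exactly the blind spot: a failure the self-check wasn't designed to detect leaves the computer convinced it's healthy, so it keeps control and the human never gets the stick.

```python
# Toy sketch of "the computer gates the human" and its blind spot.
# All names here are hypothetical; the point is that the gate relies on the
# automation's *self-assessment*, which was exactly what was wrong in the
# WMATA and self-driving cases: the system believed it was healthy.

def automation_thinks_it_is_healthy(sensors: dict) -> bool:
    # A self-check can only catch the faults it was designed to detect.
    return not sensors.get("disagreement_flag", False)

def effective_command(human_input: float, computer_input: float, sensors: dict) -> float:
    if automation_thinks_it_is_healthy(sensors):
        return computer_input    # human is locked out
    return human_input           # human gets control only when the computer admits trouble

# Failure mode: the sensors are wrong in a way the self-check doesn't see.
# The computer stays "healthy", keeps control, and the human cannot intervene.
cmd = effective_command(human_input=+5.0, computer_input=0.0,
                        sensors={"disagreement_flag": False})
print(cmd)  # 0.0 -- the computer's (possibly bad) command wins
```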
The premise of this question seems to be **the automation does not fail; or it fails, you die, and that's just accepted as the cost of flight**.
The problem you have then is people on the ground. There's an established FAA gold standard for how reliable an airplane has to be before it's allowed to fly over populated areas. The self-flying plane would have to meet that.
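To give a sense of how high that bar sits, here's a back-of-envelope calculation. The figure commonly cited for catastrophic failure conditions in transport-category certification is on the order of 1e-9 per flight hour; the fleet size and utilization below are my own assumptions (reusing the 50,000-vehicle figure from earlier), purely to show the scale.

```python
# Back-of-envelope: why "good enough to fly over people" is a very high bar.
# The 1e-9/flight-hour figure is the commonly cited transport-category target
# for catastrophic failure conditions; fleet size and utilization are assumptions.

fleet_size = 50_000       # self-flyers in service (assumption, from the ATC example above)
hours_per_year = 1_000    # flight hours per vehicle per year (assumption)
fleet_hours = fleet_size * hours_per_year

for rate_per_hour in (1e-6, 1e-7, 1e-9):
    expected = fleet_hours * rate_per_hour
    print(f"failure rate {rate_per_hour:.0e}/hr -> ~{expected:.2f} catastrophic failures per year")

# Only somewhere around 1e-9/hr does a large fleet stop producing regular catastrophes.
```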
A new hope
I'm thinking I should mention an X-factor here: drone deliveries. So far, they're not used for much except delivering ordnance to soldiers on front lines. But if "drone delivery to homes" starts happening in earnest, we will begin to collect large volumes of data about reliability and failure modes.
As this corpus of hard data develops, it will give us a new view of the question of putting humans in such vehicles. It may invalidate the factors above.
But here, there'll be no substitute for vast experience. They'll have to show millions of incident-free deliveries*, or show how the accidents which do happen are survivable.
* Military deliveries notwithstanding, it turns out some soldiers are revanchist and are not so happy to receive deliveries via this new mode, even though delivery is free.
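How many is "millions"? A rough statistical sketch: by the rule-of-three approximation, after n incident-free trials the 95% upper confidence bound on the per-trial failure probability is about 3/n, so even a modest target (the 1e-6-per-delivery figure below is just an assumption) already demands a few million clean deliveries.

```python
# Rough sizing of "millions of incident-free deliveries" via the rule of three:
# after n failure-free trials, the 95% upper confidence bound on the per-trial
# failure probability is approximately 3/n. The target rates are assumptions.

def deliveries_needed(target_failure_rate: float) -> int:
    """Failure-free deliveries needed so the 95% upper bound falls below the target."""
    return round(3.0 / target_failure_rate)

print(deliveries_needed(1e-6))   # ~3,000,000 incident-free deliveries
print(deliveries_needed(1e-7))   # ~30,000,000
```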