used to be old and in the way wrote:
A self-driving car will need AI that is functionally equivalent to a general-purpose intelligence. That is a hard problem. Moravec's Paradox dates back to the 1980s, when I was working in AI: things that are hard for humans (chess, arithmetic) are easy for computers, while things that are easy for humans (seeing, walking, picking things up) are hard for computers. The second half of that is not generally understood.
It's more easily summarized as: complexity compromises performance. The more decisions an AI has to make, the more things can, and will, go wrong.
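A toy sketch of that point (my numbers, not the original poster's): if you assume each decision succeeds independently with some probability p, overall reliability decays exponentially with the number of decisions.

```python
# Toy model (illustrative assumption, not real AV data): each decision
# succeeds independently with probability p, so n decisions in a row
# all succeed with probability p**n.
def overall_reliability(p: float, n: int) -> float:
    return p ** n

# Even a 99.9%-reliable decision step degrades quickly at scale.
for n in (10, 1_000, 100_000):
    print(f"{n:>7} decisions -> {overall_reliability(0.999, n):.4f}")
```

At ten decisions you barely notice; at a thousand you're below 37%; at driving-task scale the chance of a flawless run is effectively zero. Real decisions aren't independent, but the direction of the effect is the point.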
The only way a robot can be 100% reliable is if you nail it down and give it an environment where nothing unexpected ever happens. Even checkout bots shut down until that unexpected item is removed from the bagging area.