Friday, 23 March 2018

How self-driving cars will actually work



When a new idea comes to town, or an old one gets revamped, there are usually two types of people who jump on the bandwagon of developing it into a product.


The purist: They want to take the virgin idea and force it onto society without any adaptation period. They want their vision, exactly as they see it in their minds, realized in society without asking whether society is ready for such an idea.
The hybridist: This group believes that, however great an idea might be, there needs to be a slow adaptation period for society to absorb it. So they blend the new idea with the existing one, slowly phasing out the old.
There is also a third group, not among the two that matter most: the pessimists, who think nothing will ever work. In reality they are attacking the purists most of the time, because they cannot see how the new model fits into the current system in a pure form.
The best way forward is that of the hybridist: the slow process of adaptation must be respected. A few people will adopt a new idea without hesitation; they have been expecting it and welcome it with open arms. The rest are more conservative and want to see the new thing take some stable shape before jumping in.
The current discussion here is about self-driving cars. Any technology can be implemented now, as long as some space in the physical world can be created where all the factors the new idea needs are already in place. But in reality we cannot carve out a new space on earth where certain ideas will flourish; maybe that will be possible in the future, but not now.
Our current roads and cities were created with the anticipation that they would be used by human drivers alone. If the designers had been anticipatory enough they would have speculated about the possibilities of computers and, eventually, Artificial Intelligence, but they didn’t. They went ahead and solved a problem that was immediate to their society and didn’t think too far ahead. They didn’t need to, because even if the intelligentsia had requested roads suitable for the self-driving cars of the future, they would have been dusted off to a corner so that real thinking people could get ahead with what was “practical”.
Trillions of dollars have been invested in the current road infrastructure, and cities have been built around it; it would be insane to suggest tearing down the cities to design roads that current AI can navigate easily.
Does that mean it is a mistake to start working on self-driving cars? No. Self-driving cars are important for the future of transportation. Eventually they will take over the streets, and new cities or roads will be designed to meet their needs; or we wait until we have full Artificial General Intelligence, beyond deep-learning-guided computer vision.
So self-driving cars are important for the future, and we should start working on them now, perfecting them bit by bit. What matters is that we are not too idealistic or purist in our implementation. Although it’s cool to watch a car drive with no one in the driver’s seat, that is not suitable for our current cities with AI in its current form.
We should adapt our self-driving car efforts to meet the current state of the art of artificial intelligence and the nature of our current societies.
The truth is that if self-driving cars are to be implemented in a pure fashion, that is, without a driver in the seat, then entirely new cities where it is illegal for humans to drive will have to be constructed. If by some accident we achieve full artificial general intelligence, this will not be necessary; but the latter is a matter of hope, while the former is a matter of economic will.
Another alternative would be to design special lanes for self-driving cars, but that is impractical for many cities that barely have space for cars and pedestrians as it is. If the roads carried cars alone, self-driving cars would not be that difficult to design. But roads have pedestrians, and it is very hard to train an AI to recognize all the possible ways a pedestrian can behave; there will always be too much randomness in the system for billions of examples to eradicate. If that were possible there would be no randomness in any system at all, because AI would be able to predict every kind of randomness and the stock market would not exist. But that is viewing AI as a prophecy machine, which surprisingly many people do.
If all the cars on the street are self-driving, all capable of communicating with each other, with no pedestrians at the side, there will be virtually no accidents, except through some weird software error or hacking.
So what is the way forward for self-driving cars? They have to come into society in a subtle manner, giving time for adaptation and for AI to get better, since we won’t have any new lanes in Manhattan anytime soon. There must always be a driver in the driver’s seat. Self-driving features should be sold like seat belts: another form of driver assistance ensuring the safety of the driver and passengers, just another level of insurance against disaster.
It’s understandable that the self-driving guys want to build out full solutions so that they can more easily access funding and make more money, but until we have better AI they should content themselves with selling self-driving mechanisms to existing car manufacturers or to the newer electric players.
The companies that buy these raw mechanisms, hardware and software, can then brand them in whatever way they see as “marketable” to buyers: Autopilot, a better cruise control, more than park assist, and so on.
The idea of no driver in the driver’s seat will not sell well with the public for a long time to come. This is not meant to dishearten self-driving car engineers and the wider AI community, but to spur them to greater effort.
Currently, premature AI is being marketed fiercely by many big companies that have invested so much money in these projects for so many years that they are tired of waiting and want to cash out. They are pushing out press releases and products as fast as possible, and AI programmers desperate to please their masters are churning out small variations of the same algorithms and calling them innovations. This is what has led to the proliferation of self-driving cars, all based on the kind of AI that is only good at tagging photos on some forgiving social media platform.
At its current level of development AI should be an assistant; call it a bad calculator you use because it works, but cannot totally rely on. It should be applied to assistive tasks with a human as verifier. In my opinion, AI is good at helping humans see what their concentration misses. The main application of current deep learning systems should be discovering the patterns we miss due to our imperfect concentration and memory.
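The assist-and-verify pattern described above can be sketched as a simple gate: the model proposes, the human decides. Everything here (the function name, the confidence threshold, the example inputs) is an illustrative assumption, not any real product's API:

```python
def assist(model_prediction, confidence, human_review, threshold=0.99):
    """Human-in-the-loop gate: the AI only suggests, the human verifies.

    model_prediction: what the AI suggests (hypothetical example)
    confidence: the model's own confidence score in [0, 1]
    human_review: callable that returns the human's final decision
    """
    if confidence >= threshold:
        # Even a high-confidence output is surfaced only as a suggestion,
        # never auto-applied: the human remains the verifier.
        return human_review(suggested=model_prediction)
    # Low confidence: hand over to the human with no suggestion at all.
    return human_review(suggested=None)

# Example: a reviewer who accepts the suggestion when one is offered,
# and falls back to a manual check otherwise.
decision = assist("pedestrian ahead", 0.995,
                  lambda suggested: suggested or "manual check")
print(decision)  # pedestrian ahead
```

The point of the gate is the one the text makes: the algorithm catches what concentration misses, but the final call stays with a person.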
In my opinion deep learning is being misapplied. There is a strong urge in some humans to invalidate human beings in every field of work; we complain that AI will take us out of the equation, but in reality this is our own desire, not that of the intelligence. Intelligence is just intelligence; it has no purpose, it just crunches numbers mechanically, no different from a calculator. It is the human being applying this intelligence who is responsible for anything the intelligence does.
AI techniques like deep learning, RNNs and reinforcement learning are just tools; it is how we apply them that really matters. And we are currently misapplying deep learning because we do not really understand its purpose.
A calculator is just a calculator; it is better at calculating than a human being. When you are faced with some arithmetic, it’s better to use the calculator to assist you. But you the human are the one who found it necessary to do arithmetic in the first place; the calculator cannot decide to do anything.
You can actually program intention, but that will be your intention, not the intention of the machine. You can design raw intelligence and then code intention to drive that intelligence; if that intention becomes malevolent, you the programmer should be held responsible, because you did not design the intention properly.
If you are tasked with writing a control program for a nuclear reactor and you mess up, because you did not understand all the constraints or did not test your code properly, and the reactor fails and the error is traced to a software malfunction, then you should be held responsible for it. We cannot ascribe intention to a computer program or a nuclear reactor.
Deep learning, and indeed all current AI technologies, should be seen as assistive tools. We are operating from the wrong mindset, trying to ascribe anthropomorphic features to raw code: either because we are atheists at heart but deep inside our brains still carry the urge to worship something greater than ourselves, and we are trying to create that god in AI; or because we have watched too many sci-fi movies built around an anthropomorphic AI with a personality.
Driverless cars (no driver in the driver’s seat) are a cool idea and will find their place in society in some 20 years’ time or more. It is really a matter of human nature, not of technology. Advanced technology can exist, but human acceptance is the determining factor in whether it gets used. For now, self-driving car engineers should focus on a lighter, more benign form of entry into the industry: an assistive product that helps the driver in situations of fatigue on a long journey or in slow traffic.
Just like the autopilot function in a plane, the driver should choose either to use it or to turn it off, with full responsibility. If the car crashes, then in most situations the driver should be blamed, not the AI. The manufacturer of the feature should not be blamed except where the fault can be traced to faulty software in the self-driving mechanism. But there should be some kind of black-box system that records the AI’s interactions with the driver; if it is discovered that the AI advised a switch to manual control in the situation, then the driver will be blamed.
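As a rough illustration, the black-box idea could be as simple as an append-only, timestamped log of advisory events that investigators replay after a crash. The class, field names, and event labels below are hypothetical, not any manufacturer's actual format:

```python
import time

class BlackBox:
    """Hypothetical append-only recorder of AI/driver interactions."""

    def __init__(self):
        self.events = []  # in a real car this would be tamper-proof storage

    def record(self, source, event, detail=""):
        # Each entry is timestamped so investigators can reconstruct
        # who (AI or driver) did what, and when.
        self.events.append({
            "t": time.time(),
            "source": source,   # "ai" or "driver"
            "event": event,     # e.g. "advise_manual", "took_control"
            "detail": detail,
        })

    def ai_advised_manual_before(self, crash_time):
        # The liability check from the text: did the AI advise a switch
        # to manual control before the crash occurred?
        return any(e["source"] == "ai" and e["event"] == "advise_manual"
                   and e["t"] <= crash_time for e in self.events)

box = BlackBox()
box.record("ai", "advise_manual", "heavy rain, low sensor confidence")
crash_time = time.time()
print(box.ai_advised_manual_before(crash_time))  # True -> driver at fault under the scheme above
```

The design choice here is that the log only records facts; the blame decision is a query run over it after the fact, which keeps the recorder itself out of the liability argument.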
It is foreseeable that good lawyers defending the driver in such a case will hire AI engineers to devise some test that breaks the AI, so that it can be argued in court that the AI, not the driver, was responsible for the crash. The manufacturer must therefore include a reliability assurance in the warranty, indicating to the driver that the technology is reliable only to certain percentages and should not be relied on in all situations.
In most cases, finding fault in the AI will be an unrealistic endeavor, because in actual tests the AI will prove more reliable than the human driver in many situations. These legal implications must be thoroughly understood by any manufacturer who wants to offer self-driving technology in their products.
There must be a driver in the seat at all times for now. He must watch the road, but the AI can help reduce the effort he needs to apply when driving. On highways the AI can take over, just as in an airplane the pilots use autopilot when appropriate.
But driverless cars, the extreme form of self-driving cars, will take a long while for society in its current constitution to accept; far longer than the engineers designing them now can understand. That dream of a taxi without a driver pulling up to take you to the airport will take quite a while. One day it will be possible, but not in 10 years, unless there is another major breakthrough in AI, not just minor hacks to deep learning et al.
If Elon Musk secures funding for his tunnels, then the tunnels will be designed to accommodate the needs of driverless cars, such that in a long 500-mile-plus tunnel you could sit in the back seat of a driverless car and sleep comfortably for your entire journey.
