With the growing application of artificial intelligence and the evolution of machine learning, it is important for the designers of these algorithms to ensure human safety, individual privacy, and (above all) human control.
Legislation in Line
Looking around today's world, we commonly hear of practical implementations of artificial intelligence, deep learning, and machine learning. The technology has spread so far that it reaches into every nook and cranny of our lives. We even leave our most valuable data and information at its disposal simply because it's designed to make our lives easier. Today the Big Four (Amazon, Google, Apple, and Facebook) hold the majority of user data (an estimated 1,200 petabytes – a staggering 1.2 million terabytes).
This is a world ruled not by money but by data, and data changes the equation of how these companies deal with us. The recent Cambridge Analytica data scandal only reiterates the need for ethics in the advanced areas of robotics and ML. We talk about self-driving cars, robots that can express feelings or emotions, and virtual assistants that augment our day-to-day work. It's time to bring not just ethics into the equation, but also regulation and legislation to safeguard human interests.
Some of the leaders connected to this industry have already spoken:
“And mark my words, AI is far more dangerous than nukes. Far. So why do we have no regulatory oversight? This is insane.”
Elon Musk
“So you have survival of the fittest going on between these AI companies until you reach the point where you wonder if it becomes possible to understand how to ensure they are being fair, and how do you describe to a computer what that means anyway?”
Tim Berners-Lee
“The development of full artificial intelligence could spell the end of the human race. It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
Stephen Hawking
“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
Bill Gates
It is evident that ethics are not just important but crucial to turning these powerful algorithms in favor of humans. Machines learn ethics only through the code they are programmed with, so the authors of these so-called AI algorithms should not take that responsibility lightly by any means. Where we are today is the result of an evolution that took millions of years. Compare that with the evolution of AI: not even a century has passed since the first computer was invented, and the internet itself is only three decades old. That gives you a sense of the pace of innovation in the world of AI.
In a field moving this fast, the absence of ethics would spell disaster if the technology fell into the wrong hands. Have we thought about cyber warfare? It won't need the kind of ammunition we know today. Most machines used in war are already sophisticated and computer-controlled; imagine gaining access to the onboard computers of these machines. As we speak, countries like China, the US, the UK, and Russia are already developing these capabilities.
While most of my blog has focused on AI as it cuts horizontally across all the other innovations, keep in mind the trends (courtesy of Forbes) that we might see in these fields:
- Autonomous Things
- Augmented Analytics
- Digital Twins
- Immersive Experience
- Smart Spaces
- Quantum Computing
Finally, I’d like to end my blog with what Vint Cerf (known as the father of the internet) has to say about the risks posed by fast-lane innovation…
“One thing that I can tell you is that computer scientists and engineers often do not have the capacity to fully imagine the implications of the technology they develop. In fact, William Gibson, who coined the term ‘cyberspace,’ did not understand much about the technology. However, he imagined what he might do with it, and his writings supplemented the understanding and the ability of the engineers to foresee the potential. Despite his lack of technical understanding, he was spot on for a lot of it.
“Many of us did not anticipate the harmful potential of these technologies. The people who are creating these products and writing the software should feel a much greater burden than in the past because the harmful side effects can be devastating on a global scale. Not only do individuals need to feel this ethical pressure, but I feel that companies need to be incentivized to do everything in their ability to ensure these bad things do not happen.”