Ever since the idea of artificial intelligence came into existence, researchers and scientists have been racing towards a single finish line: superintelligent AI. The fact, however, is that we are still very far from that destination. Today's technology has only taken us as far as narrow AI, and the sector is pushing hard towards general AI, so superintelligence remains far from reality. Yet that does not quiet the chaos around what a superintelligent AI would bring. People are constantly engaging in debates over whether superintelligent AI is a boon or a bane to humankind.
The arrival of artificial general intelligence is roughly anticipated within the next fifty years; superintelligent AI lies even further out. But this doesn't stop scientists from looking for ways to make it a reality. Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In simple terms, it is an imagined AI that not only interprets and understands human behaviour and intelligence, but is also self-aware and vigilant enough to surpass human intellectual and behavioural capacity. With the help of superintelligence, robots or machines could think like humans and, at times, go beyond them to reason about abstractions that are impossible for humans to grasp. More than debating how far we still have to go to reach superintelligent AI, scientists are engaged in working out how the world would change once such a futuristic technology is unleashed. This often leads to controversies along boon-or-bane lines.
Superintelligent AI can’t be controlled
A study led by Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid, suggests that it would be theoretically impossible for humans to control a superintelligent AI. A far worse insight is that humans might not even realise they had created one. Study co-author Manuel Cebrian notes that a superintelligent machine controlling the world may sound like a sci-fi story, but there are already machines that perform certain tasks independently without programmers fully understanding how they learned them. According to the research group's study, published in the Journal of Artificial Intelligence Research, predicting an AI's actions would require a simulation of that exact superintelligence. These results trace back to the 1940s, when Asimov came up with the 'Three Laws of Robotics': a robot may not harm humans, must obey human orders, and must protect its own existence.
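The study's core argument is a diagonalization in the style of the halting problem: any fixed, always-correct "behaviour predictor" can be defeated by a program that consults the predictor about itself and then does the opposite. A toy Python sketch of that logic (all names here are illustrative, not taken from the paper):

```python
def naive_halts(func) -> bool:
    """A toy 'predictor' that claims every given function halts."""
    return True

def contrarian():
    """Does the opposite of whatever the predictor says about it."""
    if naive_halts(contrarian):
        while True:   # predicted to halt -> loop forever
            pass
    # predicted to loop -> halt immediately
```

Whatever `naive_halts(contrarian)` returns, running `contrarian` makes that prediction wrong: if the predictor says it halts, it loops forever, and vice versa. The same trick defeats any replacement predictor, which is the intuition behind the claim that a perfect containment check for a superintelligence cannot exist.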
Besides, quantum theory, one of modern science's key frameworks for explaining the universe, suggests that predicting the future may not be possible because the universe contains genuine randomness. On that view, we cannot anticipate what is coming, and neither can even the most advanced machine, since such evolutions are unpredictable.
Don’t worry, maybe we are thinking too far
Nick Bostrom's 2014 book 'Superintelligence: Paths, Dangers, Strategies' examines the arguments commonly offered for why humans need not worry about superintelligent AI. Some of those frequently cited points are listed below:
• Electronic calculators are superhuman at arithmetic. Fortunately, they have not taken over humankind so far.
• Historically, there are no records of machines turning on and wiping out humankind, so, the argument goes, it can never happen in the future.
• Concerns about superintelligence are overestimated. No physical quantity in the universe can be infinite, and that includes intelligence.
• The book also presents survey-based probabilities that human-level AI will be attained by a certain time: 10% by 2022, 50% by 2040, and 90% by 2075.