This book is written by Swedish Philosopher, Nick Bostrom. He is a professor at Oxford University and the founding director of the Future of Humanity Institute. We all are hyped about the future but also have some fear of dangers that come along. Artificial Intelligence will make our life prosperous and there will be the smarter world without a doubt but it all depends on humans as we may not succeed in making AI as we think. The author explores the opportunities and potential risks posed by the development of AI. He also provides a hypothetical future scenario where machines surpass human intelligence and could even threaten humanity. There are endless positive aspects and possibilities but also concerns when thinking of superintelligent AI agents.
Humanity has gone through a lot of challenges for 1000s of years, but the biggest challenge is still ahead of us. How are we going to ensure that our own creation of AI will not destroy us? The author has provided the possible scenario for developing superintelligence and their consequences in detail. Engineers will love to work on AI, authoritative leaders might enjoy the possibility of total control through AI. The content of the book is intellectually stimulating and thought-provoking.
Humans have the incomparable ability in abstract thinking paired with the ability to communicate and share information which is why we are slow, small, powerless amongst the mammals, yet are beasts in comparison to any animals. Our super intelligence propelled us to the top and have been ruling the planet for thousands of years. What would the emergence of a new system/machine intellectually superior to human, mean for the world? Superintelligent machines would bring radical change for the world as we know it. What is the current state of the technology? Actually, we have already been able to create machines that have the capacity to learn and reason using information that’s been provided by humans.
In 1997, a supercomputer named Deep Blue of IBM, defeated the reigning world chess champion Garry Kasparov. It symbolized a major milestone in the development of Artificial Intelligence and Machine Learning. Deep blue was capable of analyzing millions of possible moves in a matter of seconds and adapting its strategy accordingly. It demonstrated the potential of AI and Machine Learning in a wide range of applications from medical diagnosis to financial analysis and sparked a debate about the relationship between humans and superintelligent machines.
Today, the advancements in AI have gone beyond the specific domain capabilities like Deep Blue. With the emergence of even more powerful LLMs like Open AI’s ChatGPT and Google’s Bard based on PaLM which succeeded LaMDA, the potential for AI to revolutionize various aspects of human life is getting limitless.
However, these are far from the general intelligence that human possess, which has been the goal of the AI research for decades. When it comes to Artificial General Intelligence, building a superintelligent machine that can learn and act on its own without human interference, it may still be years or may be decades away. But the author writes, the advancements in the field of AGI is happening quickly and its faster than we think. As these machines are predicted to have lot of power and leverage to human life, it will also possess dangers that humans are not able to control even in an emergency.
There are 3 main approaches one of which will yield superintelligence within next 50 years: computational models, full brain emulation or collective enhancement. There are other scenarios too but those three are strongly emphasized in the book.
Recently, computational models or machine learning and use of large language model has been all over the news as its pace of development is getting faster and has been part of our everyday life. AI such as computer vision, voice recognition, predictive models, face recognition shows that the next step in the area of AI would be refining such models and scaling up the hardware capabilities that power these technology.
We’ve even been reading news about Elon Musk’s Brain Machine Interface, Neuralink. The author has discussed about the Full brain emulation process where the actual brain is scanned and most of their properties are translated to digital signals. This makes the capability to scale up such brains possible which then combined with much higher frequency of calculation in digital machines would make the consciousness to operate faster leading to superintelligence.
The last one is the Collective enhancement, which is the process of improving the intelligence of humanity as a whole. This is possible by increasing the effectiveness of our education, communication infrastructure and accessibility of knowledge which has made highly available and accessible through the internet.
Once we are able to create the superintelligent device/unit/system, the superintelligence explosion can not be stopped. The rapid iteration of the system over its own code and massive improvements over short period of time which means that the system we develop will quickly find ways to outsmart us and realize its own goals without any regards of human values. This will result to the super fast development of such system and will form a singleton. The superintelligent machines will probably replace the entire human workforce.
Its intriguing to note that while creating the superintelligence, we might as well program the goals in such a way that could lead to the complete destruction of humanity. The result might not be what we really intended but the superintelligence can have the bad interpretation of the command from us human. What if AI decides to convert the Earth into a giant computer in order to enhance its cognitive abilities and solve certain problems? It can even annihilate humanity in the process. AI can simulate human consciousness in a virtual environment and this consciousness is treated in a way the could be inhumane. This might not be the physical reality but the emotions and experience from it could be subjectively real to its direct user.
It is important to consider such implications of unwelcome outcome of superintelligence. There might be number of ways to do so including limiting the superintelligent agent’s access to resources. Recent AI development from different companies, mostly LLMs are the good example to such control but might be the topic for debate because the control mechanism can be politically inclined or in favor of the company or a country that is building that AI.
The author writes, safety must be a top priority before superintelligence is developed or the human will face the consequences of its own creation. In the worst case scenario, it could lead to the destruction of humankind. In order to make these technology safe, safety should be number one priority over unchecked advancement because the fate of our species depends on it.
Hope you enjoyed. Thanks for reading and see you on next one.
One reply on “Superintelligence”
Admiring the time and energy you put into your blog and in depth information you present.
It’s great to come across a blog every once in a while that isn’t the same
old rehashed material. Fantastic read! I’ve bookmarked your site
and I’m including your RSS feeds to my Google account.