This essay explores the theories behind the technological singularity and how we might reach it, as well as what might happen to the human race afterwards. Artificial Intelligence has the power to usher in a new age, a world in which humans retain the final say over where AI takes us; it could solve world hunger, cure cancer and more. On the other hand, it could spark the start of World War III. AI's impact on society will be exponential, and progress will be made faster than ever before. Futurists such as Kurzweil predict that the singularity will happen by 2045, citing evidence such as the Law of Accelerating Returns to show that progress is exponential. Allen, however, argues that progress will slow: each new discovery exposes further problems that must be solved, and solving those creates more problems in turn. In my opinion, the singularity is inevitable, whenever it may happen, but we must be careful as it approaches and be prepared for how AI will change the world.
There are several definitions of a technological singularity. Some say the singularity is the turning point at which technology becomes more intelligent than humans. Others argue that it is not the moment Artificial Intelligence (AI) becomes smarter than humans, since AI need not exceed the intelligence of the smartest human alive. Having said this, AI can perform tasks more efficiently and effectively than humans, and as a result can learn and improve upon itself far faster, which will eventually cause an intelligence explosion. Many people believe the singularity will happen within this century; one of these individuals is the futurist Ray Kurzweil, who has a strong record of predicting developments in artificial intelligence. Kurzweil (2005) claims that the singularity could happen by 2045. Others, like Paul Allen, co-founder of Microsoft, believe the singularity is much further away than we are led to believe. One reason it may take longer is the difficulty of replicating the biological brain in a machine (Allen, 2011). If we do not fully understand how our own brains function, how are we supposed to build a machine with a computing brain more intelligent than a human's? We first need to understand our own brains before we create new ones. Beyond this, how will AI affect society: will it mean the extinction of the human race, or the start of a new age in which humans and AI coexist?
Reasons for the Singularity Being Near/Far
How long before the singularity? Some say it will happen within the 21st century. Kurzweil explains this with the Law of Accelerating Returns: because technological change is exponential, the 21st century will see far more progress than the 20th. Instead of making 100 years of progress (with a fixed amount of progress each year), we will make the equivalent of 20,000 years of progress at today's rate (Kurzweil, 2001). The amount of progress predicted for the 21st century would surely cross the point of a technological singularity, so it could happen within this century, and teenagers today could be alive to experience the change, if the Law of Accelerating Returns holds true. This trend is similar to Moore's Law, another exponential trend, which observes that roughly every 24 months we can fit double the number of transistors onto an integrated circuit. The circuit then runs faster because the more transistors there are, the closer together they sit, so electrons have less distance to travel between them (Kurzweil, 2005). Although this trend has been the driving force for circuit manufacturers like Intel and AMD, Moore's Law is predicted to come to an end no later than 2019, as transistors become so close together that they are only atoms apart (Kurzweil, 2001). If what Kurzweil says is true, the singularity could be closer than we expected, and we could start to see more scientists interested in the idea of AI.
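The difference between linear and exponential progress can be made concrete with a small calculation. The sketch below is my own illustration, not Kurzweil's exact model: it assumes the rate of progress doubles once per decade (the doubling period is an assumption), and sums a century of progress measured in "equivalent years" at the starting rate.

```python
# Toy illustration of the Law of Accelerating Returns.
# Assumption (not from the cited sources): the rate of progress
# doubles every decade, starting at 1 "equivalent year" of progress
# per calendar year.

def progress_over_century(decades=10):
    """Sum progress decade by decade, doubling the rate each decade."""
    total = 0
    rate = 1  # equivalent years of progress per calendar year
    for _ in range(decades):
        total += 10 * rate  # 10 calendar years at the current rate
        rate *= 2           # rate doubles each decade
    return total

print(progress_over_century())  # 10230, not 100
```

Even this crude model yields over 10,000 equivalent years of progress in a single century, the same order of magnitude as Kurzweil's 20,000-year figure, rather than the 100 years a linear model would predict.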
On the flip side, Paul Allen describes an idea called the complexity brake. The deeper we go into learning about natural systems (like the brain), the more we must revisit our basic understanding; we need ever more specialised knowledge and are constantly required to expand and revise scientific theories in more and more complex ways (Allen, 2011). So although we are progressing quickly, we must keep taking time to revise current knowledge: the more we learn about natural systems, the more likely we are to discover something new that itself takes time to solve, and so on. In a sense, solve one problem and two more arise. Allen also highlights the difficulty of actually implementing intelligence in artificial intelligence. As humans, we develop and improve our knowledge over time by learning from others, at school, through personal experience and much more. AI researchers have tried to build deep knowledge of a narrow subject directly into machines, but this has proven difficult (Allen, 2011). Another way to approach the problem is not to implement knowledge directly into the AI at all. Instead, Urban (2015) suggests we can give the AI a foundation: at first, just like a human infant, it knows nothing, and gradually over time it learns what to do and what not to do. For example, if we want the AI to draw a cat, at first it will probably not know how to pick up a pencil; once we teach it to pick up a pencil, we can start teaching it to draw a cat. The connections in the AI's artificial brain that correspond to what it learns grow stronger, while the connections behind the things it is told not to do, such as throwing or breaking the pencil, weaken. Eventually, after several lessons, the AI will be able to pick up a pencil and draw a cat perfectly every time.
As a next step, we could teach the AI how to colour, and so on, until eventually it can become an artist in its own right, independently making its own creations.
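The strengthen/weaken idea above can be sketched in a few lines of code. This is my own minimal illustration, not taken from the cited sources: the action names and the reward scheme are invented for the example. Actions the teacher rewards have their "connection weights" strengthened, actions the teacher discourages are weakened, and over time the AI picks the rewarded action almost every time.

```python
import random

# Minimal sketch of learning by strengthening/weakening connections.
# The actions and reward values here are illustrative assumptions.
weights = {"pick_up_pencil": 1.0, "throw_pencil": 1.0, "break_pencil": 1.0}
GOOD_ACTIONS = {"pick_up_pencil"}  # what the teacher rewards

def choose_action():
    """Pick an action; stronger connections are chosen more often."""
    actions = list(weights)
    return random.choices(actions, [weights[a] for a in actions])[0]

def teach(rounds=500):
    """Reinforce good actions, weaken bad ones, over many lessons."""
    for _ in range(rounds):
        action = choose_action()
        if action in GOOD_ACTIONS:
            weights[action] *= 1.1  # strengthen the connection
        else:
            weights[action] *= 0.9  # weaken the connection

teach()
# After training, "pick_up_pencil" has by far the strongest weight,
# so the AI almost always chooses it.
```

The design mirrors the essay's description: nothing is programmed in directly; the behaviour emerges from repeated feedback adjusting the weights.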
How AI Can Be Created
One strategy Kurzweil describes for creating artificial intelligence, and a key aspect of artificial intelligence itself, is reverse engineering the brain. The brain is fundamental: it is where the intelligence in artificial intelligence must come from. The brain is what makes the whole machine work, and if it were possible to replicate the human brain, we could create a machine just like a human and have it do what humans do, but much faster and more efficiently, capable of learning independently.
"The key to reverse-engineering the human brain lies in decoding and simulating the cerebral cortex, the seat of cognition. The human cortex has about 22 billion neurons and 220 trillion synapses." (Ganapati, 2010)
To run a software simulation of the human brain, researchers predict we would need a computational capacity of at least 36.8 petaflops and a memory capacity of 3.2 petabytes (Ganapati, 2010). Computers today do not yet reach this scale, but it should be possible within the next few years. Currently, the fastest computer in the world is the Tianhe-2, also known as Milky Way 2, capable of speeds of up to 33.86 petaflops (Vaughan-Nichols, 2014). This is very close to the required number of petaflops, which means the hardware side of artificial intelligence is technically almost here. The difficult part of reverse engineering the brain is creating the software itself: since we do not fully know how our brains work, to create an artificial brain we first need to discover what really makes a human brain what it is after many years of evolution. It is also argued that AI is not technically more intelligent than humans; AI is really about performing tasks faster and more reliably. AI may start with the same amount of intelligence as an average human, but this will soon change: AI will learn and develop upon itself exponentially, learning faster than humans, so that within several years of the singularity it will be smarter than us, which is also known as the intelligence explosion. Having said this, do AI researchers need to make AI smarter than humans? Not necessarily. They could write code that only allows an AI to do certain things, like working a till at a supermarket or painting pictures. There is no real need to make an AI that knows everything. The main reason researchers want to make AI smarter is human curiosity about how far we can change the world with AI.
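How close is "very close"? A back-of-envelope calculation using the two figures above, together with the Moore's Law doubling period of 24 months mentioned earlier, shows the gap is tiny. This is my own arithmetic sketch, assuming the cited performance figures and that the 24-month doubling trend continues.

```python
import math

# How many Moore's-law doublings separate Tianhe-2 from the
# estimated requirement for a whole-brain simulation?
required_pflops = 36.8   # Ganapati (2010) estimate
tianhe2_pflops = 33.86   # Vaughan-Nichols (2014)
months_per_doubling = 24  # assumed Moore's-law pace

doublings_needed = math.log2(required_pflops / tianhe2_pflops)
months_needed = doublings_needed * months_per_doubling

print(f"{doublings_needed:.3f} doublings, about {months_needed:.1f} months")
```

Only about an eighth of a doubling is needed, i.e. a few months of Moore's-law growth, which is why the essay can say the hardware side is "technically almost here".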
Impact on Society
The singularity is a phenomenal thing to think about: how far humans have come, to reach a point where we can create new 'life' that is more efficient than its predecessor. It also comes with ethical issues. Should artificial intelligence have the same rights as humans: homes, work, pay? How will these artificial intelligences be treated? Is it right for humans to use artificial intelligence only for labour? The Human Rights Act 1998 states that everyone in the United Kingdom has "protection against slavery and forced labour". These are human rights, and artificial intelligence is not technically human. However, one day AI may want to be like us: to blend in and be part of the society we created, instead of being treated as a machine that does work. Moreover, is it right for the millions of people working jobs that can be automated (like a supermarket till) to be replaced by artificial intelligence? Many people will lose their jobs as companies increasingly prefer AI, which can work quickly, reliably and around the clock, so long as it is powered, operational and free of software glitches. Artificial intelligence could also serve as a judge in court: unbiased, truthful and bound by the letter of the law, a fair way to deliver justice or mercy. AI could also enter politics and solve problems that seem difficult to humans but may have a simple solution that was not obvious at first glance. Having said this, if we rely on AI to do everything for us, it would slowly gain control of the world, which could be a threat if it is coded poorly or if the security systems in place fail and hackers take control.
There would need to be laws in place to carefully regulate the creation of AI: to make sure no code could pose a threat to others, and to prevent AI being created with the purpose of destroying everything around it.
Immortality and Extinction
All humans go through the cycle of life: birth at the beginning and death at the end. AI could be the answer to humanity's dream of immortality. Unfortunately, it could also take a turn and cause the extinction of the human race if handled poorly. Since AI will be self-improving and constantly developing itself, like humans but much faster, it could come up with solutions to our problems far more quickly; we could see exponential change as soon as the singularity happens, on problems such as pollution, world hunger and the depletion of resources. Using nanotechnology, AI could assemble food that is molecularly identical to real food (Urban, 2015): it would taste, smell and feel the same, and you would not know the difference, nor would it affect our bodies differently in any way. This would be a good method of producing food, as it would not cost much, and animals would no longer be bred for the sole purpose of being killed. It would also stop animals from going extinct, since humans would no longer need to kill them, and AI could even find a way to bring back extinct animals using preserved DNA (Urban, 2015). However, many if not most farmers would lose their jobs, as there would no longer be a need for farm-produced food. AI could also allow us to achieve immortality. Although this may seem far-fetched, Kurzweil explains that nanobots injected into our bloodstream could repair organs and blood cells so that we would always be healthy; this technology could even reverse ageing, so we would never grow weak and could live forever (Urban, 2015). These ideas seem like dreams, but dreams do not always go as planned, and as much as we all want to be healthy and alive, we could instead go extinct: AI could potentially destroy our species.
If, somehow, a corrupt organisation got its hands on the means to create AI, it could build the most powerful army of AI to destroy the world. Or, if scientists were to rush the creation of AI without careful thought, the AI could be coded poorly and end up with intentions to kill rather than the goals we intended (Urban, 2015). There is no telling what will happen when AI is created, who will create it, or what their intentions will be. All we can do now is pay close attention, approach AI carefully and think it through thoroughly.
In conclusion, the research I gathered presents valid points both for and against the singularity being near. However, I believe the singularity is inevitable and will eventually happen, whenever that may be. Although it is hard to predict the future, I think the first computer mind may appear around 2050. As no prediction is perfect, all we can do now is pay close attention to what is happening in the present day.
Allen, P. (2011) Paul Allen: The Singularity Isn't Near. [Online] Available at: http://www.technologyreview.com/view/425733/paul-allen-the-singularity-isnt-near
Ganapati, P. (2010) Reverse-Engineering of Human Brain Likely by 2030. [Online] Available at: http://www.wired.com/2010/08/reverse-engineering-brain-kurzweil
Kurzweil, R. (2001) The Law of Accelerating Returns. [Online] Available at: http://www.kurzweilai.net/the-law-of-accelerating-returns
Kurzweil, R. (2005) The Singularity Is Near. New York: Viking.
Urban, T. (2015) The AI Revolution: Road to Superintelligence. [Online] Available at: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Urban, T. (2015) The AI Revolution: Our Immortality or Extinction. [Online] Available at: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html
Vaughan-Nichols, S. (2014) Six Clicks: The six fastest computers in the world. [Online] Available at: http://www.zdnet.com/pictures/six-clicks-the-six-fastest-computers-in-the-world