The Singularity is nearer than you think

The Singularity Is Near is a thought-provoking book. The emotions I experienced while reading it ran the whole gamut, from deeply disturbed to inspired. The author, Ray Kurzweil, is an MIT-educated inventor and entrepreneur who made his fortune from a whole host of inventions, including music synthesizers, optical character recognition, text-to-speech, and speech recognition. In his book, he shares his vision of the future and makes a compelling argument for why the technological singularity is close at hand.

The singularity is defined as the point in time at which technological progress happens too quickly for the human brain to process, owing to the emergence of an artificial intelligence that has surpassed human intelligence. Such an AI would be able to rapidly iterate on and improve its own design without any human intervention. Not only would each iteration yield exponentially more computing power and capability, but the time needed for each iteration would shrink exponentially as well. The resulting runaway AI would herald the arrival of the singularity, and, by definition, no one can really say what happens after that.
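To make that feedback loop concrete, here is a toy model (my own illustration, not a calculation from the book): assume each self-improvement cycle doubles capability while taking half as long as the previous one. The total time to reach any capability level is then a convergent geometric series, which is why the takeoff ends up feeling like a single event.

```python
# Toy model of a recursive self-improvement loop (illustrative only).
# Assumptions (mine, not the book's): capability doubles each cycle and
# each cycle takes half as long as the one before it.

def takeoff(initial_capability=1.0, first_cycle_years=2.0, cycles=30):
    capability = initial_capability
    cycle_time = first_cycle_years
    elapsed = 0.0
    for i in range(1, cycles + 1):
        elapsed += cycle_time
        capability *= 2          # each iteration yields more capability...
        cycle_time /= 2          # ...and the next iteration arrives sooner
        print(f"cycle {i:2d}: year {elapsed:6.3f}, capability x{capability:,.0f}")

takeoff()
# Elapsed time converges toward 4 years (2 + 1 + 0.5 + ...), while
# capability grows without bound: a crude picture of a "runaway" AI.
```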

And that, to me, is the worrisome part. Humanity would necessarily relinquish control of its own fate and leave it in the hands of its artificial creations. There are many who are not enthused by this prospect. At the far end of that spectrum are figures such as Ted “The Unabomber” Kaczynski, who believe that technological progress inevitably leads to an increase in human suffering and a loss of freedom. While Kaczynski’s actions are morally reprehensible, many of the concerns he raises are valid. Improvements in technology necessitate restrictions on freedom. Consider the invention of the automobile. To accommodate everyone being able to drive, we had to create bureaucracy in the form of traffic regulations, legislation, and state-level DMVs. That bureaucracy limits what we can and cannot do: before cars, we could walk wherever we pleased without needing to stay on sidewalks or heed traffic lights. Consider also the current generation of military drones that fly surveillance missions and launch remote strikes on military targets. One can only imagine that the next generation of drones will be smaller, smarter, and stronger. Such drones would no doubt enable a government to build a 24/7 Orwellian surveillance system capable of quickly and quietly dispatching dissidents. Corporations, meanwhile, collect and harvest the personal information we so freely post on social networks. Given how much technology already impinges on our personal freedoms, it is not at all far-fetched to imagine that the invention of a superintelligent AI would reduce humans to the status of household pets.

This is but one possible negative outcome. Science fiction features many robot-uprising scenarios wherein the human race is completely obliterated. But what is more disturbing to me than the prospect of total annihilation is the eventual emergence of neural nanotechnology. Nanotechnology would allow us to enhance our neural pathways, vastly increasing our biological intelligence by augmenting it with machine intelligence. This would bypass one of the biggest limitations of the human brain: knowledge transfer. A concert pianist must take years of piano lessons and spend thousands of hours practicing before she can master her craft. Computers, on the other hand, can quickly copy data from one machine to another, sharing information with ease. Now imagine a world where we could simply download “knowledge modules” and install them onto our machine brains. Suddenly everyone would be able to speak multiple languages, play chess at a grandmaster level, and solve differential equations, all while having a great sense of humor. With nothing left to distinguish us from one another, we would lose all individuality. It is reminiscent of the Borg collective of Star Trek, where newly acquired knowledge is quickly shared among all the drones. Such an egalitarian society seems, to me, quite dull. In a chapter discussing aging and death (and how technology may someday make us immortal), Kurzweil dismisses the argument that our limitations make us human. In the context of mortality, I agree. But in the case of our inherent knowledge-transfer limitations, I feel that those limitations are what make life rewarding. Taking years to learn something is not a downside but a fulfilling journey. It will be interesting to see how human/machine hybrids find purpose and meaning after the singularity (assuming, of course, the robots don’t kill everybody off). And just getting to that point will be troublesome.

Consider what happens as technology continues to improve and automate away tasks that previously required a lot of human intervention. Expedia and Concur destroyed the livelihoods of many travel agents. Sites such as Zillow and Redfin will someday do away with most real estate agents (although why they have not yet succeeded is a different topic altogether). Grocery stores have self-checkout lanes. Retail stores use software to handle all of their complicated supply-chain logistics. Today there is almost no industry in which computers are not used in some way. Now imagine what happens as artificial intelligence continues improving at an ever-accelerating pace and eliminates the need for human intervention altogether. The Google driverless car has already logged hundreds of thousands of miles on the road, and commercial driverless cars are soon to follow. Before long, bus drivers, taxi drivers, and chauffeurs will all be out of a job. IBM’s Watson beat 74-game champion Ken Jennings at Jeopardy! quite convincingly, and now IBM is positioning Watson as a medical diagnosis tool. How many in the medical profession will still have a job once computers outperform them? Even art is being automated: there are already AI programs that can churn out novels, songs, and paintings. Who is still going to have a job in the future? The industrial revolution put the artisans out of work; a similar fate awaits the rest of us as the technology sector continues to innovate. Some will argue that technology creates new jobs; however, as AI continues to improve, even those will be automated away. Entire industries will be wiped out. This massive unemployment will obviously cause a lot of social upheaval. How will governments respond? Will money have any meaning in a future where nobody works for a living?

Kurzweil does not address these issues in his book, which is unfortunate, because it would have been fascinating to hear his insights on the matter; he has obviously given a lot of thought to the dangers that future innovations in genetics, nanotechnology, and robotics will pose. In fact, he devotes an entire chapter to this topic. Despite this, Kurzweil remains optimistic about the future, believing that we will eventually merge with our technology and transcend our humanity. Others picture a future that is far grimmer for humanity. Given these two diametrically opposed viewpoints, which vision of the future will be proven correct? In the end, it may not really matter. As Kurzweil astutely points out, history has shown that progress cannot be stopped. Even a complete relinquishment of all scientific research wouldn’t really work: it would require a global totalitarian regime, and for such a regime to maintain its power, it would need to ensure it always held a technological advantage over its citizens, making its Luddite agenda an unsustainable self-contradiction. Even a doomsday scenario in which the entire human race was wiped out by a massive meteor, nuclear war, viral pandemic, or some other unpleasantness would only serve as a hard reset. Other sentient life forms would likely emerge again here on Earth, or elsewhere in the universe (assuming this hasn’t occurred already), and the entire process would start over. Technology, it would appear, is on an inexorable march toward the future.

Where does this path lead? Kurzweil believes that the universe has an ultimate destiny and that the technological singularity is a major milestone along the way. He provides a fascinating roadmap of the journey, dividing the history of the universe into six major epochs, each characterized by how information is represented and how it replicates. Each epoch builds on the foundations of the previous one to generate information of increasing complexity.

The first epoch is that of “dumb” matter in the universe. A vast amount of information is encoded in every piece of matter: the number of molecules it is made of, the number of atoms in each molecule, the spin states and energy levels of the electrons orbiting each atom, and so on. Matter, and the information stored within it, can replicate itself, although not efficiently. A crystal, for example, is composed of a precise arrangement of atoms in a lattice; as the crystal “grows”, it repeats this pattern over and over. Although not intelligent, the matter in the universe coalesces into objects of increasing complexity. Entire planets, star systems, and galaxies form. From these arise the conditions necessary for biological life, leading to the second epoch.

In this second epoch, biological life encodes information about itself in its genes via DNA. DNA, of course, is itself made up of the “dumb” molecules of the first epoch; the information stored within it therefore represents a much higher level of abstraction. It can self-replicate far more efficiently, and it even has mechanisms for error correction during copying. As life on Earth evolves over billions of years, the first sentient life forms appear.

In the third epoch, information is encoded in the neural patterns of the brain. The invention of spoken language and written alphabets by Homo sapiens facilitates the transmission of these patterns, which now replicate as memes. Educational institutions help preserve these memes over the centuries, allowing humans to retain and build on the knowledge of their ancestors.

Standing on the shoulders of giants, scientists and engineers build the first computer (although there is much dispute over which computer was truly first; for the purposes of this narrative we will pretend there is one clear progenitor), heralding the fourth epoch. Information is now stored in electronic circuitry, and replicating it is a simple matter of copying bits between machines, facilitated by massive communication networks such as the internet. As machines continue to grow in computing power, artificial intelligence comes to rival that of humanity. The singularity marks the arrival of the fifth epoch: AI begins to saturate the universe, harnessing matter itself as computational substrate (e.g., Dyson spheres).

The sixth and final epoch that Kurzweil describes is a gradual “awakening” of the universe. In essence, the entire universe is turned into a computer and becomes self-aware. This is not to anthropomorphize the universe; the awakening would be an emergent process, wherein the inanimate universe is transformed into something altogether different. Carl Sagan once said, “We are a way for the cosmos to know itself.” The sixth epoch, then, represents the fulfillment of that statement. All of this, of course, is highly speculative and borders on the religious. One common criticism of Kurzweil is that he writes religious science fiction and that the singularity he describes is nothing more than a “rapture for nerds”. Personally, I found his description of this final stage in the evolution of the universe to be quite beautiful and profound, with none of the trappings of religious dogma. Whether any of it comes true remains to be seen.

There are, of course, many other criticisms of Kurzweil’s work; he devotes an entire chapter of the book to addressing them. Because he makes such strong assertions, including the bold prediction that by 2045 a laptop will possess billions of times more computing power than every human brain in existence (both past and present), many have told him that what he describes either cannot possibly happen, or cannot happen so soon. Kurzweil points to the exponential rate at which technology is improving (the law of accelerating returns, as he calls it in the book), while the naysayers argue that such growth will continue until it doesn’t.
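To get a feel for how a claim that audacious falls out of simple arithmetic, here is a back-of-the-envelope compounding calculation (my own numbers, purely illustrative, not figures from the book): if price-performance doubles every year, four decades of doubling multiplies it by roughly a trillion.

```python
# Back-of-the-envelope compounding of Moore's-law-style doublings.
# The assumed doubling period (1 year) is illustrative, not a figure from the book.

def growth_factor(years, doubling_period_years=1.0):
    """Total multiplier after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for years in (10, 20, 30, 40):
    print(f"{years} years of doubling -> x{growth_factor(years):,.0f}")

# 10 years -> ~1 thousand, 20 -> ~1 million, 30 -> ~1 billion, 40 -> ~1 trillion.
# This is why linear intuition so badly underestimates exponential trends.
```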

The question boils down to whether there are limits to our knowledge and ability. The pragmatists take the conservative position that some things are, by their very nature, unknowable and undoable, while the optimists feel there is always a workaround. With regard to the singularity, the two main barriers are the hardware required to run a computer powerful enough to surpass the human brain’s parallel processing capabilities and, no less important, the software that can mimic it. Kurzweil takes great pains to discuss the promising ideas and solutions already in the research pipeline.

On the hardware side of things, one of the major problems will be the heat generated by ever-increasing computing power. Paradigms such as reversible computing could significantly reduce heat dissipation, allowing computing power to keep increasing at an exponential clip. Moore’s law as we know it will eventually end, owing to fundamental physical limits on how small silicon transistors can get, but companies are already looking into what comes after silicon. Technologies such as carbon-nanotube computers, DNA computing, and quantum computing (to name a few) could potentially allow the exponential growth in computing to continue unabated.
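For a rough sense of why heat is the bottleneck and why reversibility helps, here is my own back-of-the-envelope using Landauer's principle (standard physics, not a calculation from the book): every irreversible bit erasure must dissipate at least kT ln 2 of energy, so a machine erasing bits at an enormous (hypothetical) rate has a hard thermodynamic floor on its power draw, while logically reversible computing avoids that particular floor in principle.

```python
import math

# Landauer's principle: erasing one bit dissipates at least k*T*ln(2) joules.
# The erasure rate below is a made-up illustrative number, not from the book.

BOLTZMANN_K = 1.380649e-23   # J/K
T_ROOM = 300.0               # kelvin, roughly room temperature

energy_per_erased_bit = BOLTZMANN_K * T_ROOM * math.log(2)   # ~2.9e-21 J

erasures_per_second = 1e27   # hypothetical machine erasing 10^27 bits/s
min_power_watts = energy_per_erased_bit * erasures_per_second

print(f"Landauer limit per bit at 300 K: {energy_per_erased_bit:.2e} J")
print(f"Minimum power for 1e27 erasures/s: {min_power_watts:.1f} W")
# ~2900 W just from mandatory bit erasures; reversible logic, which avoids
# erasing bits, is not bound by this particular floor.
```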

To take advantage of this powerful hardware, software will need to be written that can mimic the brain. Instead of inefficiently hand-coding each rule, as was done in old-fashioned expert systems, self-organizing machine learning techniques such as genetic algorithms, neural networks, and Bayesian networks will need to be employed. At the same time, as brain-scanning technology continues to improve, we can fold what we learn from reverse engineering the brain into ever more accurate models. The key is to operate at the right level of abstraction. Consider the Mandelbrot set. It is a fractal of literally infinite complexity: you could zoom in on its boundary forever and keep finding new detail, and computing every point would take an infinite amount of time, yet a single simple equation represents it in its entirety. There is strong evidence that the brain is fractal in nature. Instead of painstakingly modeling the brain by mapping every neuron and dendrite, it would be much easier to generate an accurate model of the brain by finding the right set of equations. Deriving those equations is, of course, non-trivial, but the analogy illustrates why a top-down approach to the problem may work best.
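To make the Mandelbrot analogy concrete, here is a minimal sketch (my own illustration, not code from the book) of the single recurrence that generates all of that infinite detail: iterate z = z² + c and check whether z escapes.

```python
# The entire Mandelbrot set is defined by one recurrence: z -> z*z + c.
# A point c belongs to the set if the iteration never escapes (|z| stays <= 2).

def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:           # escaped: c is definitely outside the set
            return False
    return True                  # did not escape within max_iter iterations

# Crude ASCII rendering of a region of the complex plane.
for y in range(21):
    im = 1.2 - y * 0.12
    row = ""
    for x in range(61):
        re = -2.1 + x * 0.05
        row += "#" if in_mandelbrot(complex(re, im)) else "."
    print(row)
```

A one-line rule producing unbounded structure is exactly the kind of compression the top-down approach hopes to find for the brain.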

All in all, The Singularity Is Near was a great read. It is hard to categorize, as it contains a mix of philosophy, religion, and science, and it is necessarily epic in scope given its subject matter. Topics range from Fermi’s paradox to the speed of light to the nature of consciousness, and everything in between. There is something in it for everybody. As a software developer, I used it as a springboard for a Wikipedia binge, looking up the machine learning techniques and paradigms he describes. Anyone interested in science will learn much by googling the research articles he cites: I was personally amazed to find out there was a naturally occurring nuclear reactor in Africa 1.7 billion years ago. There are many more of these nuggets of knowledge within. That alone makes the book worth reading, but more importantly, it got me thinking about the massive change brought about by the explosive growth in technology that I have seen in my own lifetime. Humans take a linear view of present-day trends: even an exponential curve looks like a straight line if you zoom in close enough. Hence we miss just how much has changed in the past decade alone. Even more change is in store, and it will be quite different from anything we have experienced up to now. Important discussions need to be had about this. So whether or not you agree with Kurzweil, it is worth reading his seminal work and considering its implications; it makes for deep conversation.
