“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” — Stephen Hawking
Just last month we lost Stephen Hawking, arguably one of the most influential thinkers on AI in modern history. Though he is no longer among us, his impact can still be felt in the many efforts he supported that continue to be pushed forward. Some of those efforts are incomprehensible to the everyday person, such as his theories on quantum gravity and general relativity, while others speak to society at large, like asking how humanity will survive the next 100 years. Of the causes Hawking dedicated his life to, perhaps the most pertinent today is his concern about AI regulation. It is sobering to think that society might never have received Hawking’s warning were it not for his admirable will to live despite the odds stacked against him.
An Everlasting Impact No One Could Ever Imagine
Hard as it may be to believe, Hawking as a child wasn’t really inclined to academics, and he didn’t even learn to read until relatively late. Despite those apparent shortcomings, he later followed in his father’s footsteps and studied at Oxford, then surpassed his father by earning additional degrees at Cambridge. It’s remarkable to think that the same man who once struggled to grasp reading would go on to obtain a PhD in applied mathematics and theoretical physics, specializing in general relativity and cosmology. Getting to that point of academic success, however, did not come without its tribulations for Hawking.
Just when he was on track to start his PhD in his early twenties, he was diagnosed with what is known today as amyotrophic lateral sclerosis (ALS) and was told he had only two years left to live. Most people given such a grim prognosis might try to live out a so-called ‘bucket list,’ but Hawking instead went on to complete his PhD despite his supposedly limited time. He then further defied expectations, not only becoming the world-renowned scholar we all know today, but also living nearly 50 more years.
…Except Maybe Hawking Himself
In those 50 or so years that he lived, Hawking strove to make a tangible impact on society, saying that he had so many things he wanted to do and wasn’t in a hurry to die just yet. And it is perhaps most like Hawking to come to terms with death through physics, via what he called the “theory of everything.” That’s right: he was so unfazed by the prospect of death that he boiled it down into a physics equation.
When he wasn’t pondering the depths of our universe, Hawking also devoted his time to efforts that affect us in the present. One of those was making sure AI wouldn’t literally get out of control. Hawking was a staunch proponent of building AI with humanity’s best interests in mind, and he often cautioned would-be innovators against getting too carried away and ignoring the obvious need for regulation.
He further acknowledged that AI is a “dual use” technology that could serve good purposes as well as do great harm. At its worst, AI could “destroy civilization and could be the worst thing that has ever happened to humanity” if regulation fails to keep up with the technological developments being made. Despite his fears, though, Hawking also praised AI’s potential benefits to our society, especially given that “everything that civilisation has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools that AI may provide, but the eradication of war, disease, and poverty would be high on anyone’s list.”
He Was and Still is Way Ahead of His Time
Hawking was a brilliant man known for his experiments and theorizing, some of it intellectually extreme, and some of it (perhaps to himself most of all) utterly blasé. As an example of the former, just two weeks before his death he predicted the end of the universe in a forthcoming paper he was co-authoring: in it, he attempted to prove the multiverse theory and, in addition, predicted how our universe would eventually fade to darkness as the stars run out of energy. As a case of the latter, he once threw a lavish banquet party for time travelers, complete with overflowing champagne towers and decadent canapés. Hawking sent out the invitations only after the party ended, reasoning that only time travelers (i.e. people from the future) would have known about the party and been able to attend. It’s no stretch to believe, then, that no one came to the party except Hawking himself. He did all this as a “simple” experiment to show that time travel was, in fact, impossible. That, and he also just really liked champagne.
His Vision of AI Regulation and Progress Thus Far
Stephen Hawking was indeed a skeptic about where AI could take us, possibly to our own end if we didn’t heed his warnings. To combat the risks he saw in a society embracing AI more and more every day, Hawking advocated for regulation to develop alongside AI itself, to ensure that the AI built now and in the future doesn’t make his worst dystopian nightmares about AI destroying our society a reality. Although there are very strong and prominent voices on both sides of the debate over whether AI regulation should happen, it is in fact already happening.
While AI’s successes and downfalls have been hot news topics lately, the conversation around governments trying to get AI under control has stayed out of the limelight, though perhaps it shouldn’t be. Without adequate regulation from governments themselves, disputes over AI will end up being decided by the courts, which don’t necessarily have the best track record with new technologies, even with precedents to draw on. As UC Berkeley professor Amy Gershkoff puts it, “not only could regulation from case law create inconsistent directives for businesses, but it also runs the risk that a few individual judges, who might not be well versed in technology or AI, could wind up disproportionately impacting the industry without sufficient input from stakeholders.”
That being said, it’s not as if our government is filled with AI or technology experts either (if the Senators’ questioning of Mark Zuckerberg is anything to go by). The government seems to have realized as much, and to address the problem before it occurs, it introduced the Future of AI Act, which would create an advisory committee specifically to deal with AI issues. Even with this committee, though, the government seems hesitant to pass regulation addressing AI itself, and to be perfectly honest, that’s quite understandable and perhaps even a better way of dealing with the situation.
This is because AI itself is a tricky subject where previously held ideas no longer apply. How do human-centric concepts like intent apply to robots? Subjective abstractions like ‘intent’ don’t tend to work well in government bills. Instead of addressing AI directly, both federal and state governments have started crafting legislation that addresses the particular risks posed by specific applications of AI. One example is regulating the safety of self-driving cars (and not necessarily the AI that controls them) through the SELF DRIVE Act. Regulations like this seem to be the beginning of addressing concerns about AI without stifling innovation, while still providing adequate protections for society. Small but definite strides are being made toward regulating AI, even if it isn’t obvious, and I think Hawking would be proud.
I also hope it will be reassuring to the late Stephen Hawking and the rest of society to know that not all AI developed today serves nefarious purposes. There are many movements and organizations today dedicated to making sure that AI develops with humanity’s best interests in mind, some of which Hawking was a part of until the day he died. One such example is Claire, the AI travel manager, whose job is to make sure you can book travel itineraries with no worries. Claire is just one example of modern AI being developed to help humans, and many more are sure to come, thanks in part to Hawking’s early and persistent calls to make sure our inventions don’t get too far ahead of us.
April holds a Master’s in Public Policy and Human Development from the United Nations University and wrote her dissertation on intellectual property policies for artificial intelligence. As a public policy consultant, she specializes in science, technology, and innovation policy, as well as international affairs. Despite her namesake, she was not born in April.