The dramatic rise, fall, and rise of Sam Altman as head of Microsoft-backed OpenAI raises a very important question that no one seems able to answer definitively: Is artificial intelligence dangerous? And if it is, what should be done about it?
Experts in the field clearly see a risk. That’s why so many of them called for a pause on AI experiments and governments are setting up task forces to oversee advances.
Altman himself testified to Congress in May that AI could cause great harm to humanity. He was joined by an executive from IBM (ticker: IBM). Different people seem to have different ideas of what that harm would look like, but the belief itself is widespread. One idea is that AI could be used to manipulate elections by creating convincing fake news and hyper-targeting people on social media. That is indeed a scary thought, but is it categorically worse than what already exists with Facebook (META) and X, formerly known as Twitter?
There’s also concern that AI will automate jobs. But if history is any guide, productivity enhancements usually create more jobs than they destroy.
The biggest fear overhanging everything around AI seems to be the inchoate thought that maybe one day machines will become smarter than people. At that point, the technology would be outside of our control.
That leads quickly from fear to feelings of awe and disgust. After all, that’s what happened in Jurassic Park. Jeff Goldblum’s character famously said that scientists were so obsessed with whether or not they could recreate a dinosaur that they didn’t stop to ask if they should. Then the dinosaurs went on a rampage. Similarly, the computers in the Terminator and Matrix franchises took over the world and tried to eliminate humans.
This is an old genre of science fiction. It boils down to fear of the unknown, and it’s definitely not a good reason on its own to halt AI development.
Consider another great technological leap forward that looked incredibly dangerous–the development of the atomic bomb. It was, and is, obviously capable of destroying all life on the planet. And some people in the 1940s argued it shouldn’t be developed and, once it was, that it shouldn’t be used.
Yet the bomb arguably saved many more lives than it has taken. It eliminated the need for a ground invasion of Japan in World War II and deterred the Soviet Union and the U.S. from entering a full-scale war.
Even if we could stop the further development of AI, fear of its worst-case uses isn’t a compelling reason to slam on the brakes. What’s more, pausing AI would also deprive the world of its best uses, which could include saving lives through healthcare advances and, of course, making many people–not just those in Silicon Valley–more productive and wealthier. Regulate the technology as the needs arise. Don’t try to stop it in its tracks.
Of course, as Sam Altman’s return to OpenAI, after flirting with Microsoft, seems to illustrate, the advance of AI looks unstoppable. But that’s no reason to be afraid. Might as well embrace it.
Write to Brian Swint at brian.swint@barrons.com