©2021 Reporters Post24. All Rights Reserved.
Artificial intelligence (AI) could wipe out humanity, and we need to figure out how to control it before it does. Elon Musk and OpenAI’s CEO Sam Altman can agree on that much. That’s why the two are racing against each other to build a “superintelligent” A.I. that’s smarter than humans but still aligned with human interests.
It’s not the first time the billionaire and millionaire—and former colleagues—have gone head-to-head.
Musk cofounded OpenAI—the maker of the viral chatbot ChatGPT—in 2015 alongside Altman and others. But when Musk proposed that he take control of the startup to catch up with tech giants like Google, Altman and the other cofounders rejected the proposal. Musk walked away in February 2018 and backed out of a “massive planned donation.”
Now Elon Musk’s new company, xAI, is on a mission to create an AGI, or artificial general intelligence, that can “understand the universe,” the billionaire said in a nearly two-hour-long Twitter Spaces talk on Friday. An AGI is a theoretical type of A.I. with human-like cognitive abilities and is expected to take at least another decade to develop.
Musk’s new company debuted only days after OpenAI announced in a July 5 blog post that it was forming a team to create its own superintelligent A.I. Musk said xAI is “definitely in competition” with OpenAI.
Neither xAI nor OpenAI responded immediately to Fortune’s request for further comment.
Attacking the same problem from different sides
Musk’s approach is to have an active hand in developing this technology in order to prevent it from becoming a danger to humans. He outlined his plans for creating a “good” A.I. during his Twitter Spaces talk.
“The safest way to build an A.I. is actually to make one that is maximally curious and truth-seeking,” Musk said.
That approach is problematic in the eyes of some critics, as truth can be slippery and relative.
For example, the Financial Times’ Tim Harford recently described the challenge of finding truthful answers online (on Google, specifically) by referring to the work of sociologist Francesca Tripodi and psychologist Peter Wason. Tripodi cited the example of googling “Why is the sky blue?” and “Why is the sky white?” and getting different answers, while Wason’s trailblazing work in the 1960s produced the term “confirmation bias,” or the tendency to search for answers that support one’s preconceived beliefs.
Meanwhile, Securities and Exchange Commission Chair Gary Gensler fears A.I.’s potential to distort financial markets, and perhaps even cause a financial crisis. In a speech this week about the “transformative” nature of A.I., he cited the potential of “herding” investors into a single narrative, or kind of truth.
A.I. “could encourage monocultures,” if everyone is making similar decisions because they are all consulting the same data source, Gensler said.
Musk is undeterred. “It really seems that at this point it looks like AGI is going to happen so there’s two choices, either be a spectator or a participant. As a spectator, one can’t do much to influence the outcome,” Musk added. “You don’t want to have a unipolar world where one company dominates in AI.”
Meanwhile, Altman and his team seem to be taking a more cautious approach, as the CEO has made several comments expressing fear about what he’s unleashed with ChatGPT, which popularized generative A.I. and supercharged development in the space.
“A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too,” Altman wrote in a February blog post.
OpenAI’s new team is developing what they call an “automated alignment researcher,” which is essentially a superintelligence designed to control superintelligence.
“Currently, we don’t have a solution for steering or controlling a potentially superintelligent A.I., and preventing it from going rogue,” OpenAI cofounder Ilya Sutskever and Jan Leike, who cohead the new team, wrote in the blog post.