As Nolan’s film took over theaters this summer, the debate over how to develop AI safely and responsibly was reaching a peak in Washington. As President Biden was convening top CEOs for discussions about AI at the White House, tech executives and senators saw an opportunity to use Oppenheimer’s struggles to illustrate the morally complex stakes of the debate over the emerging technology.
But Silicon Valley’s fascination with Oppenheimer has left Nolan with “conflicted” feelings.
“It’s a wonderful thing that scientists and technologists of all stripes are looking to history and looking at that moment and worrying about unintended consequences,” Nolan said in a recent interview at the Hay-Adams hotel in Washington. “But I also think it’s important to bear in mind that the nuclear threat is a singular threat to humanity.”
Nolan says that the atomic bomb was a “force of destruction,” and that policymakers need to address it differently than a tool such as artificial intelligence. He warns against viewing AI as a special case and against ascribing “godlike” attributes to the technology in ways that could allow companies and governments to deflect responsibility.
“We need to view it as a tool, and we need accountability for the people who wield the tool and the ways they wield the tool,” he said.
Some technologists are warning of “doomsday” style scenarios in which AI develops the ability to think on its own and attempts to destroy humanity. Their warnings have resonated on the global stage, and they were a key focus of an international gathering of global leaders to discuss AI safety at Bletchley Park, a historic site in Britain where Allied code-breakers deciphered secret German messages during World War II.
But Nolan warns that focusing on those potential outcomes distracts from solving problems companies and policymakers could address now.
“It lets everybody off the hook if we’re looking at the most extreme scenarios,” he said.
Already, AI systems are ingesting his work and other Hollywood movies to generate photos and videos, he said. Nolan says policymakers need to address the ways that AI systems are taking people’s work now.
“When we look to the far reaches of where this technology might be applied or where it goes, I think it distracts from things that need to be addressed right now, like copyright law,” he said. “They’re not as exciting and interesting to talk about … but there’s an immediate impact on employment and compensation that needs to be dealt with.”
Oppenheimer’s story also signals how difficult the path ahead will be to regulate artificial intelligence, according to Nolan. ChatGPT accelerated a race within top companies to develop and deploy AI systems, and policymakers around the world are in the early stages of catching up. In the U.S. Congress, lawmakers have launched a group to develop bipartisan legislation to address the technology, amid extensive lobbying from the tech industry.
Oppenheimer largely failed in his efforts to address the risks of his invention. He was “crushed” in his efforts to prevent the development of the hydrogen bomb, Nolan said. The scientist’s efforts to work within the political system to create change largely failed, especially after his security clearance was revoked due to allegations that he had ties to communism.
“I sympathize with people on the cutting edge of A.I. who will look at Oppenheimer’s story and see it as a cautionary tale, partly because I don’t think it offers many answers,” he said.
In the postwar years, atomic researchers were elevated in pop culture and achieved a level of fame scientists had never before seen, Nolan said. But ultimately, they found themselves excised from the political system.
“When politicians need the inventors, they have a voice, and when they no longer need them, they have less of a voice,” Nolan said. “Oppenheimer’s story points to a lot of the difficulties, pitfalls around these kinds of issues.”
If inventors can’t ultimately decide how their technology is used, it bodes poorly for a host of tech executives, researchers and technologists who have invested significant time in educating Washington policymakers about artificial intelligence this year. OpenAI CEO Sam Altman, Tesla CEO Elon Musk and top AI researchers from schools such as the Massachusetts Institute of Technology have spent hours testifying in hearings and speaking with lawmakers in closed-door meetings amid the new AI debate.
The modern political environment presents new challenges, especially as the companies developing AI systems amass greater political influence in Washington.
“I’m worried that our leaders in Washington have not yet managed to break free from the manipulations of the tech industry that consistently tell them that they don’t understand enough to regulate,” Nolan said. “We have to get past that mode immediately.”
When Nolan began working on the movie about the 20th century scientist, he says he had no idea it would be so relevant to this year’s tech debate. He frequently discussed AI during his “Oppenheimer” media blitz, and in November, he was awarded the Federation of American Scientists’ Public Service Award alongside policymakers working on artificial intelligence, including Sen. Charles E. Schumer (D-N.Y.), Sen. Todd C. Young (R-Ind.) and Alondra Nelson, the former acting director of the White House Office of Science and Technology Policy.
“Making a film about Oppenheimer, I never thought I would spend so much time talking about artificial intelligence,” Nolan said.