A new survey of more than 1,200 registered voters provides some of the clearest data yet showing the public’s desire to control AI.
In the poll, conducted by the Tech Oversight Project, 54% of registered US voters agreed that Congress should take “swift action to regulate AI” to enhance privacy and security and ensure the technology provides “maximum benefit to society.” Republicans and Democrats expressed nearly identical support for curbing artificial intelligence, a rare show of bipartisanship that signals a growing consensus around the rapidly evolving technology. 41% of voters said they would prefer regulation to come from government intervention, versus only 20% who think tech companies should regulate themselves. The voters polled also did not appear to buy arguments from technology executives who warn that new regulation of artificial intelligence could set the US economy back: only 15% of respondents said regulating AI would stifle innovation.
“While new AI technology — and the public’s understanding of it — is evolving rapidly, it’s clear that the majority of Americans don’t trust Big Tech to prioritize safety and regulation, by a two-to-one margin,” Tech Oversight Project deputy executive director Kyle Morris told Gizmodo.
The poll arrives at what could prove to be a turning point in government AI policy. Hours before the poll was released, the Biden administration met with the leaders of four leading AI companies to discuss the risks of AI. The administration also revealed that the National Science Foundation will provide $140 million in funding to launch seven new national AI research institutes.
The growing backlash against artificial intelligence
Even without polling, there are clear signs that the national conversation surrounding AI has shifted away from mild amusement and excitement over AI image generators and chatbots toward potential harms. What those harms are, however, varies greatly depending on who you ask. Last month, more than 500 technical experts and business leaders signed an open letter calling on AI labs to immediately halt development of all new large language models more powerful than OpenAI’s GPT-4, citing concerns that they could pose “profound risks to society and humanity.” The signatories, including OpenAI co-founder Elon Musk and Apple co-founder Steve Wozniak, said they would support a government moratorium on the technology if companies refuse to pause voluntarily.
Other leading researchers in the field, such as University of Washington linguistics professor Emily Bender and AI Now Institute managing director Sarah Myers West, agree that AI needs more regulation, but reject the increasingly popular trend of attributing human-like characteristics to what amount to advanced pattern-matching and word-association machines. AI systems, the researchers previously told Gizmodo, are not sentient or human, but that doesn’t matter. They fear the technology’s tendency to make things up and present them as fact could lead to a flood of misinformation, making it difficult to determine what is true. And they say the technology’s baked-in biases from discriminatory datasets mean the negative effects may be worse for marginalized groups. Meanwhile, conservatives apprehensive about “woke” bias in chatbot results have applauded Musk’s idea of creating his own politically incorrect “BasedAI.”
“Unless we have policy involved, we face a world where the trajectory of AI is unaccountable to the public, defined by the few companies that have the resources to develop these tools and test them out in the wild,” West told Gizmodo.
A new wave of AI bills is on the way
Congress, a legislature not known for keeping up with new technology, is looking to pick up the pace on AI policy. Last week, Colorado Sen. Michael Bennet submitted a bill to form an “Artificial Intelligence Task Force” to identify potential civil liberty issues raised by AI and make recommendations. Days earlier, Massachusetts Sen. Ed Markey and California Rep. Ted Lieu introduced a bill seeking to prevent AI from taking control of nuclear launch systems, saying they worried it could lead to a Hollywood-style nuclear holocaust. Similarly, Senate Majority Leader Chuck Schumer released his own artificial intelligence framework in an effort to increase transparency and accountability in the technology.
“The age of AI is here, and here to stay,” Schumer said in a statement. “Now is the time to develop, harness, and advance its potential to benefit our country for generations.”
This week, the Biden administration signaled its interest in the area by meeting with the leaders of four leading AI companies to discuss AI safety. Lina Khan, chairwoman of the Federal Trade Commission, one of the nation’s top law enforcement agencies, recently published an op-ed in The New York Times with a clear, direct message: “We must regulate artificial intelligence.”
Much of that sudden movement, according to lawmakers speaking in a recent Politico article, stems from a strong public response to ChatGPT and other popular emerging chatbots. The app’s wild popularity, and public fascination with its ability to generate compelling and sometimes unsettling responses, is said to have resonated in ways few other tech issues have.
“Artificial intelligence is one of those things that moved at 10 miles an hour, and all of a sudden it’s 100, going 500 miles an hour,” House Science Committee chairman Frank Lucas told Politico. “It’s got everyone’s attention, and we’re all trying to focus.”
Want to learn more about artificial intelligence, chatbots, and the future of machine learning? Check out our full coverage of artificial intelligence, or browse our guides to the best free AI art generators and everything we know about OpenAI’s ChatGPT.