Geoffrey Hinton, the prominent computer scientist often referred to as the “godfather of artificial intelligence,” has resigned from Google and now says part of him regrets his life’s work, pointing to what AI could mean for misinformation and people’s livelihoods. Hinton is among many tech luminaries worried about the implications of artificial intelligence, according to an interview with The New York Times published Monday.
Hinton said he fears that ordinary people will no longer be able to distinguish real images, videos, and text from those generated by AI, and that AI could also eliminate jobs, not just rote work and number crunching but more advanced roles as well.
“The idea that this stuff could actually get smarter than people – a few people believed that,” Hinton said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Taking the current AI trajectory one step further, Hinton fears that AI systems could generate their own computer code, become autonomous, and be weaponized. Now that AI has been unleashed, he said, there may be no way to control or regulate it: even if companies agree to a set of guardrails, countries may continue to develop the technology in secret, unwilling to give up any advantage.
Hinton, along with two of his students, built a neural network, a mathematical system that learns skills by analyzing data and taught itself to recognize objects in images. Google acquired their company in 2013 for $44 million. Hinton, along with Yoshua Bengio and Yann LeCun, won the Turing Award in 2019 for their work on neural networks.
“Geoff has made foundational breakthroughs in the field of artificial intelligence, and we value his ten years of contributions at Google,” Jeff Dean, Google’s chief scientist, told CNET in an emailed statement. “I have greatly enjoyed our many conversations over the years. I will miss him, and wish him well!”
Dean went on to say that Google was one of the first companies to publish AI principles and that it is “continually learning to understand emerging risks while also innovating boldly.”
AI-powered chatbots like ChatGPT took the world by storm late last year with their ability to answer almost any question in human-like prose. From poems to biographies, the generative AI produces unique, original-sounding responses each time, upending the familiar routine of typing a query into a search box and sifting through a list of website links for an answer. Generative AI does this by combing through huge datasets and piecing together the most statistically likely sentences, which is why it is sometimes described as autocorrect on steroids.
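For readers curious what “piecing together the most statistically likely sentences” means in practice, here is a minimal, purely illustrative Python sketch of the underlying idea: pick each next word based on how often words followed one another in some training text. Real chatbots use enormous neural networks trained on vast datasets, not a simple word-count table like this; the example names and toy corpus are invented for illustration.

```python
import random
from collections import defaultdict

# Toy training text standing in for the huge datasets real systems learn from.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the training text.
next_word_counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def generate(start, length=5):
    """Piece together a sentence by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        candidates = next_word_counts[words[-1]]
        if not candidates:
            break
        # Sample continuations in proportion to how often they were seen.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat"
```

The point of the sketch is only the shape of the process: statistics learned from existing text are used to extend a prompt one plausible word at a time, which is the intuition behind the “autocorrect on steroids” description.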
With the launch of ChatGPT, many companies have integrated AI into their products. Microsoft has revamped Bing to include the same technology that powers ChatGPT. Apps like Photoshop, Grammarly, and WhatsApp are also embracing AI. Google responded by releasing its own AI chatbot, Bard, a launch it fumbled. Compared to Bing and ChatGPT, Bard has been underwhelming, even though Google is working on an AI-powered search engine. Artificial intelligence is likely to be a major topic at Google I/O this month, where the company will need to plant its flag firmly or risk being left behind.
Microsoft is also looking to ensure the responsible use of AI, publishing a blog post on Monday about embedding its in-house responsible-AI guidance and investing in a variety of talent to aid future development.
Editors’ note: CNET uses an artificial intelligence engine to create some personal finance explanations that are edited and verified by our editors. For more information, see this post.