AI marketing is running a fever, promising that this new wave of generative AI tools, powered by large language models, can help us do everything from navigating legal contracts to saving hundreds on our phone bills. When a leader like Alphabet CEO Sundar Pichai describes AI as “deeper than fire or electricity,” it’s hard not to be excited about the possibilities. But as the CEO of Consumer Reports, I know that some of the flashiest things on the market don’t always live up to the hype. An insatiable appetite for quarterly earnings is often what drives the generative AI transforming society today, and consumers will have to fight for a fair shake. Only when the AI revolution is grounded in transparency, accuracy, and fairness can we be sure it lives up to its true potential for ordinary consumers, not just corporate shareholders.
A consumer-first approach is not guaranteed. When Consumer Reports was founded in 1936, the public had little information to help Americans evaluate the safety and performance of products. It was an era of unrestrained advertising claims, rapid technological progress, and patchwork regulation. Sound familiar? Today’s transformational products are not as physical as they were in 1936, or even 1996, but the need for rigorous testing and corporate accountability remains the same.
We need only look to the recent past to anticipate the problems we might face in the AI revolution now underway. The onslaught of social media and the digital transformation of online markets delivered many of the same promises made by today’s AI boom: instant communication, faster and more accurate information, and the democratization of power. But these tools also generated new forms of manipulation and discrimination that we are still struggling to address, with mixed results.
The underlying problems aren’t new, but AI supercharges them. For years, scammers have used the internet to take advantage of consumers. Now they use artificial intelligence to imitate the voices of loved ones and trick grandparents into “helping” by sending money or sensitive information. Companies already use search engines to blur the line between answers to our questions and ads designed to sell us products. With generative AI search tools, consumers can find themselves facing a supercomputer, powered by their own personal data, that prioritizes company profits rather than honest answers or their best interests.
Arguably the most insidious problem to root out is the biased data that can underpin this new technology. Even before the current explosion of generative AI, experts had documented how the algorithmic systems that power our world today can discriminate. For example, a joint investigation by Consumer Reports and ProPublica found that some auto insurance companies may use algorithms that charge premiums on average 30% higher in zip codes with predominantly minority populations than in whiter neighborhoods with similar accident costs. And while generative AI is a new field, there are already examples of it perpetuating bias, such as offering tips on how to spread antisemitism online. What would happen if more of the market, and more of our daily lives, were handled by opaque technologies that perpetuate systemic injustice throughout our communities?
We must ask an important question: Are we creating a world in which this technology serves us, or one in which we serve the technology? As we embrace the benefits of AI, we must ensure that innovation in this space is driven by consumer-first values.
Generative AI must be transparent, because transparency is key to accountability. For advocates and regulators to assess potential harms, they need insight into the data used to train any AI model, and models must be open to third-party researchers for testing. Many model providers partner with research institutions to assess and reduce risks, but we cannot rely on self-regulation and voluntary disclosure alone when profit is what drives company interests. For individual consumers, transparency means it should be crystal clear whether someone is paying for the information we get, and whether we are dealing with a real person or an artificial one.
Generative AI models also need to ensure accuracy by design. Consumers must be able to trust that the information they are getting is true and accurate, not empty words or disguised advertising. This requires due diligence and oversight from companies, as well as a process for people to correct or dispute information an AI provides.
And fairness must be at the heart of AI: it must be developed and deployed with fairness in mind. That means reviewing data for bias at intake, during design, and throughout the life of the product, and ensuring that all communities enjoy the benefits of this technology.
AI promises a lot, and I believe in its potential. But with the launch of this new class of technology comes a new class of responsibility for transparency and accountability. Consumer protection is worth fighting for, and together we can ensure that this new era of the AI revolution is guided by a new era of consumer rights.
Marta Tellado has been the CEO of Consumer Reports since 2014.