ChatGPT ‘may make up facts,’ OpenAI’s chief technology officer says
- ChatGPT may not always be accurate, OpenAI’s CTO Mira Murati said in an interview with Time magazine.
- But talking to users can help it get better, Murati said.
- The bot’s accuracy has implications for schools, news organizations, and others using the tool.
ChatGPT has captivated the worlds of tech, media, and education since OpenAI released the chatbot to the public in November, but its responses might not always be right.
The bot “may make up facts” as it writes sentences, OpenAI’s chief technology officer Mira Murati said in an interview with Time magazine, describing that as a “core challenge.” ChatGPT generates its responses by predicting the logical next word in a sentence, she said — but what’s logical to the bot may not always be accurate.
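The mechanism Murati describes can be illustrated with a deliberately tiny sketch (this is a hypothetical bigram toy, not OpenAI's actual model): a program that always emits the statistically likeliest next word. Real models do this over learned probabilities at a vastly larger scale, which is why a plausible-sounding continuation can still be factually wrong.

```python
from collections import Counter, defaultdict

# Hypothetical toy "language model": count which word most often
# follows each word, then generate text greedily. Illustrative only.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return follows[word].most_common(1)[0][0]

# Generate a short continuation, one likeliest word at a time.
word, output = "the", ["the"]
for _ in range(4):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # a fluent-looking but purely statistical sentence
```

The toy never checks whether its output is true; it only chooses what usually comes next, which is the "core challenge" Murati points to.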
One way for users to address that is by continuing to engage with the bot and challenge responses they don’t believe are correct, she said.
“We picked dialogue specifically because dialogue is a way to interact with a model and give it feedback,” she told the outlet. Murati made the remarks in the same interview in which she said artificial intelligence tools need to have government oversight.
A representative for OpenAI did not immediately respond to Insider’s request for comment on Monday.
ChatGPT has been seen as a game changer since its public debut last year, with users swarming the bot with questions and requests for it to write code, essays, letters, articles, and jokes.
Some news organizations have indicated plans to experiment with various artificial intelligence tools to help produce articles and other content, even as some schools are taking steps to block ChatGPT’s use by students.
Using artificial intelligence to generate reliable content, even with human input, can be a challenge. The news outlet CNET, for instance, has been using AI to help produce articles, with human editors fact-checking and reviewing the output, but the pieces still contained inaccuracies, the tech site Futurism reported.
Other news sites including BuzzFeed and Men’s Journal are also looking toward AI tools to help write posts.
Insider has dispatched its own reporters to experiment with ChatGPT and give readers a sense of how the tool functions in the real world. Insider's Samantha Delouya used ChatGPT to recreate an article she'd written, and found that it fabricated quotes.