
Preempting a Generative AI Monopoly

The emergence of powerful new tools like ChatGPT represents a major breakthrough in artificial intelligence while highlighting the need for regulatory intervention. To protect the public interest, policymakers must prevent this nascent market from becoming dominated by a handful of giant private companies.

CAMBRIDGE – ChatGPT, the new artificial-intelligence chatbot developed by the San Francisco-based research laboratory OpenAI, has taken the world by storm. Already hailed as a milestone in the evolution of so-called large language models (LLMs), the world’s most famous generative AI raises important questions about who controls this nascent market and whether these powerful technologies serve the public interest.

Released by OpenAI last November, ChatGPT quickly became a global sensation, attracting millions of users and allegedly killing the student essay. The chatbot can answer questions in conversational English (as well as some other languages) and perform other tasks, such as writing computer code.

The answers that ChatGPT provides are fluent and compelling. Despite its facility for language, however, it sometimes makes mistakes or generates factual falsehoods, a phenomenon known among AI researchers as “hallucination.” The fear of fabricated references has recently led several scientific journals to ban or restrict the use of ChatGPT and similar tools in academic papers. But while the chatbot might struggle with fact-checking, it seems less prone to error when it comes to programming and can easily write efficient and elegant code.
