While this could be an online dating nightmare scenario, it could also be the experience you’re having with ChatGPT, the large language model that OpenAI released in December to great fanfare. The charming chatbot represents a leap forward in the ability of machines to sift through the vast volumes of human-written text found on the Internet and infer relationships and knowledge in order to create original content—from accurate answers to your questions, to short stories in the style of Chekhov, to compelling college philosophy essays.
The advances in artificial intelligence that ChatGPT heralds are eye-catching: algorithms can now generate new text that closely resembles text written by humans. The questions it raises, such as whether humans or machines will write the best books of the future, are fascinating in every sense of the word.
But as with an attractive blind date, why not approach this encounter with a little vigilance and skepticism, and maybe even bring a trusted friend along?
It is ChatGPT’s very fluency and polish that pose its most imminent threat: it is hard to spot errors and falsehoods in text that reads like reliable news and scholarly sources, or that seems to come from a personable interlocutor in conversation with us. People who intend to spread lies can easily harness that efficiency and fluency to mislead on a massive scale. This is why some experts warn that large language models could dramatically increase the risk of misinformation and disinformation campaigns—making it far cheaper and easier to create and disseminate fake scientific findings, fraudulent political claims, and conspiracy theories that threaten people’s lives in a pandemic or endanger democratic elections.
“ChatGPT mixes truth and falsehood,” said Gary Marcus, a prominent scientist and entrepreneur in the field of artificial intelligence, noting that Russia at one point spent over a million dollars a month on troll farms that created misinformation to influence the 2016 US election. “The cost has gone to almost zero to produce as much garbage as you want. It’s just amazing what you can do with it, and it’s so hard for naive people to realize they’re not reading something written by a human.” Marcus said the ability to quickly and cheaply create many linked sites making the same false claim could be used to trick search engines into treating that claim as if it had many sources—and thus into elevating it to the top of results when people search for medical or political information, for example.
Faculty members at universities and colleges are now scrambling to anticipate the ways ChatGPT could undermine learning, including the possibility that students will outsource their essay writing to a chatbot and that this cheating will be hard to detect. A world in which ChatGPT becomes an essay-generation engine for reaching academic milestones will surely be a world in which students rob themselves of thinking and learning, to say nothing of compromising their integrity. When I was teaching editorial writing at the Harvard Kennedy School last fall, it was abundantly clear how much students learn by wrestling with the writing process, refining their ideas, and repeatedly revising their work.
Still, I can imagine that many students will take the higher, harder path, and that many teachers and school administrators will do their best to make it difficult to use these AI systems to earn grades. Some college faculty are already discouraging student use of ChatGPT with methods such as having students write responses to essay questions in class, without their computers.
The rest of the world, too, has much to fear from this friendly chatbot, for all its promise. But the situation is not hopeless. Marcus suggests that online platforms should devote more resources to vetting accounts that generate large volumes of content. It is impossible to monitor every statement that appears online, but policymakers must also find ways to rein in the worst purveyors of misinformation and disinformation campaigns.
And just as when we talk to strangers, we will all have to be more skeptical about what we encounter online—and more diligent about checking it. We need to start asking of everything we read: What do we know about the source, its motives, and its origins? How do we know it was written by a real human being? ChatGPT has mostly sparked discussion about AI and its potential, but the most compelling public conversation and public education this new technology offers is not about the fantastical, all-knowing machines we might one day create. It is about reality, and how we stay grounded in it.
Bina Venkataraman is editor-at-large for Globe Opinion.