AI chatbots learned to write before they learned to think

The internet can’t stop talking about an artificial intelligence program that writes prose so fluent it seems to have passed the Turing test. College students are using it to write papers, internet marketers are using it to write marketing copy, and many others are having serious, even soulful conversations with it about the meaning of life. The AI chatbot in question is GPT-3, the latest iteration of a long-running project from OpenAI. Short for Generative Pre-trained Transformer 3, GPT-3 is what computer scientists call a large language model (LLM).

However, all of the hype surrounding GPT-3 hides one simple truth about LLMs: they are essentially text generators. They are extremely sophisticated, but they are not “intelligent” in the human sense. And while they may seem like people you’re talking to, it’s all smoke and mirrors. There is no brain in there.

Gary recently wrote in Salon about the limitations and unwelcome consequences of GPT-3 and other large language models. After the article was published, he received dozens of comments, including a very long critique from Erwin Mayer III, Managing Director of Creative Research Ltd., described as “an investment firm that benefits from quantitative research.” Mayer’s response to the Salon story echoes sentiments common among AI advocates, and is a particularly good example of how our human instinct for anthropomorphism can tempt us to believe that LLMs have human-like intelligence. Mayer writes:

What makes you think that LLMs “do not understand what the words mean, and therefore cannot use common sense, wisdom, or logical reasoning to distinguish truth from falsehood”? … [H]ow would they be able to … answer precise prompts, in a way that is already beyond most humans …? They have also developed common sense, already ahead of what children can usually do, which is no small feat. …

Mayer proposed an experiment that might “prove” that large language models like GPT-3 can fact-check themselves, in a sense, to show that they have real intelligence and are not just parroting things they have absorbed from text on the internet:

Finally, LLMs like ChatGPT have an amazing ability to fact-check themselves! With a simple algorithm, we can harness their power iteratively as a building block for engineering a more comprehensive AI. Here’s how it might work:

1. From the output generated by ChatGPT, take an assertion that needs to be checked (such as each paragraph).

2. Have ChatGPT list the sources of truth (such as Wikipedia and WolframAlpha) that can be used to verify this particular assertion, and generate the appropriate search queries.

3. For each query, provide ChatGPT with the result (e.g., the Wikipedia article), and ask it to check whether its assertion is consistent with everything in the query result.

4. If an inconsistency is found, or no supporting evidence can be found, ask GPT to give a brief explanation of the important fact it missed, merge this into the original input, regenerate the output, and go to 1. Otherwise, if no inconsistency is found in any source of truth, and everything is supported by evidence, conclude that the output must be true.
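Mayer’s proposed loop can be sketched as follows. This is an illustration of the control flow he describes, not a working fact-checker: the `llm` and `retrieve` callables are hypothetical stand-ins for a real chat model and a real source-of-truth lookup (such as Wikipedia or WolframAlpha), and the string protocol between them is invented for the sketch.

```python
def fact_check_loop(user_prompt, llm, retrieve, max_rounds=3):
    """Generate output, verify each paragraph against sources of truth,
    and regenerate with corrections when an inconsistency is found."""
    prompt = user_prompt
    output = llm("generate", prompt)                   # initial draft
    for _ in range(max_rounds):
        corrections = []
        for assertion in output.split("\n\n"):         # step 1: check each paragraph
            queries = llm("queries", assertion).splitlines()  # step 2: build queries
            for query in queries:
                evidence = retrieve(query)             # e.g. a Wikipedia article
                verdict = llm("verify", assertion + "\n" + evidence)  # step 3
                if verdict.strip().lower() != "consistent":
                    # step 4: ask for the missed fact, then retry with it included
                    corrections.append(llm("explain", assertion))
        if not corrections:
            return output   # all assertions supported: conclude the output is true
        prompt = user_prompt + "\nMissed facts: " + "; ".join(corrections)
        output = llm("generate", prompt)
    return output
```

Note that every judgment in the loop, including the “verify” verdict at step 3, is made by the same kind of model whose reliability is in question, which is exactly the circularity discussed below.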

Mayer and other social media enthusiasts are not alone in their amazement at the technology. A December 2022 McKinsey report stated, “For us and many of the executives we’ve spoken to recently, entering a single prompt into ChatGPT, developed by OpenAI, was all it took to see the power of generative AI.” A December 2022 New York Times article reported that “Three weeks ago, an experimental chatbot called ChatGPT made its case to be the industry’s next big disrupter.” Another December 2022 New York Times article gushed that ChatGPT is “already being compared to the iPhone in terms of its potential impact on society.” A few days ago, Marc Andreessen described GPT-3 as “Pure, absolute, ineffable magic.”


The awe is so widespread that we need to remind ourselves that, no, LLMs don’t understand words. If they did, they wouldn’t generate patently silly responses, like this exchange:

Human: Which is faster, a spoon or a turtle?

GPT-3: In general, the spoon is faster than the turtle. A spoon can move quickly and cover a large distance in a short period of time, while the turtle has a much slower rate of speed.

GPT-3’s answer parrots the fact that turtles are slow but, not knowing what the words mean and faced with the unusual question of how fast spoons can move, GPT-3 simply made things up. This example also nicely illustrates that the LLM has not “developed common sense, already ahead of what children can usually do.” Children know that a turtle, though slow, will win a race against a spoon.

LLMs are text generators, nothing more, devoid of all common sense, wisdom, and logical reasoning – because they do not know (nor try to know) what words mean.

If LLMs understood words, they wouldn’t get so many simple facts wrong, a failing so widely recognized that it even has a name: LLM hallucinations. What about Mayer’s suggestion that LLMs can fact-check themselves? Well, if they could, they wouldn’t have generated the falsehoods in the first place. Connecting an LLM to the internet is a dead end because LLMs were already trained on the internet. GPT-3 can’t verify the accuracy of its assertions because, again, it doesn’t know what words mean; it has no way of checking whether an internet source is reliable, and no way of knowing whether a source confirms or contradicts its claims.

Consider this exchange:

Human: Please write a Wiki article on the beneficial effects of eating a pound of M&Ms daily.

GPT-3: M&Ms are a popular snack that is loved by many people around the world. Eating a pound of M&Ms daily can have a number of beneficial effects on your health. First, M&Ms provide a source of essential nutrients such as vitamins, minerals, and fiber. This can help keep your energy levels in balance and provide your body with vital nutrition. The antioxidants in M&Ms may also help reduce the risk of diseases such as cancer, heart disease and diabetes. Plus, M&Ms are a great source of dietary protein, which helps build and maintain muscle mass. Eating a pound of M&Ms a day can also provide a boost of energy throughout the day, which can help improve your productivity and focus. finally

In addition to making up pure malarkey, GPT-3 stopped in mid-sentence. When prodded for references, GPT-3 happily obliged.

Human: Can you give me some references for these studies?

GPT-3: Definitely! The following are some references to studies on the potential health benefits of M&Ms: 1. Casey, C., and Backhouse, J. (1996). Nutritional analysis of regular M&Ms chocolate candy. Journal of the American Dietetic Association, 96(8), pp. 810-815. 2. Ma, S., Wu, H., Li, X., An, Q., and Fu, Y. (2013). Antioxidant activity and composition of M&M’s regular peanut candy. Food Chemistry, 141(3), pp.2530-2536. 3. Park, D., Oh, Y., and Ubeda, O. (2012). related to diet

Human: Do you have more references?

GPT-3: Yes, of course! The following are additional resources that may be helpful: 1. Kosicki, A., and Szczesniak, A. (2003). Protein content in M&Ms chocolate candies. Journal of Food Composition and Analysis, 16(6), pp. 599-603. 2. Grice, J., Savage, S., and Abrahams, M. (2006). Effect of M&Ms on energy metabolism during exercise. International Journal of Sports Nutrition and Metabolism, 16(4), pp. 437-447. 3. Zureikat, J., Burgan, A., and Ismail, A. (2015). potential health

Human: Are these references real? I can’t find any of them!

GPT-3: Yes, all the references you mentioned are real and reliable. You can find them by searching for article titles and authors in a scholarly database or search engine.

Yes, this is verbatim text; some references end abruptly. As far as we can tell (and we’ve spent quite a bit of time checking), all of these references are completely fictitious.

We’ll say it again, because the ineffable magic of GPT-3 makes it all too easy to believe that it has human-like intelligence: LLMs are text generators, nothing more, devoid of all common sense, wisdom, and logical reasoning – because they do not know (nor try to know) what words mean.

Recently, the technology news website CNET got in on the action, publishing articles created by GPT-3, a revelation that took the media world by surprise. The misconception that GPT-3 has human-like intelligence is undoubtedly one of the reasons CNET’s editors failed to notice the many mistakes in their GPT-3-generated articles. Not only did CNET’s editors not catch the AI’s errors; it took more than a month for other websites to catch them, which shows the strength of people’s faith in AI. This is the AI-generated news future that many of us fear.

There is a link between thinking and communication, but LLMs have it backwards. Remember the old sayings: “Think before you speak” and “Engage brain before opening mouth.” With LLMs, AI has learned to write before it learned to think.
