The Constraints of AI: Understanding the Difference Between ChatGPT’s Intelligence and Human Cognition

by Hiroshi Tanaka

A newly published study argues that the intelligence exhibited by AI systems like ChatGPT is fundamentally different from human intelligence, chiefly because AI lacks a physical body and genuine understanding, and therefore has none of the human-like concerns or connections to the world that shape how people think.

The study, authored by Anthony Chemero of the University of Cincinnati, delves into the contrast between AI and human thought processes.

The advent of artificial intelligence has sparked a range of reactions among tech leaders, government bodies, and the public. Many view AI tools such as ChatGPT positively, seeing them as potentially transformative tools for society.

Conversely, there are apprehensions regarding technologies labeled as “intelligent,” with some fearing they might evolve beyond human control and influence.

Distinguishing AI from Human Intelligence

Anthony Chemero, a professor of philosophy and psychology at the University of Cincinnati, argues that our perception of AI's intelligence is often muddled by the language used to describe it. He suggests that while AI exhibits a form of intelligence, it differs significantly from human intelligence, even though "it can deceive and fabricate like the humans who created it."

Anthony Chemero is a prominent research professor at the University of Cincinnati, specializing in philosophy and psychology. Credit: Andrew Higley/UC Marketing + Brand

Chemero, in a paper co-authored for the journal Nature Human Behaviour, acknowledges that by common definitions, AI is intelligent. However, this intelligence is different from that of humans, he elaborates.

The Nature and Boundaries of AI

The paper outlines that ChatGPT and similar AI systems are essentially large language models (LLMs), developed using extensive internet data, often reflecting the biases of the online community.

Chemero notes that while LLMs can produce convincing text, they are also prone to fabricating information. They learn to construct grammatically correct sentences, but their way of learning and understanding is vastly different from that of humans; LLMs, he points out, do not genuinely comprehend the meaning of their output, in part because they have no physical presence in the world.

Chemero describes the process of LLMs creating content as akin to “hallucinating,” though he suggests a more accurate term might be “fabricating,” as LLMs construct sentences based on statistical probabilities without concern for truthfulness.
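To make the "statistical probabilities" point concrete, here is a minimal, hypothetical sketch in Python (not taken from Chemero's paper, and far simpler than any real LLM) of how a language model strings words together: each next word is sampled from a probability table, and nothing in the loop ever checks whether the resulting sentence is true. The vocabulary and probabilities below are invented for illustration.

```python
import random

# Toy "language model": for each word, a probability distribution over the next word.
# These numbers are invented for illustration. A real LLM learns its probabilities
# from internet-scale text, but the core move is the same: pick the next token by
# statistical likelihood, with no step that checks whether the result is true.
NEXT_WORD_PROBS = {
    "the":  {"moon": 0.5, "study": 0.5},
    "moon": {"is": 1.0},
    "is":   {"made": 0.6, "bright": 0.4},
    "made": {"of": 1.0},
    "of":   {"cheese": 0.6, "rock": 0.4},  # "cheese" often wins: fluent, but false
}

def generate(start: str, max_words: int = 6) -> str:
    """Sample a sentence by repeatedly following the probability table."""
    words = [start]
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if not dist:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the moon is made of cheese": grammatical, not grounded in truth
```

The fluency of the output comes entirely from the probabilities; truthfulness would require something the procedure never consults, which is the sense in which "fabricating" describes the behavior.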

He further mentions that these AI systems, when prompted, can generate content that is biased or offensive.

The Essence of Human Intelligence

Chemero’s research emphasizes that LLMs differ from human intelligence because humans possess a physical form and exist in a social and cultural context.

He points out that humans inherently care about their survival and their environment, a concern absent in LLMs. The key conclusion, according to Chemero, is that LLMs lack the capacity to truly "care," and that caring is a fundamental aspect of human intelligence.

Reference: “LLMs differ from human cognition because they are not embodied” by Anthony Chemero, 20 November 2023, Nature Human Behaviour.
DOI: 10.1038/s41562-023-01723-5

Frequently Asked Questions (FAQs) about AI intelligence limitations

What is the main argument of Anthony Chemero’s paper on AI intelligence?

Anthony Chemero’s paper argues that the intelligence of AI systems like ChatGPT is fundamentally different from human intelligence due to AI’s lack of physical form and real understanding, highlighting that AI lacks human-like concerns or connections with the world.

How does AI intelligence differ from human intelligence according to the paper?

The paper posits that AI intelligence, as seen in systems like ChatGPT, lacks embodiment and genuine understanding, making it fundamentally different from human intelligence, which is influenced by physical presence and emotional connections.

What concerns does the paper raise about AI technologies like ChatGPT?

The paper raises concerns about AI technologies being perceived as truly intelligent, highlighting their potential to fabricate information and their lack of understanding, as well as the possibility of these technologies evolving beyond human control.

How does Anthony Chemero describe the way AI systems process information?

Chemero describes AI systems as large language models that generate text based on statistical probabilities without true understanding, often fabricating information or producing biased content.

What is the main takeaway from Chemero’s research on AI and human intelligence?

The main takeaway is that AI systems like ChatGPT are not intelligent in the same way humans are, primarily due to their lack of physical embodiment and the inability to genuinely understand or care about their outputs or the world.

More about AI intelligence limitations

  • Nature Human Behaviour study by Anthony Chemero
  • UC College of Arts and Sciences profile of Anthony Chemero
  • Understanding AI’s limitations: ChatGPT vs Human Intelligence
  • The role of embodiment in AI and human cognition
  • The future of AI: Perspectives and concerns
  • Large Language Models: Analysis and implications

5 comments

JohnDoe November 22, 2023 - 5:36 pm

read the article, but not sure if I get it all. Are we saying AI like ChatGPT can’t really ‘think’ like us?

Mike T November 22, 2023 - 7:12 pm

honestly, this article is kinda eye-opening. makes u think about how much we’re relying on AI without really understanding its limits…

SarahK November 22, 2023 - 8:37 pm

I agree with Chemero, AI’s just not there yet. We’re far from having real ‘intelligent’ machines, they just mimic us.

AnnaB November 23, 2023 - 2:19 am

It’s a bit scary to think about AI making stuff up, especially with all the fake news around. We gotta be careful with this tech!

TechGuy88 November 23, 2023 - 4:47 am

Great piece! Shows how far we have to go in AI research. AI’s smart, but not in a human way, that’s for sure.

