Carrion Luggage
#1104403 added December 25, 2025 at 8:16am
Thinkspots
I'd saved this Quanta article just because I thought it was interesting, especially as someone who is learning a new language later in life.

Is language core to thought, or a separate process? For 15 years, the neuroscientist Ev Fedorenko has gathered evidence of a language network in the human brain — and has found some similarities to LLMs.

See, I'd never wondered whether language was core to thought or not; for me, it absolutely is. I think in words. Sometimes also pictures, but also words (numbers are words, too, like seventeen or one-eighty-five).

Even in a world where large language models (LLMs) and AI chatbots are commonplace, it can be hard to fully accept that fluent writing can come from an unthinking machine.

I thought AI chatbots were LLMs, but whatever.

That’s because, to many of us, finding the right words is a crucial part of thought — not the outcome of some separate process.

I expect this is especially true for writers.

But what if our neurobiological reality includes a system that behaves something like an LLM?

It's funny. As technology advanced, we kept coming up with new terms to compare to how the brain works. Near the beginning of the industrial revolution, it was "gears turning" (that one persisted). Later, some compared neuronal signaling to telegraph lines. A while back, people started saying our brains are "hardwired" to do this or that. Now it's "the brain works like an LLM."

The joke is that a) no, the brain doesn't work like any of those things; it's just a useful metaphor and b) if anything, LLMs are like the brain, not the other way around. (In math, A=B is the same as B=A, but not necessarily in language.)

Long before the rise of ChatGPT, the cognitive neuroscientist Ev Fedorenko began studying how language works in the adult human brain.

The brain is, however, notoriously hard to study: partly because it's so complicated, and partly because we're using a brain to study it.

Her research suggests that, in some ways, we do carry around a biological version of an LLM — that is, a mindless language processor — inside our own brains.

I'd want to be more careful using the word "mindless." I'm pretty sure I know what the author means, but one of the great mysteries left to solve is what, exactly, is a mind.

“You can think of the language network as a set of pointers,” Fedorenko said. “It’s like a map, and it tells you where in the brain you can find different kinds of meaning. It’s basically a glorified parser that helps us put the pieces together — and then all the thinking and interesting stuff happens outside of [its] boundaries.”

I'm no expert at coding, but I know some computer languages have variables called "pointers" whose data is solely where to find other data. Don't ask me; I never did get the hang of them. But again, we have a technological metaphor for the brain. These are like the Bohr model of the atom: useful for some things, but not reflective of reality. So when I read the above quote, that's where my brain went.

Unlike a large language model, the human language network doesn’t string words into plausible-sounding patterns with nobody home; instead, it acts as a translator between external perceptions (such as speech, writing and sign language) and representations of meaning encoded in other parts of the brain (including episodic memory and social cognition, which LLMs don’t possess).

Yet.

A lot of the article is an interview with Fedorenko; I don't really have much more to say about it. It's just a bit of insight into how thinkers think about thinking, from a physical point of view.

© Copyright 2025 Waltz Invictus (UN: cathartes02 at Writing.Com). All rights reserved.
Waltz Invictus has granted InkSpot.Com, its affiliates and its syndicates non-exclusive rights to display this work.