Carrion Luggage

Come closer.

Native to the Americas, the turkey vulture (Cathartes aura) travels widely in search of sustenance. While usually foraging alone, it relies on other individuals of its species for companionship and mutual protection. Sometimes misunderstood, sometimes feared, sometimes shunned, it nevertheless performs an important role in the ecosystem.

This scavenger bird is a marvel of efficiency. Rather than expend energy flapping its wings, it instead locates uplifting columns of air and spirals within them in order to glide to greater heights. This behavior has been mistaken for opportunism, interpreted as if the bird were circling doomed terrestrial animals destined to be its next meal. In truth, the vulture takes advantage of these thermals to gain the altitude needed to glide longer distances, flying not out of necessity, but for the joy of it.

It also avoids the exertion necessary to capture live prey, preferring instead to feast upon that which is already dead. In this behavior, it resembles many humans.

It is not what most of us would consider to be a pretty bird. While its habits are often off-putting, or even disgusting, to members of more fastidious species, the turkey vulture helps to keep the environment from being clogged with detritus. Hence its Latin binomial, which translates to English as "golden purifier."

I rarely know where the winds will take me next, or what I might find there. The journey is the destination.


December 25, 2025 at 8:16am
#1104403
I'd saved this Quanta article just because I thought it was interesting, especially as someone who is learning a new language later in life.

Is language core to thought, or a separate process? For 15 years, the neuroscientist Ev Fedorenko has gathered evidence of a language network in the human brain — and has found some similarities to LLMs.

See, I'd never wondered whether language was core to thought or not; for me, it absolutely is. I think in words. Sometimes also pictures, but also words (numbers are words, too, like seventeen or one-eighty-five).

Even in a world where large language models (LLMs) and AI chatbots are commonplace, it can be hard to fully accept that fluent writing can come from an unthinking machine.

I thought AI chatbots were LLMs, but whatever.

That’s because, to many of us, finding the right words is a crucial part of thought — not the outcome of some separate process.

I expect this is especially true for writers.

But what if our neurobiological reality includes a system that behaves something like an LLM?

It's funny. As technology advanced, we kept coming up with new terms to compare to how the brain works. Near the beginning of the industrial revolution, it was "gears turning" (that one persisted). Later, some compared neuronal signaling to telegraph lines. A while back, people started saying our brains are "hardwired" to do this or that. Now it's "the brain works like an LLM."

The joke is that a) no, the brain doesn't work like any of those things; it's just a useful metaphor and b) if anything, LLMs are like the brain, not the other way around. (In math, A=B is the same as B=A, but not necessarily in language.)

Long before the rise of ChatGPT, the cognitive neuroscientist Ev Fedorenko began studying how language works in the adult human brain.

The brain is, however, notoriously hard to study: not only because it's complicated, but also because we're using a brain to study it.

Her research suggests that, in some ways, we do carry around a biological version of an LLM — that is, a mindless language processor — inside our own brains.

I'd want to be more careful using the word "mindless." I'm pretty sure I know what the author means, but one of the great mysteries left to solve is what, exactly, is a mind.

“You can think of the language network as a set of pointers,” Fedorenko said. “It’s like a map, and it tells you where in the brain you can find different kinds of meaning. It’s basically a glorified parser that helps us put the pieces together — and then all the thinking and interesting stuff happens outside of [its] boundaries.”

I'm no expert at coding, but I know some computer languages have variables called "pointers" whose data is solely where to find other data. Don't ask me; I never did get the hang of them. But again, we have a technological metaphor for the brain. These are like the Bohr model of the atom: useful for some things, but not reflective of reality. So when I read the above quote, that's where my brain went.

Unlike a large language model, the human language network doesn’t string words into plausible-sounding patterns with nobody home; instead, it acts as a translator between external perceptions (such as speech, writing and sign language) and representations of meaning encoded in other parts of the brain (including episodic memory and social cognition, which LLMs don’t possess).

Yet.

A lot of the article is an interview with Fedorenko; I don't really have much more to say about it. It's just a bit of insight into how thinkers think about thinking, from a physical point of view.


© Copyright 2025 Waltz Invictus (UN: cathartes02 at Writing.Com). All rights reserved.
Waltz Invictus has granted InkSpot.Com, its affiliates and its syndicates non-exclusive rights to display this work.
