Native to the Americas, the turkey vulture (Cathartes aura) travels widely in search of sustenance. While usually foraging alone, it relies on other individuals of its species for companionship and mutual protection. Sometimes misunderstood, sometimes feared, sometimes shunned, it nevertheless performs an important role in the ecosystem.
This scavenger bird is a marvel of efficiency. Rather than expend energy flapping its wings, it instead locates uplifting columns of air, and spirals within them in order to glide to greater heights. This behavior has been mistaken for opportunism, interpreted as if it is circling doomed terrestrial animals destined to be its next meal. In truth, the vulture takes advantage of these thermals to gain the altitude needed to glide longer distances, flying not out of necessity, but for the joy of it.
It also avoids the exertion necessary to capture live prey, preferring instead to feast upon that which is already dead. In this behavior, it resembles many humans.
It is not what most of us would consider to be a pretty bird. While its habits are often off-putting, or even disgusting, to members of more fastidious species, the turkey vulture helps to keep the environment from being clogged with detritus. Hence its scientific binomial, which translates to English as "golden purifier."
I rarely know where the winds will take me next, or what I might find there. The journey is the destination.
You know, to be honest, I think I found a perfect way to travel with the pooch! Drive in your own vehicle! You'll see more sights, stay in a pet-friendly hotel, and maybe have some adventures. But that's just my opinion.
We've been flying two or three times a year for the last few years, and with layovers and return flights that's probably eight or ten different flights. I haven't seen a single service dog. I wonder if it's more of a problem on other airlines, or for certain departure points and destinations?
Oh, wait. At least half of those are international flights. I don't know how or if this applies to service dogs, but pets for sure have to quarantine before they can cross international borders. When we moved to Germany the second time, we had two dogs, and we rehomed them rather than put them in quarantine for six months (I think that's how long it was).
And yet, they never ask people who had siblings things like "Was it hard, not being the center of attention?" Or, "How did it feel to feud with your siblings over the inheritance?"
To be fair, when it comes to the latter question, it doesn't really need to be asked because a lot of the time people will come out and complain about it on their own. That's what happened with my father-in-law, anyway.
And I will say this. Even though I was an only child, big family gatherings were a part of my childhood, and I generally enjoyed them. I do kind of miss it in middle age and do sometimes go overboard when it comes to giving Christmas gifts as a result.
Horace didn't "invent" the word "souvenir." At most, he expanded its meaning from its original use, "to recollect" or "to remember," into "a token item that reminds me of something."
If this is how the article writer wants to define the beginnings of one word, they are clearly bad at this, and everything else they wrote is now in question.
Slop may be seeping into the nooks and crannies of our brains.
Let me tell you, whoever first called "AI" output "slop" should be outed as the most influential person of the decade. Sadly, it wasn't me, this time.
If you think of something to say and say it, that could never be AI slop, right? In theory, all organically grown utterances and snippets of text are safe from that label.
Welllll... philosophically, do you really know you're not artificial? I mean, really, really know? There are a whole lot of "this is all a simulation" folks out there, some of whom may or may not be bots, but if they're right (which they probably aren't, but no one can prove it either way), then you're just as much AI as your friendly neighborhood LLM. Just, maybe, a little more advanced. Or maybe not. I can point to a few supposedly organic biological human beings who make less sense than chatbots. Flat-earthers, for example.
But our shared linguistic ecosystem may be so AI-saturated, we now all sound like AI.
For variant values of "we" and "all," okay.
Worse, in some cases AI-infected speech is being spouted by (ostensibly human) elected officials.
Well, those are all alien lizard people anyway.
Back in July of this year, researchers at the Max Planck Institute for Human Development’s Center for Adaptive Rationality released a paper on this topic titled “Empirical evidence of Large Language Models’ influence on human spoken communication.”
Today (or, rather, when I first saved this article earlier this month) I learned that there's a Center for Adaptive Rationality, and that it's named after someone better known for defining the smallest units of length and time that can be meaningfully measured. There's a metaphor in there, somewhere, or at least a pun, but I haven't quite teased it out, yet. Something about human rationality being measured in Planck lengths. Most people wouldn't get the joke, anyway.
As Gizmodo noted at the time, it quantified YouTube users’ adoption of words like “underscore,” “comprehend,” “bolster,” “boast,” “swift,” “inquiry,” and “meticulous.”
And? All that shows is that some tubers' scripts may have been generated or assisted by AI.
That exercise unearthed a plausible—but hardly conclusive—link between changes to people’s spoken vocabularies over the 18 months following the release of ChatGPT and their exposure to the chatbot.
See that? That double emdash in that quote right there? That's also a hallmark of LLM output. There is absolutely nothing wrong with using emdashes—I do it from time to time, myself, and have been long before this latest crop of generative language models. But now, thanks to LLMs, you can't use one without being accused of AI use. Unfortunately, I fear the same is going to happen to semicolons; those few of us who know how to use them correctly are going to be scrutinized, too.
But two new, more anecdotal reports suggest that our chatbot dialect isn’t just something that can be found through close analysis of data. It might be an obvious, everyday fact of life now.
I must underscore that these are, indeed, anecdotes. Which can bolster understanding, but fall short of the meticulous standards needed for science. Many people don't comprehend that scientific inquiry requires more than just stories, though people are more swift to relate to stories than to dry data. That's why many science articles boast anecdotes in their ledes—to hook the reader, draw them in before getting to the dry stuff.
I really, really, hope you see what I did there.
Anyway, the money quote, for me, is this one:
As “Cassie,” an r/AmItheAsshole moderator who gave Wired only her first name, put it, “AI is trained off people, and people copy what they see other people doing.” In other words, Cassie said, “People become more like AI, and AI becomes more like people.”
You humans—er, I mean, we humans tend to hold our intelligence in high regard, for inexplicable reasons. It's right there in the official label we slapped on ourselves: Homo sapiens, where "homo" isn't some derogatory slur, but simply means "human." The Latin root was closer to "man," and also gave French the word "homme" and Spanish "hombre," both meaning an adult male human. We can argue about the masculine being the default, as in "all men are created equal," though I agree that usage is antiquated now and that we should strive to be more inclusive in language. The important part of that binomial for this discussion, though, is "sapiens," which can mean "wise" or "intelligent," which we can also argue aren't the same thing (they certainly are not in D&D).
But I've noted in the past that our so-called creative process relies primarily on soaking up past inputs—experiences, words, mannerisms, styles, etc.—and rearranging them in ways that make sense to us and, sometimes, if we're lucky, also to someone else. Consequently, it should shock or surprise no one that we're aping the output of LLMs. I've done it consciously in this entry, but I have undoubtedly done it unconsciously, as well.
We can assert that this is the difference: consciousness. The problem with that assertion is that no one understands what consciousness actually is. I'm convinced I'm conscious (cogito ergo sum), but am I, really, or am I just channeling the parts of Descartes' philosophy that I agree with? And as for the rest of you, I can never truly be sure, though it's safest to assume that you are.
We're all regurgitative entities, to put it more simply (though with an adjective I apparently just made up). Everything we think, say, do, or create is a remix of what the people before us have thought, said, done, created, etc.
Despite my stylistic choices here, I did not use AI or LLMs to write anything in this entry, or for that matter any other entry, ever. The image in the header is, of course, AI-generated, but not the text. Never the text, not without disclosure. You might not believe that, and there's not much I can do about it if that's the case. But it's true. Still, the influence of LLMs is apparent, is it not? At the very least, without them, I would never have had occasion to write this entry.