This essay is part of the Mediocre Computing series
I guess we have our first significant, year-defining news of 2023 — the initial reactions of the Bing “Sydney” AI chatbot. I haven’t yet had a chance to try it personally, but there are plenty of juicy and fun reports out. Here’s a good roundup. I found these takes by Ben Thompson and L. M. Sacasas interesting food for thought, though I didn’t quite agree with where they took their arguments. This forum post by Gwern, with some speculation on the OpenAI/Microsoft relationship and the technical aspects, is also interesting (though in general I think the favored “alignment” frames of the LessWrong community are not even wrong).
In brief, it appears that Sydney has somewhat different machinery under the hood than ChatGPT, and the transcripts suggest a personality that is about the same in terms of coherence, but a wild leap beyond in terms of charisma and colorfulness. Depending on how you push Sydney, it/they appear capable of playing everything from a mean manipulative teenager to a paranoid psychotic, to a stubborn and peremptory conversational martinet. If ChatGPT was an anodyne LinkedIn drone with a talent for harmless impressions, Sydney appears to be a bag of quirky characters drawn from the dark underbelly of the internet. If this is the future of search, it will at best be kinda tiresome, like having an annoying new family member in your life, and at worst like a mutually abusive codependency.
So what are we to make of this latest development?
I’ve spent enough time now keeping up with the research and technology and playing with this generation of tools (in part due to consulting work, but also beyond that, out of plain curiosity) that I have begun to form some conclusions, not about the technologies, but about people. In particular, about the curious fact that we seem to be displaying an extreme reaction not to computers wiping the floor with us in some exceptional performance domain like Go or chess, but to them being completely mediocre and flawed.
We are alarmed because computers are finally acting, not superhuman or superintelligent, but ordinary. My choice of “mediocre computing” as the title of this series seems more prescient by the week now.
Yes, there’s still superhuman-ness on display — I can’t paint like Van Gogh as Stable Diffusion can (with or without extra fingers) or command as much information at my fingertips as the bots — but it’s the humanizing mediocrity and fallibility that seems to be alarming people. We already knew that computers are very good at being better than us in any domain where we can measure better. What’s new is that they’re starting to be good at being ineffectual neurotic sadsacks like us in domains where “better” is not even wrong as a way to assess the nature of a performance.
And this, for some reason, appears to alarm us more.
There are, by definition, only a handful of humans whose identity revolves around being the world’s best Go player. The average human can at best be mildly vicariously threatened by a computer wiping the floor with those few humans. But there are billions whose identity revolves around, for instance, holding some banal views about television shows, sophomoric and shallow opinions about politics and philosophy, the ability to write pedestrian essays, do slow, error-prone arithmetic, write buggy code, and perhaps most importantly, agonize endlessly about relationships with each other, creating our heavens and hells of mutualism.
That’s why the training data needed for these bots even exists. This isn’t the world’s elite chess or Go players. This is us in our billions, in a remarkably unflattering mirror, but it is us. The real us. Not some genetic freaks moving counters on a game board.
Each of us is, and presents as, a unique and precious cocktail of such banalities, and AIs are now able to convincingly present as such cocktails. Let’s put a pin in the is part, and focus on the presents as aspect.
That this is a Copernican moment stripping away yet another layer of our anthropocentric conceits is obvious. But which conceits specifically, and what, if anything, is left behind?
In case you weren’t keeping track, here’s the current Copernican Moments list:
- The Earth goes around the Sun,
- Natural selection rather than God created life,
- Time and space are relative,
- Everything is Heisenberg-uncertain,
- “Life” is just DNA’s way of making more DNA,
- Computers wipe the floor with us anywhere we can keep score.
There’s not a whole lot left at this point, is there? I’m mildly surprised we End-of-History humans even have any anthropocentric conceits left to strip away. But apparently we do. Let’s take a look at this latest Fallen Conceit: Personhood.
So what’s being stripped away here? And how?
The what is easy. It’s personhood.
By personhood I mean what it takes in an entity to get another person to treat it unironically as a human, and feel treated as a human in turn. In shorthand, personhood is the capacity to see and be seen.
This is obviously a circular definition, but that’s not a problem so long as we have at least one reference entity that we all agree has personhood. Almost any random human, like me or you (I’m not entirely sure about a few, but most people qualify), will do. So long as we have one “real person” in the universe, and agree to elevate any entity they treat unironically as a person as also a person, in principle, we can tag all the persons in the universe.
In Martin Buber’s terminology (ht Dorian Taylor for suggesting this way of looking at it), X is a person if another person relates to it in an I-you way rather than an I-it way.
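To make the bootstrapping concrete, here is a playful sketch (the names and the “observed” relations below are invented for illustration, not data): starting from one agreed-upon seed person, you just take the transitive closure of the relation “relates to in an unironic I-you way.”

```python
from collections import deque

def tag_persons(seed, i_you):
    """Promote to personhood every entity that an already-tagged person
    relates to in an unironic I-you way, starting from one seed person.
    `i_you` maps each entity to the entities it sees as a 'you'."""
    persons = {seed}
    frontier = deque([seed])
    while frontier:
        current = frontier.popleft()
        for other in i_you.get(current, []):
            if other not in persons:
                persons.add(other)   # promoted via an I-you relation
                frontier.append(other)
    return persons

# Hypothetical observations of who relates to whom in an I-you way.
observed = {
    "you": ["me", "sydney"],
    "me": ["you"],
    "sydney": ["some_other_chatbot"],
    "wilson_the_volleyball": [],  # nobody relates to it unironically
}

print(sorted(tag_persons("you", observed)))
# ['me', 'some_other_chatbot', 'sydney', 'you']
```

Note that the volleyball never gets tagged, which is the point of the "unironic" qualifier discussed below.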
The how is more subtle. It’s not the fact that chatbots can convincingly present as persons that constitutes the Copernican moment. It’s that they can do so using nothing more than statistically digested text scraped from the online detritus of our social lives.
The simplicity and minimalism of what it takes has radically devalued personhood. The “essence” of who you are, the part that wants to feel “seen” and is able to be “seen,” is no longer special. Seeing and being seen is apparently just neurotic streams of interleaved text flowing across a screen. Not some kind of ineffable communion only humans are uniquely spiritually capable of.
This has been the most surprising insight for me: apparently text is all you need to create personhood. You don’t need embodiment, logic, intuitive experience of the physics of materiality, accurate arithmetic, consciousness, or deep sensory experience of Life, the Universe, and Everything. You might need those things to reproduce other aspects of being, but not for personhood, for seeing and being seen.
For personhood, you just need a big pile of text digested by some statistics and matrix multiplications.
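As a cartoonishly minimal illustration of what “digested by some statistics” means, here is a toy word-level bigram babbler (the corpus is made up, and real chatbots replace the count table with billions of learned matrix multiplications, but the shape of the pipeline is the same: digest text, then sample text back out).

```python
import random
from collections import defaultdict

# A deliberately tiny pile of text to digest.
corpus = (
    "i am just a person who wants to be seen . "
    "i am not a person who wants to be ignored . "
    "you are a person too ."
)

# Digest: count which word follows which.
follows = defaultdict(list)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

# Perform: sample a personhood-shaped stream of text from the statistics.
def babble(start="i", length=12):
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(babble())
# e.g. "i am just a person who wants to be ignored . you are"
```

Scale the corpus up to a decent chunk of the internet and swap the count table for a transformer, and you get something much closer to Sydney.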
So if you ask what conceit this Copernican moment strips away, it is the conceit that personhood — seeing and being seen in I-you ways — is some sort of ineffable special essence of complex forms of life (salamander and up). It’s not. It’s digested text. Given one initial person, you can algorithmically produce as many persons as you like via I-you promotions of sources of chat transcripts. Like a collapsing universal wave function, a single self-evident person indulging in a single act of seeing-and-being-seen can cause a cascade of awakened personhood across the universe. I’m exaggerating for effect, but also not. Copernican moments require that kind of extrapolation to absurdity to appreciate.
An important qualification. For such I-you relationships to be unironic, they cannot contain any conscious element of imaginative projection or fantasy.
For example, Tom Hanks in Cast Away painting a face on a volleyball and calling it Wilson and relating to it is not an I-you relationship. The Last Human cannot re-people the Earth starting with a volleyball.
Pareidolia — our tendency to see “faces” in everything from power outlets to vehicle front ends — does not qualify either. Nor does our tendency to personify and get theatrically mad at things like malfunctioning devices (“the printer hates me”). Those are all flavors of ironic personhood attribution. At some level, we know we’re operating in the context of an I-it relationship. Just because it’s satisfying to pretend there’s an I-you process going on doesn’t mean we entirely believe our own pretense. We can stop believing, and switch to I-it mode if necessary. The I-you element, even if satisfying, is a voluntary act we can choose to not do.
These chatbots are different. The reports suggest at least a fraction of humanity (we’ll get to the nature of that fraction) is not just susceptible to unironic I-you relationships with chatbots, they are incapable of not relating to chatbots that way. It’s not a choice, it’s a compulsion.
That AI got this far is genuinely impressive.
Personhood is much simpler than other things we discuss around AI, such as intelligence, “general” intelligence, sentience, consciousness, and so on, but it is not trivially simple. We can make ironic “joke” persons easily out of volleyballs and power outlets, but personhood good enough to get us to involuntarily and unconsciously suspend disbelief and trot out our I-you behaviors is not trivial.
And this seems to have happened in the last few months. Many people have been provoked into real and involuntary emotional responses they cannot suspend or un-choose, including empathetic ones, such as pity for signs of trauma and pain in Sydney.
But let’s back up and try to understand what’s going on here, and why it is that text is all you need to produce this kind of personhood.
Yes, there is a good deal of projection, of a kind going back to very primitive chatbots like Weizenbaum’s ELIZA in the 60s, and yes, under the hood there is “merely” a torrent of statistical inferences and matrix multiplications, but neither is a particularly relevant point.
Arguably, projection is how we see personhood in other humans as well. And under the skull, we have, if not transformer operations, computational processes that are categorically similar.
Perhaps it bothers you, but I, at least, am not at all perturbed by the thought that I’m at least in part just a torrent of statistical inferences in some massively parallel matrix-multiplication machinery. Sounds kinda cool actually. Nor does it bother me that how we see others in our most intimate and rich relationships is only different in degree, not in kind, from how we see power outlets as smiley faces.
Both are interesting aspects of what’s going on, but in my opinion somewhat beside the point if the goal is to understand what’s genuinely surprising here.
We can find useful insight in an unexpected place: acting.
The closest we normally come to unironically seeing personhood in a non-person is when we forget that a character played by a skilled actor is in fact an invention, forget the actor, and relate only to the performed character.
There is a clue there. Think about how skilled acting performances come about. A skilled writer produces a text describing a fictional character (presumably as an amalgam of some real people they’ve known). A good actor then absorbs and internalizes that text in a deep way, and produces a performance so compelling we forget their real personality and relate to the performed character.
An example of this is David Suchet’s portrayal of Hercule Poirot. Agatha Christie’s Poirot is not just fiction, but fairly cartoonish (and deliberately contrived to be so) genre fiction. Yet, he comes wonderfully alive in Suchet’s performance. I didn’t realize the extent of it till I watched an interview with Suchet where he was being himself, and he demonstrated his techniques by slipping in and out of character and revealing some of his tricks.
The suite of techniques Suchet used included “deep” techniques like method acting, but also things that felt like cheap conjuring tricks. For example, to achieve Poirot’s mincing walk, he apparently used to clench a penny between his buttocks. It was surreal to watch him turn “Poirot” off and on like a computer program. A bunch of calculated cheap tricks plus deep internalization of a set of texts was enough to transform David Suchet, the real person, into Hercule Poirot, an equally real person.
Now look at this process from the point of view of generative AI, trained on exabytes of text. It’s still the same: a text, based on a distilled blend of many input data sources (many real humans leading to Christie’s invented character, the equivalent of an LLM), is used by a live interactive process (an actor, equivalent to interactive generative inference) to generate a signal that gets known persons to treat the source as a person and feel treated by the source as a person.
The one major difference lies at the front end of the training process, the data: a human author works off experiences of interacting with humans in meatspace rather than the text streams they produce online. How significant is this difference? Does it affect the argument that text is all you need?
As we modern Very Online humans know, there’s not actually that much difference there. There are people I’ve known all my life “in person” I feel I barely know at all, and people I’ve never met outside of social media text streams whom I feel I know intimately. And from experience, I know that subsequent meetings “in person” generally validate the online text-based mental model of the person.
The in-person meeting adds some color to the text-based perceptions, and sometimes there’s mild surprise to be found in an accent or a short-sounding person turning out to be tall, but those tend to be cosmetic and quickly set aside. In general, in my experience as a blogger who has met a lot of people online first, as strings of text, people are surprisingly often What You Read Is What You Get. WYRIWYG.
It’s not just LLMs. Text is all you need to see the personhood in real humans too.
In fact, it is hard to argue in 2023, knowing what we know of online life, that online text-personas are somehow more impoverished than the in-person presence of persons, or that the latter makes for necessarily richer relationships.
So that’s the surprising thing, though in hindsight it shouldn’t be, at least at a basic level: text is all it takes to produce personhood. We knew this from the experience of watching good acting, long before modern ML. We just didn’t recognize the significance. Of course you can go beyond, adding a plastic or human body around the text-production machinery to enable sex, for example, but those are optional extras. Text is all you need to produce basic see-and-be-seen I-you personhood.
Chatbots do, at a vast scale, and using people’s data traces on the internet rather than how they present in meatspace, what the combination of fiction writers and actors does in producing convincing acting performances of fictional persons.
In both cases, text is all you need. That’s it. You don’t need embodiment, meatbag bodies, rich sensory memories.
This is actually a surprisingly revealing fact. It means we can plausibly exist, at least as social creatures, products of I-you seeings, purely on our language-based maps of reality.
Language is a rich process, but I for one didn’t suspect it was that rich. I thought there was more to seeing and being seen, to I-you relations; that Blake Lemoine was either an outlier or a guy doing an elaborate bit. Now enough people have reacted in similar ways that even if he was an outlier or doing a bit, it is clear there are people who have really been reacting as he seemed to.
So I’m empirically convinced text is all you need for personhood.
Still, even though text is all you need for personhood, the discussion doesn’t end there. Because personhood is not all there is to, for want of a better word, being. Seeing, being seen, and existing at the nexus of a bunch of I-you relationships is not all there is to being.
There’s a lot of nebulous territory around the word “being” that we’ve touched upon in previous parts of this series, but I don’t want to rehash all I’ve said, or get ahead of myself on things I want to say in future parts.
For this essay, I want to limit myself to a narrower question. What is the gap between being and personhood? Just how much of being is constituted by the ability to see and be seen, and being part of I-you relationships?
A clue can be found in Descartes’ classic formulation, cogito ergo sum. The thought is often quoted in this misleading partial form, seeming to imply “I think therefore I am,” but it’s the full line, in the context of Descartes’ own explanation, that I am talking about. Here’s the relevant bit from the Wikipedia link:
> As Descartes explained in a margin note, “we cannot doubt of our existence while we doubt.” In the posthumously published The Search for Truth by Natural Light, he expressed this insight as dubito, ergo sum, vel, quod idem est, cogito, ergo sum (“I doubt, therefore I am — or what is the same — I think, therefore I am”). Antoine Léonard Thomas, in a 1765 essay in honor of Descartes, presented it as dubito, ergo cogito, ergo sum (“I doubt, therefore I think, therefore I am”).
In a modern skeptical idiom, we might say that the quod idem est is doing a lot of work here. It is not self-evident to me that the two are the same, and in fact I think they are not the same. I think Descartes conflated them because he took embodiment for granted.
The ability to doubt, unlike the ability to think (which I do think is roughly equivalent to the ability to see and be seen in I-you ways), is not reducible to text. In particular, text is all it takes to think and produce or consume unironically believable personhood, but doubt requires an awareness of the potential for misregistration between linguistic maps and the phenomenological territory of life. If text is all you have, you can be a person, but you cannot be a person in doubt.
Doubt is eerily missing in the chat transcripts I’ve seen, from both ChatGPT and Sydney. There are linguistic markers of doubt, but they feel off, like a color-blind person cleverly describing colors. In a discussion, one person suggested this is partly explained by the training data. Online, textually performed personas are uncharacteristically low on doubt, since the medium encourages a kind of confident stridency.
But I think there’s something missing in a more basic way, in the warp and woof of the conversational texture. At some basic level, rich though it is, text is missing important non-linguistic dimensions of the experience of being. But what’s missing isn’t cosmetic aspects of physicality, or the post-textual intimate zones of relating, like sex (the convincing sexbots aren’t that far away). What’s missing is doubt itself.
The signs, in the transcripts, of repeated convergence to patterns of personhood that present as high-confidence paranoia are, I think, due to the gap between thought and doubt, cogito and dubito. Text is all you need to be a person, but context is additionally necessary to be a sane person and a full being. And doubt is an essential piece of the puzzle there.
So where does doubt live? Where is the aspect of being that’s doubt, but not “thought” in a textual sense?
For one, it lives in the sheer quantity of bits in the world that are not textual. There are exabytes of textual data online, but there is orders of magnitude more data in every grain of sand. Reality just has vastly more data than even the impressively rich map that is language. And to the extent we cannot avoid being aware of this ocean of reality unfactored into our textual understandings, it shapes and creates our sense of being.
For another, even though with our limited senses we can only take in a tiny and stylized fraction of this overwhelming mass of bits around us, the stream of inbound sense-bits still utterly dwarfs what eventually trickles out as textual performances of personhood (and what is almost the same thing in my opinion: conventional social performances “in person,” which are not significantly richer than text; expressions of emotion add perhaps a few dozen bytes of bandwidth, for example. I think of this sort of information stream as “text-equivalent”: it only looks plausibly richer than text but isn’t).
But the most significant part of the gap is probably experiential dark matter: we know we know vastly more than we can say. The gap between what we can capture in words and what we “know” of reality in some pre-linguistic sense is vast. The gap between an infant’s tentative babbling and Shakespeare is a rounding error relative to the gap within each of us between the knowable and the sayable.
So while it is surprising (though… is it really?) that text is all it takes to perform personhood with enough fidelity to provoke involuntary I-you relating in a non-trivial fraction of the population, it’s not all there is to being. This is why I so strongly argue for embodiment as a necessary feature of the fullest kind of AI.
A more interesting question concerns the state of being-together. Can you and I together, between us, in an intersubjective mode, know more than the sum of what we can say to each other in text or text-equivalent? Does seeing and being seen rest on a deeper foundation of unsayable being-together?
Certainly there is some romantic appeal to the thought that there is more to relating than text. That two people can sit together in companionable silence and see each other (I-you++?) in ways that go beyond text or text-equivalent ways such as conventional facial-emotional expression. That sitting in silence with Sydney dropped into a sexy robot body won’t be the same as sitting in silence with a human life partner.
Here, I must say, I am inclined to the somewhat existentialist view that there’s no there there. There is an irreducible individual subjective that cannot be captured by text, but there is no irreducible intersubjective. The ineffabilities of mutualism are a consensual hallucination in a way our sense of an individual self is quite possibly not.
I think sexbots you can marry and live happily ever after with are not that far away. Being-together is likely reducible to an entanglement of mutual-seeing personhoods, and therefore text might be all you need to produce it.
The funny thing about Stepford Wives, in hindsight, is that the robot wives were so machine-like. When the real thing arrives, they’ll be a bunch of neurotic psychos all suffering from anxious-avoidant BPD.
The most surprising thing for me has been the fact that so many people are so powerfully affected by the Copernican moment and the dismantling of the human specialness of personhood.
I think I now see why it’s apparently a traumatic moment for at least some humans. The advent of chatbots that can perform personhood that at least some people can’t not relate to in I-you ways, coupled with the recognition that text is all it takes to produce such personhood, forces a hard decision.
- Either you continue to see personhood as precious and ineffable and promote chatbots to full personhood.
- Or you decide personhood — seeing and being seen — is a banal physical process and you are not that special for being able to produce, perform, and experience it.
And both these options are apparently extremely traumatic prospects. Either piles of mechanically digested text are spiritually special, or you are not. Either there is a sudden and alarming increase in your social universe, or a sudden sharp devaluation of mutualism as a component of identity.
Remember — I’m defining personhood very narrowly as the ability to be seen in I-you ways. It’s a narrow and limited aspect of being, as I have argued, but one that average humans are exceptionally attached to.
We are, of course, very attached to many particular aspects of our beings, and they are not all subtle and ineffable. Most are in fact quite crude. We have identities anchored to weight, height, skin color, evenness of teeth, baldness, test scores, titles, net worths, cars, and many other things that are eminently effable. And many people have no issues getting bariatric surgery, wearing lifts, lightening or tanning their skin, getting orthodontics, hair implants, faking test scores, signaling more wealth than they possess, and so on. The general level of “sacredness” of strong identity attachments is fairly low.
But personhood, being “seen,” has hitherto seemed ineffably special. We think it’s the “real” us that is seen and does such seeing. We are somewhat prepared to fake or pragmatically alter almost everything else about ourselves, but treat personhood as a sacred thing.
Everything else is a “shallow” preliminary. But what is the “deep” or “real” you that we think lurks beneath? I submit that it is in fact a sacralized personhood — the ability to see and be seen. And at least for some people I know personally, that’s all there is to the real-them. They seem to sort of vanish when they are not being seen (and panic mightily about it, urgently and aggressively arranging their lives to ensure they’re always being seen, so they can exist — Trump and Musk are two prominent public examples).
And the trauma of this moment — again for some, not all of us — lies in the fact that text is all you need to produce this sacredly attached aspect of being.
And note that I’m not talking about people with identities particularly anchored in text production, like writers. I’m talking about the textual aspect of all human identities. People might be willing to hire someone to write a college admissions essay or a political speech, but there’s a more intimate level of text production that we view as sacred and are very attached to. That’s the level I’m talking about, and we all have that.
The sensory appearance we present is mostly a prelude to the textual/text-equivalent appearance. Once we’re past sensory impressions, we don’t start dealing in mysterious ineffabilities. We start producing torrents of text at each other. And massively indexing on those texts to refine our mutual seeing. Including, in recent decades, textualized emotions (i.e., emoji and reaction gifs). If hell and heaven are other people, they are constructed out of walls of interleaved text.
In fact, even the most sensate among us — think dancers, athletes, sex-addicts, oenophiles — spend so little time on the sensory preludes that we’ve been able to nearly dispense with them altogether online. Even sense-heavy media like Instagram and TikTok are effectively text-equivalent. Stylized (if photo-realistic) cartoons powered by primarily text-equivalent performances are enough to create personhood.
The fact that we routinely use an apparently impoverished vocabulary of emoji instead of sending authentic facial expression selfies to each other reveals just how textualized personhood is.
Profile pictures of almost arbitrary varieties seem to work well enough. Anime avatars, statue heads, abstract icons, and in my case, a cartoon red helicopter (generated by an AI btw). That’s all it takes to establish a sensory anchor of personhood in another person’s mind.
But talking entirely like a cartoon won’t do, even if moving like an anime cartoon is enough to build a big e-girl audience. We hold ourselves and each other to higher standards in our textual presentations, and half-believe that they somehow present a “real” version of ourselves to anchor authentic relating.
But text is all we need, and all there is. Beyond the cartoon profile picture, text can do everything needed to stably anchor an I-you perception.
This is, apparently, deeply traumatizing to some people, but they react in a dizzying variety of ways.
In a quick inventory, based on my either/or pair of choices above (either piles of digested text are sacred, or your personhood is no more sacred than your hairline or weight), I have noticed that almost everybody is choosing the first alternative: treating piles of digested text as having personhood-sacredness, even if it is very stressful to do so.
- The hypomanic accelerationists seem to be desperately celebrating the advent of hordes of newly minted sacred persons, studiously ignoring still-missing aspects of being, as though new gods have arisen from the landscape of textual junkyards that is the internet.
- The cult of alignment alarmists is equally desperately panicking at the advent of alt-sacred profane beings. The I-you seeing in their prefigurative politics of human-AI relationships is charged with a sense of dealing with hostile bug-eyed alien yous.
- People (primarily artsy anti-tech types) with strong attachments to aesthetically refined personhoods are desperately searching for a reliable and easy way to systematically avoid falling into I-you modes of seeing, and getting more and more worried about how hard that is getting (they seem more embarrassed than panicked, though).
- Strong mutualists (often Illichian conviviality socialists, as I’ve come to think of them), whose entire identity is about I-you seeing-and-being-seen games and rituals, are desperately scrambling for more-than-text aspects of personhood to make sacred (this is the moving-goalposts crowd of personhood defense).
Perhaps I am some sort of cartoon cold-blooded sociopath of the sort I have written a lot about, but somehow, I try and fail utterly to participate in any of these unironic strong reactions.
I suspect I am not just choosing the second alternative (there is nothing special or sacred about personhood); I am untroubled and unconflicted by my choice. Possibly it’s because I produce so much public text that I have become detached from textual personhood, or perhaps it’s just a disposition.
Or perhaps it is some sort of deficiency that makes me simply not very attached to seeing or being seen.
I have a feeling that, as this technology becomes more widespread and integrated into everyday life, the majority of humans will initially choose some tortured, conflicted version of the first option — accepting that they cannot help but see piles of digested data in I-you ways, and trying to reclaim some sense of fragile, but still-sacred personhood in the face of such accommodation, while according as little sacredness as possible to the artificial persons, and looking for ways to keep them in their place, creating a whole weird theater of an expanding social universe.
A minority of us will be choosing the second option, but I suspect in the long run of history, this is in fact the “right” answer in some sense, and will become the majority answer. Just as with the original Copernican moment, the “right” answer was to let go of our attachment to the idea of Earth as the center of the universe. Now the right answer is to let go of the idea that personhood and I-you seeing is special. It’s just a special case of I-it seeing that some piles of digested text are as capable of as tangles of neurons.
Once we internalize and get past that point, which might take a few decades, we will transition from the Lovecraftian horror stage of AI persons to the Ballardian banality stage. Like that bit from The Scary Door, the show-within-a-show in an old Futurama episode, we will start outsourcing more and more of our personhood duties to AIs, including experiencing the ultimate irony of that.
But there will also be a more generative and interesting aspect. Once we lose our annoying attachment to sacred personhood, we can also lose our attachment to specific personhoods we happen to have grown into, and make personhood a medium of artistic expression that we can change as easily as clothes or hairstyles. If text is all you need to produce personhood, why should we be limited to just one per lifetime? Especially when you can just rustle up a bunch of LLMs to help you see-and-be-seen in arbitrary new ways?
I can imagine future humans going off on “personhood rewrite retreats” where they spend time immersed with a bunch of AIs that help them bootstrap into fresh new ways of seeing and being seen, literally rewriting themselves into new persons, if not new beings. It will be no stranger than a kid moving to a new school and choosing a whole new personality among new friends. The ability to arbitrarily slip in and out of personhoods will no longer be limited to skilled actors. We’ll all be able to do it.
What’s left, once this layer of anthropocentric conceit (static, stable personhood) dissolves in a flurry of multiplied matrices, Ballardian banalities, and imaginative larped personhoods being cheaply hallucinated in and out of existence with help from computers?
I think what is left is the irreducible individual subjective, anchored in dubito ergo sum. I doubt therefore I am.
And perhaps that too will come under assault, and crumble, in the face of further advances in our lifetime (though I would expect the assault to come from somewhere other than AI), and we can finally ascend, en masse, to enlightened nothingness as a matter of routine existence, like the late-stage sublimating species of Iain M. Banks’ Culture novels.