This post is now part of a grand conversation in the SFWA about machine learning, AI, and its impact on fiction. For more points of view, click here.
First, it’s not AI. It’s machine learning, aided and abetted by human input from stem to stern. It’s essentially your phone’s text prediction, but with more sweat and blood in it. Which is an accomplishment, but it’s not Mr. Data.
Second, read this article from A Collection of Unmitigated Pedantry, posted last Friday. Bret Devereaux articulated a lot of the half-formed ideas I’ve had about what we’ll call AI for argument’s sake.
Go ahead, I’ll wait.
That was where I stood two days ago.
Yesterday, Clarkesworld closed for submissions.
Neil Clarke is about the nicest man in science fiction. He’s also dedicated. He didn’t close for submissions during his heart attack. He’s made some dread pact with a dark power to always get his responses out within three days. He’s the best-paying regular market for short fiction, and everyone’s first port of call.
Being the first port of call, he got maybe 50 submissions a month. But now…
That staggering difference is AI-written slush, clogging up the works. Neil is one man. He can’t read all that in a month, much less reply in three days.
Taylor Swift had a song about this.
And it’s just the tip of the iceberg. Sheila Williams at Asimov’s may have a team, but how overwhelmed are they going to be this year compared to last? And AI-detection software is still crude; anyway, it just starts another arms race, each side trying to outwit the other. You’ll never know whether your AI detector will work today or whether some bright spark in Russia just came up with something that technically passes. Right now, as Dr. Devereaux notes, there are some stereotypical tells in machine-generated writing (fake citations, boring but technically perfect plotting) that we can pick up on, but humans are fallible, too, and those visible signals are going to evolve.
The problem isn’t with the machine-learning ‘AI’ as such.
It has potential for aiding the handicapped (alt-text generators, automatic closed captioning), for assisting writers in the outlining, story-bible-checking, and other “preproduction” phases, and for putting Depositphotos book covers out of business.
The problem is that it’s being implemented by people who, as Kane Lynch pointed out last night over my wife’s roast artichoke and vegan pasta, fundamentally do not understand what art is or what it’s for. A few weeks ago, this tweet made the rounds.
This is the problem. The people who are developing AI and presently leading the narrative on what it is, does, and means do not understand how real human beings work. I rather enjoy porn, and despite what this fellow thinks, I’ve had access to pictures of naked or nearly-naked men, women, and others for the better part of three decades, some of it even computer-generated. It does not replace my wife’s roast artichoke and vegan pasta, our long meandering conversations, the brightness in her eyes when I show her some new science fiction I’ve known for ages, her incisive wit editing my work, her embrace, the sound of her prayers, or her passion and creativity…for art and leftist politics! *koff*
Now, this guy is easy to mock. In fact…
…but the people behind AI “art” and “fiction” just as fundamentally misunderstand how humans work. Art and fiction aren’t just an extruded mass to consume – not even at the bottom barrel-scrapings of porn, romance, and pulp. Even with mediocre (written) porn, you’re reading for the artist’s personality – their verbal tics and turns of phrase and weird little obsessions. The sub-mediocre stuff is full of shortcuts – cut-and-paste, entire stories resold with the names changed – and I have no doubt its producers will turn to this shortcut too. (It’s hard writing a novella a week, and I have immense respect and trepidation for those authors who actually do!) But the moment you say “I like this author” and even subconsciously notice their nom de plume the next time you search, you’re out of the territory that AI can automate.
Because writing and art aren’t about automation. They’re about personality. And personality comes from deutomation.
“What the Hell is deutomation?”
To deutomate something is the opposite of automating it – it renders a process more involved and more conscious. Deutomation makes art (including fiction) better. That’s why we self-edit so many drafts and read and reread our prose until we detest it. Because the time and effort and labor involved makes the writing better. This is not a bug, this is a feature. It grinds our personality, our unconscious obsessions and verbal tics, into the writing, so it bursts off the page.
Automating art gets it fundamentally bass-ackwards. I can see usages of this kind of machine-generated art for sketches, tests, roughs – testing the ideas. But for the actual creation of the work of art you plan to show other people as a finished objet d’art? That’s something that gets better from deutomating it, not automating it.
And yet, people who don’t think they need to pay for writing, or even ask permission, are the people training these “AIs” and proclaiming them THE FUTURE! as loudly as the terrorists in Doña Ana Lucía Serrano…to the Future!. These are people who, as near as I can tell from out here, don’t believe in ethical constraints on their work, nor understand what human beings might want from their work, and when confronted, just verbally bully their interlocutors and crow “well this is the future GET USED TO IT LUDDITE!” These aren’t people I want in charge of my cheese drawer, much less disruptive technology. I have a nice double-crème brie in there, it’d spoil from disruption.
Mathieu’s Law of New Technology – assume bad actors exist, and they will use your technology to harm other people.
I’m not actually afraid of “AI” stealing my job. Like Dr. Devereaux, I’m insulated by the fundamental misunderstanding of what my job is, and my extensive experience reading porn and seeing where the shortcuts stop gives me some practice in predicting where this shortcut will also stop. But I am worried about clog. We’re going to clog up (if the AI boosters are to be believed) legal services, medical services, movie theaters, Google searches, and, not least, editors’ inboxes with substandard machine-extruded “content” that drowns out anything useful, because machine learning can’t at present, and may never, understand its content. If I wanted terrible medical advice, WebMD is already right there, telling me I have uterine cancer. It’s the phone tree for tech support all over again.
And what do we scream at the phone tree? “GET ME A REAL PERSON!”
We’re still gonna want a real person – especially a real artist or writer or musician. But this is the phone tree writ large, at a volume that cripples Neil Clarke the way a heart attack never could. I don’t have any solutions to this – though SFWA is fervently discussing possible stopgaps – but asking the right question is the first and most important step toward any solution.
My apologies: this wasn’t a super-tight argument about The Right Way Forward with AI – although a culture shift in which ethical constraints, like asking permission before training on someone’s blood, sweat, tears, and IP, actually apply to how technology is used would be a good start. This is a series of thoughts from one writer who’s been trying to imagine better futures for two and a half decades.
But, seriously, engineers? Assume bad actors exist. And assume they will use your technology. Please.
AI is the true revenge of the nerds.
I thought the revenge of the nerds was sexual assault by trickery in a climax that aged like fine milk?