
There are a few ways to calculate how likely your job is to be done by a robot, but the most intriguing one is probably the most important:
Your job is less likely to be stolen by a robot if humans don’t think a robot would be competent at it.
So we might trust robots to pick up our parcels and move them across a warehouse, or to administer an experiment - we trust them less as therapists, nurses and childminders (at least, we do at the moment - the original chatbot, ELIZA, was so useful as a confidante that its creator’s secretary was convinced it was capable of human understanding). But, in theory at least, the fact that humans think machines aren’t good at caring is what makes caring jobs less susceptible to being handed over in the first place.
This is why, of all the kinds of journalism artificial intelligence might steal away from us underemployed journalists, it’s not likely to be opinion pieces or art journalism. Watching a machine try to report on an embodied art experience, or to hold a strongly felt opinion, is ludicrous, a straw man, like a dog smiling goofily in a hat and sunglasses. It’s incongruous, and so we marvel at how uncannily it tries to perform the role - those sneaking similarities and differences.
A few days ago, the Guardian published an op-ed written by the much-hyped language model GPT-3, in which the machine argued that language automation, i.e. itself, should not be feared. It’s a boring piece, frankly. Op-eds in the Guardian and elsewhere, of course, have a style, which the model was trained to replicate. But the piece is repetitive, unconvincing and, most damningly I think, it doesn’t really seem to want to rile the reader up, which is the real purpose of many op-eds. Formal considerations aside, though, the reason no one would care about the content of the argument is also the only reason anyone, myself included, is talking about it: it’s written by a robot, and robot opinions are not important.
A few weeks ago, I was playing around with a tool that uses GPT-3’s predecessor (named, in the robotic convention, GPT-2) to generate reviews in the style of Artforum. Interestingly, Artforum is a magazine whose texts are often disdained as formulaic by art insiders, so there is a kind of poetry to treating it as an algorithmic corpus. Here’s a typical opening that I got while playing around this morning:
The title of Mary Heilmann’s exhibition of new works at the gallery’s new, separate space—a former industrial space on Cal Street, with two entrances—is a cryptic reference to the fact that the space is the former home and studio of a close friend of mine.
An exhibition title simply can’t be a cryptic reference to the writer’s friendship. But like the op-ed, it’s funny because it’s true: Artforum writers would totally refer to their close personal friends in exactly this hand-wavy, social-climbing way. It means something. And yet, like all these AI-generated texts, it gives the impression of the writing “doing its job”, or “playing its role”, as the roundtable discussion on the same website notes. The text fills in the space on the page effectively, something that seems simple, but isn’t. It hits all the uncanny notes that humanoid robots often produce, making us question why this machine is engaging in such a specifically human task.
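If you’re curious about the mechanics, the tool I used is a third-party project, presumably a GPT-2 model fine-tuned on Artforum’s archive. A minimal sketch of the same kind of generation, using the stock GPT-2 weights via Hugging Face’s transformers library (the prompt and sampling settings here are my own illustrative choices, not the tool’s), looks something like this:

```python
# A minimal sketch: sampling a fake review opening from stock GPT-2.
# The Artforum tool presumably fine-tunes weights like these on the
# magazine's archive; this uses the generic model purely for illustration.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output reproducible

generator = pipeline("text-generation", model="gpt2")

prompt = "The title of Mary Heilmann's exhibition of new works"
outputs = generator(
    prompt,
    max_length=80,           # cap the total length, prompt included
    do_sample=True,          # sample rather than take the likeliest token
    temperature=0.9,         # higher values make the prose stranger
    num_return_sequences=1,  # one completion is enough for a demo
)
print(outputs[0]["generated_text"])
```

The model simply continues the prompt token by token, sampling from its predicted distribution - which is exactly why it can conjure a confident, plausible-sounding detail like a friend’s former studio out of nothing.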
It’s fun to laugh at the quirks of robots attempting humanity because we love to assess how they fail, and to draw conclusions from the results. Why are the machines so bad at getting the facts right, for example? Aren’t they supposed to know everything? The Guardian op-ed, as I noted on Twitter, had a really glaring error in it (since emended with a rather under-informative “[sic]”). The machines have been trained to write, but not to verify what they’ve laid out as fact. “They’re really good at generating stories about unicorns,” Mike Tung, CEO of Stanford startup Diffbot, which scrapes the entire internet for knowledge to make these texts more useful, told the MIT Tech Review. “But they’re not trained to be factual.”
Of course, the Guardian could have chosen not to tell us that the op-ed was by a robot. That would be scary, and might portend something else - a newspaper manufacturing persuasion on topics the writer couldn’t possibly agree with in order to sell advertising, for example. Which is…exactly what newspapers already do. Inflammatory opinions sell newspapers, which is to say, clicks. The whole reason the Guardian ran a piece written by an AI is that it’s inflammatory to run something written by an AI. With anaemic budgets for editors and no fact checkers, it didn’t matter if it contained falsehoods. Media companies have a vested interest in profiting from repetitive chum masquerading as argument. The future doesn’t look like an army of robot op-ed writers telling us what to think; the future looks, as always, a lot like the past: people with money, using it to entrench a status quo that benefits them. Now that scares me.
I’m into
This video of Jane Fonda talking about progressive coalitions, in the seventies, in a sparkly gold dress. Lobster tail pastries. My friend’s 10-week-old corgi puppy. This great piece about fact checking in non-fiction.
I’ve written
For Apollo, I reviewed the memoir of John Giorno, poet, sound artist and lover to all the famous ’60s Pop artists, who’s got all the juicy details you never knew you needed about Andy Warhol’s and Robert Rauschenberg’s sex lives.
If you enjoy this newsletter, please press the heart button below - it helps more people find my work. Subscribe here if you haven’t already and may I gently encourage you to forward this email, share it, tweet it or repost it? It will make me smile, I guarantee it.
Follow me on Twitter: @josiet_j
Read more of my work: www.josietj.com