Whatever Works

the antidote to AI foolishness is intellectual seriousness

seven principles for creative people (and everyone else). it's not *difficult* but you have to *use your brain and think about it*

Naomi Alderman
Apr 05, 2026
[Image: 'The folding garden'] yes well maybe people have always needed a bit of fun poked at their attitudes to technology; we all need our foolishness punctured. The image is Heath Robinson, obviously. Source: https://www.bbc.co.uk/news/uk-england-london-61522197

I am writing about AI this week because: I do think there’s the potential now for AI use among humanities people to turn into a kind of horrible witch-hunt. A ‘Reds Under The Beds’-type scare where everyone feverishly denies having used it, and you can gain social points by turning on other people and accusing them. Where it becomes increasingly not-OK to say that you are even interested in these new technologies, where that is seen as an admission of guilt. And the only way to avoid an accusation is to be the one who accuses first.

I don’t believe in cancellations and public shaming. I think there are - and must be - ways to talk about what behaviours are right and wrong, helpful and unhelpful, intelligent and foolish, without holding an individual up for ridicule and vitriol. I think we have all learned that from the Twitter Mob Era, and also I literally wrote a book called Don’t Burn Anyone At The Stake Today.

So we’re not doing that. We’re talking about principles we can live and do creative work by.

One of the helpful ways to do this I think is to talk about trends and not individuals. Recently the following very foolish things have all happened:

  • a journalist and novelist was found to have used - and has admitted to using - AI to write a book review for the New York Times. (The paper has said he won’t be writing for them anymore.)

  • a piece by another reviewer at a different paper was so full of errors that it looks likely to have been written by AI.

  • a political candidate is now known as MattGPT because his book is full of ChatGPT links and - it looks like - AI errors.

  • a novel has been withdrawn because large chunks of it were written by AI, which the author has sort-of admitted.

  • I saw the following rather extraordinary statement by a novelist and self-described “Orthodox Christian and reactionary radical”:
    “one thing that is very clear is that AI systems will soon be able to write serious novels, poems, screenplays and indeed essays, and that they may be indistinguishable from work written by humans. Maybe plenty of them will be better.”
    This statement was so startling that before I even get into this I feel I have to say: no, this is not clear. Certainly not ‘very clear’. In fact the opposite is true: AI is getting worse at writing (probably because it is getting better at doing eg serious business analysis, and these kinds of writing pull in opposite directions), as Jasmine Sun points out in this Atlantic article.
    “AI systems will soon be able to write serious novels” is the kind of thing a writer can only believe if they have indeed refused to engage with AI. This is a statement I saw recently, but I have seen this sort of thing from others too.

    It’s bizarre because all writers know that you absolutely cannot and should not be writing books if you’re not reading books. We all understand that. The fact is: this same logic applies to technology. You can’t write videogames if you don’t play videogames. You cannot know what AI is and does if you never use it. You can know you don’t fancy it! That’s fine! But you don’t have expertise in it.

  • I have also encountered more than one person who is so convinced by the stories about how AI is going to take over the world that they are eager to tell me that everyone should be using it for everything. People who go to ChatGPT for life advice on every issue. I wrote this six months ago about using AI for life advice. My basic thought was: LLMs will tell you what you want to hear, so if you’re using them for therapy-adjacent stuff you have to regularly prompt them to give you a steel-manned counterview. You need to know how to use them against their own worst tendencies, and also when to step away. Increasingly, a lot of people don’t seem to.

I think it’s obvious from how easy it was to assemble this collection that we’re talking about a trend rather than a bad person. Or even a foolish person. This is a trend of a lot of different kinds of foolishness about AI.

All of these things speak to a lack of seriousness in engagement with technology. I don’t think this is writer-specific or even creative-industries-specific. I think it’s breaking out all over, and I’m just thinking particularly hard about writers because… well… I’m a writer. Almost everything I say here applies in lots of different areas.

So look, firstly I want to say: it is FINE to decide you don’t want to use any of these technologies. It is totally, totally OK, and if you are an author following your inner promptings toward what your work needs and steering well clear, I fully support that. I support any author in not touching AI with a bargepole. I do not support them in then opining on what it can and cannot do.

What I’m describing as ‘a lack of seriousness’ IS NOT ‘not using the technology at all because you believe it’s bad for your creative practice and also therefore not giving opinions on what it can and cannot do’ - that is a serious thoughtful position.

I loved Joel Snape’s recent post on How To Be A Serious Person and it made me think a lot about what it means to be a serious person intellectually. I think it means not being blown by the wind. It means thinking things through and taking your time to come to your views, not just leaping on an opinion you’ve seen floating past and then claiming it as yours like adolescents do when they’re trying out a posture. It means not claiming expertise where you don’t have any. It means not shying away from claiming expertise where you do have it. I reserve my right to be playful but I don’t think that means I’m unserious intellectually.

A lack of seriousness is:

  • because you do not take technology seriously: giving a forceful opinion on what AI can do without actually having used the tools, claiming to be able to intuit what a technology is going to do in the future without engaging with it or developing expertise

  • because you do not take technology seriously: using an AI tool without ethical thought - without having wondered about what the rights and wrongs are. Or doing something with an AI tool that you feel while you’re doing it is probably not right but talking yourself out of that feeling

  • because you do not take technology seriously: using an AI tool without having understood its affordances - its strengths and weaknesses, the criticisms of its abilities, the concerns about its reliability

  • because you do not take technology seriously: using an AI tool without having wondered what it might do to your creative practice and how to protect against bad results, without having considered what is too much AI now

All of those things are very foolish.

I think staying away from a particular technology certainly can be ‘taking it seriously’. But all those behaviours are united by not taking it seriously.

Well so, without wishing to big myself up too much, I have, like Liam Neeson in that gif, ‘a particular set of skills’. I know about technology and I know about serious literary writing.

I might be… hmmm, what can I say… well, I think I am the only novelist in the world to have won major literary awards who also owns a videogames company? I think that’s right. In any case I think I’m at least one of the most technologically-involved literary novelists in the world. This is the reason I keep getting calls from eg the Today Programme asking me to come on at silly o’clock to explain OpenClaw to the British public.

I think a lot of what’s causing AI foolishness among arts and humanities people is a two-cultures problem. Many technology billionaires are demonstrably unaware that there is eg a lot of prior work in the humanities about ‘storytelling’. Or ‘values’. Or ‘what it means to be a good person’. And people who work in the humanities are often unaware that technology is indeed its own massive discipline with a big body of serious thought which you need to put effort into getting to grips with, if you don’t want to end up being really quite foolish about it.

I wrote a column more than a decade ago (!) about even people working in digital interactive literature not taking videogames seriously, even though that is the obvious medium where digital interactive literature is already happening. I said, tetchily:

Your experimental technological literature is already here; it’s the noise you’re trying to get your children to turn down while you pen your thoughts about the future of location-based storytelling.

So, if the problem as diagnosed is a lack of seriousness among some toward technology, what are the solutions? What does it mean to take AI, this new ‘technology which can write’, seriously? What are ‘serious behaviours’ to take on?

Here are seven principles I work by. They are framed around writing but they’re mostly applicable to anyone using LLMs for anything. We need to try to think about this quite hard.

1. do not panic.

to say this again, it’s very OK not to use AI and not to want to. And if you don’t want to learn about it, then staying away from it completely is the best way to be serious.

Since the news about the NYT reviewer broke, there have been a lot of feverish conversations in my social circles about how this could possibly happen and I have heard and read the sentiment many times: “no writer should ever use AI, no serious writer should touch these tools with a bargepole”. I think that’s wrong, but I understand the feeling. Technology is a field of expertise like any other and if you don’t want to put the time in to get to know it, then staying away completely is a great plan.

I do not think that AI is going to be writing serious novels anytime soon, and importantly, even if it did, the evidence is that a lot of people really don’t like the idea of reading those novels. This is why Hachette has pulled that novel. This poll found that among Americans AI is about as unpopular as… ICE. Most people are not going to be lining up to voluntarily read novels written by a technology this unpopular.

And let’s be real: the amount of money in novel-writing is not interesting to massive technology companies with valuations in the hundreds of billions of dollars. They are interested in what they can make by working in defence, in coding, in engineering, in biomedical sciences, in logistics. Even if it were possible to perfectly tune a model to write a novel, it is really not worth their while to do it, especially given the reputational ick involved.

Don’t believe anyone who tells you that you must get involved. Because then you’ll do it in a panicked way and you’re more likely to end up being foolish.

2. if you do want to use it, first of all get to know the norms in the industry, laid down by industry bodies

I think the Human Authored scheme as described here by the Society of Authors strikes the right balance. It is the same scheme as championed by the Authors Guild in the US.

It says: “The use of GAI for spell check and grammar, researching, brainstorming, outlining, or any purposes other than generating text does not disqualify a work from being Human Authored under the scheme.”

I think that’s right. Using it for spell check, research, brainstorming and outlining never stops it from being your work. Once you are generating text with it and pasting it in, that’s not your work anymore.

Fascinatingly, this is also what Jasmine Sun says, for a reason that does engage seriously with the affordances of the technology:

“IMO the bright-line for AI in writing is copy-pasting text. it sounds arbitrary but I don’t think it is — the arduous process of stringing together a sentence myself is when I’m forced to confront whether the ideas actually make logical sense. LLMs are too good at making every half-baked thought sound equally plausible”

I have heard from some writers that they object to using it for outlining and brainstorming (though most people do not seem to object to using it for research). I would say that for me it feels right because the AI - as far as I’ve seen - cannot really ‘outline’ a book or ‘produce all the ideas needed for a book’. The most it can do is offer nudges and comments: it produces 100 ‘ideas’, of which perhaps five have any spark, and which you, the writer, need to sift through and identify. In this sense it seems to me a lot more like an Oulipian experiment where you find 20 random words from the dictionary and then have to assemble them into a story.

In any case, there is now guidance in my industry. Simply knowing the basic industry norms - reading the information put out by the Society of Authors or the Authors Guild - would have prevented almost all the failures of seriousness in recent cases of AI misuse. When you are copying and pasting text from an LLM into a document and then presenting that document as your own work, you have crossed the line. If all writers using AI had followed those guidelines, they’d be in a much better position.

So those are some things not to do. But then, what should we do to use AI successfully? Are there any ways to do it well? Must we renounce AI and all its works or are there sensible ways to engage with it intelligently?

Below the line, five more principles, including starting to think about AI ethically, how to make sure you understand both the strengths and the weaknesses, some things I would never do with AI in my writing life and some places where it’s useful - and how to learn without succumbing to ‘cognitive offloading’ and destroying your own capacity to do your work.

If you enjoyed this, do give it a like as it helps people find it. And consider becoming a paid subscriber if you can - I love writing these and the paid subscribers are making me go “right yes this is a potentially sustainable part of my writing life”.
