Thoughts on using AI to generate prose.
Lately, I’ve used AI to help me write a couple of README documents for my software projects. I haven’t been lazy about it. I’ll start out writing a bit, then prompt it to flesh things out, check my wording, and go look at the code to see if I missed anything important. I’ll read it, edit it, and prompt again. It’s a tight loop, and after a few iterations and a few days stepping away from it, it becomes hard for me to remember what content was from me and what was from the AI. This feels productive, though it’s hard to say for sure. Still, AI is novel enough for me that I get some joy just from using this new tool.
Something about it feels dirty though, like I’m cheating or plagiarizing, and I start to think I’d better put some kind of statement on there saying that AI was involved. But saying it was produced with “AI assistance” just seems too vague and meaningless. That could mean anything from “I used AI to do a quick spell and grammar check” to “90% of the content was written by AI after a couple of prompts and I did some minor touch-ups”. I could go into more detail about how AI was used, but honestly I’ve since forgotten, and it was such a tangled mess of back-and-forth iterations that trying to explain it just wouldn’t be practical.
READMEs are one thing though, and they serve a fairly utilitarian purpose. So long as they’re readable, document a piece of software well, and were vetted by someone in the know, I wouldn’t be overly bothered reading one written by AI. Other types of prose seem more troubling though. A student using a single prompt to write their English composition essay seems like an obvious, at least to me, unethical use of AI. Using it to churn out click-baity marketing slop seems scummy, but a little more gray. But what about using it to assist with making a blog post like this one?
If I read a blog and the majority of it was written using AI, I would feel cheated. But where’s the line? Using it to fix grammar or help a bit with the wording seems fine to me. Using it to conduct research or brainstorm content to add sounds OK, but if the end result is content that’s just a rehashing of the results of a prompt or two, something about that still feels low-effort and borderline unethical.
And I’m sure, by now, we’ve all seen content online that left us with that unsettling feeling. That feeling that what we’re reading was AI produced. Sometimes it’s obvious, sometimes less so. AI has its “tells”: the structured listicle styles, the abundance of emojis, the overuse of em-dashes, the use of “it’s not just X but Y” constructs. But then, none of these are necessarily proof of AI; they come about from being trained on human-written content, and I’ve been abusing em-dashes ever since I read “Eats, Shoots & Leaves” and picked up a bad habit of overanalyzing punctuation. What we read influences us, and I recently found myself using a “not just X but Y” construct in a Slack message I sent to a colleague. Looking back, this use seems somewhat shameful, like my writing has been tainted with a “those who look into the abyss” style corruption. But why should it? It’s a useful turn of phrase, and the sentiment of the message was from my own original thoughts.
So, coming back to the blog post use case. Should I use AI? On the one hand, it seems like a self-defeating move. I’ve always found writing a useful exercise to help me organize my thoughts and reflect, and I’ve always aspired to write more as a way of becoming better at it. If those are my goals, then would using AI distract from that? If something about the struggle is the point, then wouldn’t offloading my “thinking” hinder me? But then again, learning and self-improvement don’t always come from personal, isolated struggle; getting feedback and assistance from others is part of the process. I wouldn’t say it’s wrong to seek assistance from teachers, editors, and the critique of others, so why does it feel different with AI?
Well, for one, I suppose the AI almost by definition lacks humanity. The teacher, editor, or helpful critic usually has some benevolent intent and is trying to help me improve, not to do my work or my thinking for me. I suppose one could attribute that to the AI not having the right context. Maybe I could prefix any requests with something like “AI agent, I want you to act as a helpful teacher; please critique my work in a way that helps me improve my writing without rewriting it or providing enough detail that I’d be tempted to plagiarize”, but I’m not sure if that would really work.
And then there’s the fear. Others are using these tools, and Pandora’s box has been opened. If I don’t master the new technology, I’ll be left behind, and refusing to use LLMs may soon seem as silly as refusing to use spell check or calculators.
So, for this post I’ll do it the old-fashioned way: I’ll write it by hand. Well, OK, not by hand; I’ll use a text editor, Vim, and I’ll let myself use spell check, but that’s it, no more. Maybe I’ll ask AI to critique it after I’m done. I’d love to hear more about what others think on this topic; maybe I’ll go ask AI. And if I really want to torture myself, I could ask it to point out my grammar mistakes, but I won’t fix them, I’ll just let them sit. There’s plenty more for me to write, and I can leave this post here with all its meandering, stream-of-consciousness style as some kind of evidence of my humanity.
So here we are: LLMs are an exciting new tool, and one I want to play with and explore more. I think with experience I’ll get a sense of how to best use them, but for now I’ll just accept that disquieted feeling. Over time I’m sure I’ll develop guidelines, consciously or not, about when and how to use them, and when and how to attribute them. This is something we’re all figuring out, and I’d be skeptical of anyone who claims they have the answers. Maybe, with more experience, I’ll find some, and then I’ll have one or two of those elusive, valuable, original thoughts to post on.