Why I cannot accept the "AI is bad, end of" argument
Nov. 4th, 2025 12:45 pm

You may wonder why I give the subject line I do for this post, so I will tell you. If you have been following my posts about the abuse of actress Sandra Peabody while making The Last House on the Left over fifty years ago, you will know that the evidence is scattered. There are a lot of rumours out there online, as well as warped versions of the truth. Many people seem to find summarised versions of one or two of the best known stories, publish those, and leave it at that. I wanted to dig deeper, and I eventually found a true smoking gun in David Hess's horrific quote from 2008.
The blunt fact is that, barring extraordinary good fortune, I could not have found the crucial Vanity Fair article without AI help. I didn't really know what I was looking for until I saw it, so a traditional Google search was not enough. The article seems to be almost completely unknown even by people who write about the movie, so it's not referenced elsewhere. And Hess's appalling words are buried without surrounding context 2,000 words into a 5,000-word paywalled article. AI found this for me when nobody else had, and when for a lone amateur like me perhaps nobody else could. This matters.
I have used several AI assistants (all on their free tiers) during my research, which has been more useful than relying on one alone. Claude, which I have used most, is the most perceptive and can read scanned pages and interpret images, but has a very limited free tier. ChatGPT has been very variable: sometimes superb at digging out evidence, sometimes useless at more straightforward tasks. Copilot lacks the vital "edit prompt" option for repetitive tasks but is good at giving (simulated, of course) ethical judgements. And Gemini, while clearly less advanced, is useful when working within the Google ecosystem, e.g. searching in Google Books.
Of course using AI output uncritically is a disastrous approach, and that is often what critics have in mind when they call AI worthless. What I did instead was to use the information AI gave me as a starting point and manually check its sources. I could also spend extended periods asking different AIs slightly different versions of the same question, something you simply couldn't realistically do with a human collaborator. AIs can look dispassionately at evidence without letting emotion blind them, and a human can make moral judgements that an AI can't. I needed both acting in concert.
There are indeed serious, legitimate questions to be asked about AI, ranging from its effect on water resources to the impact on the justice system of near-perfect deepfakes. But my own experience has convinced me that a completely one-sided "AI is bad" approach is, at least for the sort of work I have been doing, simply wrong. If you want someone who hates AI in all its forms, then you are not going to find that here. In truth AI is more than large language models, as the image analysis I've used shows. But LLMs themselves have been a crucial part of my uncovering this evidence.
I have more work to do on this. It will definitely include using AI as a part, though never a whole, of my toolkit.