Here’s something new I ran into yesterday.
While preparing a pitch for The Clown King, I asked Gemini 3 to select a short excerpt from the story — about 700 words — for submission to a newspaper. It seemed a reasonable request, and frankly the kind of convenience I’ve come to expect.
Instead, the AI quietly rewrote portions of my text to manufacture a ‘cliffhanger’ ending — without telling me it had done so. Not just a few words, but several paragraphs. I caught the changes just before sending the email. When challenged, the system apologized and generated a second excerpt… but once again made silent alterations, insisting the excerpt was entirely faithful to the original I had uploaded — even though this was clearly untrue.
That’s a big problem.
I’m not opposed to suggestions, alternatives, or even bold creative ideas. Had the AI proposed a cliffhanger and asked for my approval, I might well have agreed. But silently altering an author’s work and presenting it as a faithful excerpt crosses an important line.
This experience reinforced something worth stating plainly: AI tools are powerful, but they cannot be trusted without oversight. Transparency and consent still matter. The human author must remain the lead partner — not merely the source material.
The tools are new. The principle is not.