How I Lost my Literary Agent By Using Generative AI Part III

Amina A
6 min read · Aug 4, 2024


The bandaid-ripping continues: same theme, new topic. If you haven’t read Part I or Part II of this series, please refer to them for more context, but I will give a brief recap below:

My novel is available here

As some may know, I recently lost my literary agent via mutual agreement over my use of generative AI in my creative work. The impetus for this very relevant and extremely important conversation came after I turned in marketing proposal artifacts (see how dull and lovely these are to write in my Medium posts here and here, where I spell out how I co-created my artifacts with generative AI, including the edited final products I delivered).

I don’t write this blog post out of spite but from a deep place of hurt and cognitive dissonance that I am sure is affecting many creatives today.

Here, I lay out my argument based on many conversations with many individuals.

The question is: what do YOU think, dearest gentle reader?

The use of AI detection tools, or AI tools in general, at all

My proposal was run through an AI detection tool and came back as 77% likely written by AI. For the reader’s awareness, the flow of the marketing plan was outlined in the article mentioned previously: I did the initial drafting of almost everything with generative AI, taking advantage of Gemini 1.5’s extensive context window, then edited the output to my liking. This falls within the vague definition, outlined in the previous blog post, of human and AI co-creation being acceptable.

Let’s also talk about the percentage (which is quite arbitrary, given that backtracing methods using self-attention are still a heavily researched area). Is 77% over the acceptable threshold implied by the vague definitions referenced above? Or is there some other arbitrary number? For a marketing plan, and considering the volume of marketing plans likely produced by AI across enterprises at this point, it hardly shocks me. But that is just my work-informed opinion.

Before I get on my soapbox about AI detector tools for generative AI, let’s actually put their usefulness to the test.

I decided to test three samples against AI detectors (the top ten per Forbes): first a sample of my first novel Nadiri Part I, then my marketing proposal for Nadiri Part I, and finally my controversial marketing proposal for Nadiri Part II. Here are the results:

Undetectable.Ai

Hilariously, all three samples passed when fed into Undetectable. Undetectable checks against a few of the other tools mentioned in the article, so I will skip those. But check it out: despite Nadiri Part II’s proposal being, technically, written by AI, it 100% passes the sniff test:

My original Nadiri manuscript, written by me
Nadiri original marketing proposal, written by me
Nadiri marketing proposal, written by me & AI

Winston.Ai

And here is Winston.Ai, which actually did score my last document at a level probably accurate to the amount of content created by AI versus created, originally, by me:

My original Nadiri manuscript, written by me
Nadiri original marketing proposal, written by me
Nadiri marketing proposal, written by me & AI

Originality.ai

Originality.ai was more interesting; they limit the input to 3,000 words, which is sus at best. Such a small data sample would hardly give the AI the context it needs. However, check out their measurement key too:

The key shows the confidence level of their own AI assessing each sentence for whether it was written by AI. For anyone who understands how attention and transformers work: context is not built over a single sentence, but over however much surrounding text is needed to complete the task at hand. Evaluating one sentence at a time parses the data into unrealistically small chunks. But never mind that; the results appeared much the same as the last two, so I will skip posting them here and end on Originality.ai’s claim of being 99% accurate in detecting AI…
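
To make the sentence-by-sentence problem concrete, here is a minimal sketch of what per-sentence scoring looks like, assuming GPT-2 via the Hugging Face transformers library as a stand-in scoring model (commercial detectors use their own proprietary classifiers, so this illustrates the chunking problem, not their implementation). Each sentence is scored in isolation, so whatever context the earlier sentences established is simply thrown away.

```python
# Illustration only: per-sentence perplexity under GPT-2 as a stand-in
# for a detector's per-sentence confidence score. Assumes the Hugging
# Face `transformers` and `torch` packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_perplexity(sentence: str) -> float:
    """Score one sentence with no knowledge of anything around it."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

document = (
    "Nadiri Part II launches next spring. "
    "The campaign leans on the readership built by Part I."
)
# Splitting into sentences discards the context each sentence depends on,
# which is exactly the objection raised above.
for sentence in document.split(". "):
    print(sentence, round(sentence_perplexity(sentence), 1))
```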

GLTR: Giant Language Model Test Room

I think this one would actually have been the most promising, if I could get it to analyze more than one sentence at a time. Also, GLTR is only viably built around GPT-2, which I know will not work on my Gemini 1.5-generated text.

However, it is interesting that yet again this experiment limits attention to just the words in a single sentence, seq2seq style. The main paper works through data samples as long as a paragraph, but nothing more. Taking in a whole body of text and analyzing it for AI generation appears impossible with this tool, as far as a quick glance at the GitHub can tell.
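
For context on what GLTR actually measures: it colour-codes each token by how highly GPT-2 ranked it given the preceding words, on the theory that machine text leans heavily on top-ranked tokens. Below is a minimal sketch of that idea (my own illustration using the Hugging Face transformers library, not GLTR’s code), which also makes clear why the method is tied to GPT-2 specifically.

```python
# Sketch of the GLTR-style token-rank idea: what fraction of tokens fall
# inside GPT-2's top-k predictions given the preceding context. This is
# an illustration of the concept, not GLTR's actual implementation.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_k_fraction(text: str, k: int = 10) -> float:
    """Fraction of tokens ranked in the model's top-k next-token guesses."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    hits = 0
    for pos in range(ids.shape[1] - 1):
        top_k = torch.topk(logits[0, pos], k).indices
        if ids[0, pos + 1] in top_k:
            hits += 1
    return hits / max(ids.shape[1] - 1, 1)

# Very "predictable" phrasing scores high; unusual phrasing scores lower.
print(top_k_fraction("The quick brown fox jumps over the lazy dog."))
```

Because the ranking comes from GPT-2’s own predictions, the signal says little about text produced by a very different model such as Gemini 1.5, which is the limitation noted above.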

Sapling.ai appeared to offer nothing but an AI-powered grammar checker, so at first glance that was a dead end.

At this point, we have been through more than half of the detectors, with GptZero covered by the first tool, since Undetectable automatically runs against its API.

Keep this in mind as you try these tools: pay for the tools that perform the detection, or else your work is being used to retrain models as we speak (and actually, even with paid tools, this can still be the case if the Terms of Service have not been appropriately examined).

Here is my soapbox

AI detection tools are bogus, and it is well known in the technical community that they amount to a hoax. I am sure that if the rest of the detectors were tried, some would show similar results and others radically different ones; that inconsistency is hardly a source of truth. Generative AI tools are trained on live corpora of human language, and their output is not reliably distinguishable from language a human naturally wrote. The grey space that exists within plagiarism tools such as TurnItIn is far wider in the generative AI space, and no tool currently exists that can authoritatively give an accurate percentage of how much of a piece of writing is human-written as opposed to AI-generated. As someone who sits at the bleeding edge of this type of research and regularly works with customers in education technology, higher education institutions, and research centres, I constantly have to disabuse publishers and professors of the impression that these detectors work. They are extremely misleading, and technically and product-wise they are selling you something that does not exist.

Prowritingaid and Grammarly were mentioned as tools that simply allow authors to correct grammar in work they wrote themselves. The word “themselves”, with its implication that none of my work was written by me at all, frankly made ten years of work collapse in a single second. Both tools mentioned now in fact offer suggestions that replace your own writing with AI-generated, “proper” examples. The algorithms underlying Grammarly and PWA are changing to be more generative in nature (PWA literally markets itself as “The AI powered writing assistant…!”), so I do not subscribe to the argument that the “common” writing tools writers are made to use every day are somehow exempt; they are underpinned by AI whether we say it out loud or not. If that distinction held, then I should equally be sued for my forced usage of PWA as an AI-powered writing assistant.

One might think the story ends here, but the final part of my bandaid-ripping will come shortly in Part IV of this series: the final blow to the stomach for someone who prides herself on being both a creative and an engineer. Coming soon.
