Concerns Around Ethical Use of AI in Creative Works
The band-aid ripping continues. If you haven't read Part I of this series, please refer to it here for more context, but I will give a brief recap below:
As some may know, I recently lost my literary agent via mutual agreement over my use of generative AI in my creative work. The impetus for this very relevant and extremely important conversation came after I turned in marketing proposal artifacts (see how dull and lovely these are to write in my Medium posts here and here, where I spell out how I co-created those artifacts with generative AI, including the edited final products I delivered).
I don’t write this blog post out of spite but from a deep place of hurt and cognitive dissonance that I am sure is affecting many creatives today.
Here, I lay out my argument based on many conversations with many individuals.
The question is: what do YOU think, dearest gentle reader?
Is Using Generative AI in Creative Writing Ethical? What Might Ethical Use Look Like?
I was initially pointed to this article published by the Authors Guild on the use of generative AI. The Authors Guild is perceived as one of the main authorities on writing. Below are a couple of points from the framework for generative AI use in creative writing that the article lays out:
Many LLMs are trained on content in ways that may constitute copyright infringement
While many companies have tried to sue large LLM producers, courts and precedent [I AM NOT A LAWYER] have largely found that, in fact, ingesting these materials is considered fair use. This was established in previous court cases such as Authors Guild v. HathiTrust, upheld in Authors Guild v. Google, and also outlined in the US Copyright Office's guidance on AI. But the bottom line of what I have found from official US courts is: we haven't figured it out, and it is still a murky space, but precedent says we can just keep it that way.
Companies have made moves to allow authors and book writers to indicate when they do not wish to have their works used to train AI. But what has been done has been done; and, per usual, with human nature and AI both changing this rapidly, mistakes are bound to be made.
My take on the matter:
À la Grimes: why the hell wouldn't a creative want their works included in what is basically the new way the globe will interact with search? I love the approach Grimes took: if her voice or music is used, she must be paid royalties from the work and given IP acknowledgment. This could encourage things like fanfiction to be written so much better, and actually give authors the opportunity to earn more money from it. But going back to search: the reality is LLMs will be the new way to search, and that is how your content will be discovered. Whatcha going to do about it?
Back to the Authors Guild, here are their published guidelines for fair and ethical use of AI in writing:
- Use AI as an assistant for brainstorming, editing, and refining ideas rather than a primary source of work, with the goal of maintaining the unique spirit that defines human creativity. Use AI to support, not replace, this process.
Valid. Unless you DO want to see what kind of creative, crazy thing AI can produce. For example, as of this writing there are techniques to extend LLM outputs to up to 200k tokens, which could well fill a book. However, getting to that output token limit takes skill. And given the extensive amount of prompt engineering required to have an LLM write an entire book exactly to your liking… maybe instead of a plain old writer you become a prompt engineer and an editor. So what?
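For the curious, here is roughly what that kind of long-form prompting looks like in practice. This is a minimal sketch, assuming the OpenAI Python SDK; the model name, prompts, and chapter count are illustrative assumptions, not anyone's actual workflow:

```python
# A minimal sketch of how long-form LLM output actually gets produced:
# chapter by chapter, feeding a running summary back in so each call
# stays inside the context window. Outline, prompts, model name, and
# chapter count are all placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

outline = "A gothic novel in twelve chapters about ..."  # your outline here
summary_so_far = ""
chapters = []

for n in range(1, 13):
    draft = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any long-context chat model
        messages=[
            {"role": "system",
             "content": "You are drafting a novel in the author's voice."},
            {"role": "user",
             "content": (f"Outline:\n{outline}\n\n"
                         f"Story so far (summary):\n{summary_so_far}\n\n"
                         f"Write chapter {n} in full.")},
        ],
    ).choices[0].message.content
    chapters.append(draft)

    # Summarize the new chapter so the next call never needs the full text.
    summary_so_far += client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"Summarize this chapter in five sentences:\n{draft}"}],
    ).choices[0].message.content + "\n"

manuscript = "\n\n".join(chapters)  # the human editing starts here
```

The skill, of course, lives in everything the sketch hand-waves: the outline, the prompts, and the heavy editing pass afterward.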
“As long as your audience knows you have used AI tools, then that is complete honesty to your following. At least for now. Although we hardly have to declare that we use automated editing tools and grammar checking technology all powered by AI.”
- To the extent you use AI to generate text, be sure to rewrite it in your own voice before adopting it. If you are claiming authorship, then you should be the author of your work.
Valid, and one of the biggest complaints about using LLMs is in fact that the produced content is rather flat from a voice perspective. However, with clever prompt engineering and some editing… yes, you can have a co-written book.
Also, can we talk about how vague this guideline is?
- If an appreciable amount of AI-generated text, characters, or plot are incorporated in your manuscript, you must disclose it to your publisher and should also disclose it to the reader. We don’t think it is necessary for authors to disclose generative AI use when it is employed merely as a tool for brainstorming, idea generation, or for copyediting.
Wow. Even copyediting, which is exactly what I plan on using generative AI for next. I find it notable that copyediting (considering how extensively that process can change a manuscript) is permitted with AI.
- Respect the rights of other writers when using generative AI technologies, including copyrights, trademarks, and other rights, and do not use generative AI to copy or mimic the unique styles, voices, or other distinctive attributes of other writers’ works in ways that harm the works. (Note: doing so could also be subject to claims of unfair competition).
This is fair, unless the writer has given permission a la Grimes and is excited at the prospect of making money off of fanfic and suing people who do not comply with royalties and stated IP restrictions.
As for us creators: leave us alone. Seriously, allow us to make our own decisions. If we want others to use our works for more creativity, go for it. If not, forbid it. And if an author's work is considered part of the public domain, how entertaining would it be to read a twist on Dracula written in the voice of Mary Shelley using a plot from a Charlotte Brontë novel? Possibly quite confusing.
- Thoroughly review and fact-check all content generated by AI systems. As of now, you cannot trust the accuracy of any factual information provided by generative AI. All chatbots now available make information up. They are text-completion tools, not information tools. Also, be aware and check for potential biases in the AI output, be they gender, racial, socioeconomic, or other biases that could perpetuate harmful stereotypes or misinformation.
Obviously. I think I would do this with generated and non-generated works alike…?
- Show solidarity with and support professional creators in other fields, including voice actors and narrators, translators, illustrators, etc., as they also need to protect their professions from generative AI uses.
Okay… I somewhat agree. But this is another Industrial Revolution: if you don't use AI, then someone using it will beat you. Not to say that folks who don't use AI will be totally left in the dust, but they will inevitably find their respective fields vastly changed within even the next ten years.
We once needed dozens of servants in a well-to-do household to light and keep candles lit all over the house. Then electricity happened. And it put possibly hundreds of people out of jobs. Did they all become electricians? Maybe some of them did. Or maybe many were finally able to pursue something of higher satisfaction. Or maybe their families suffered due to the loss of a sole income.
This is life. This is our history. This is change.
All this to say: if we follow the same principle as before, we might end up much the way we will with self-driving cars. There will be roads for self-driving cars only, and designated places where gearheads can drive their own vehicles, just as bicycles now require their own lanes and horse-drawn buggies are almost never seen. Maybe a cowboy or two in Texas. And so, anyways, we end up with people who choose yes and people who choose no. What the creative community should really be fighting for is the right to choose what tools they use for creativity.
While I am not a lawyer and hardly an author, I can speak with a certain authority in the area of ethical use of AI. I in fact have an extensive background in ethical AI use: if you glance at my resume and my other blog posts, you will see I was a principal architect on Project Maven and, in its aftermath, on the Google AI Ethics committee that formulated Google's AI Ethics Principles, the first in the industry.
While I was accused of not disclosing the use of AI in my works (which I have done, and had done… many, many times; I am an ML Engineer, for heaven's sake), this time I proactively disclosed its use, which one might argue caused me to shoot myself in the foot. But disclosure is actually a requirement in the Authors Guild article and many others; it is super important, so that a choice can be made.
So at the end of the day, the Authors Guild encourages everyone to sign a model clause that prohibits the use of AI entirely (after spending a whole article talking about its ethical use); however, IT ONLY PERTAINS TO THE USE OF AN AUTHOR'S OWN WORK TO PRETRAIN LLMs OR TRAIN ML MODELS. None of which is happening in my world at the moment, but gosh, how cool would it be to just build a foundation model of my own writing? Hmmm…
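If I ever tried it: truly pretraining a foundation model from scratch on a single author's corpus is not realistic (far too little data), so the closest practical version would be fine-tuning an existing model on my own prose. A minimal sketch, assuming the OpenAI fine-tuning API, with the file name and model as placeholder assumptions:

```python
# Fine-tuning an existing model on one's own writing, as a stand-in for
# the (impractical) dream of pretraining from scratch on a single corpus.
from openai import OpenAI

client = OpenAI()

# my_writing.jsonl: one chat-formatted example per line, e.g.
# {"messages": [{"role": "user", "content": "Describe dusk over a harbor."},
#               {"role": "assistant", "content": "<a paragraph in my voice>"}]}
training_file = client.files.create(
    file=open("my_writing.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # a model the API permits fine-tuning on
)
print(job.id)  # poll client.fine_tuning.jobs.retrieve(job.id) for status
```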
I will end this section with a quote directly from the Authors Guild, and you, dearest reader, can determine which side of the law I am on:
“Keep in mind, however, that this clause is only intended to apply to the use of an author’s work to train AI, not to prohibit publishers from using AI to perform common tasks such as proofing, editing, or generating marketing copy. As expected, publishers are starting to explore using AI as a tool in the usual course of their operations, including editorial and marketing uses, so they may not agree to contractual language disclaiming AI use generally. Those types of internal, operational uses are very different from using the work to train AI that can create similar works or to license the work to an AI company to develop new AI models. The internal, operational uses of AI don’t raise the same concerns of authors’ works being used to create technologies capable of generating competing works.”