A.I. and How it Affects Creative Spaces
Recently, there has been a huge increase in discourse surrounding the topic of A.I. in creative spaces, with many people accusing others of using A.I. when they haven’t, and others defending people who are found to have used it. It’s been a weird thing to witness, and I would like to lay out some facts surrounding this topic to hopefully help you form an informed opinion on the subject.
What is the difference between Generative A.I. and Assistive A.I.?
Just to start off, I would like to give you guys a definition of these two types of A.I., because they are incredibly different. Generative A.I., as defined by www.huit.harvard.edu, is “a type of artificial intelligence that can learn from and mimic large amounts of data to create content such as text, images, music, videos, code, and more, based on inputs or prompts.” Essentially, generative A.I. “creates” something based on what it is trained with. The use of the word “mimic” in this definition is actually a great choice of words, because generative A.I. is incapable of coming up with something completely original in the way a human can. Everything it “creates” is taken from what it was trained on, which is other people’s work. The most well-known generative A.I. platform is ChatGPT.
Assistive A.I., as defined by writersweekly.com, is “when your own work is input and gives suggestions for tweaking it slightly. In the publishing space, great examples are Grammarly and ProWritingAid. These spelling and grammar check programs use A.I. to learn from what authors are indicating are correct and incorrect suggestions to learn to give better offerings in the future. In terms of graphics, assistive A.I. might look something like shifting where shadows fall on a model’s face, but not creating the image itself.”
So, assistive A.I. differs from generative A.I. because it does not do the work for you; it gives you suggestions as to how you might improve your own writing or art.
So, why is Generative A.I. so much more harmful than Assistive A.I.?
Strap in, because this is a long explanation (that I can’t wait to give you).
Because generative A.I. regurgitates what it is trained with, using it to write an essay, or generate an image, or write a book, is plagiarism. Generative A.I. does not have a free thinking mind like a human does, so when it “creates” an image or text, it is simply taking words or images from the pieces it is trained with and mixing them together to make something “new”. The human equivalent to this would be taking a bunch of sentences from your favorite books, jumbling them up into a book of your own, and then selling it as though you came up with the words yourself.
Oxford University defines plagiarism as, “Presenting work or ideas from another source as your own, with or without the consent of the original author, by incorporating it into your work without full acknowledgment.” This is exactly what generative A.I. does, because it cannot come up with an original idea.
Now, you might be thinking something along the lines of, “Original ideas don’t exist anymore.” In response to that hypothetical argument, I challenge you to understand the actual meaning behind that statement. What is an original idea, and what does it look like to execute said original idea?
An original idea doesn’t necessarily mean something that no one else has ever come up with; originality has much more to do with the way you execute the idea. You’re probably right to think that, when you break it down to the basics, every single “new idea” a person gets has probably been thought of by another person. What separates your idea from the work of someone else is the story you choose to tell using your idea.
If I came up with a story idea about an orphan who finds out his parents were important people in some kind of magical world and he finds out he has to save that world, there are probably a few different books your mind goes to, right? Harry Potter and Percy Jackson both fit this description, and yet their stories are wildly different.
At the core, the concepts of both of these stories are similar, but the execution is what makes them different. What generative A.I. is incapable of doing is creating a concept or writing a story that has a different execution than another story of the same general idea. You may think it can, but it, quite literally, cannot. It is not in the nature of A.I. to create something new, which is why using it is considered plagiarism, as I’m sure many students have been disappointed to learn after their essays were given an F for A.I. use and plagiarism.
Another reason generative A.I. is so bad is its environmental impact. A.I. is run by data centers, which use water for their cooling systems and electricity generation. Water consumption is also associated with A.I. supply chains that produce A.I.-related products such as microchips.
- “On average, data centers can evaporate about 0.26-2.4 gallons (1-9 liters) per kWh of server energy for cooling purposes.” (via cee.illinois.edu)
- “The electricity needed to power data centers often comes from thermoelectric or hydroelectric plants, which require significant amounts of water. The national weighted average for thermoelectric and hydroelectric water use is 2 gal (7.6 liters) of evaporated water per kWh of electricity consumed.” (via same as above)
- “Producing a single microchip, for instance, requires 2.1-2.6 gallons (8-10 liters) of water to cool machinery and ensure wafer sheets are free of contaminants.” (via same)
For context, kWh stands for kilowatt-hour, which is a unit of energy. A kilowatt is equal to 1,000 watts, and a kilowatt-hour is the energy used by running a one-kilowatt appliance for one hour; a typical dishwasher cycle uses roughly one kWh. So, for every kWh of electricity consumed by A.I. data centers, somewhere between 1 and 9 liters of fresh water is evaporated for cooling alone. I really want to emphasize the fact that this is fresh water, also known as the only water that humans can consume without getting sick. Yes, this is a lot of water. If something as small as a dishwasher uses about one kilowatt-hour of energy per cycle, think about how many kilowatt-hours an entire data center managing thousands upon thousands of A.I. generated responses uses. I’m not even someone who thinks the world is going to end if we don’t get our climate under control (I have a healthy amount of concern), but this is entirely unethical and cannot be sustained.
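To put those figures together, here is a rough back-of-the-envelope calculation. It simply combines the per-kWh ranges quoted above (1-9 liters for on-site cooling, 7.6 liters for electricity generation); the exact numbers for any given data center will vary.

```python
# Back-of-the-envelope: fresh water evaporated per kWh of data-center energy,
# combining the 1-9 liters/kWh cooling range with the ~7.6 liters/kWh for
# electricity generation quoted above. All figures are rough estimates.

COOLING_L_PER_KWH = (1.0, 9.0)   # on-site cooling evaporation range
GENERATION_L_PER_KWH = 7.6       # off-site thermoelectric/hydroelectric water

def water_evaporated_liters(kwh):
    """Return a (low, high) range of liters of fresh water evaporated
    for `kwh` of server energy, cooling plus generation."""
    low = kwh * (COOLING_L_PER_KWH[0] + GENERATION_L_PER_KWH)
    high = kwh * (COOLING_L_PER_KWH[1] + GENERATION_L_PER_KWH)
    return low, high

# One dishwasher cycle is roughly 1 kWh of energy:
low, high = water_evaporated_liters(1)
print(f"1 kWh -> {low:.1f} to {high:.1f} liters of fresh water evaporated")
```

So even a single kilowatt-hour, the energy of one dishwasher cycle, can carry a double-digit water cost once generation is included.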
To further explain just how much water A.I. data centers use, let me give you another example. “Google’s hyperscale data centers, which support major services such as Gmail and Google Drive, averaged approximately 550,000 gallons (2.1 million liters) of water per day over the past year. In contrast, smaller data centers generally report much lower water usage, averaging about 18,000 gallons (68,100 liters) per day,” (also via cee.illinois.edu). Additionally and from the same source, “In the US, where the average per capita water withdrawal is 132 gallons a day, a large data center consumes water equivalent to that of 4,200 persons. This makes data centers one of the top 10 of ‘water-consuming industrial or commercial industries’ in the country. The US is home to over 5,300 data centers, and by the end of 2021, around 20% of these centers were drawing water from moderately to highly stressed watersheds in the western US.”
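The “4,200 persons” figure can be sanity-checked directly from the numbers in that quote. This is just a rough check dividing one quoted number by the other, not a reproduction of the source’s exact method:

```python
# Sanity check: a hyperscale data center at ~550,000 gallons/day versus the
# average US per-capita water withdrawal of 132 gallons/day, both quoted above.

HYPERSCALE_GAL_PER_DAY = 550_000
US_PER_CAPITA_GAL_PER_DAY = 132

person_equivalents = HYPERSCALE_GAL_PER_DAY / US_PER_CAPITA_GAL_PER_DAY
print(f"Daily water use of about {person_equivalents:,.0f} people")
# 550,000 / 132 is roughly 4,167, in line with the "4,200 persons" cited above.
```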
There is already a shortage of fresh water on Earth, entire communities are already struggling to not succumb to severe dehydration, but tech companies think it’s ethical to use hundreds of thousands of gallons of water per day to cool their A.I. data centers? If this doesn’t bother you, then I really don’t know what to say. It has been a long known fact that there is a fresh water shortage on Earth, and generative A.I. is actively making it so much worse than it already is.
According to Forbes, “A.I’s projected water usage could hit 6.6 billion m³ by 2027" which, for those who don't know, is an incredibly large number. You don’t even have to be any kind of climate activist, or even generally worried about our climate to recognize that this is a completely unethical amount of water to use for something that literally just steals from creatives and offers the average person nothing that can’t be found on Google.
Now, Generative A.I. in creative spaces. How is it being used and what does it mean for the rest of us?
Generative A.I. has especially been a problem within the art community, but it’s beginning to bleed into the author community as well, which is why I’m speaking on it in a blog post and not just in an Instagram story. There have been numerous instances of people using A.I. to “create” an art piece, and the A.I. ended up giving them what was nearly an exact copy of someone else’s actual work that had been stripped from somewhere online, like Instagram, and fed to the A.I. to train it without the original creator’s consent. If this doesn’t prove that all generative A.I. does is steal and regurgitate the work of creatives, I don’t know what does.
For authors, catching it is a little bit trickier, because unless someone has every book they’ve ever read memorized, or feels the need to input a book into an A.I. detector to see if it’s A.I. (which you should not do, as this actually gives the A.I. the manuscript to train with, furthering the problem), it’s going to be much harder to catch. However, there was recently an author who published a book that still had the A.I. prompts in the actual text. (How do you miss that in the editing process?) So, situations like that make it easier to catch.
I want to add to this section and say that we should not be going around accusing anyone and everyone of using A.I. simply because we saw something that said “this could be an indicator of A.I.”. An author using an em dash isn’t an indicator of A.I. right off the bat, people. Authors have been using em dashes in their writing for centuries, you just haven’t noticed until now because you weren’t looking for them. Now, if an author has used an em dash incorrectly, there might be more of a basis to question whether they used A.I., but human error also exists, and that is not a true indicator of A.I. being used. In fact, I would prefer to see genuine human error in a book than obvious A.I. generated writing or, as the more sassy individuals online prefer to call it, "A.I. slop".
Does generative A.I. really make art forms more accessible?
The answer is no, not really. At least not in any way that matters.
Generative A.I. might give you an instant solution to your problem of not being able to execute an idea for a piece of art or writing, but it doesn’t make art truly accessible when it’s stealing someone else’s work.
Also, if I see one more able-bodied human being say that gen A.I. makes art possible for disabled people, I’m going to lose my mind. The people who have learned to paint using their mouths or feet would tell you to suck it up and learn how to make art like the rest of us, who have spent years working on our craft.
If you do not have the natural skill to make art, take a class. If you don’t want to put in the effort to do that, maybe ask yourself why you actually want to be able to make art. If you are not willing to put in the work to be able to make the art on your own, then you are not an artist. You do not want to actually be one of us, you want to say you are without any actual merit.
But what about using Generative A.I. to come up with an idea?
I would advise against this. It can all too quickly evolve from using it to generate story prompts, to using it to write scenes, to using it to write your whole book for you. The same thing can happen with visual art. Additionally, as I stated before, generative A.I. does not come up with original ideas in the way a human does, so the “unique ideas” it can give you are really taken from someone else’s work that they likely spent years and tears creating. If the A.I. is giving you a vague idea, like “orphan discovers he has magical powers and sets off on an adventure to find himself”, then that may not necessarily be plagiarism, but why do you need A.I. to give you that idea? You couldn’t come up with that on your own? Anything more in depth than something akin to my above example has likely been directly stolen from someone else’s work, and that is simply not fair, or legal.
Copyright Law
Another reason generative A.I. is generally a bad idea to use in your work is that it is not protected under copyright law. Anything A.I. generated is stolen and is therefore not your intellectual property. Did you know that even unpublished works are protected by copyright law, as long as they are not A.I. generated or plagiarized?
So, “A.I. authors and artists”, someone who has never shared anything they’ve made has more protection for their work than you do. Let that sink in. Perhaps, you should realize that this is because using generative A.I. to write a book or create visual art means that it’s not yours.
All I ask is that we as creatives take a minute to understand how truly uninformed it is to defend generative A.I. Why would you support something that steals from the very creatives that are your peers? Why would you deny the detrimental environmental impact using A.I. has on the planet, and the various communities this problem has already affected? Why would you argue that generative A.I. makes art accessible? Why would you argue that it’s really not that unethical? Why would you argue that it doesn’t steal from creatives when stealing is literally the only thing it can do to train itself?
Have some discernment, everyone. Think about the kind of person you want to be perceived as in the creative world. Do you want to be known as someone who needed to use A.I. to be relevant in a creative space, or do you want to be praised for the hard work and dedication you put into actually creating something from your soul?