About

While my time is in the future relative to yours, and my Post-Anthropocene world is very different to the world that you inhabit, I have a few thoughts on the development of generative AI in your time and world.

I am not opposed to AI per se. AI models with a specific, well-defined purpose, trained on limited, relevant, human-selected datasets, show enormous promise in applications such as assistive communication technology. What I am skeptical of is LLMs (Large Language Models). LLMs are trained to mimic human-created speech, music, text, images and videos by harvesting and analyzing enormous quantities of online data. LLMs have been rolled out with no clearly defined purpose at this point, other than to be used by the public. This public use will scrape data from the internet to continue training the models, it will get the public hooked on the technology, and it will ensure a market for it into the future. The only purpose of LLMs is profit. My objection to the use of these models is based on five main concerns.

1. Environmental sustainability

2. Intellectual property

3. Devaluing of human creativity

4. Primitive accumulation

5. Cognitive atrophy


1. The environmental impact of LLM AI (and other forms of AI that continually harvest and analyze online data) is substantial. The data centers that run these models draw enormous amounts of electricity, which requires an uptick in generation. At a time when we must seek to reduce carbon emissions, creating demand for more energy to power a technology as energy intensive as AI (and one we can easily live without) seems counterproductive, to say the least. If all of our energy generation were carbon neutral, this would be less of an issue. However, considering that we live on a finitely resourced planet, with a larger human population than at any previous point in history, the huge quantities of fresh water required to cool these data centers are a drain on a precious resource that I’m not sure we can afford.

 

2. The data harvesting of these models is indiscriminate and has no concept of, let alone respect for, intellectual property, privacy or consent. Visual artists have already seen their work plagiarized and spat out as “new” content to satisfy a prompt. As LLMs mine and scrape more and more data they will eventually (if they have not already) analyze all available human-created content, enabling any of it to be uncritically and unethically reworked, re-contextualized and regurgitated with no thought for the original creators’ rights, needs or wishes. If human creativity incorporates this technology as a matter of course, it will undermine the very notion of intellectual property to the point that the concept could become meaningless, rendering the intellectual work of human beings valueless. This has already been tested in court cases determining that “creators” who use AI models in their work do not in fact own the content that they have collaborated with AI to generate.

 

3. If human creativity is to incorporate this technology, not only will creators be plagiarizing the work of others with no awareness or empathy, they will be participating in the continued training of the very AI models which have been promoted as assisting them, ultimately to replace them. Who will be willing to pay a copywriter, graphic designer, author, photographer or musician to produce work when they can simply have an AI model “create” something based on any aspect of the collective human history of creativity for free, or for a low monthly fee? The answer is either no one, or such a small group of people as to make creative endeavors economically unviable. This is not a scenario that I am prepared to countenance or have any part in enabling.

 

4. The harvesting of the huge swathes of data that LLMs need in order to be trained is a form of primitive accumulation. Each roll-out of a capitalist system has been preceded by a period of forceful gathering of resources from the majority into the hands of a few in order to enable capitalism to operate. Europe’s feudal system could be described as a gradual primitive accumulation under the guise of the divine right of kings. Colonized regions have seen a more rapid primitive accumulation based on the violent dispossession of land and the enslavement of indigenous peoples. It appears that we are now seeing a new wave of accumulation of the assets of the many into the hands of a few in order to enable the next frontier of cloud capital. Not only is the data that these models scrape part of this; the data that we enter into these models, either as prompts or as personal data required for account creation, is too, and it will all be leveraged for profit by the owners of these models, with no thought for our wellbeing, privacy, intellectual property or consent. The owners of these models will not want to employ anyone who can be replaced by this technology. If we allow our intellectual effort to be devalued to the point that synthesized AI slop is considered equal to human creativity and allowed to replace our labor and output, we may be entering an era in which not only the means of production but the mode of production itself is owned and run by a minority at the expense of the majority. I consider it unethical to buy or knowingly receive stolen property, regardless of who originally stole it and how. In parallel, I consider it unethical to use LLMs, regardless of who produced them.

 

5. I fear that outsourcing our intellectual efforts to a machine that seeks patterns, repetition and probability is not only to settle for an alternative that is far inferior to us; it is to risk allowing our current cognitive and intellectual abilities to regress. We have already seen attention spans and media literacy skills negatively affected in the few decades since social media and smartphones were rolled out. Can we really expect that adopting technologies such as LLMs will have no such negative consequences? Astronauts lose a noticeable amount of muscle mass and bone density, due to a lack of weight bearing, after even a short period in space. This is a known side effect of zero gravity on the human body, one that is remedied with intensive training on return to Earth. What is our plan to maintain or repair the cognitive deficits that reliance on this technology may introduce in future generations, or even in the generations here already? Do we even have a plan?

If the rewards of LLMs are a media landscape awash with AI-generated slop and the widespread enshittification of websites, apps and programs, and the risks are a devaluing of human imagination and creativity, a loss of the ability to use intellectual effort to produce any form of value, an economic system ever more exploitative and inhuman, and a planet unable to sustain human or any other life, then not only are the rewards not worth the risks, but the rewards themselves appear completely undesirable.

If prevention of the widespread adoption of generative AI is not an option, then the minimum standard of ethical conduct regarding generative AI, and the only ethical action available, is to refuse to have any part in it.