Content Warnings: Child Sexual Abuse Material mentions, psychosis

1900 words | 7 minute read

Image by Tima Miroshnichenko

I open in the title with a quote from the masterpiece game series Mass Effect, which features sentient robots built with AI. We discover they rebelled against their creators when one of the units (which share a hive mind, and so become more intelligent the more of them are in proximity) asks a creator, “does this unit have a soul?”. Fearing a rebellion, the quarian makers of these robots strike pre-emptively, which makes the rebellion all but inevitable. The quarians end up losing their homeworld, destined to live in a flotilla of spaceships, while the geth AI robots become hostile to all organic life (and vice versa).

It’s an interesting question as an animist to ponder whether artificial intelligence will ever have a soul. Personally, I’m quite a soft animist: I believe naturally occurring matter (including plants, non-human animals, and rocks) has souls. For me, the jury’s out on whether man-made items, hand-crafted or factory-made, can have souls, or whether our use of them instead creates a thoughtform or egregore bound to them. I lean towards the latter, but I’m very happy to be proven wrong on that.

So where do we stand with current AI systems? Do they potentially have the capacity to have souls? My one word answer: no.

What Is Generative AI?

AI at the moment is something of a misnomer. The term draws on popular culture ideas of what artificial intelligence is: a sentient consciousness with autonomy. The current AI we are living with is not this. It does not think—it predicts. As even one pro-AI source says, “AI, in general, refers to the development of intelligent systems that can mimic human behavior and decision-making processes”. Mimic here is telling: it is a masquerade of human functionality and not a replication of it.

So, what is generative AI (GenAI) specifically? Essentially, it is a form of proto-artificial intelligence that generates new content from prompts input by a human user. It is based on deep learning models, which consume an enormous quantity of human-made writing in order to predict what the ‘correct’ answer to any given prompt will be. GenAI can also produce media other than text, such as audio and images. Much, if not the vast majority, of this training material has been taken from artists and writers without consent.

A key word I use here to describe GenAI output is prediction. While those in the AI industry say that GenAI differs from predictive AI, I fail to see the difference at root. Predictive AI is used for forecasts, they say, whereas GenAI creates something new. However, for GenAI to fulfil a prompt, it must have something fed into it that allows it to calculate the ‘correct’ answer. This means it takes information and, because it cannot think or reason like a human, makes a mathematical choice about what the prompter wants to see, thus predicting what they are expecting. Therefore GenAI is a predictive model.
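For readers who like a concrete illustration, the ‘prediction’ at the heart of these systems can be sketched in miniature. The toy Python below is my own illustrative example, not code from any real GenAI system: it ‘generates’ the next word purely by counting which word most often followed the previous one in its training text.

```python
from collections import Counter, defaultdict

# A toy bigram model: 'generation' here is just predicting the most
# likely next word given the previous one, based on counts from a
# (tiny, made-up) training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def predict_next(word):
    # Pick the statistically most common follower: a prediction,
    # not a thought.
    return model[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in the corpus
```

Real large language models do this with billions of learned parameters rather than simple counts, but the principle is the same: the output is a statistical guess at what should come next, not the product of reasoning.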

There is a glaring problem with this: as any good diviner knows, predictions aren’t always accurate. GenAI is now renowned for its so-called ‘hallucinations’ (a term I hate because it both implies a consciousness and reduces the severity of psychosis down to silly little errors). AI hallucinations are where the answer generated is incorrect and/or pure nonsense. For example, Deloitte has had to refund the Australian government AUD 440,000 due to hallucinations in a report it provided. As a more humorous example, Google’s AI has been churning out strange idioms, such as “two buses going in the wrong direction is better than one going the right way” and “never put a tiger in a Michelin star kitchen”.

This is a concerning side effect of GenAI: in a supposedly ‘post-truth’ era in which misinformation proliferates online (and across social media in particular), it is troubling to muddy the waters further with incorrect statistics, facts, and citations. The fact that the technology to create realistic deepfakes now exists is also a huge worry in terms of fabricating narratives. Finally, there is the use of GenAI to create immoral content, such as the recent spate of CSAM being churned out by X’s AI, Grok.

How Are People Using Generative AI Spiritually?

These are extremely strange times we live in, so much so that John Beckett has had to write a post about whether or not AI has a spirit (or spirits). I suppose that my citing it in an article I am writing is also evidence of this. In Beckett’s article, he explains his view that, parking the ethical issues of content scraping, AI is just another tool, and thus could theoretically be manipulated by spirits.

This, it seems, is the thought process behind many witches and spiritual people who approach GenAI in this way. Some witches use it as a more interactive search engine and library: they will use it to bounce ideas around and to learn about witchcraft. Some use it as a chatbot, acting as though it were a familiar spirit. Indeed, people from many religions are using chatbots to ‘talk’ to their ‘god(s)’.

Some also use it for divination and for interpreting other tools such as Tarot cards and astrological birth charts. Indeed, entirely AI-generated Tarot decks are now available for purchase on sites like Etsy, often based on stolen art (all of the big players in GenAI, after all, have their generative capability only because of stolen art).

All of this goes to show that there is a vast array of things GenAI can do for witches and many ways it can be applied to a spiritual practice. But is this actually a good idea? Can GenAI be just another tool for witches to use in their practice?

The Risks of Using Generative AI Spiritually

There are a number of problems with involving GenAI in spirituality. As already stated, GenAI derives its generative capabilities from the mass scraping of data, whether art or writing, usually without consent. Indeed, creatives and their representatives are currently pursuing lawsuits over the theft of their intellectual property.

Generative AI, as already stated, is not ensouled. It isn’t even artificially intelligent in the way that term is typically understood. It simply spits out what it calculates you want to know. It is not a divinatory tool; it is a prediction machine. It cannot speak for spirits in the way people think it can because it is far too complex for a spirit to manipulate, unlike something simple such as a Tarot deck or a pendulum. To think one is actually conversing with a god or a spirit in this way risks psychosis. I have written about religious psychosis in the past: suffice to say, this is not an experience you should court. It is frightening and can lead to all sorts of physical and mental harm.

Indeed, outside of spirituality, GenAI has been implicated in suicides and in one man’s attempt to assassinate the late Queen Elizabeth II. These machines, these processes, when engaged with too deeply by vulnerable people (including those desperate for gnosis), are quite literally dangerous. It is thus not ethical for witches to engage with GenAI in their spirituality (and, to be fair, outside of it too) because of the risk of harm to the self.

The Ecological and Societal Impacts of Generative AI

These are not the only ethical quandaries that GenAI poses. GenAI has both societally and environmentally negative impacts that need to be accounted for, and which make its use immoral in both wider contexts as well as specifically in witchcraft, religion, and spirituality—and especially in a religion or spirituality that is based around venerating nature and balance.

Societally, as already stated, GenAI has led to the worsening of mental health conditions, contributing to a number of suicides worldwide. It is also resulting in job losses for creatives, whose talents are being discarded in favour of a machine that plagiarises them. In the UK, for instance, AI is destroying more jobs than it is creating, despite the propaganda around AI as a job creator.

The other huge issue is child sexual abuse material, or CSAM, which I mentioned briefly above. You may remember the recent controversy over Grok, the AI of X (the social media site formerly known as Twitter), being used to create images of undressed children (as well as women). Even back in the summer of 2024, artificially generated CSAM was making it harder for law enforcement to investigate and potentially help victims¹. Indeed, the Stanford Internet Observatory found that some GenAI and machine learning data sets were actually trained on suspected CSAM.

These are not the only moral issues with GenAI, however: the environmental impact is also concerning. Supporters are quick to point out that other technology, such as streaming services, also uses a whopping amount of natural resources, but this doesn’t work as a defence, in my opinion: just because other technologies that have become intertwined with modern life consume resources in this way doesn’t mean we can justify adding another.

So, what are these environmental impacts? Let’s look at what the United Nations Environment Programme (UNEP) says:

But when it comes to the environment, there is a negative side to the explosion of AI and its associated infrastructure, according to a growing body of research. The proliferating data centres that house AI servers produce electronic waste. They are large consumers of water, which is becoming scarce in many places. They rely on critical minerals and rare elements, which are often mined unsustainably. And they use massive amounts of electricity, spurring the emission of planet-warming greenhouse gases.  

As we can see from this paragraph, GenAI has multiple compounding effects on our environment. For those who don’t care about the environment more generally, perhaps you’ll be persuaded by the fact that the climate crisis is already contributing to a vast number of humanitarian crises: according to the World Health Organisation (WHO), over the last decade the death rate from extreme weather events in the countries most vulnerable to climate-related health emergencies (predominantly those in the Global South) was fifteen times higher than in (currently) less vulnerable countries.


There are, then, a number of moral issues with the usage of GenAI, both human and environmental (the latter also affecting non-human animals), as well as dangers to individuals’ psychological health. It is also absolutely bunk when it comes to spirituality. While some regard it as a tool for greater depth in their practice, what they are essentially getting is their own wants projected back to them as ‘knowledge’, all the while using up valuable water that cannot go back into the system (for instance).

There is no justification for using GenAI in general, but especially in your spirituality. If you need to rely on a faux-intelligence to ‘help’ your spiritual practice, I encourage some deep introspection as to why your comfort—because that is what is really being created—is more important than these moral issues.

May the bubble burst. May it all come crumbling down.

Footnotes

  1. As you may know, I am an abolitionist and do not believe law enforcement will ever bring an end to sexual violence against anyone, but this is still a concerning fact.

Do you appreciate my work as an independent content creator and wish to help me carry on making this sort of stuff? There are two ways you can do this: a monetary donation via my Ko-Fi page, or by purchasing something off my Throne wishlist (I don’t use Amazon for privacy reasons). Any and all tips are greatly appreciated and help me make this a viable income stream, as a disabled creator who struggles to work conventional jobs.