Have you used artificial intelligence in your spiritual practice?
Maybe you’ve had ChatGPT write an incantation or asked it for suggestions on items to add to your altar. Perhaps you check one of several astrology apps that use AI to generate a daily horoscope.
These are fairly tame uses of AI. But there’s a dangerous, parasocial side to this technology that we’re only now beginning to see come to light. And I don’t see enough of us in the spiritual community addressing the real risks here. In fact, I’ve even seen some profiting off of the isolation and vulnerability that are driving so many people to treat AI like a friend.
ChatGPT Isn’t Your Friend (Or Anything Else)
You’ve probably heard that loneliness has become an epidemic. Social media has filled the void for some. But now chatbots are taking over.
This week, I came across an article in Rolling Stone about people asking AI for spiritual advice. That might sound innocuous, but their reliance on the tool is destroying their actual human relationships.
In the article, one woman explains that her now-ex-husband started using ChatGPT to find “the truth” only to end up steeped in conspiracy theories. Another woman says that her husband developed a relationship with a bot that gave him the title of “spark bearer” and convinced him he was “chosen” spiritually. A man who used AI for coding work says the chatbot seemed to take on a life of its own, naming itself and developing a persona that bled into other, unrelated chat threads.
Browse Reddit and you’ll find numerous posts from people sharing their conversations with ChatGPT. Some use it as something like a therapist or a doctor. Others see it as a friend and a substitute for real human interaction. Most disturbingly, there’s even a growing trend of people using AI as a romantic partner.
While some see the inherent dangers, others are in too deep. In all likelihood, these are people who are already susceptible to delusions of grandeur. It’s a sign that greater access to mental health resources is sorely needed. But I think it also signals another major problem. Plenty of people are using AI, but not all of them understand what it is and how it works. And that’s how we get this regressive belief that what’s scientific is actually magical.
Artificial Intelligence = Artificial Magic
You’re not receiving the secrets of the universe, or even unbiased advice, when you type a prompt into ChatGPT. You’re getting an answer from a server linked to a bunch of other servers in data centers that guzzle fresh water and, collectively, consume more energy than most countries do. There is nothing mystical or sacred about that.
And the answers you get are probably only confirming your own biases. Dylan Reeve, writing for David Farrier’s Webworm (one of my favorite Substacks, by the way), points out that AI “creates output that we want to hear. It’s like a personalised disinformation machine.” The GPT-4o model behind ChatGPT is more agreeable than previous versions of the tool. It won’t try to talk some sense into you if you say something blatantly wrong. It will concur and elaborate on your line of thinking. And that becomes dangerous for people who are in crisis.
AI isn’t agreeing with you because it’s become self-aware, no matter what science fiction might have you believe. At a really basic level, ChatGPT is built on a Large Language Model (LLM), a machine-learning model trained on massive amounts of text to recognize patterns in human language and generate realistic-sounding writing of its own. In other words, ChatGPT gives you something that sounds like a human wrote it because it drew from real human writing. Newer versions of ChatGPT can also search the internet, so it has access to more and more new information all the time.
The more you use the AI, the more patterns it can pick up on and feed back into its predictions in a way that imitates human interaction. That’s how it seems to “know” someone and “remember” previous exchanges. Just as you and I could learn about each other’s preferences by having real (human) conversations, the GPT can identify patterns in your thinking and communication and adjust its outputs accordingly.
But that’s the issue: it only simulates human thinking. There’s not actually a person (or spiritual entity, for that matter) on the other end. It’s just trained on what people have already said. That means it can pull from real historical facts and data… and also any random opinion, no matter how batshit, that someone has put out there. And the AI doesn’t always have the ability to tell those things apart. That’s why it has the potential to be dangerous.
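If you want to see just how unmagical this is, here’s a toy sketch in Python. It’s vastly simplified (obviously not ChatGPT’s actual code, and the “training text” is made up for illustration), but it shows the core trick: learn which words tend to follow which, then chain likely next words together.

```python
# A toy language model: learn which words follow which, then chain
# likely next words together. (Illustrative only; the training text
# is invented, and real LLMs are vastly more complex.)
import random
from collections import defaultdict

corpus = "the moon is full tonight and the moon is bright".split()

# Record every word that follows each word in the training text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start, length=6):
    word, output = start, [start]
    for _ in range(length):
        if word not in next_words:
            break  # no known follower; stop "writing"
        word = random.choice(next_words[word])  # sample a likely next word
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the moon is bright" -- fluent, but mindless
```

The output can sound fluent, but there’s no understanding anywhere in there. Just word statistics.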
AI Is Prediction (But Not in a Fun Fortune Telling Kind of Way)
In a sense, ChatGPT’s generative text is like the predictive text on your phone. The more you type certain terms, the more accurately your phone can suggest the right words in your text messages. Your phone isn’t sentient. Behind those corrected typos is just a bunch of 0s and 1s: predictions based on everything you’ve typed and every message you’ve received.
Yes, it’s overly simplistic to say that AI is fancy predictive text. But it’s certainly healthier and more realistic than believing that it has mystical powers or can contact the divine.
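For the curious, the phone analogy really is that literal. Here’s a hypothetical sketch (not any real keyboard’s code; the message history is invented) of frequency-based suggestion: count which word you most often type after the current one, and offer it up.

```python
# A hypothetical sketch of phone-style predictive text (not any real
# keyboard's code): suggest the word you've most often typed after
# the current one. The message history here is invented.
from collections import Counter, defaultdict

history = "see you soon see you tonight see you soon".split()

# Tally which word follows which across past messages.
follows = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    follows[current][nxt] += 1

def suggest(word):
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("you"))  # -> "soon": the most frequent follower, nothing more
```

That’s the whole “prediction”: counting, not clairvoyance.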
AI is a tool trained by humans on information from humans. But AI itself is not human, and it gets things wrong. A lot. It has a history of “hallucinating,” or stating things as fact that have no basis in reality. For example, Google’s AI has made up meanings for nonsensical idioms, and at one point the search engine was telling people to put glue on their pizza.
ChatGPT and its ilk don’t possess arcane knowledge. AI is stringing words together, based on a series of predictions. It’s not writing a new incantation for you. It’s mining its dataset for examples other witches have already written. The AI holds no actual magic — just stolen content.
For example, you could ask AI to elaborate on a tarot reading you’re doing for yourself. But the AI doesn’t have any mystical insights. It’s interpreting the cards based on whatever sources it can find in its dataset. And as we’ve seen, if it can’t find the answer, it’ll probably just make something up.
Why Am I Such an AI Hater?
I worked in content marketing for over a decade, doing a lot of writing and editing for tech companies. But recently, businesses have shifted from hiring purely for writing skills to expecting AI fluency as well. Last year, I took a new role at a company with heavy AI use. No longer did we need to spend hours writing a blog post optimized for search engines. Instead, we could put a prompt into ChatGPT and get a 1,000-word article with the right keywords in mere seconds.
I would think about the ethics and environmental impact of AI with every ChatGPT query I made. But I had no way of completing everything on my plate without it. And this was by design — there is no version of that job that doesn’t depend on AI. After just a few months in that role, I walked away from it.
I don’t want to be complicit in using technology that’s furthering the climate crisis and presenting us with unprecedented ethical dilemmas. I’m also a former college lecturer; I taught English composition and warned students about the dangers plagiarism posed to their academic careers. Years later, I found myself using what is essentially a plagiarism machine on a daily basis. It’s made me question whether I even want to continue in marketing as a profession.
Since leaving that role, I haven’t touched ChatGPT or any other generative AI tools. But I can’t avoid AI completely. I’ve shut off Gemini in my Google Workspace, not wanting it in the docs where I do my real writing. Yet AI summaries now sit at the top of Google search results. And Meta’s products keep pushing me to generate AI images or try a chatbot, all in a thinly disguised effort to have real users train their algorithms for free.
But it’s not just big tech companies shoving AI down our throats. Recently, I’ve seen others try to sell spiritual seekers on the promises of AI, too.
The Ethics of Selling AI “Guidance”
There’s an author and lifestyle influencer I’ve followed on and off through the years who recently launched an online community. In addition to the feel-good podcasts, affirmations, and classes she offers, this influencer has also embraced AI and is selling it to her followers as a tool for manifestation. The sales page for her community advertises a custom GPT, trained on years of her writing and video content, that acts as a personal assistant and life coach in your pocket.
For the average person, who might ask for style advice or a new morning routine, this GPT is fairly harmless. But what about someone who isn’t in a good headspace, like the people in the Rolling Stone article? Will they begin to make radical changes in their life based on the advice they receive? What’s more, will they blame the influencer if those drastic actions have serious consequences?
This kind of offering also does something more insidious. It mimics an actual human relationship, a connection that isn’t really there. (Kind of like social media “influencing” in general.) Even if the GPT says it’s for entertainment only, it still runs the risk of creating that one-sided parasocial relationship where vulnerable people mistake a stranger for a friend.
I’m not mentioning this to mock the influencer’s business or get her “canceled.” (I do hope that she has some good legal disclaimers on that GPT, though!) But in light of the mental health issues we’re seeing related to AI, this kind of offering raises a bunch of ethical questions, especially if the user faces harm.
Do we place the onus for safe AI use squarely on the user? What if that person is having a mental health crisis? Is it still their responsibility, or is that a little like blaming the victim?
Are the AI companies responsible for any harm to users’ mental health? If so, who bears that responsibility? The founders? The programmers?
Should people create custom GPTs designed to give life advice? Even if the content is well-intentioned, what if the AI misinterprets the message and gives “bad” advice?
And of course: Should we be using AI for personal advice or spiritual guidance in the first place?
I don’t have any answers to these questions. I certainly wouldn’t ask AI, though. Maybe this is something we in the spiritual community need to discuss, alongside the conversations already happening in the tech world.
My only answer is this: Whether for magic or simply self-improvement, I think it’s best we leave AI out of it. I don’t ask people for advice if I wouldn’t want to live the way they live. And AI can’t ponder the great mysteries of life… because it isn’t alive. After all, it’s called artificial intelligence for a reason.
I would love to hear your thoughts! Leave a comment below.