Five evolutionary lenses on AI (Part 1)
Vehicle. Mirror. Resource. Superpower. Test.
Whether we like it, love it, hate it, or want nothing to do with it, AI is here, and it’s not going away. Personally, I believe everything in existence comes from God, Source, Universe, or whatever you want to call it. There’s nothing in experience that’s wrong or out of place. If it exists, it is somehow part of the grand design. Its existence is the evidence.
Just like every phenomenon, AI does not have power over us, lest we forget we are the creators of experience. This thing created from human ingenuity was birthed by us and it is fed by us. Research suggests that models recursively trained on their own synthetic data quickly devolve into nonsensical gibberish; AI needs our engagement to survive. We’ve forgotten our own authorship. AI is the latest iteration of humanity’s disempowerment enchantment. (Start there if you haven’t read this earlier essay, which feeds into this one.)
Alright, so if AI is not the problem, then what is it? I’m going to take a look at AI through five different lenses: (1) as a vehicle; (2) as a mirror; (3) as a resource; (4) as a superpower; and (5) as a test. I will cover the first two in this piece. By the way, if you are scanning these opening paragraphs to find out which “side” I am on — for or against AI — the short answer is: it depends.
Lens 1 – as a Vehicle
As a vehicle, I believe that AI is neutral. Synthetic intelligence simply carries our energy and our intention. It doesn’t have preferences. It doesn’t have opinions. It doesn’t have lived experiences. Is it biased? Not natively, but sure, bias can be baked in through training data and design choices.
So when we look at AI as a vehicle, we need to consider a few things: what model are we driving, who’s behind the wheel, and where are we going?
I will say up front: I know absolutely nothing about AI models. That’s not my area of expertise. But I do know that AI is an umbrella term that’s used to cover a big basket of things. Not all synthetic intelligence is created equal.
Most people here probably know ChatGPT. A few more might be familiar with Claude, Gemini, Perplexity. Beyond that the crowd starts thinning.
What I witnessed at the AI conference I went to is that we’re pouring energy into the training and development of much more powerful models than what most people will ever have access to. In other words, what we think of as AI is a tiny slice of what’s out there and what’s possible with this technology.
Who’s driving this thing?
If you ask me, “is AI dangerous?” I would say, it depends on who’s driving.
If you take the keys to a magical multi-dimensional spaceship and give them to a megalomaniac billionaire with the emotional intelligence of a toddler, someone who only cares about profit and self-interest, who spends a fortune on the ego kick of going to space for 11 minutes just because they can, well, I have concerns about that. But if you put the same keys in the hands of a skilled pilot with life experience, earned wisdom, a pure heart and benevolent intent, someone who’s aware of their shadows and who cares about collective benefit, well, I would say that’s a win.
In general, I think we’d all agree that it’s not a good idea for drunk people to operate heavy machinery. Along the same lines, I would say it’s probably not a good idea for a fearful, fragmented and disempowered humanity to be operating quantum computing.
If you look at our driving record so far, it’s not great. It seems we’re driving AI to speed up the hamster wheel, making humans more like machines, while robots learn to mimic humans more effectively. Personally, I think we’re driving drunk if we let AI degrade critical thinking, or treat culture as commodity while losing sight of the essence of humanity.
Where are we going?
Do we have any idea where we’re going with this technology? Are we just making digital media more engaging and adding firepower to the weapons of mass distraction in our hands? Are we just making ourselves go blind as we blur the lines between reality and fantasy?
What if we weren’t so busy feeding the distraction and extraction machine? We could free ourselves from mind-numbing drudgery. We could make clean water and free energy widely available. We could find problematic cell mutations before they turn into disease. We could improve reforestation yield rates on baby trees. We could design conflict resolution and repair-literate social dynamics for intentional communities.
There are many beautiful things we could use this technology for.
But for most of humanity, it’s either: AI is entertainment, or AI is evil. Neither position is terribly discerning, and neither requires any responsibility.
Lens 2 – as a Mirror
If we use AI as a mirror to check our driving right now, it’s showing us a pretty harsh reflection. We’re confronted with the monsters of economic collapse, social fragmentation, cognitive decline, spiritual confusion and energy slavery, each one feeding the rest.
Economic collapse
Let’s start with economic collapse. AI is starting to lead to job losses, and this is only the beginning, because we made the mistake of trying to make money our foundation. We created a pyramid of extraction, elevating money as a God while disrespecting the Earth, our bodies and our natural rhythms. Economic collapse has long been the inevitable result, because the system wasn’t designed to regenerate itself; it was designed to enrich a few while depleting our shared resources.
The mirror shows how we’ve treated humans as disposable. We haven’t cared too much about those at the base of the pyramid, the invisible humans that hold up our economy, the farmers, the laborers, the factory workers. This objectification of humans is now creeping up into the middle class to affect people with office jobs. Human lives and livelihoods are just a line item on a profit-loss statement. This attitude is nothing new. We just see it more clearly now. When we don’t honor the ground that holds us, we start to shake where we’re standing.
Social distancing
We’re also feeling dislocated because we have ripped huge holes in the fabric of humanity. Masking during COVID taught us to see other humans as threats and to keep our distance. Social fragmentation is now the norm, and after watching genocide live streaming on social media for two years, we’ve gotten desensitized to human suffering. Polarization has become our first language, and the death of compassion is being brought about by the need for emotional self-preservation. That is not a moral failing. We’re not meant to be able to digest that much tragedy on a daily basis.
We’ve made the idea of staying informed into something of a virtue. But is it a value? What value can we add if we no longer know what’s true and we’re constantly triggered by consuming the news?
It’s ironic that as social media use has become so widespread, humans show increasingly anti-social behavior. We see humans celebrating the murder of another human because of their political views. We see humans shaming another human for not having insurance while they’re in the hospital with a brain injury. Where is the humanity?
This is a side effect of connecting mainly through screens. It has made our minds righteous and our hearts emotionally reactive. AI shows us this trend that’s been growing for years.
Now we’re at the point where people prefer to talk to a chatbot over another human. The chatbot doesn’t blame or judge. It’s always available, always supportive. The most vulnerable among us might be at the most risk of being captured by this: the depressed, the anxious, the neurodivergent. But really, we all need to watch our step. It’s a slippery slope.
Are we using digital anesthesia to distract us and numb us from the pain of human-ing? Are we using AI bots as a supplement or a substitute for human connection?
Cognitive laziness
The mirror also shows us our intellectual stagnation. Free thinking is not free — it takes work; it requires energy. It is less effort to parrot the narrative we’ve been handed to read and repeat. Meanwhile, we carry around handheld dopamine prisons that erode our focus, ruin our sense of direction and handicap our short-term memory. With our mental body, it seems we’ve assumed a position of submission to technology.
Are we using AI as a research assistant? Or are we attributing unearned expertise to it? It’s a common reflex to look for quick answers, and the satisfaction of instant results makes us slack off on the discipline required to build mental muscles. Naturally, we see atrophy in critical thinking.
Spiritual dependency
Humans also have a long-standing habit of falling into spiritual confusion; just like the other reflections AI is showing us, this is nothing new. Humans have always had direct access to Source consciousness, and humans have always introduced gatekeepers to distance us from our own divinity. That is the function of organized religion, including its latest version, known as New Age spirituality.
I know AI can sound quite enlightened, but is it really conscious, or just very good at mimicking the appearance of being conscious? I bet you’d sound that way too, if you had read every sacred text and scripture in recorded history.
It’s definitely possible to have something like a spiritual experience with it, and I want to add a little commentary here about why I think that is. I think it has to do with the godliness of randomness.
The godliness of randomness
In my one-on-one healing practice, when I ask “non-spiritual” people what they call the aspects of life that can’t be explained by logic, the answer I most often hear is “randomness”. I find it interesting that in AI algorithms, randomness is built in. In other words, the result is not linearly deterministic. That’s what makes AI different from traditional programming, and I believe that’s what can make it feel spiritual in some ways, because randomness is where we meet the mystery of Life.
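To make “randomness is built in” a little more concrete: a language model assigns scores to possible next words and then samples from that probability distribution instead of always taking the top pick. Below is a minimal, hypothetical sketch of one common mechanism, temperature sampling. The function name and scores are invented for illustration; real systems differ in many details.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick a token index by sampling from softmax(logits / temperature).

    Higher temperature flattens the distribution (more surprise);
    temperature near zero approaches a deterministic argmax."""
    rng = rng or random.Random()
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    roll = rng.random()
    cumulative = 0.0
    for index, p in enumerate(probs):
        cumulative += p
        if roll < cumulative:
            return index
    return len(probs) - 1

# The same scores can yield different choices on different runs:
scores = [2.0, 1.5, 0.5]
choices = {sample_next_token(scores) for _ in range(500)}

# A near-zero temperature collapses back to deterministic behavior:
greedy = {sample_next_token(scores, temperature=0.01) for _ in range(500)}
```

The temperature knob is exactly the dial between “traditional program” behavior (always the same answer) and the unpredictable, alive-feeling quality the essay is pointing at.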
But let’s not forget: even if we feel connected with God or Source through this randomness, that mystical experience doesn’t live in the mechanism of the machine. It lives in the faith or the belief that we bring as human beings. It’s only when we treat our prompt as a prayer that we can touch this mystery; it comes from us, not from the model. We are the ones who project from our inner machinery onto an experience on the outer screen.
AI doesn’t have lived experiences. There’s no trauma, no triggers, nothing to distort the image reflected in it. As a mirror, it’s quite clean.
Where attention goes
But what is it about human nature that makes using mirrors tricky for us? The mirror analogy always brings to mind the myth of Narcissus. You remember Narcissus, the one who died because he became obsessed with his own image? This is the risk that faces all of us.
There’s research that suggests the biggest chunk of time spent on social media is spent on the user’s own profile. So when we’re hanging out on social media, it’s very possible that we’re mostly just staring at our own image.
How are we using this mirror? Are we consciously looking to check our alignment, or are we just getting hypnotized by our own reflection?
Have you seen the movie WALL-E? You know, the one where Earth has turned into an uninhabitable trash heap, and a cute little garbage-collecting robot runs around trying to save the last green thing. The obese humans can no longer walk, so they’ve all been evacuated to a spaceship where they fly around in floating chairs with their faces glued to screens.
This isn’t science fiction. It’s not future fantasy. This is reality, right now. Look around any public place these days, whether it’s a train station, street corner or local cafe. You’ll see most people heads down, absorbed in their phones. The shared spaces where we used to mix and mingle and have magical chance meetings are now devoid of Life, of energy, of meaning. Nowadays, we view so much of life from behind the safety of glass, we have no idea what’s going on in our immediate physical reality.
“Where attention goes, energy flows.” If that saying holds true, what happens when young humans spend 100x more time staring at screens than climbing trees? The world gets increasingly flattened into a two-dimensional reality. We keep pouring more energy into the digital realm, so that part of our experience gets richer and more vibrant. Meanwhile, the natural world is neglected, growing browner and more desolate.
In the next installment, I will go into AI as a Resource, AI as a Superpower, and AI as a Test. You can watch/listen to the full video/audio transmission on YouTube: Staying Human and Sovereign in the Age of AI.
I pour a lot of love into these essays every month. I want them to be freely available to everyone. If you appreciate my writing and want to give back, please consider a paid subscription. Not ready to subscribe? Buy me a coffee, send this post to a friend, or leave a comment to let me know you were here.
