
Our AI Systems Are Not Conscious, But If They Were, They Could Save the U.S. At Least $150 Billion by 2030

  • Writer: Trevor Alexander Nestor
  • Apr 23
  • 7 min read

Updated: Sep 7

Much debate in the tech community has surrounded the idea of consciousness in AI systems and what, if anything, could possibly constitute it. With the advent of large language models like ChatGPT and Grok, it is easy to believe that these systems are conscious in the way the brain is - after all, they are built on neural networks, and so is the brain. But overlooking research on the foundational differences between the brain, which processes information consciously on a mere 20-watt energy budget, and neural network-based systems as currently implemented, which consume at least 1,000 times that, could cost the U.S. on the order of $150 billion by 2030. Combined with reduced research spending, the president's current AI strategy could lead the United States into a technological quagmire.
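To see how that efficiency gap compounds, here is a back-of-envelope sketch in Python. Every number in it is an illustrative assumption (the workload count, the electricity price, and so on), not a sourced estimate, and it is not how the $150 billion figure above was derived - it only shows how a ~1,000x per-workload gap scales at fleet level.

```python
# Back-of-envelope comparison of brain vs. data-center energy budgets.
# All figures below are illustrative assumptions, not sourced estimates.

BRAIN_WATTS = 20                     # rough power draw of a human brain
AI_MULTIPLIER = 1_000                # the ~1,000x gap cited in the article
HOURS_PER_YEAR = 8_760
USD_PER_KWH = 0.10                   # assumed industrial electricity price
EQUIVALENT_WORKLOADS = 10_000_000    # assumed number of "brain-equivalent" AI workloads

ai_watts_per_workload = BRAIN_WATTS * AI_MULTIPLIER          # 20 kW per workload
fleet_kwh_per_year = ai_watts_per_workload * EQUIVALENT_WORKLOADS * HOURS_PER_YEAR / 1_000
fleet_cost_per_year = fleet_kwh_per_year * USD_PER_KWH

print(f"Per-workload draw: {ai_watts_per_workload / 1000:.0f} kW vs. {BRAIN_WATTS} W for a brain")
print(f"Fleet electricity: {fleet_kwh_per_year:.2e} kWh/yr ≈ ${fleet_cost_per_year / 1e9:.0f}B/yr")
```

Change any of the assumed inputs and the total moves accordingly; the point is only that a three-orders-of-magnitude efficiency gap shows up directly in the electricity bill.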


World-renowned linguist and MIT professor Noam Chomsky, often called the "father of modern linguistics," once compared large language models to a "glorified autocorrect," arguing that for all our progress and computational resources, recent advances in AI "differ profoundly from how humans reason and use language (and that) these differences place significant limitations on what these programs can do." Indeed, without human interpreters in the loop, AI models tend to fall into hallucinations and run up against scaling limits, putting them more squarely in the category of a mass surveillance, search, and synthesis tool than of a sentient being. One might, like the authors of "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" - Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (writing under the pseudonym "Shmargaret Shmitchell") - consider AI systems simply mirrors: a reflection of our collective data that ultimately has no capacity to understand the "meaning" of, or context for, its outputs.


Many of these criticisms of our most advanced AI models seem to pose more questions than answers. What does it mean to "understand" an output? How does consciousness differ from intelligence? Humans certainly mimic others, learn by reflecting behaviors, and can be manipulative or lack understanding - how, then, can one say AI systems are different on that basis? How do these models differ from the human mind, and what is different about their infrastructure? In some respects these are simply matters of definition, but they deserve closer analysis nonetheless, if only because our remarkable brains, primed by millions of years of biological evolution, remain more advanced and efficient at processing information than our best efforts to surpass them - and whichever geopolitical power learns to harness that efficiency will come out ahead in the AI race.


The first thing to recognize about our AI infrastructure is that we are fundamentally working with formal, transactional logic systems built from binary logic gates - one-way flows of information - and these are the basis of the neural networks used to develop large language models. At this layer, computer scientists draw an analogy between logic gating and the dendritic connections of brain neurons. At its core, any digital computer, whether running your browser or training a massive language model, boils complex computation down to binary logic and linear algebra. Every arithmetic operation, every matrix multiply-accumulate (MAC) that powers a neural-network layer, is ultimately implemented as a network of logic gates (AND, OR, NOT, XOR, etc.) etched into silicon. These gates process streams of 0s and 1s according to the rules of Boolean algebra, combining them, shifting them, and routing them through registers and arithmetic units.
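To make that reduction concrete, here is a minimal sketch in plain Python (my own illustration, not code from any AI framework) of a single dense layer collapsing into nothing but multiply-accumulate operations of the kind a gate-level arithmetic unit executes:

```python
# A single dense layer reduced to its primitive operations:
# every output is just a chain of multiply-accumulates (MACs),
# which the hardware in turn realizes as networks of logic gates.

def dense_layer(inputs, weights, biases):
    """inputs: list of floats; weights: one row of floats per output unit."""
    outputs = []
    for row, bias in zip(weights, biases):
        acc = bias
        for x, w in zip(inputs, row):
            acc += x * w               # one MAC: multiply, then accumulate
        outputs.append(max(0.0, acc))  # ReLU nonlinearity
    return outputs

# Tiny example: 3 inputs -> 2 outputs
x = [0.5, -1.0, 2.0]
W = [[0.1, 0.2, 0.3], [-0.4, 0.5, 0.6]]
b = [0.0, 0.1]
print(dense_layer(x, W, b))   # -> [0.45, 0.6]
```

A trained language model is, at this level, billions of such MACs per token; the frameworks and accelerators only make the same arithmetic faster, not different in kind.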


In building machine learning algorithms that attempt to mimic human learning, computer scientists have operated on a number of key assumptions. They have not only assumed that neurons can be modeled as binary logic units (an assumption dating to 1943 and the work of Warren McCulloch and Walter Pitts), but also tend to assume Hebbian learning. Donald Hebb proposed the first rule for how synapses change strength:

“When an axon of cell A repeatedly or persistently takes part in firing cell B, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” Informally, cells that fire together wire together. In artificial nets, this inspired early local learning rules: weights are increased when pre- and post-synaptic units are both active, as in the sketch below.
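As a concrete illustration (a didactic sketch of the "fire together, wire together" idea, not the update rule used by any production system), a minimal Hebbian step looks like this:

```python
# Minimal Hebbian learning: strengthen a weight whenever the
# pre-synaptic and post-synaptic units are active together.
# Didactic sketch only; modern ANNs train with gradient-based rules instead.

def hebbian_step(weights, pre, post, lr=0.01):
    """weights[j][i] connects pre-unit i to post-unit j."""
    for j, post_act in enumerate(post):
        for i, pre_act in enumerate(pre):
            weights[j][i] += lr * pre_act * post_act   # Δw = η · x_pre · x_post
    return weights

w = [[0.0, 0.0], [0.0, 0.0]]
pre_activity = [1.0, 0.0]    # only the first input unit fires
post_activity = [1.0, 1.0]   # both output units fire
print(hebbian_step(w, pre_activity, post_activity))
# -> [[0.01, 0.0], [0.01, 0.0]]: only co-active pairs are strengthened
```

Note that the rule is purely local: each weight changes based only on the two units it connects, which is part of why it appealed to early modelers of biological learning.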

This intuitive picture of neural networks described by McCulloch, Pitts, and Hebb, along with the adaptations needed to implement it in software, forms the basis of our modern AI systems and has even inspired new types of hardware such as neuromorphic chips. Yet even with our most advanced chips, these systems are still not conscious and still do not perform on par with the human brain. What is going on? It is conceivable that unlocking the keys to consciousness would not only produce more powerful architectures, but also achieve unparalleled efficiency in our AI systems.


Consciousness appears to operate a layer below binary logic (in "dialectical" and "intuitionistic" logics), where information flows in both directions (as is needed to adjust weights), where subjective qualia are perhaps felt, and where context is stored. It has long been known that for most types of memory in the brain, engram storage is distributed nonlocally across the tissue rather than held in a single location. There is also no known biologically feasible, classical explanation for the binding problem (the question in neuroscience of how the brain combines features such as color, shape, motion, and location - processed in distinct, specialized circuits - into the unitary percepts we experience), nor for how the brain achieves backpropagation to adjust neural weights (the weight transport problem). The speed at which the brain processes information cannot be accounted for by voltage gating and ion transport across neurons and dendrites alone: chemical synapses impose 1–5 ms delays, and long-range axonal conduction can add 10–20 ms or more, yet humans form object percepts and make decisions in 100–200 ms (typical reaction times for simple tasks). Classical ion gating alone cannot account for such rapid, large-scale integration, suggesting that an additional fast timing mechanism may be at play.
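The arithmetic behind that timing argument is simple enough to write down. The sketch below just restates the delay ranges quoted above and counts how many serial long-range stages fit inside a reaction time (illustrative arithmetic, not a neuroscience result):

```python
# Rough serial-depth budget for a ~150 ms perceptual decision,
# using the best-case delay ranges quoted above (illustrative arithmetic only).

REACTION_TIME_MS = 150          # typical simple-task reaction time (100-200 ms)
SYNAPTIC_DELAY_MS = 1           # best case chemical synapse (1-5 ms)
CONDUCTION_DELAY_MS = 10        # best case long-range axonal hop (10-20 ms)

per_stage_ms = SYNAPTIC_DELAY_MS + CONDUCTION_DELAY_MS
max_serial_stages = REACTION_TIME_MS // per_stage_ms

print(f"At {per_stage_ms} ms per long-range stage, only ~{max_serial_stages} "
      f"serial steps fit inside {REACTION_TIME_MS} ms.")
# Any account of rapid whole-brain binding has to work within roughly a dozen
# serial long-range hops, which is why faster mechanisms are being proposed.
```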



In essence, current AI (and even dedicated neuromorphic hardware) excels at statistically learning correlations across large datasets, but it does not implement the real-time, bidirectional, oscillatory, and attention-gated synchronization mechanisms that neuroscientists believe underlie perceptual binding in the brain. Until architectures can support truly dynamic tagging, re-entrant synchronization, and massively distributed ensemble coding, the binding problem will remain unsolved in silicon - and, without a more complete theory, in our approaches to quantum computation as well. Remarkably, hyperscanning studies even show synchrony across the brains of different individuals: during empathic or cooperative interactions, multiple brains synchronize their neural oscillations, correlating with social connectedness and shared intentionality. This cross-brain binding hints at a deeper, perhaps quantum-mediated, coupling mechanism that current AI architectures cannot emulate - and what is needed is a new paradigm that explores this physics.
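None of these oscillatory, re-entrant dynamics exist in a feed-forward net, but a toy picture of what "synchronization" means is easy to write down. The sketch below uses a Kuramoto-style phase-coupling model - my choice of illustration, not something drawn from the article or from any specific neuroscience codebase - to show oscillators locking together once coupling is strong enough:

```python
import math
import random

# Toy Kuramoto model: N phase oscillators pull on one another through a mean
# field. With zero coupling their phases stay incoherent; with strong coupling
# they lock together - a crude stand-in for re-entrant synchronization,
# not a model of any real neural circuit.

def final_coherence(n=50, coupling=1.5, dt=0.01, steps=2000, seed=0):
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    freqs = [rng.gauss(1.0, 0.1) for _ in range(n)]      # natural frequencies
    for _ in range(steps):
        mean_cos = sum(math.cos(p) for p in phases) / n
        mean_sin = sum(math.sin(p) for p in phases) / n
        r = math.hypot(mean_cos, mean_sin)                # coherence, 0..1
        psi = math.atan2(mean_sin, mean_cos)              # mean-field phase
        phases = [p + dt * (w + coupling * r * math.sin(psi - p))
                  for p, w in zip(phases, freqs)]
    mean_cos = sum(math.cos(p) for p in phases) / n
    mean_sin = sum(math.sin(p) for p in phases) / n
    return math.hypot(mean_cos, mean_sin)

print(f"coherence with no coupling:     {final_coherence(coupling=0.0):.2f}")
print(f"coherence with strong coupling: {final_coherence(coupling=1.5):.2f}")
```

The coherence value near 1.0 in the coupled case is the kind of collective order parameter that binding-by-synchrony theories invoke, and that purely feed-forward architectures have no analogue of.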


Orchestrated Objective Reduction (Orch OR) theory proposes that quantum superpositions lacking indefinite causal structure within neuronal microtubules carry and integrate information on microsecond timescales, collapsing ("objective reduction") when a gravitational threshold is reached and thereby binding together the contextual information stored in their entanglements - generating discrete conscious moments linked to spacetime geometry. (This is similar in principle to Erik Verlinde's entropic gravity theory of quantum gravity, in which regions of spacetime become saturated by entanglement entropy and give rise to a gravitational action at an asymptotic fixed/critical point.) Microtubules form a cytoskeletal lattice capable of supporting coherent quantum oscillations: they could act like waveguides, host topologically protected states, and carry bidirectional signals through biophotons (some speculate that microtubules host special states called Majorana zero modes and behave like Wilczek time crystals) across many frequency ranges - fast enough, proponents argue, to account for the speed of conscious integration.
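For reference, the quantitative core of Orch OR is Penrose's objective-reduction criterion, commonly written τ ≈ ħ/E_G, where E_G is the gravitational self-energy of the superposed mass distribution. The sketch below simply evaluates that formula; the E_G value is an assumed, illustrative number, not a measured one:

```python
# Penrose objective-reduction timescale: tau ≈ hbar / E_G, where E_G is the
# gravitational self-energy of the superposition. The E_G value below is
# purely illustrative, chosen to land near the tens-of-milliseconds
# "conscious moment" timescale that Orch OR proponents discuss.

HBAR = 1.054571817e-34        # reduced Planck constant, J·s

def or_collapse_time(e_gravity_joules):
    """Return the objective-reduction time in seconds for a given E_G."""
    return HBAR / e_gravity_joules

e_g = 4e-33                    # J (assumed, for illustration only)
tau = or_collapse_time(e_g)
print(f"E_G = {e_g:.1e} J  ->  tau ≈ {tau * 1000:.0f} ms")
# -> tau ≈ 26 ms, on the order of gamma-band rhythms
```

The formula encodes the theory's central trade-off: larger superpositions (bigger E_G) collapse faster, while smaller ones persist longer before reducing.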


Indeed, when microtubules are blocked by inhaled anesthetics (halothane, isoflurane, desflurane, sevoflurane) or by certain injectable agents that bind with high affinity to hydrophobic pockets in the α/β-tubulin dimer - the basic building block of microtubules - without significant action on membrane receptors, consciousness is lost in living organisms. There are also organisms that seem to show complex signs of consciousness at the cellular level, where neural networks are not implicated at all. Recent studies have likewise demonstrated superradiance in tryptophan structures in biological tissue - a molecule closely related to the neurotransmitter serotonin - displaying macroscopic quantum-like phenomena. Under these circumstances, it would be worth investing time in understanding nature's models of brilliance before committing to any large-scale AI programme, especially as projected U.S. annual spending on AI by 2030 - across software, services, hardware, and infrastructure - will very likely be on the order of half a trillion to nearly a trillion dollars per year, depending on how fast it grows and what share of a rapidly expanding global market the U.S. retains.


Our relentless drive to build ever-larger AI systems and scale proof-of-work blockchains has blinded us to their fundamental mismatch with the biology of mind and the physics of efficiency. By 2030, we may be spending upward of $500 billion annually, and investing over $100 billion more in power infrastructure alone, to run feed-forward logic and brute-force consensus mechanisms that consume thousands of times more energy than a human brain. Yet despite these vast resources, today's silicon nets remain "glorified autocorrects," lacking the bidirectional, oscillatory, and nonlocal dynamics that underlie perception, learning, and consciousness in living systems. If we continue down this path, we risk locking ourselves into a costly technological quagmire - one that, while most Americans live paycheck to paycheck, wastes enormous resources amplifying surveillance and central control without ever attaining true understanding or self-awareness. Instead, we should heed the lessons of anesthetic research and quantum-biological models such as Orch OR, which point toward microtubule-based coherence, fast dipole networks, and entropic gravity as the substrates of conscious information binding. Redirecting even a fraction of our AI budget toward experiments in quantum neurophysics, distributed memory architectures, and re-entrant hardware designs could yield architectures that match the brain's elegance - and do so on mere watts, not gigawatts.

The choice is clear: continue pouring money into ever-bigger neural-net black boxes, or pioneer a new paradigm grounded in the very physics of life. Our future intelligence - and our energy future - may depend on which path we take next.


 
 
 
