
Research Blog


Physicist Edward Witten and others have argued that a two-dimensional bosonic quantum field theory with Monster symmetry (studied by my own undergraduate mathematics professor, Dr. Richard Borcherds) might be the ultraviolet anchor of a consistent theory of quantum gravity (“Three-Dimensional Gravity Revisited,” arXiv:0706.3359). In other words, this strange mathematical object, the Monster CFT (also known as the Moonshine Module, which connects to the Monster group through the j-function), may be the missing link needed to complete a theory of quantum gravity: it would describe a quantum Riemannian, Dirac–Kähler-like entropic dilation operator governing spacetime itself, with spacetime emerging from the entanglement entropy of particles and collapsing into the gravitational action (via the spectral action principle, into the Einstein-Hilbert action).


As I've argued in previous blog posts, modular invariance - the mathematical rule that locks a function into a rigid symmetry under twisting and inversion of its argument - could force the zeta zeros to fall exactly where the Riemann Hypothesis predicts, provided the nontrivial zeros of the Riemann zeta function form the spectrum of this missing quantum operator. That would amount to solving the Riemann Hypothesis by realizing the self-adjoint operator anticipated by the Hilbert-Polya conjecture in a physical system satisfying Li's criterion, and the symmetries imposed by the Monster CFT could be what enforces it. As a system approaches the ultraviolet fixed point predicted by Asymptotically Safe Gravity, the effective dimensionality reduces to 2, and the physics can be described with only bosonic degrees of freedom via fermionic condensation. In Einstein–Cartan theory, the spins of fermions induce a repulsive four-fermion interaction at extreme densities, preventing a singular crunch and triggering a finite bounce that renders the theory asymptotically safe. At that Planck-scale pivot point, fermions could condense into a bosonic phase whose only consistent fixed point is the Monster CFT itself, lifting the construction from an ad-hoc assumption to a natural consequence of gravity's spin structure. Several numerical studies have reported evidence consistent with such a UV completion of quantum gravity.
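
For reference, Li's criterion can be stated concretely. Define, for each positive integer n,

\[
\lambda_n \;=\; \sum_{\rho}\Big[1-\Big(1-\tfrac{1}{\rho}\Big)^{\!n}\Big]
\;=\; \frac{1}{(n-1)!}\,\frac{d^{n}}{ds^{n}}\Big[s^{\,n-1}\log\xi(s)\Big]\Big|_{s=1},
\]

where the sum runs over the nontrivial zeros ρ (taken in conjugate pairs, with multiplicity) and ξ(s) = ½ s(s−1) π^{−s/2} Γ(s/2) ζ(s) is the completed zeta function. Li's criterion says the Riemann Hypothesis is equivalent to λ_n ≥ 0 for every n ≥ 1, which is the positivity condition any candidate spectral realization would have to reproduce.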


In his 2007 paper “Three-Dimensional Gravity Revisited,” Edward Witten showed that pure gravity in a three-dimensional anti-de Sitter background - a theory with a negative cosmological constant and no local degrees of freedom beyond black holes - can only be consistent at discrete values of the coupling. At the smallest allowed value of that coupling, he argued, the holographically dual two-dimensional conformal field theory would be holomorphic with central charge 24, with the Monster module as its space of states and the modular J-function as its partition function. The Monster CFT (also known as the Moonshine Module) thus appears as the ultraviolet completion of pure AdS₃ gravity. While this is intriguing, our universe is not an anti-de Sitter (AdS) space: it has a small positive cosmological constant, consistent with its accelerating expansion, and it has three spatial dimensions plus time rather than the 2+1 dimensions of Witten's setup.
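
For concreteness, the partition function in question is the modular J-function, the j-invariant with its constant term removed:

\[
Z(\tau) \;=\; J(\tau) \;=\; j(\tau) - 744 \;=\; q^{-1} + 196884\,q + 21493760\,q^{2} + \cdots, \qquad q = e^{2\pi i \tau},
\]

at central charge c = 24. The famous Moonshine observation is that 196884 = 196883 + 1: the first nontrivial coefficient is the dimension of the smallest faithful irreducible representation of the Monster group, plus one.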


If we wanted to fix this with a hypothesis, we might start with an audacious reinterpretation of the Riemann Hypothesis, drawing on Alain Connes's noncommutative-geometry approach: construct an abstract “spectral triple” - essentially an operator on a Hilbert space - whose only allowable vibrations coincide with those mysterious zeros. In this setup, any frequency straying off the critical line where the zeros must lie would break the Monster symmetry and become physically inadmissible. In effect, the Riemann Hypothesis follows if that Monster-symmetrized operator can exist without inconsistency. To match the sort of universe we live in (three dimensions of space and one of time, in a de Sitter universe), a four-dimensional spacetime is split between an anti-de Sitter interior and a de Sitter exterior, joined by a thin spherical wall. By studying how a quantum field oscillates across that wall - enforcing the usual continuity of the field but a precise jump in its derivative - we might find a quantization condition. Remarkably, that condition mirrors the very equation whose roots are the nontrivial zeros of the Riemann zeta function. In other words, the allowed “notes” of this exotic universe line up exactly with the primes' hidden rhythms.
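
To illustrate the mechanism (and only the mechanism - this is not the AdS–dS wall equation itself), here is a minimal one-dimensional toy: a field confined to a box with a thin wall inside it, where we demand continuity of the field and a fixed jump in its derivative at the wall. The box size L, wall position a, and wall strength g below are arbitrary illustrative numbers; the point is simply that wall-matching conditions turn a continuum of frequencies into a discrete spectrum.

```python
# Toy illustration (not the AdS-dS calculation itself): a 1-D field on [0, L]
# with hard walls at both ends and a thin "domain wall" of strength g at x = a.
# Requiring the field to be continuous at the wall while its derivative jumps by
# g * psi(a) yields the quantization condition  k*sin(kL) + g*sin(ka)*sin(k(L-a)) = 0,
# whose discrete roots k_n are the allowed "notes" of the toy universe.
import numpy as np
from scipy.optimize import brentq

L, a, g = np.pi, 1.3, 5.0   # hypothetical box size, wall position, wall strength

def matching_condition(k):
    return k * np.sin(k * L) + g * np.sin(k * a) * np.sin(k * (L - a))

# Scan for sign changes, then refine each root with Brent's method.
ks = np.linspace(0.05, 30.0, 20000)
vals = matching_condition(ks)
roots = [brentq(matching_condition, ks[i], ks[i + 1])
         for i in range(len(ks) - 1) if vals[i] * vals[i + 1] < 0]

print("first allowed wavenumbers:", np.round(roots[:8], 4))
```

In the actual construction the two regions would carry AdS- and dS-type wave equations and the wall would sit at the self-dual point, but the root-finding structure of the matching problem is the same.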


The Monster CFT's own invariances lock in the spectral symmetry that enforces the critical-line condition. This holographic picture unites the abstract operator discussed above and the wave model described here into a single framework: the Monster lives on the wall, and its symmetry dictates the bulk physics all the way from the high-energy ultraviolet behavior to the deep infrared spectrum. In a series of numerical experiments, we can solve the wave equations in the AdS–dS toy universe, recover hundreds of discrete frequencies, and compare them to the known Riemann zeros; the match is striking. We can extract the Li coefficients - whose positivity is an equivalent reformulation of the Riemann Hypothesis - from the computed spectrum and find them all positive, as required. We can even nudge the wall's modular parameter slightly away from perfect self-duality and watch the spectrum immediately lose its prime-number alignment, underscoring that the Monster's exact symmetry is essential. In one study attributed to D. B. Kaplan and collaborators (2022, unpublished or in preprint), the authors investigated modular-invariant partition functions of two-dimensional conformal field theories and found that the asymptotic behavior of these partition functions, particularly near the high-temperature (τ → i0⁺) limit, can be expressed in terms involving the Riemann zeros.
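
As a sanity check that is independent of any wall model, one can approximate the first few Li coefficients directly from tabulated zeta zeros using mpmath's built-in zero finder. The sum over zeros is truncated at K zeros here, so the values are only rough approximations of the exact λ_n.

```python
# Approximate the Li coefficients  lambda_n = sum over nontrivial zeros rho of
# [1 - (1 - 1/rho)^n]  using the first K zeros (and their conjugates) from mpmath.
# Truncation makes these rough approximations; Li's criterion says the exact
# lambda_n are all nonnegative if and only if the Riemann Hypothesis holds.
from mpmath import mp, zetazero

mp.dps = 20                 # working precision (decimal digits)
K = 50                      # number of zeros to include (illustrative truncation)
zeros = [zetazero(k) for k in range(1, K + 1)]   # rho_k = 1/2 + i*t_k

def li_coefficient(n):
    total = mp.mpf(0)
    for rho in zeros:
        # the terms for rho and its conjugate are complex conjugates, so take 2*Re
        total += (1 - (1 - 1 / rho) ** n).real * 2
    return total

for n in range(1, 6):
    print(n, li_coefficient(n))
```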


Vacuum energy in quantum field theory is normally catastrophically large, but the symmetric pairing of positive and negative modes enforced by the Monster causes nearly complete cancellations. What remains can be as small as the observed cosmological constant (which drives the accelerating expansion of the universe and is commonly attributed to dark energy), with no fine-tuning required. The discrepancy between the enormous predicted vacuum energy and the small measured cosmological constant is known as the vacuum catastrophe, while the Hubble tension is the discrepancy between values of the Hubble constant H₀ inferred from the early universe and those measured locally today. The presence of an AdS core inside our de Sitter universe naturally yields different expansion rates inside and outside a cosmic “bubble,” offering a fresh explanation for why early-universe measurements of H₀ (from the CMB) differ from local observations.


A 2024 preprint by S. Khaki explored the idea that the “Hilbert space dimension” in quantum gravity might take only special discrete values (inspired by Witten's comment that quantum de Sitter space might be associated with sporadic group structures). Khaki assumed a de Sitter universe whose entropy (or Hilbert space dimension) is fixed by the Monster CFT and then computed the consequences for vacuum energy. Interestingly, it was reported that this construction could address the hierarchy and cosmological constant problems: the scale of the resulting vacuum energy (from a certain twisted sector of the theory) comes out “close to the cosmological constant” in magnitude.


In this framework we use “centaur” and “minotaur” geometries as two complementary ways of thinking about the same hybrid spacetime that stitches together an Anti-de Sitter region and a de Sitter region via a thin, self-dual domain wall. The notion of a “centaur” geometry - an asymptotically AdS spacetime that in its deep infrared smoothly opens up into a dS patch - was first introduced by Dionysios Anninos and Diego M. Hofman in their 2017 paper Infrared Realization of dS₂ in AdS₂. In that work, they exhibited a two-dimensional dilaton–gravity solution that interpolates between an AdS₂ boundary and a static-patch dS₂ core, coining the term “centaur” to describe this hybrid geometry. Subsequent authors (e.g. Iizuka & Sake’s “A note on Centaur geometry,” 2025) have extended the idea to JT gravity and explored its holographic implications, but the original centaur construction traces back to Anninos & Hofman’s 2017 paper.


In the centaur geometry, the universe looks like an AdS exterior whose usual conformal boundary at infinity is occupied by the Monster CFT. Deeper in, behind the wall, the cosmological constant flips sign and you find yourself in a dS interior whose finite horizon is then interpreted as an infrared cutoff on that same CFT. From this vantage point the Monster module plays the role of the ultraviolet anchor - you read its partition function off at spatial infinity - while the dS core regulates the long-distance behavior and guarantees that the Riemann-zero spectrum emerges in the infrared.


The minotaur geometry is the construction we propose as its inverse: it simply flips that assignment, so the AdS region lies inside the wall and dS lives outside. There is no traditional asymptotic AdS boundary, so the Monster CFT must instead reside directly on the wall itself, now thought of as the cosmological horizon of the dS exterior. In this view the Monster CFT encodes the degrees of freedom responsible for de Sitter entropy, and the same τ=i modular self-duality that enforces the critical-line spectrum also ensures there is no conical singularity or stress discontinuity at the join.


An interesting 2025 work by Tamburini et al. found a physical scenario - a Majorana fermion in Rindler spacetime - that produces the Riemann zeros as a spectrum. Rindler space can be thought of as a patch of flat space with an observer horizon, conceptually related to a dS-like horizon although technically different (Rindler space is flat, not global AdS or dS). Their formulation can be read as matching conditions between different regions (the left and right Rindler wedges), which is somewhat analogous to matching across a domain wall, and it may provide further insights into Majorana physics that could also be explored with our minotaur/centaur/Monster geometric setup.


Viewed together, the centaur and minotaur geometries illustrate a single principle: whether you treat the Monster CFT as living at infinity or on the horizon, its full modular invariance at the τ=i interface glues the two phases across the thin domain wall into one self-consistent whole. Either way, the Monster symmetry enforces both the functional-equation duality needed to pin every mode to ℜ(s)=½ and the extreme cancellations that yield a tiny cosmological constant, and it thus provides the unifying glue for prime numbers, quantum gravity, and cosmic expansion.


Here is my most recent preprint (the previous preprint needed revisions, and I have taken a new mathematical route):

Gender, sexuality, and fertility trends evade classical explanations. Phenomena often blamed on culture wars - on “the patriarchy” or “feminism” - may be better explained by an aging population, complexity accrual within social and economic institutions over time, and wealth and income inequality as post-industrial societies progress past the financialization phase of their economies.
Amid socioeconomic uncertainty, mass layoffs, an aging population, wealth and income inequality, and the decline of social support systems - while the cost of living has been rising and wages have stagnated - there is heightened anxiety and pressure associated with the traditional roles most conducive to the nuclear family.

There are many explanations for falling fertility rates, but what if the reason people can't seem to settle down, find a partner, buy a home, or start a family isn't just cultural or economic, but computational? Indeed, poorer countries tend to have higher fertility rates. What if the problem of “building a life” has, quite literally, become computationally intractable, and that intractability manifests as anxiety in young people, preventing them from pair bonding? Complexity Economics and the Spectral Theory of Value, two strands of economic theory, shed light on this phenomenon, including the less traditional attachment styles reported among Millennials and Gen Z.


Elites maintain control through feedback loops and surveillance, exploiting evolutionary desires for family to fuel labor through institutionally enforced gender roles. But this leads to alienation and to “opting out” behaviors, such as greater sexual fluidity, once the institutional requirements agents must satisfy between social and economic institutions to maintain these prescribed, socially acceptable roles become too complex.


Rather than simplistic political or cultural explanations like “the patriarchy” or “feminism,” these issues stem from escalating computational complexity in post-industrial economies, where people no longer have the bandwidth to provide the social intimacy and depth required for long-term relationships or for raising children. As societies age and institutions accumulate layers of rules, incentives, and requirements (education, credit scores, housing markets), “building a life” - including pair bonding and family formation - becomes an intractable, NP-hard problem (in the computational-complexity sense). This manifests as widespread anxiety, disrupting traditional attachment styles and leading to delayed or avoided reproduction.
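
A minimal sketch of the combinatorial blow-up being claimed here: treat “building a life” as a toy 0/1 knapsack problem, in which an agent picks a subset of milestones, each with a made-up cost and payoff, under a budget. Knapsack-type selection problems are NP-hard in general, and the naive search below grows as 2^n, doubling with every extra institutional requirement. All numbers are invented for illustration.

```python
# Toy model: choosing which "milestones" to pursue under a budget is a 0/1 knapsack,
# an NP-hard problem in general.  Brute force enumerates all 2^n subsets, so the
# search space doubles with every institutional requirement society adds.
from itertools import combinations

milestones = {              # hypothetical (cost, payoff) pairs, purely illustrative
    "degree":       (80, 9),
    "career":       (40, 8),
    "save_deposit": (60, 6),
    "credit_score": (20, 4),
    "mortgage":     (90, 7),
    "partner":      (30, 10),
    "children":     (70, 10),
}
budget = 200

best_payoff, best_plan = 0, ()
names = list(milestones)
for r in range(len(names) + 1):
    for plan in combinations(names, r):            # 2^n subsets in total
        cost = sum(milestones[m][0] for m in plan)
        payoff = sum(milestones[m][1] for m in plan)
        if cost <= budget and payoff > best_payoff:
            best_payoff, best_plan = payoff, plan

print(f"checked {2**len(names)} plans; best under budget: {best_plan} (payoff {best_payoff})")
```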


By maintaining separation between the social spaces men and women inhabit, the social scripts for men and women can be controlled - and thus exploited - to prop up the economic engine with required behaviors that sit behind paywalls, benefit central elites, or maintain institutional stability.


Housing prices, along with other cost-of-living expenses, have increased and have exponentially outpaced wages. Under complexity economics and the spectral theory of value, the socioeconomic status of agents (you and me) can be modeled by the flows of information they facilitate between social and economic institutions, or systems, as described by Luhmann's social and economic systems theory and agent-network-based modeling. These information flows can be classified by their complexity (via the computational complexity class hierarchy, or the Chomsky hierarchy in linguistic theory, both related to Kolmogorov complexity) and by the agents' ability to tractably “solve” classes of problems. Central planning elites devise a clever system of incentives and disincentives to drive the economic engine and maintain a workforce - operating like little Turing machines across a tape - as though on an endless treadmill chasing the American Dream, which is designed as an infinitely deferred promise.

Workers are burdened with increasing cost of living and stagnant wages.
Homeownership for the young has been in decline, leaving less stake in the future.

The Spectral Theory of Value was first formulated in the political‐economy literature by Theodore Mariolis, Nikolaos Rodousakis, and George Soklis, who in 2021 published the Springer monograph Spectral Theory of Value and Actual Economies: Controllability, Effective Demand, and Cycles. In their work they build on Piero Sraffa’s value framework and Rudolf Kalman’s control‐theory formalism under Complex Adaptive Systems Theory to show how the eigenvalues of a vertically integrated technical‐coefficients matrix map directly onto competing value theories. Complexity Economics emerged in the late 1980s and early 1990s out of the Santa Fe Institute, where physicist Philip Anderson, economist Kenneth Arrow, and computer scientist John Holland convened to treat the economy as a constantly evolving, agent‐based complex system. Its principal founder is W. Brian Arthur, whose early SFI papers and later book Complexity and the Economy (2014) crystallized how non‐equilibrium feedback, path dependence, and adaptive agents can be modeled in place of static equilibria.
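
As a toy version of the “eigenvalues of a technical-coefficients matrix” idea - the three-sector matrix below is invented, not taken from Mariolis, Rodousakis, and Soklis - one can compute the spectrum of an input-output matrix and read off its dominant Perron-Frobenius eigenvalue, which in Sraffa-style models bounds the maximal uniform rate of profit.

```python
# Toy spectral-theory-of-value computation: eigenvalues of a (made-up) 3-sector
# technical-coefficients matrix A, where A[i, j] is the amount of good i used to
# produce one unit of good j.  The dominant (Perron-Frobenius) eigenvalue sets the
# maximal uniform rate of profit R = 1/lambda_max - 1 in Sraffa-style models;
# the subdominant eigenvalues govern how fast the other "value modes" decay.
import numpy as np

A = np.array([[0.20, 0.15, 0.10],    # hypothetical input-output coefficients
              [0.10, 0.25, 0.05],
              [0.05, 0.10, 0.30]])

eigenvalues = np.linalg.eigvals(A)
lam_max = max(eigenvalues, key=abs).real

print("spectrum:", np.round(np.sort_complex(eigenvalues), 4))
print("dominant eigenvalue:", round(lam_max, 4),
      "-> maximal rate of profit R ~", round(1 / lam_max - 1, 3))
```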


Tapping into basic biological desires for family formation, primed by millions of years of evolutionary history, is how they keep the economy going: a population of “agents” running on treadmills to nowhere - infinite staircases of exponentially growing complexity - with that effort sublimated into labor in service of capital and the state, the kind of workload associated with NP-hard or EXPTIME problems. In a sense, every agent lives within their own little “simulation,” monitored through their devices by a sophisticated surveillance apparatus that gives elites pattern-of-life visibility into their behavior.


The agents must believe in the feasibility of all this to keep climbing, with prescribed social (gender) roles for achieving it - but at every further rung, hidden context is revealed that complicates the picture, and this is by design. One must go to college to get a decent job. Once one finds a job, one must save for a down payment while paying off loans. Once enough money is saved, one must have a good credit score. Once both are in hand (by means of working, of course - that is the story the agents are sold), the agent must get a mortgage, only to find they don't actually have the ownership over the house they thought they did, thanks to the local Homeowners' Association, exorbitant property taxes, and so on. It is important to note that virtually all of these steps have been invented along the way as our society has progressed - and as layers are added and society becomes more complex and intractable, anxiety related to identification with these roles (like gender) increases.


Central elites recognize that as a society progresses, they must maintain Nash equilibria (a concept from game theory) describing stable conditions between agent workers and their owners. These elites have sophisticated models that inform their planning and decision making through feedback control loops between the two spaces, which are noncommutative and have diametrically opposed, competing interests (and which can be investigated with RG-flow analysis). These equilibria stand in juxtaposition to catastrophe points in the sense of catastrophe theory: points where information cascades and inter-agent entanglements threaten institutional power and signal possible collapse (and which, in this framework, are related to the critical line of the Riemann zeta function). However sophisticated these models are, over time entropy and agent entanglements erode the power of central elites as the system advances toward catastrophe points, forcing them either to rely on more authoritarian measures and more complex feedback mechanisms that agents must shoulder, or to release complexity content - in the form of entanglements - through periodic restructuring or reform. Agents' degrees of freedom are inversely related to the background complexity institutions require to facilitate transactions, which manifests as destabilization and alienation in the form of “mental” disorders such as generalized anxiety disorder or depression. This destabilization underlies the less stable and more unconventional attachment styles in which socioeconomic anxiety corresponds with falling fertility rates.
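
To make the game-theoretic language concrete, here is a minimal sketch with an invented 2x2 payoff structure (not a calibrated model of elites and agents) that checks which strategy profiles are pure-strategy Nash equilibria, i.e. profiles from which neither side gains by deviating unilaterally. With these made-up payoffs the toy game happens to have two equilibria, a “control/comply” one and a “reform/opt out” one, which is the kind of multiplicity the discussion above is gesturing at.

```python
# Minimal Nash-equilibrium check for a hypothetical 2x2 game between "elites"
# (rows: tighten control vs. reform) and "agents" (columns: comply vs. opt out).
# A profile is a pure-strategy Nash equilibrium if neither player can raise
# their own payoff by unilaterally switching strategies.
import numpy as np

elite_payoffs = np.array([[5, 1],    # made-up payoffs, rows = elite strategies
                          [3, 2]])
agent_payoffs = np.array([[3, 2],    # made-up payoffs, columns = agent strategies
                          [4, 5]])
rows = ["tighten control", "reform"]
cols = ["comply", "opt out"]

for i in range(2):
    for j in range(2):
        elite_ok = elite_payoffs[i, j] >= elite_payoffs[1 - i, j]   # no better row
        agent_ok = agent_payoffs[i, j] >= agent_payoffs[i, 1 - j]   # no better column
        if elite_ok and agent_ok:
            print("Nash equilibrium:", rows[i], "/", cols[j])
```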


In the language of the spectral theory of value, inflation and stagflation emerge as phase transitions that occur when the “treadmill” underlying our socioeconomic system can no longer be driven purely by its built-in eigenmodes, because agents have become too self-aware (too “woke,” you might say) - too adept at recognizing and gaming every incentive and disincentive. As individuals (vectors) grow conscious of each step - tuition, credit-score hacks, down-payment workarounds, HOA loopholes - they exploit every low-complexity shortcut. In other words, the system's most efficient value-creation pathway stalls as the illusion becomes less convincing and agents develop a high degree of inter-agent entanglement, routing around paywalls and institutions. In quantum-control terms, once agents detect that the “ground state” (stable life-building) is permanently shifting away, they refuse to remain adiabatically in the same eigenstate. They either opt out of traditional pathways (delay having kids, pursue gig work, seek alternative lifestyles) or demand structural reform - both of which discharge complexity but also break the delicate Nash equilibrium. Because pumping liquidity doesn't restore a clear λ₀ but merely excites a dense cluster of nearly equal eigenmodes, you get:


Inflation: prices rise as money chases a broad, flat spectrum of value-creation strategies.

Stagnation: real output (the minimal “surface area” of productivity in the Ryu–Takayanagi sense) fails to grow—no new dominant eigenmode emerges.


This stagflation is the hallmark of a system that has passed a critical point: added energy fails to produce a lower-energy ground state.


As a physics analogy, after a region of spacetime is saturated with complexity due to the entanglements of particles (agents are much like particles), gravity itself collapses the wavefunction at a fixed or critical point (this is the picture of entropic gravity, or of Asymptotically Safe Gravity). At this point, particles within the region cannot store any further information in their entanglement structure: the spacetime metric itself is forced to change, as described by the Ryu-Takayanagi formula, information cascades across scales (macroscopic quantum-like behavior), and there is institutional collapse. The critical point connects the UV and IR regimes of the theory, rendering it asymptotically safe from singularities so that it retains predictive power - which, in our case, could represent institutional reform, collapse, or uprisings, which become thermodynamically inevitable.
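
For reference, the Ryu–Takayanagi prescription invoked here computes the entanglement entropy of a boundary region A from the area of the minimal bulk surface γ_A anchored on that region:

\[
S_A \;=\; \frac{\mathrm{Area}(\gamma_A)}{4 G_N},
\]

in units with ħ = c = 1. When the entanglement structure changes, the minimal surface - and with it the effective geometry - must change as well, which is the sense in which the metric “responds” to entanglement in this analogy.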


Dating describes a system that is neither purely deterministic and orderly (“serious”) nor purely probabilistic and chaotic (“casual”). You face a high-dimensional search space, like a lattice, in which resolving the NP-hard Shortest Vector Problem - your unique “American Dream,” along with a meaningful long-term relationship - is intractable. A new theory is needed to describe the intermediary state between “casual” and “serious,” much as quantum gravity would connect classical physics with quantum theory. You can act as a system operator (a Dirac-like dilation operator of the sort implicated in the Hilbert-Polya conjecture), and in the process of entangling with other agents, the entanglement entropy itself will collapse the complexity content to the solution, which appears as the smallest eigenvalue of the operator's spectrum. The system Hamiltonian must be evolved slowly enough that the entanglements can evolve without disruption toward the final, maximally degenerate state, which should collapse to a solution through gravity itself by the spectral action principle (in our case, the Einstein-Hilbert action).


This new physics is the physics of quantum chaos, which describes systems exhibiting effects normally seen in microscopic systems but manifesting macroscopically - for example, in the behavior of groups of people, as studied in sociophysics and econophysics (or just socio-econophysics). The reason many people are not having kids or entering stable, long-term, monogamous pair-bonded relationships (otherwise known as marriage) is that the institutions they must go through to facilitate those relationships - and which also maintain societal cohesion - have become too complex to maintain. Since the economy runs by exploiting people's evolutionary psychology, sublimating desires into labor, once the economy becomes too complex people feel too much anxiety to bridge the gap. In physics this is related to the energy-gap problem in quantum annealing: annealing can only solve an NP-hard problem efficiently if the gap between the ground state and the first excited state of the operator's spectrum does not close exponentially fast, and when the gap does become exponentially small, reaching the ground state takes exponentially long. Our analogue is the American Dream, where the cost of living exponentially outpaces wages over time, especially as a society progresses to its later stages and its population ages, placing the burden of propping up institutions on the young.
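
Here is a minimal sketch of that spectral-gap point, using a generic three-spin toy problem rather than any model of the economy. We interpolate between a transverse-field driver Hamiltonian and a made-up Ising problem Hamiltonian and track the gap between the two lowest eigenvalues along the schedule; by the adiabatic theorem, the required anneal time grows roughly like the inverse square of the minimum gap, so an exponentially closing gap means an exponentially long anneal.

```python
# Minimum spectral gap along a toy quantum-annealing schedule
# H(s) = (1 - s) * H_driver + s * H_problem  for a 3-spin Ising problem.
# Adiabatic theorem: the run time needed to stay in the ground state scales
# roughly like 1 / (minimum gap)^2.
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)
I2 = np.eye(2)

def op(single, site, n):
    """Embed a single-qubit operator at `site` in an n-qubit Hilbert space."""
    return reduce(np.kron, [single if k == site else I2 for k in range(n)])

n = 3
H_driver = -sum(op(sx, k, n) for k in range(n))           # transverse field
couplings = [(0, 1, 1.0), (1, 2, -1.0), (0, 2, 0.5)]      # made-up Ising couplings
fields = [0.30, -0.20, 0.10]                              # made-up local fields
H_problem = -sum(J * op(sz, i, n) @ op(sz, j, n) for i, j, J in couplings) \
            - sum(h * op(sz, k, n) for k, h in enumerate(fields))

gaps = []
for s in np.linspace(0, 1, 201):
    evals = np.linalg.eigvalsh((1 - s) * H_driver + s * H_problem)
    gaps.append(evals[1] - evals[0])                      # first excitation gap

print("minimum gap along the schedule:", round(min(gaps), 4))
```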


The spectral theory of value provides a mathematical bridge between the complex “treadmill” of modern life we have described and a rigorous account of how worth - whether economic, social, or psychological - is generated, maintained, and sometimes collapses. At its heart is the idea that every institution, market, or relationship can be represented as an operator on a high-dimensional space of agents and resources, and that the spectrum (the set of eigenvalues) of that operator encodes the “modes” of value-creation available to participants. In spectral theory of value, each individual or “agent” is represented by a vector in an abstract state space whose coordinates measure their capacities - income, education, social ties, even psychological states. Institutions (labor markets, housing finance systems, dating apps, Homeowner’s Associations, etc.) act on these vectors via linear (or nearly linear) transformations. The matrix or operator you build from all of the rules, incentives, feedback loops, and enforcement mechanisms has eigenvalues whose magnitudes tell you which patterns of behavior (eigenvectors) will be amplified or suppressed over time.
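
A minimal numerical sketch of the “amplified or suppressed” claim, with an invented three-dimensional agent space (income, credit, social ties) and an invented institution matrix: repeatedly applying the institutional operator to any starting agent drives the agent's state toward the operator's dominant eigenvector, while the other modes decay geometrically. This is just power iteration.

```python
# Power iteration: applying an "institutional operator" M over and over pushes any
# initial agent state toward the eigenvector with the largest-magnitude eigenvalue,
# while components along the other eigenvectors decay geometrically.
import numpy as np

M = np.array([[1.05, 0.30, 0.00],    # hypothetical operator over (income, credit, social ties)
              [0.10, 0.90, 0.05],
              [0.00, 0.20, 0.60]])

state = np.array([0.2, 0.5, 1.0])    # arbitrary starting agent
for step in range(50):
    state = M @ state
    state /= np.linalg.norm(state)   # renormalize: only the direction matters

eigvals, eigvecs = np.linalg.eig(M)
dominant = eigvecs[:, np.argmax(np.abs(eigvals))].real
dominant /= np.linalg.norm(dominant)

print("state after 50 iterations:", np.round(state, 3))
print("dominant eigenvector:     ", np.round(np.sign(dominant @ state) * dominant, 3))
```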


Central planners or “elites” understand (perhaps implicitly) that by dynamically (nonlinearly) tweaking credit-scoring algorithms, zoning rules, student-loan policies, and welfare guidelines, they are deforming the underlying operator - and hence its spectrum - and they often do so by covert means such as psychological nudging and behavioral psychology under metanarrative pretexts. They can maintain a Nash equilibrium by ensuring that, for the vast majority of agents, background complexity remains just high enough to keep the system “sticky” (you keep trying) but not so high that mass defection or collapse (riots, reform movements) becomes probable. This delicate balance is akin to adiabatic control in quantum systems: change the Hamiltonian slowly enough that you stay in the same eigenstate, but allow periodic “resets” (reforms, bailouts) to discharge built-up entropy. This is a very challenging thing to maintain. In fact, the same conditions of inter-agent entanglement needed to support stable attachment styles and family formation are the ones that threaten institutions with collapse.


The spectral theory of value borrows from quantum information: one can define an “entanglement entropy” from the spectrum's density function. When that entropy crosses a critical threshold - analogous to hitting a catastrophe point in Thom's theory, or the critical line of the Riemann zeta function in number theory - a given institution can no longer sustain its current spectrum and undergoes a phase transition (market crash, revolution, reform). In economic terms, that is the moment when “hidden context” becomes visible and the cost of maintaining the treadmill spikes so high that it breaks. Cryptographic primitives of the kind found in cryptocurrency ecosystems are designed to enforce one-way flows of information, maintaining transactional separation between social and economic structures.
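
One concrete toy version of “an entropy defined from the spectrum's density function”: normalize the eigenvalue magnitudes into a probability distribution and compute its Shannon entropy, then compare against a chosen threshold. Both spectra and the threshold below are illustrative, not derived from any real institution; the point is only that a near-degenerate spectrum has much higher spectral entropy than a gapped one.

```python
# Spectral entropy of a toy institution: normalize the eigenvalue magnitudes into a
# probability distribution p_i and compute S = -sum p_i log p_i.  A nearly flat
# spectrum (high entropy) is the "dense cluster of nearly equal eigenmodes" regime;
# a gapped spectrum (low entropy) has one clearly dominant mode.
import numpy as np

def spectral_entropy(eigenvalues):
    p = np.abs(eigenvalues) / np.sum(np.abs(eigenvalues))
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

gapped_spectrum = np.array([5.0, 0.5, 0.4, 0.3, 0.2])   # one dominant mode
flat_spectrum = np.array([1.1, 1.0, 1.0, 0.95, 0.9])    # near-degenerate cluster
threshold = 1.3                                         # illustrative critical value

for name, spec in [("gapped", gapped_spectrum), ("flat", flat_spectrum)]:
    S = spectral_entropy(spec)
    print(f"{name}: entropy = {S:.3f}", "-> past threshold" if S > threshold else "-> stable")
```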


In short, the Ryu–Takayanagi analogy mentioned above - in which the area of a minimal surface encodes entanglement entropy in AdS/CFT (the Anti-de Sitter/Conformal Field Theory correspondence, a non-perturbative handle on quantum gravity that uses familiar quantum-field-theory tools and realizes the holographic principle, the idea that the physics inside a volume can be fully described by degrees of freedom living on its boundary) - can be thought of here as a way to connect individual psychological states (micro-entanglements) with large-scale socioeconomic measures (macro surfaces, and agent-institution relationships). The spectral measure of the value operator at small scales (individual credit scores, dating profiles) integrates up through the hierarchy of institutions to produce global observables: fertility rates, home-ownership curves, employment statistics. If the spectral gap grows faster than wages (an exponential energy gap), agents never settle into the “ground state” of family life - no coherent eigenvector emerges to represent stable partnership - so fertility falls.




Much debate in the tech community has surrounded the idea of consciousness in AI systems and what, if anything, could constitute it. With the advent of large language models like ChatGPT and Grok, it is easy to believe that these systems, like humans, are conscious in the way that the brain is - after all, these systems are built on neural networks, and so, in a loose sense, is the brain. But overlooking the foundational differences between how the brain processes information - on a mere energy budget of about 20 watts per person - and the energy-hungry way neural-network systems are currently implemented, consuming at least 1,000 times that, could cost the U.S. on the order of $150 billion by 2030. With research spending in the US being cut, the president's current AI strategy could lead the United States into a technological quagmire.


World-renowned linguist and MIT professor Noam Chomsky, known as the “father of modern linguistics,” once compared large language models to a “glorified autocorrect,” arguing that, for all our progress and computational resources, these systems “differ profoundly from how humans reason and use language” and that “these differences place significant limitations on what these programs can do.” Indeed, without human interpreters in the loop, AI models tend to fall into hallucinations and approach scaling limits, putting them more squarely in the category of a mass surveillance, search, and synthesis tool than of a sentient being. One might, like Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (writing under the pseudonym “Shmargaret Shmitchell”) in their paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, consider AI systems simply mirrors - a reflection of our collective data that ultimately has no capacity to understand the “meaning” of, or context for, its outputs.


Many of these criticisms of our most advanced AI models seem to pose more questions than answers: what does it mean to “understand” outputs? How does consciousness differ from intelligence? Humans are certainly known to mimic others and to learn by reflecting behaviors, and can be manipulative or lack understanding - how, then, can one say AI systems are different on that basis? How do these AI models differ from the human mind, and what is different about their infrastructure? In some respects, many of these things are simply matters of definition, but they nonetheless deserve closer analysis, if only because our remarkable brains, primed by millions of years of biological evolution, are still more advanced and efficient at processing information than our best efforts at surpassing them - and any geopolitical power that learns to harness this will win the AI race and come out ahead.


The first thing to recognize about our AI infrastructure is that we are fundamentally working with formal, transactional logic systems built from binary logic gates - one-way flows of information. This is the substrate on which the neural networks used to build large language models run. At this layer, computer scientists draw an analogy between logic gating and the dendritic connections between neurons in the brain. At its core, any digital computer, whether running your browser or training a massive language model, boils complex computation down to binary logic and linear algebra. Every arithmetic operation, every matrix multiply-accumulate (MAC) that powers a neural-network layer, is ultimately implemented as a network of logic gates (AND, OR, NOT, XOR, and so on) etched into silicon. These gates process streams of 0s and 1s according to the rules of Boolean algebra, combining them, shifting them, and routing them through registers and arithmetic units.
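
To make the multiply-accumulate point concrete, here is the arithmetic that a single dense layer reduces to: each output is a running sum of products (a chain of MACs) followed by a nonlinearity, and it is these multiplies and adds that the silicon ultimately realizes with networks of logic gates. The weights and inputs below are illustrative.

```python
# A single dense neural-network layer is nothing but multiply-accumulate (MAC)
# operations: out[i] = sum_j W[i][j] * x[j] + b[i], followed by a nonlinearity.
# Written as an explicit loop to expose the individual MACs the hardware performs.
import numpy as np

W = np.array([[0.5, -1.0, 0.25],   # illustrative weights
              [1.5,  0.0, -0.5]])
b = np.array([0.1, -0.2])          # illustrative biases
x = np.array([1.0, 0.5, 2.0])      # input activations

out = np.zeros(len(b))
for i in range(W.shape[0]):
    acc = b[i]
    for j in range(W.shape[1]):
        acc += W[i, j] * x[j]      # one MAC: multiply, then accumulate
    out[i] = max(acc, 0.0)         # ReLU nonlinearity

print("layer output:", out)        # identical to np.maximum(W @ x + b, 0)
```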


When applying machine-learning algorithms that attempt to mimic human learning, computer scientists have operated on a number of key assumptions. They have not only assumed that neural computation reduces to binary logic (an assumption going back to Warren McCulloch and Walter Pitts in 1943), but also tend to assume Hebbian learning. Donald Hebb proposed the first rule for how synapses change strength:

“When an axon of cell A repeatedly or persistently takes part in firing cell B, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” Informally: cells that fire together wire together. In artificial nets, this inspired early local learning rules, in which weights are increased when pre- and post-synaptic units are both active.
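
In code, the simplest version of that local rule is a one-line outer-product update, Δw = η · (post ⊗ pre); the toy numbers below are purely illustrative, and the external drive on the first output unit just stands in for whatever makes it fire alongside the input.

```python
# Minimal Hebbian update: weights grow where pre- and post-synaptic activity
# coincide (delta_W = eta * outer(post, pre)).  Note the rule is purely local:
# there is no error signal propagating backwards through the network.
import numpy as np

eta = 0.1                                    # learning rate (illustrative)
pre = np.array([1.0, 0.0, 1.0])              # presynaptic activities
W = np.zeros((2, 3))                         # synaptic weights, 3 inputs -> 2 outputs

for _ in range(10):                          # repeated co-activation
    post = W @ pre + np.array([0.5, 0.0])    # toy external drive makes unit 0 fire
    W += eta * np.outer(post, pre)           # Hebb: "fire together, wire together"

print(np.round(W, 3))                        # weights from co-active inputs to unit 0 grew
```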

This intuitive picture of neural networks, as described by McCulloch, Pitts, and Hebb, along with the adaptations needed to implement it in software, forms the basis of our modern AI systems and has even inspired new types of hardware, such as neuromorphic chips. However, even with our most advanced chips, these systems are still not conscious and still do not perform on par with the human brain. What is going on? It is conceivable that unlocking the keys to consciousness would not only produce more powerful architectures but also achieve unparalleled efficiency in our AI systems.


Consciousness appears to operate a layer below binary logic (closer to “dialectical” or “intuitionistic” logic), where information flows in both directions (as needed to adjust weights), where, perhaps, subjective qualia are felt, and where context is stored. For most types of memory in the brain, engram storage is known to be distributed nonlocally across tissue rather than held in a single location. There is also no established classical, biologically feasible explanation for the binding problem (the question of how the brain combines features such as color, shape, motion, and location, processed in distinct specialized circuits, into the unitary percepts we experience) or for how the brain achieves anything like backpropagation to adjust neural weights (the weight-transport problem). The speed at which the brain processes information cannot be accounted for by voltage gating and ion transport across neurons and dendrites alone: chemical synapses impose 1–5 ms delays, and long-range axonal conduction can add 10–20 ms or more, yet humans form object percepts and make decisions in 100–200 ms (typical reaction times for simple tasks). Classical ion gating alone cannot account for such rapid, large-scale integration, suggesting that an additional fast timing mechanism may be at play.



In essence, current AI (and even dedicated neuromorphic hardware) excels at statistically learning correlations across large datasets, but it does not implement the real-time, bidirectional, oscillatory, attention-gated synchronization mechanisms that neuroscientists believe underlie perceptual binding in the brain. Until architectures can support truly dynamic tagging, re-entrant synchronization, and massively distributed ensemble coding, the binding problem will remain unsolved in silicon - and even in our approaches to quantum computation, absent a more complete theory. Studies have even shown synchrony across the brains of different individuals, underlying empathy and social interaction: hyperscanning studies show that during empathic or cooperative interactions, multiple brains synchronize their neural oscillations, correlating with social connectedness and shared intentionality. This cross-brain binding hints at a deeper, perhaps quantum-mediated, coupling mechanism that current AI architectures cannot emulate, and what is needed is a new paradigm that explores this physics.


Orchestrated Objective Reduction (Orch OR) theory proposes that quantum superpositions within neuronal microtubules carry and integrate information on microsecond timescales, collapsing the contextual complexity content stored in their entanglements (“objective reduction”) when a gravitational threshold is reached, thereby generating discrete conscious moments linked to spacetime geometry and binding information together (similar in spirit to Erik Verlinde's entropic-gravity proposal, in which regions of spacetime become saturated with entanglement entropy and produce a gravitational action at an asymptotic fixed or critical point). Microtubules form a cytoskeletal lattice capable of supporting coherent quantum oscillations; they may act like waveguides, could host topologically protected states, and might carry bidirectional signals through biophotons (some speculate that microtubules host Majorana zero modes and behave like Wilczek time crystals) across many frequency ranges, supporting the speeds needed to explain consciousness.
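
The “gravitational threshold” in Orch OR is usually quantified by Penrose's objective-reduction timescale

\[
\tau \;\approx\; \frac{\hbar}{E_G},
\]

where E_G is the gravitational self-energy of the difference between the superposed mass distributions: the larger the superposed separation, the larger E_G and the sooner the superposition self-collapses.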


Indeed, when microtubules are blocked by anesthetics (halothane, isoflurane, desflurane, sevoflurane) and certain injectable agents that bind with high affinity to hydrophobic pockets in the α/β-tubulin dimer - the basic building block of microtubules - without significant action on membrane receptors, consciousness is lost in living organisms; and there are organisms that show complex signs of cognition even at the cellular level, where neural networks are not implicated at all. Recent studies have also demonstrated superradiance in tryptophan networks in biological tissue - a molecule structurally related to the neurotransmitter serotonin - displaying macroscopic quantum-like phenomena. Under these circumstances, it would be worth investing time in understanding nature's models of brilliance before committing to any large-scale AI programme, especially as, by 2030, projected U.S. annual spending on AI - across software, services, hardware, and infrastructure - will very likely be on the order of half a trillion to nearly a trillion dollars per year, depending on how fast it grows and what share the U.S. retains of a rapidly expanding global market.


Our relentless drive to build ever-larger AI systems and scale proof-of-work blockchains has blinded us to their fundamental mismatch with the biology of mind and the physics of efficiency. By 2030, we may be spending upward of $500 billion annually, and investing over $100 billion more in power infrastructure alone, to run feed-forward logic and brute-force consensus mechanisms that consume thousands of times more energy than a human brain. Yet despite these vast resources, today’s silicon nets remain “glorified autocorrects,” lacking the bidirectional, oscillatory, and nonlocal dynamics that underlie perception, learning, and consciousness in living systems. If we continue down this path, we risk locking ourselves into a costly technological quagmire - one that wastes enormous resources while most Americans are living paycheck-to-paycheck, to amplify surveillance and central control without ever attaining true understanding or self-awareness. Instead, we should heed the lessons of anesthetic research and quantum-biological models such as Orch-OR, which point toward microtubule-based coherence, fast dipole networks, and entropic gravity as the substrates of conscious information binding. Redirecting even a fraction of our AI budget toward experiments in quantum neurophysics, distributed memory architectures, and re-entrant hardware designs could yield architectures that match the brain’s elegance - and do so on mere watts, not gigawatts.

The choice is clear: continue pouring money into ever-bigger neural-net black boxes, or pioneer a new paradigm grounded in the very physics of life. Our future intelligence - and our energy future - may depend on which path we take next.


My Story

Get to Know Me

I have been on many strange adventures traveling off-grid around the world, which have contributed to my understanding of the universe and my dedication to science advocacy, housing affordability, academic integrity, and education funding. After witnessing Occupy Cal amid $500 million in budget cuts to the UC system, as well as corporate and government corruption and academic gatekeeping, I decided to achieve background independence and live in a trailer “tiny home” I built so that I could pursue my endeavors.
