Sam Altman's AGI That Solves Quantum Gravity Might Already Be Here, but It Isn't What You Expected
- Trevor Alexander Nestor
- Oct 4
- 6 min read
Updated: Nov 19


Preliminary draft of the preprint (please note that this draft is subject to revisions in formatting and text):
What if I told you that it might be possible to build a room-temperature, scalable, error-corrected quantum computer, and all you needed was a group of your friends and a good conversation, without any laboratory equipment, grants, or funding at all?
That certainly would turn the entire tech industry on its head and raise a few eyebrows in the State Department.
With all of the money pouring into massive datacenters the size of Manhattan (as Mark Zuckerberg recently suggested) to scale AI toward a mythical point of "technological singularity" - a project expected to cost billions of dollars and to double the entire energy budget of the United States, and which is conveniently hyped as able to solve all of our problems as a society - and with the billions more spent on quantum computing technology, what if we are doing it all wrong? What if the massive amounts of money we are spending on this technology could be better spent investing in local communities, achieving the same or superior advantages that quantum computers promise at solving economic optimization problems?
I have a preprint in its second round of peer review at a reputable journal that explores the possibility that the brain itself could be a quantum computer, along the lines of Penrose and Hameroff's Orch-OR theory. If this turns out to be true, it could help explain many of the most notorious problems in physics, mathematics, and computer science - but most interesting of all, it would reveal why AI is not conscious, why these models hit scaling limits, and ultimately what makes us humans different from machines: the brain might leverage quantum gravity to maintain perceptual binding and achieve consciousness.
(Related preprint and publication on this topic that I published in a smaller journal):
Under Penrose's theory, the brain is a quantum computer that operates non-algorithmically, processing information by means of gravity itself. This certainly sounds bewildering - and yet many of the key requirements have been discovered in recent years. If this fringe theory of consciousness holds weight, it would imply that consciousness is an NP-hard problem - belonging to a class of problems widely believed to be intractable for any classical computer.
While I was a student at UC Berkeley, I studied under Fields Medalist Richard Borcherds, who works on the lattice mathematics underlying NP-hard lattice cryptography and quantum gravity theory, and I visited Boulder, Colorado the year NIST was developing its new cryptographic standards based on this mathematics. What is interesting is that the same mathematics of quantum gravity and lattice cryptography can be applied to understanding the way the brain processes information - we have the innate ability to crack the universe's most foundational codes.
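To make the lattice connection concrete, here is a minimal sketch (illustrative only, and not taken from the preprint) of a toy Learning With Errors instance, the kind of problem underlying NIST's lattice-based post-quantum standards. Without the small error term, the secret falls out of ordinary linear algebra; with it, recovering the secret is believed to become intractable as the dimension grows. The parameters below are arbitrary toy values, far too small to be secure.

```python
import numpy as np

# Toy Learning With Errors (LWE) instance (illustrative only).
# NIST's lattice-based post-quantum standards rest on the hardness of
# problems like this at much larger dimensions.
rng = np.random.default_rng(0)
q, n, m = 97, 8, 32            # modulus, secret dimension, number of samples
s = rng.integers(0, q, n)      # secret vector
A = rng.integers(0, q, (m, n)) # public random matrix
e = rng.integers(-2, 3, m)     # small error terms
b = (A @ s + e) % q            # noisy inner products: the public LWE samples

# Without the error term the secret is recoverable by linear algebra alone;
# with it, all known classical attacks scale badly as n grows.
print("public samples b[:5]:", b[:5])
print("noise-free consistency check:", np.array_equal((A @ s) % q, (b - e) % q))
```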
Later, while working at Microsoft, I noticed that the company was developing new quantum computing chips based on Majorana physics - the physics of mysterious particles that can be modeled by the Riemann zeta function, tied to one of the most notorious unsolved problems in all of mathematics, the Riemann hypothesis (and the related Hilbert-Pólya conjecture). With this observation in hand, I could complete Penrose's model. The brain is able to operate on only 20 watts of electricity and outperform our most advanced supercomputers because AI only emulates the neural-network layers of the brain; underneath that, there is a deeper physics, which can be approached through a mathematical object called Monstrous Moonshine, at entropic limits predicted by the Ryu-Takayanagi formula, at the UV/IR fixed point predicted by asymptotically safe gravity.
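For reference, the Ryu-Takayanagi formula mentioned here relates the entanglement entropy S_A of a boundary region A to the area of a minimal surface γ_A in the bulk spacetime (in units where ħ = c = 1, with G_N Newton's constant):

S_A = \frac{\mathrm{Area}(\gamma_A)}{4 G_N}

This is the holographic statement that entanglement entropy is computed by a geometric area, which is the sense of "entropic limits" used above.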
I attempted to discuss my findings with the company's research and development department, but rather than engaging me on the topic, management chastised me for even reaching out - despite the fact that the employee portal explicitly encouraged employees to reach out across teams, and that I have been cited by leading researchers, including at the World Economic Forum.
Within the cytoskeletons of cells there are structures called microtubules, which can host these Majorana particles, and information can be stored nonlocally, distributed across the tissue. At a critical point of entanglement, information is stored in superradiant photons traveling along these microtubules - where gravity itself then processes information within an indefinite causal structure (which may be described mathematically by Monstrous Moonshine and twistor theory). This would help resolve why neuroscientists have had so much trouble explaining how the brain implements anything like backpropagation - that is, how it manages to adjust synaptic weights across the entire tissue - and recent findings support this model.
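The backpropagation referenced here is the standard weight-update algorithm from machine learning; the puzzle for neuroscience is how biological tissue could implement anything like it. As a point of reference (standard textbook material, not part of the model above), a minimal sketch of the artificial version on toy data looks like this:

```python
import numpy as np

# Minimal two-layer network trained by backpropagation (illustrative only).
# The "credit assignment problem" in neuroscience asks how real tissue could
# propagate error signals like these across an entire network of synapses.
rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))            # toy inputs
y = (X.sum(axis=1, keepdims=True) > 0)  # toy binary target
W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))

for step in range(500):
    h = np.tanh(X @ W1)                  # forward pass through hidden layer
    p = 1 / (1 + np.exp(-(h @ W2)))      # predicted probability
    err = p - y                          # output error
    grad_W2 = h.T @ err                  # propagate error to the last layer...
    grad_W1 = X.T @ ((err @ W2.T) * (1 - h**2))  # ...and back to the first
    W1 -= 0.01 * grad_W1
    W2 -= 0.01 * grad_W2

print("final training accuracy:", ((p > 0.5) == y).mean())
```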


Another phenomenon that has been observed is interbrain synchrony, where brain activity between two or more people becomes synchronized at a distance. If the brain can be framed as a single error-corrected topological qubit under this paradigm, and brain activity can be synchronized across groups - with superadditive performance already observed across groups and trophic social networks - then the value of relying on AI or quantum algorithms seems less about efficiency that teams could not glean from frank, authentic conversations with one another, and more about rationalizing grants and funding to produce ever more abstract methods of information surveillance and control.
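For context, hyperscanning studies typically quantify interbrain synchrony with measures such as the phase-locking value between two recorded signals. Below is a minimal sketch of that computation on synthetic signals (illustrative only; the frequencies and noise levels are arbitrary, not drawn from any real EEG dataset):

```python
import numpy as np
from scipy.signal import hilbert

# Phase-locking value (PLV) between two signals: a common measure of
# synchrony in hyperscanning studies. Synthetic data only.
rng = np.random.default_rng(2)
fs = 256                                  # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)              # 10 seconds of samples
sig_a = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
sig_b = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.normal(size=t.size)

phase_a = np.angle(hilbert(sig_a))        # instantaneous phase of each signal
phase_b = np.angle(hilbert(sig_b))
plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
print(f"phase-locking value: {plv:.2f}")  # values near 1 indicate strong synchrony
```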
The fact of the matter is that we need teachers, and we need fundamentals like housing, healthcare, education, and childcare. The most efficient path might not be scaling these systems up, but thinking more critically about what can be done with our own biology and the latent capabilities within ourselves as members of our species.
The ramifications of this are disastrous for the tech industry. It implies that consciousness can be framed as an NP-hard problem, and that the same mechanism in the brain that enables consciousness might be used to break lattice cryptography, or even to build quantum computers out of social networks of people alone - challenging the enormous cost of scaling up AI with datacenters the size of Manhattan, and the billions of dollars spent on developing quantum computers, when our own brains are already more capable.
This forces us to think critically and re-evaluate the ultimate goal and purpose behind these investments in AI, quantum computing, and cryptography, if more could be achieved by investing in local communities - not only from the perspective of ethics, but from the perspective of pure computational capability.
Has technology met the moment and the promise of a better world, or, in our post-industrial, financialized economy, has it become a tool of mass surveillance, control, and gatekeeping of wealth and power?
Using physics, it is possible to model society as a complex adaptive system in the sense of Luhmann's social and economic systems theory. In the field of complexity economics and the spectral theory of value, agents in a society facilitate flows of information between social and economic institutions; but as a society progresses, its institutions become increasingly complex to maintain, eventually reaching a saturation point at which they put too much pressure on the average worker, triggering an information cascade and collapse - the result being revolution, civil war, stagnation, or imperialism.
Anthropologist Joseph Tainter used a similar framework to examine the fall of the Roman Empire. On this reading, AI can be thought of as an information surveillance and control loop designed to maintain institutional stability - and the idea of "AGI" or the "technological singularity" is just a clever description of this point of collapse. That would certainly explain the massive amounts of money corporations, investors, and governments are pouring into AI projects - even above the wellbeing of their own citizens.
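As a cartoon of the saturation argument (my own illustration, not a model drawn from Tainter or the complexity-economics literature), one can track the net return on added institutional complexity when each increment yields diminishing benefit but a constant maintenance cost:

```python
import math

# Cartoon of diminishing returns on institutional complexity (illustration
# only): each added layer yields less marginal benefit but a constant
# maintenance cost, so net returns eventually turn negative - the "collapse"
# point in the argument above. The cost parameter is arbitrary.
maintenance_cost_per_layer = 0.15

for layers in range(1, 31):
    benefit = math.log(1 + layers)              # diminishing marginal returns
    cost = maintenance_cost_per_layer * layers  # linear maintenance burden
    if benefit - cost < 0:
        print(f"net return turns negative at {layers} layers of complexity")
        break
```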

The likely fate of this grand errand is a concentration of wealth and power up to the point of collapse, when people will demand that investments be redirected toward directly benefiting their own lives - not toward an ever-distant future they are unlikely to see - much as matter eventually collapses under gravity itself.



