
Research Blog

Sam Altman's benchmark for AGI might already be here, but not with his AI supercomputers, datacenters, or generative AI models
A bold, ambitious proposal for quantum computing across social networks of people

Preliminary draft of preprint (please note that this draft is subject to revisions in formatting and in text):


What if I told you that it might be possible to build a room-temperature, scalable, error-corrected quantum computer, and all you needed was a group of your friends and a good conversation - without any laboratory equipment, grants, or funding at all?


That certainly would turn the entire tech industry on its head and raise a few eyebrows in the State Department.


With all of the money pouring into massive datacenters the size of Manhattan (as Mark Zuckerberg recently suggested) to scale AI toward a mythical point of "technological singularity" - a project expected to cost billions of dollars and to double the entire energy budget of the United States, while conveniently being hyped as the solution to all of society's problems - and with billions more spent on quantum computing technology, what if we are doing it all wrong? What if the massive amounts of money we are spending on this technology could be better spent investing in local communities, achieving the same or superior advantages at solving economic optimization problems that quantum computers promise?


I have a preprint in a second round of peer review at a reputable journal that explores the possibility that the brain itself could be a quantum computer, akin to Dr. Penrose's Orch-OR theory. If this turns out to be true, it might partly explain many of the most notorious problems in physics, mathematics, and computer science. Most interesting of all, it would suggest why AI is not conscious, why these models reach scaling limits, and ultimately what makes us humans different from machines: the brain might leverage quantum gravity to maintain perceptual binding and achieve consciousness.


(Related preprint and publication on this topic, published at a smaller journal):



Under Penrose's theory, the brain is a quantum computer that operates non-algorithmically, processing information by means of gravity itself. This certainly sounds bewildering - and yet many of the key requirements for it have been discovered in recent years. If this fringe theory of consciousness holds weight, it would imply that consciousness involves solving NP-hard problems - a class of problems widely believed to be intractable for any classical computer.


While I was a student at UC Berkeley, I studied under Fields Medalist Dr. Richard Borcherds, who specializes in NP-hard lattice cryptography and quantum gravity theory, and I visited Boulder, Colorado the year NIST was developing its new cryptographic standards based on this maths. What is interesting is that the same maths of quantum gravity and lattice cryptography can be applied to understanding the way the brain processes information - we may have the innate ability to crack the universe's most foundational codes.
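As an illustrative aside on why lattice problems underpin post-quantum cryptography: the shortest vector problem (SVP) asks for the shortest nonzero integer combination of basis vectors, and generic exact methods amount to exhaustive search whose cost grows exponentially with dimension. Here is a minimal toy sketch of my own (not from any NIST standard; real cryptographic lattices have hundreds of dimensions, where this approach is hopeless):

```python
import itertools
import math

def shortest_vector(basis, coeff_bound=3):
    """Brute-force the shortest nonzero lattice vector.

    Enumerates integer coefficient tuples in [-coeff_bound, coeff_bound]^n,
    so the search space grows as (2*coeff_bound + 1)**n - exponential in
    the lattice dimension n, which is why this only works for toy cases.
    """
    n = len(basis)
    best_vec, best_norm = None, math.inf
    for coeffs in itertools.product(range(-coeff_bound, coeff_bound + 1), repeat=n):
        if all(c == 0 for c in coeffs):
            continue  # skip the zero vector
        vec = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in vec))
        if norm < best_norm:
            best_vec, best_norm = vec, norm
    return best_vec, best_norm

# A skewed 2D basis: a very short lattice vector is "hidden" in long basis vectors.
basis = [[101, 1], [100, 1]]
vec, norm = shortest_vector(basis)
print(vec, norm)  # a shortest nonzero vector here has length 1 (e.g. b1 - b2 = (1, 0))
```

The "skewed basis" is the whole trick: the lattice contains a length-1 vector, but neither basis vector hints at it, and finding it in high dimensions is the hard problem that lattice cryptography relies on.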


Later, working at Microsoft, I noticed that they were developing new quantum computing chips based on Majorana physics - the physics of mysterious particles that can be modeled by the Riemann zeta function, related to one of the most notorious unsolved problems in all of mathematics, the Riemann hypothesis (and the related Hilbert-Pólya conjecture). With this observation in hand, I could complete Penrose's model: the brain is able to operate on only 20 watts of power and outperform our most advanced supercomputers because AI only emulates the neural network layers of the brain. Underneath that, there is a deeper physics, which can be approached through a mathematical object called Monstrous Moonshine, at entropic limits predicted by the Ryu-Takayanagi formula, at the UV/IR fixed point predicted by asymptotically safe gravity.



I attempted to discuss my findings with the company's research and development department, but rather than engage me on the topic, management chastised me for even reaching out - in spite of the fact that the employee portal explicitly encouraged employees to reach out across teams, and that I have been cited by leading researchers, including at the World Economic Forum.


Within the cytoskeletons of cells, there are structures called microtubules which could host these Majorana particles, with information stored nonlocally, distributed across the tissue. At a critical point of entanglement, information is stored in superradiant photons traveling along these microtubules, where gravity itself then processes information within an indefinite causal structure (which may be described mathematically by Monstrous Moonshine and twistor theory). This would help resolve why neuroscientists have had so much trouble explaining how the brain implements anything like backpropagation - how it manages to adjust weights in neural networks across the entire tissue - and recent findings support this model.


Majorana physics is implicated in new theories of consciousness and brain function.

Consciousness may require fundamentally new physics to understand.

Given this, another phenomenon called interbrain synchrony has been observed, where brain activity between two or more people can synchronize at a distance. If the brain can be framed as a single error-corrected topological qubit under this paradigm, if brain activity can synchronize across groups, and if superadditive performance has been observed across groups and trophic social networks, then the value of relying on AI or quantum algorithms seems less about efficiency that teams could not glean from frank, authentic conversations with one another, and more about rationalizing grants and funding to produce ever more abstract methods of information surveillance and control.


The fact of the matter is that we need teachers, and we need fundamentals like housing, healthcare, education, and childcare. The most efficient path might not be scaling these systems up, but thinking more critically about what can be done with our own biology and the latent capabilities within ourselves as members of our species.


The ramifications of this are disastrous for the tech industry: it implies that consciousness can be framed as an NP-hard problem, that the same mechanism in the brain that enables consciousness might be used to break lattice cryptography, and that it may even be possible to build quantum computers out of social networks of people alone - challenging the enormous cost of scaling up AI with datacenters the size of Manhattan, and the billions of dollars spent developing quantum computers, when our own brains are already more capable:






This forces us to think critically and re-evaluate the ultimate goal and purpose behind these investments in AI, quantum computing, and cryptography, if more could be achieved by investing in local communities - not only from the perspective of ethics, but from the perspective of pure computational capability.




Has technology met the moment and the promise of a better world, or, in our post-industrial, financialized economy, has it become a tool of mass surveillance, control, and gatekeeping of wealth and power?



Using physics, it is possible to model society as a complex adaptive system under Luhmann's social and economic systems theory. In complexity economics and the spectral theory of value, agents in a society facilitate flows of information between social and economic institutions, but as a society develops, its institutions become increasingly complex to maintain - eventually reaching a saturation point at which they put too much pressure on the average worker, and there is an information cascade and collapse, resulting in revolution, civil war, stagnation, or imperialism.
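To make the cascade argument concrete, here is a deliberately simple toy model of my own devising (an illustrative sketch, not taken from Luhmann, Tainter, or any published model): institutional complexity accretes steadily, maintenance cost grows faster than complexity itself, and once cost exceeds the carrying capacity of the agents maintaining it, the system sheds complexity abruptly rather than smoothly:

```python
def simulate_collapse(steps=60, growth=1.0, capacity=40.0, cost_exponent=1.5):
    """Toy model of institutional complexity hitting a maintenance ceiling.

    Complexity rises linearly, but maintenance cost rises superlinearly
    (cost = complexity ** cost_exponent). When cost exceeds the fixed
    carrying capacity of the agents maintaining the institutions, the
    system sheds half its complexity at once - a cascade/collapse event.
    """
    complexity = 1.0
    history = []
    for _ in range(steps):
        complexity += growth                # institutions keep accreting
        cost = complexity ** cost_exponent  # maintenance cost outpaces growth
        if cost > capacity:                 # burden exceeds what agents can sustain
            complexity /= 2                 # abrupt simplification (collapse)
        history.append(complexity)
    return history

history = simulate_collapse()
print(f"peak complexity before collapse: {max(history):.1f}")
```

Run with these parameters, the trajectory is a sawtooth: complexity climbs until the superlinear cost crosses the capacity ceiling, collapses, and climbs again - a cartoon of the diminishing-returns-on-complexity dynamic described above, with all numbers chosen purely for illustration.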


Anthropologist Joseph Tainter used this framework to examine the fall of the Roman Empire. Based on this, AI can be thought of as an information surveillance and control loop designed to maintain institutional stability, where the idea of "AGI" or the "technological singularity" is just a clever description of this point of collapse. That would certainly explain the massive amounts of money that corporations, investors, and governments are pouring into AI projects - even above the wellbeing of their own citizens.



The ultimate fate of this grand errand is likely a concentration of wealth and power until the point of collapse, when the people demand that investments be redirected toward directly benefiting their own lives - not toward an ever-distant future they are unlikely to see - much like collapse under gravity itself.


The economics of AI and crypto meet the same fate as black holes - thermodynamic inevitability of collapse and evaporation.

Updated: Nov 28

Dr. Zhang's research page.





I had published a previous article on my recent trip to Beijing, where a lab is investigating possible new methods to break post-quantum lattice cryptography. I initially considered deleting the post, but after some consideration I've decided to re-upload the original article - for now. I will report back with further developments after revising the drafts of my work.


Postdoc I met in Japan

In the meantime, I have been extended offers of leadership and professorial positions at prominent schools and universities in China, and have been in touch with venture capitalists about this research program.



My sense is that to request of the universe a class of uncrackable encryption that could be used to hide information from the public is too hubristic a request for any group of elite individuals.

My conversation with Ed Witten about possible compromises to postquantum cryptography.

  • Writer: Trevor Alexander Nestor
  • Sep 12
  • 9 min read

Updated: Nov 19

Trevor Nestor v. Microsoft: state of Washington and federal government investigation into Microsoft, forming the basis for a possible class action lawsuit and a broader movement for tech company accountability.
Employees are often put on retaliatory or inactionable PIPs at these tech companies as a form of gaslighting to scapegoat their own engineers for poor management decisions - violations of labor law have become normalized under a toxic culture of fear, intimidation, and concealment.
Redditor expresses commonplace fear that creates a chilling effect in reporting or discussing corporate corruption, which subverts efforts to organize workers.
Notice of investigation into Microsoft with address redacted (though I don't live there so it doesn't make much of a difference).

In previous blog posts I described a recent state investigation into Microsoft that I triggered (from the state of Washington, though the representative told me it would also be forwarded to the proper federal channels) - yes, an actual investigation that has started, not just a complaint - for wrongful terminations, ADA noncompliance, and whistleblower retaliation. While the story received overwhelmingly positive reception on both Reddit and LinkedIn, racking up hundreds of thousands of views across platforms, I started to find things were somewhat different in some select subreddits.




In my original article I explained the issues I witnessed at Microsoft and the sorts of emails and messages I've received daily since going public: physical stalking, whistleblower retaliation, ADA noncompliance, pathological lying and dysfunction, replacement of engineers with H1B visa holders and non-functional AI tools like Copilot to undercut wages and working conditions to the point that employees cannot do their jobs, Microsoft's inability to provide its engineers even the bare minimum of laptop assets that turn on in a timely manner, the unaffordability of purchasing a home near the campus or metro where one is increasingly expected to work, the gutting of on-campus IT support, missing critical documentation, deliberate information siloing, passive-aggressive behaviors, and wrongful terminations. All details and context for the original complaint - along with the relevance of AI and the H1B visa program - can be found in greater detail at the link below.


Original Article Blog Post Containing Complaint Overview:


After posting in some subreddits, responses in my thread contained a litany of logically fallacious character assassination attempts, Kafka traps, and ad hominems, claiming that going public might damage my case or the optics, or that Microsoft might retaliate (even illegally) if I did - possibly due to astroturfing, the practice of manipulating social media for positive PR, which Microsoft has reportedly engaged in: https://www.pcworld.com/article/439883/microsoft-caught-astroturfing-bloggers-again-to-promote-internet-explorer.html



In one Substack article, I've been accused of vaccine skepticism, of responding to and having direct affiliations with Elon Musk, of harassing the CU Boulder police department, of believing that space lasers caused the Colorado wildfires, of distracting from domestic violence victims, and of direct affiliations with DOGE.



The irony is that what I've actually done is advocate for Medicare for All, criticize Elon Musk for wanting to increase birth rates while ignoring the socioeconomic conditions creating anxiety about having children, report students' declining mental health to the CU Boulder police department immediately before a mass shooting in the city and a violent riot of students flipping police cars, make one sarcastic comment about the wildfires, advocate for worker protections against physical intimidation reported outside of work hours, and file a complaint against wrongful terminations.


I initially deleted the original story (which I was careful to keep free of any information that could possibly violate any NDAs) from my blog and LinkedIn. But after some review, and in spite of the general advice to avoid public attention in legal matters, I've decided to double down on going public: in doing so, others at the company, and others who have been recently wrongfully terminated, have messaged me thanking me for doing what they have been afraid to do themselves (some have even reported physical intimidation) - which only bolsters my position. This includes people still employed at director level, as well as journalists. In this way, going public was actually a necessity.

Since going public others in principal level roles and even director level roles still employed at the company agree with my assessment.
Delays on your feature work outside of your control due to inadequate working conditions and ADA noncompliance, reporting violations with a report-it-now case, or going on family medical leave? Prepare to be gaslighted and scapegoated with an inactionable PIP.

The way these corporations continue to get away with violations of labor law like wrongful termination is precisely by isolating individuals, controlling narratives and public perceptions, and running gaslighting campaigns under a thin guise of care and plausible deniability - one might even argue that this is part of the purpose of AI itself. The way I see it, even if gatekeepers in formal channels fail to hold Microsoft accountable, by going public and resisting fear and intimidation, I've already won. More critically, discussing these matters is not illegal - the information here is already public knowledge, taken from chats outside of work which contain no trade secrets or internal chats, emails, or code.


Microsoft layoffs do not account for the thousands of workers also gaslighted with inactionable or retaliatory PIPs. AI is likely not able to replicate creative value creation within organizations which scales exponentially within trophic networks and teams.
Can AI really replace human teams? While Microsoft claims to run on trust, that trust is running thin now that the company appears to run on blatant lies and corporate corruption that trickles down throughout the company.

I will be deleting some of the more speculative or inflammatory comments on the matter and redacting the original article, but it is important to note that the current trend at Microsoft cannot continue. Regardless of the tone or optics of my complaints, the bottom line is that Microsoft has violated labor law, and regardless of whether judges, attorneys, or investigators are paid off or have conflicts of interest - when your employees no longer have anything to lose, they have only things to gain by going public. In fact, even the CEO recently apologized about these matters.



Tone policing at the company has become a major problem, so that when issues arise, employees are too frightened to complain as it has become common to issue retaliatory PIPs, and many employees fear losing their visas.
Microsoft no longer has in-house tech support - it has been offshored to third-party contracting companies - and Microsoft has been hacked by the very countries it offshored its IT support to.
Lack of proper training and support for onboarding onto proprietary internal systems leaves employees scapegoated when it inevitably leads to delays - even in spite of ADA accommodation requests for proper documentation, training, and support.
Employees at the company refer to the documentation as "the dumpsterfire"

Beyond violations of worker protections, the internal problems at Microsoft force us to think critically and re-evaluate the ultimate goals and purpose behind these investments in AI, quantum computing, and cryptography, and whether more could be achieved by investing directly in local communities and teams, where value in trophic social networks scales exponentially - not only from the perspective of ethics, but from the perspective of pure computational capability and product quality. As information surveillance, control, and synthesis tools, AIs will ultimately reflect the organizations that build them.


At Microsoft, I found myself heavily discouraged from discussing anything with teammates. During my onboarding, I was criticized harshly for asking a perfectly reasonable question about why results did not appear for a correct query in their database (the reason being that records are only retained there for a certain time window). I heard this used against me months after I asked, in spite of the fact that those in roles above me would frequently consult me for help with things so poorly documented on our team that they could not figure them out themselves.


Onboarding was so turbulent that after extended delays, the entire org was eventually forced to do a hackathon to fix critical documents. Throughout my employment, my manager would attempt to isolate me by claiming that when I encountered blockers requiring outside intervention, nobody else was facing them. Yet after asking coworkers, I would often find that virtually everybody was facing similar blockers - all saving face - creating massive inefficiencies within the org.


One example: with the rollout of new secure code-signing processes, my manager claimed "nobody else" was having issues. Then, when I called two coworkers to run the process with me, neither could get it to work, in spite of hours spent on the matter. When the team was eventually forced to hold a knowledge-sharing session after weeks of delays, it was revealed once again that the documentation was wrong and missing critical information, and virtually nobody could do it.


Lack of lateral information sharing is a serious problem at Microsoft where there seems to be a passive aggressive attitude when employees reach out to team mates or managers.

In spite of being encouraged on the company portal to reach out to coworkers across teams to foster innovation (with an internal tool called "whois"), I was chastised for reaching out to the research and development department regarding my insights into their Majorana 1 quantum computing chip (which critics say the company misrepresented to investors, evading public questioning and scrutiny when it wasn't actually functional and not based on any established physics - which seems to be a pattern).



I had interest in the topic because my undergraduate professor at UC Berkeley was Fields Medalist Dr. Richard Borcherds, who specializes in lattice maths and the physics implicated in Majorana fermion spin lattices. I have since been published on the topic and cited by leading scientists, and have a second paper invited for a second round of peer review at an Elsevier journal. In my PIP, I was told both to depend on others more and, paradoxically, to depend on others less.


How can creativity and innovation thrive without a sense of psychological safety and support on teams for curiosity, discussion, and collaboration, when everybody is too annoyed and preoccupied?



In spite of the insistence that, as a senior engineer, I should simply have been "self-learning" and "self-unblocking," as my manager put it (a manager who would frequently just not show up to his own weekly syncs), it is not possible to grasp proprietary internal systems without proper permissions or tribal knowledge, and AI tools trained on missing, outdated, wrong, or misleading documentation about evolving security processes produce garbled output when asked about them. This is also why these AI tools will never replace teachers, no matter how badly tech leaders would like them to. In fact, in many cases, to "self-unblock" when I was asked to would have been a violation of Microsoft security policy.









Microsoft released a report on careers most likely to be replaceable by AI. Interestingly, journalist appeared on the list. Microsoft leadership apparently and conveniently holds the view that there is no need to investigate any corruption and that this can be completely automated, and that no humans are even needed to proofread the content. Reads more like a wishlist than an objective report.


There is no further plausible deniability when my doctor specifically requested that I receive adequate support, documentation, and training on the team - and they could not provide even the bare minimum of basic functional Microsoft assets that could turn on, for the entire duration of my PIP. In fact, my manager failed to answer emails for the entire PIP period. I was told I had the option between a severance and a 45-day PIP, and my 45-day PIP was abruptly cut short to only 4 days (shortly after I submitted a "report-it-now" case for possible security holes), where I was blamed for "not meeting expectations" when they didn't even play the period out to demonstrate that.


I have retained hours of video and audio footage unambiguously showing the failure of Microsoft's IT department, which was offshored to third-party contracting companies in other countries that often interface with laptops containing sensitive government data (and which then get hacked by those countries, while Microsoft, instead of fixing that, continues to layer additional security hoops onto its own engineers). While Washington requires two-party consent for audio recordings, at the beginning of each call to IT support there was a message stating that calls "may be recorded for quality and training purposes." I will be handing this evidence over to state investigators along with the physical assets themselves, and documentation of the continual refusal of requested accommodations and of the basic support required to do tasking (which often lacked descriptions, and for which feedback was often intentionally vague).


My manager claimed that "nobody else" was having issues with their secure access workstation (though there is video footage of a stack of these laptops with a sticky note that says "broken"), and that it was unreasonable for it to take over 3 months to get one working to the point that I could even access Teams. With a full month of delay due to YubiKey shortages alone, this Redditor begs to differ.

Institutionalized gaslighting.

As others have pointed out, Washington is an "at-will" state, meaning Microsoft reserves the right to lay workers off (within limits like those imposed by WARN legislation) - but wrongful terminations, ADA noncompliance, whistleblower retaliation, dishonest practices like those I've described, and physical intimidation outside of work are not permitted. There is no excuse for one of the largest corporations by market cap in history to fail to provide its engineers the bare minimum working conditions they need to succeed - and then to gaslight and lie about it.


With the culture of concealment and gaslighting, if you are not a culture fit, you might just be the only employee exercising your rights.

I will provide further updates as they come, but for now we will see what the response is and what actions the state takes. Things will not change unless we begin to demand change and accountability.

My Story

Get to Know Me

I have been on many strange adventures traveling off-grid around the world, which have contributed to my understanding of the universe and my dedication to science advocacy, housing affordability, academic integrity, and education funding. From witnessing Occupy Cal amid 500-million-dollar budget cuts to the UC system, to corporate and government corruption and academic gatekeeping, I decided to achieve background independence and live in a trailer "tiny home" I built so that I could pursue my endeavors.

Contact
Information

Information Physics Institute

University of Portsmouth, UK

PO Box 7299

Bellevue, WA 98008-1299

1 720-322-4143

  • LinkedIn
  • Twitter


©2025 by Trevor Nestor 
