Research Blog
  • Writer: Trevor Alexander Nestor
  • 4 days ago
  • 2 min read

“Put all the GPUs in space” sounds clever until you run the numbers. Let's estimate the cost of launching and sustaining "all the GPUs" in orbit.


Space isn’t a freezer. In vacuum there is no air to convect heat away; you can only radiate it, which means enormous, fragile radiator surfaces for MW-scale clusters. That radiator area becomes a micrometeoroid/debris risk and a control/engineering headache.


Power is worse. A serious AI cluster draws multiple megawatts. In orbit that means huge solar arrays plus storage (for eclipses) or nuclear power (politically and operationally hard), and you still have to dump the waste heat.
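For scale, here is a rough solar-array sizing sketch in Python. The ~1361 W/m² solar constant is standard physics; the ~20% panel efficiency and the 8 GW demand figure (matching the radiator estimate below) are illustrative assumptions of mine, not numbers from any specific proposal.

# Rough solar array sizing for an orbital cluster (illustrative assumptions).
SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight intensity above the atmosphere
EFFICIENCY = 0.20         # assumed panel efficiency, typical for space arrays
DEMAND_W = 8e9            # 8 GW, matching the waste-heat figure used below

area_m2 = DEMAND_W / (SOLAR_CONSTANT * EFFICIENCY)
print(f"~{area_m2 / 1e6:.0f} km^2 of panels")  # ~29 km^2, before eclipse margins

And that is before oversizing for eclipse ride-through, degradation, and pointing losses.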


Then there’s the business reality: GPUs refresh every ~2–3 years. On Earth you swap parts daily; in space, upgrades and repairs are “space mission” problems. Add radiation-induced errors (cosmic-ray bit flips), bandwidth/latency limits versus fiber, launch mass costs, and debris risk, and the economics flip negative very obviously and very fast.


Space compute can make sense only for niche on-orbit processing (e.g., satellites that must process sensor data before downlink). For general data-center AI: Earth wins on power, cooling, maintenance, and cost.


Jon Peddie Research projects an installed base reaching ~3,008 million GPUs. Even if you pretend each GPU averages only 0.2–0.5 kg, is already engineered for space, and you ignore everything else, the *launch-only* cost is:


3.0B × (0.2–0.5 kg) × $2,720/kg ≈ $1.6T–$4.1T launch-only
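A quick Python check of that arithmetic (the $2,720/kg figure appears to correspond to the oft-quoted Falcon 9 list price per kilogram to LEO; the per-GPU masses are the deliberately generous assumptions above):

# Launch-only cost for ~3.0 billion GPUs at $2,720/kg to LEO.
GPUS = 3.0e9            # ~3,008 million installed GPUs (Jon Peddie Research)
COST_PER_KG = 2720.0    # USD per kg to low Earth orbit

for mass_kg in (0.2, 0.5):   # optimistic per-GPU packaged mass
    total = GPUS * mass_kg * COST_PER_KG
    print(f"{mass_kg} kg/GPU -> ${total / 1e12:.1f}T launch-only")
# Prints $1.6T and $4.1T, matching the range above.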


At a mid value (≈250 W/m²), 8 GW of compute heat needs about 32 km² of radiators. Using 5 kg/m² implies ~160,000 tons of radiators alone.
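The same estimate in Python (the 250 W/m² effective rejection and 5 kg/m² areal density are the mid-range assumptions stated above):

# Radiator area and mass for 8 GW of waste heat.
HEAT_W = 8e9          # 8 GW of compute heat to reject
FLUX_W_M2 = 250.0     # effective radiator rejection, mid value
KG_PER_M2 = 5.0       # assumed areal density of deployable radiators

area_m2 = HEAT_W / FLUX_W_M2
mass_t = area_m2 * KG_PER_M2 / 1000.0
print(f"~{area_m2 / 1e6:.0f} km^2, ~{mass_t:,.0f} t")  # ~32 km^2, ~160,000 t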


…and that’s before manufacturing cost, failed launches, power, structure, comms, R&D, and the fact that you can’t practically operate billions of separate “space PCs” (a cost multiplier, even on the most *extremely unrealistically* conservative accounting, of *at least* 10x: $16T–$41T, and plausibly far more).


So when you do these basic back-of-the-envelope calculations, it becomes obvious that the “climate change,” “it’s more efficient in space,” “do it to conserve the water,” and “for the future of humanity” arguments are pure marketing garbage designed to conceal the true purpose of these projects. That purpose is probably that they want to conceal data from *you*, or need to collect data that only exists in space, and their project is more important to them than building things you actually need.


Operationally it’s also absurd: there are on the order of ~14,300 active satellites today. “3 billion space PCs” would be ~200,000× more spacecraft than exist, with impossible spectrum/ground-station, tracking, and debris/collision constraints. (For perspective, Starlink is ~9,400 satellites and already dominates the active satellite population.)
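The spacecraft-count comparison is easy to verify:

# "3 billion space PCs" vs. today's active satellite population.
ACTIVE_SATELLITES = 14_300
SPACE_PCS = 3.0e9
print(f"~{SPACE_PCS / ACTIVE_SATELLITES:,.0f}x")  # ~209,790x, i.e. ~200,000x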


And with all those rocket launches... you have the "environmental impact" of that as well. Why fund things people actually need on Earth when you have all the monopoly money?


In either case, they are just lying to you and insulting your intelligence.

  • Writer: Trevor Alexander Nestor
  • 4 days ago
  • 3 min read

The most prominent academics, governments, and corporations are investigating something you probably never considered.


[Presentation for APS/TSC Conferences 2026]


1.) The brain is, by some estimates, ~225,000x more efficient at computation, while billions of dollars are spent on scaling up data centers at enormous cost, with proposals for dedicated nuclear power plants and even the possibility of sending data centers into space (a rough power sketch follows this list).


2.) As AI reaches a point of diminishing returns in scaling, collective human intelligence, arising partly from interbrain synchrony, has been demonstrated in groups in ways that cannot be replicated in current AI architectures. It scales nonlinearly with group size, and it is diminished and tapped out when the individuals in a group are all independently interfacing with AI tools.


3.) By some estimates, perceptual binding may be an NP-hard problem (Tsotsos). Since the brain performs perceptual binding, and backpropagation does not plausibly map onto brain tissue, the same mechanism that achieves binding could conceivably undermine post-quantum cryptography.
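As the rough power sketch promised above: taking the textbook ~20 W figure for the brain’s power draw (my assumption here) and the cited ~225,000x efficiency ratio at face value, the equivalent silicon draw works out to a few megawatts, a small data center per brain-equivalent of work:

# Implied power draw if silicon is ~225,000x less efficient than the brain.
BRAIN_WATTS = 20.0        # textbook estimate of the brain's power consumption
EFFICIENCY_GAP = 225_000  # the efficiency ratio cited in point 1

print(f"{BRAIN_WATTS * EFFICIENCY_GAP / 1e6:.1f} MW")  # 4.5 MW per brain-equivalent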


Understanding whether and how the brain implements anything like backpropagation is vital for security, for energy policy, and even for better framing the ethics of assigning agency or consciousness to these AI tools.


Why should we believe that the intention behind these space-based data centers is to conserve energy, advance science, or even help us connect with one another, when all the evidence suggests the exact opposite?



I just had a conversation with the CEO of Starcloud, another startup-based initiative to ostensibly bypass the criticisms of AI and data centers by sending them to space:



What concerns me is not only that this project is implausible from multiple angles (heat dissipation alone would be a nightmare), but that sending data centers into space provides yet another way for organizations to evade accountability for how they use our data, because very few laws extend beyond the Earth.


Beyond the criticisms about cost, plausibility, or complexity, there is little evidence that continuing to divert resources toward scaling up these systems will have any real use at all. In fact, AIs are already reaching scaling limits, and sociologists like Joseph Tainter have pointed out that scaling up technology (cybernetic control loops) to stabilize institutions reaches a point of diminishing returns:


[The Collapse of Complex Societies]


We are uncritically and unskeptically taking their word for it that the end result of this venture will be a utopian future with UBI, sacrificing everything along the way, in spite of all evidence pointing to a pattern of opposite intentions: the stripping of social safety nets and the diversion of investment away from things people need, like housing, healthcare, childcare, and manufacturing. We will allegedly all have “universal high income” with nothing we actually need left to spend it on.


We are now betting 80% of all stock market gains on this turning a profit, when the future customers will be more concerned with everything they have neglected to think about along the way. The CEO asked me what would change my mind, and my response was that I would need one piece of evidence:



Now I am reading that there are attempts to replace scientists with AI. How is it possible to claim anything about nature when you are running a simulation with assumptions you coded into a model, and then rule out the possibility that nature itself can falsify you?



  • Writer: Trevor Alexander Nestor
  • 4 days ago
  • 3 min read

Collective Intelligence is roughly defined as social capital stored in complex trophic social networks. As social networks scale linearly, collective intelligence scales nonlinearly.
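One simple way to see that nonlinearity (my illustration, not a model from this post): if collective intelligence tracks the number of possible pairwise links in a group, the links grow quadratically, n*(n-1)/2, while the group itself grows linearly:

# Possible pairwise links grow quadratically with group size n.
for n in (5, 50, 500):
    links = n * (n - 1) // 2
    print(f"n = {n:3d} -> {links:,} possible links")
# n = 5 -> 10, n = 50 -> 1,225, n = 500 -> 124,750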


This is an asset vital for stable relationship and family formation and for propping up complex economies, but it can be bad for institutions, because unconstrained growth risks undermining them. Self-organized criticality can challenge institutionalized power. This collective intelligence comes partly from close social proximity and partly from interbrain synchrony. Beyond the Dunbar limit (the maximum number of connections you can consciously maintain), there are layers of complex social networks which prop you up.


So what is the solution? You tap into the well of social capital and frack out some of people’s connectivity, reducing their networks by alienating them through a mass-surveillance cybernetic feedback-and-control loop, so that everybody interacts in an isolated way. Tools can be used to maintain workers’ productivity on their complex tasks while scrambling the networks so that they can’t be used to organize.


The issue with this strategy is that in order to contain workers from organizing (“alignment”), you must scale the AI systems nonlinearly, so increases in compute or energy each reach a point of diminishing returns. This inflection point is what is being referred to as the “technological singularity.” It isn’t the point at which AI becomes smarter than people; it’s the point at which, as a control loop, AI is no longer effective at stabilizing brittle institutions (a K-shaped economy) and there is a risk of information cascades and emergence, or spectral collapse: information stored in social networks reaches a critical tipping point where it collapses in the form of sudden collective actions not mediated through institutional control loops. This is their fear.
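A toy illustration of that diminishing-returns claim (my sketch, not a formal model): if capability grows roughly logarithmically with compute, as empirical scaling laws suggest, then each 10x of compute (and energy) buys only a constant increment of capability, so the marginal return keeps falling:

# Toy diminishing returns: logarithmic capability vs. exponential compute cost.
import math

for compute in (1e0, 1e1, 1e2, 1e3):
    capability = math.log10(compute) + 1   # arbitrary units
    print(f"compute {compute:8,.0f} -> capability {capability:.0f}")
# Each 10x in compute adds the same +1 in capability.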


[The Collapse of Complex Societies]

[Tipping Points in Complex Systems]

[RAND Study Describes Brittle Institutional Stability in the US] https://lnkd.in/g6H6S8gC

[Spectral Theory of Value]

[Spectral Collapse]

[Links between entropy, complexity, and the technological singularity]

[Proximity to explosive synchronization determines network collapse and recovery trajectories in neural and economic crises]

[Self-Organized Criticality]

[Tipping Point for Advanced “Knowledge Economy”]

[On Cybernetics]

[Nonlinear Control in Econophysics]

[Spectral Analysis of Rich Network Topology in Social Networks]

[Musk's Comments on Bureaucracy and Entropy]

[Trump's Comment on Tipping Points]
In our K-shaped economy, institutions have become more brittle, as they are not buffered as resiliently by local social support systems, and they are particularly susceptible to information cascades, avalanches of collectively organized behavior. As AI serves as a tool of social compression, this can create a spectral-collapse catastrophe of social and economic institutions.


As the local connectivity of agents, in the form of community and social support systems, disintegrates, institutions become more brittle (less dynamic and resilient) and more susceptible to uncertainty and unpredictable events or behaviors. AI systems brute-force backpropagation from the top down, while brains in synchrony facilitate efficient feedforward and backward flows of information. This implements alignment efficiently: the brain is ~225,000x more efficient than AI supercompute, whereas AI models saturate.


The technological singularity is not a point at which AI becomes conscious or outsmarts people. It is the point at which our collective intelligence, our collective consciousness, supersedes that of the AI gatekeepers who have been trying to contain it: when the value of lateral information transfer between people supersedes the value of everybody interfacing, alienated and independently, with these AI tools.

My Story

Get to Know Me

I have been on many strange adventures traveling off-grid around the world, which has contributed to my understanding of the universe and my dedication to science advocacy, housing affordability, academic integrity, and education funding. From witnessing Occupy Cal amid $500 million in budget cuts to the UC system, to corporate and government corruption and academic gatekeeping, I decided to achieve background independence and live in a trailer “tiny home” I built so that I could pursue my endeavors.
