Why Understanding Consciousness is Vital for Civilization
- Trevor Alexander Nestor
- 4 days ago

The most prominent academics, governments, and corporations are investigating something you probably never considered.
[Presentation for APS/TSC Conferences 2026]
1.) The brain is, by some estimates, 225,000x more energy-efficient at computation than current hardware, while billions of dollars are spent scaling up data centers at enormous cost - with proposals for dedicated nuclear power plants and even for sending data centers into space.
2.) As AI reaches diminishing returns from scaling, collective human intelligence - driven in part by interbrain synchrony - has been demonstrated in groups in ways that current AI architectures cannot replicate. It scales nonlinearly with group size, and it is diminished, and ultimately tapped out, when individuals in a group each interface independently with AI tools.
3.) By some estimates, perceptual binding may be an NP-hard problem (Tsotsos). Because the brain nevertheless performs perceptual binding, and backpropagation does not plausibly map onto brain tissue, the brain's mechanism for perceptual binding could conceivably undermine post-quantum cryptography.
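The efficiency multiplier in point 1 comes from an ops-per-joule comparison. The sketch below shows how such a figure is derived; all of the numbers in it are illustrative assumptions (a commonly cited ~20 W brain power budget, one mid-range guess at synaptic event throughput, and rough figures for a current datacenter GPU), not measurements, and the headline multiplier swings by orders of magnitude depending on which estimates you plug in.

```python
# Rough ops-per-joule comparison between a brain and a datacenter GPU.
# Every constant here is an assumption chosen for illustration only.

BRAIN_WATTS = 20.0        # commonly cited resting power of the human brain
BRAIN_OPS_PER_S = 1e17    # one mid-range estimate of synaptic events/s (assumption)

GPU_WATTS = 700.0         # board power of a current datacenter GPU (assumption)
GPU_OPS_PER_S = 1e15      # ~peta-op/s dense low-precision throughput (assumption)

brain_ops_per_joule = BRAIN_OPS_PER_S / BRAIN_WATTS   # 5e15 ops/J
gpu_ops_per_joule = GPU_OPS_PER_S / GPU_WATTS         # ~1.4e12 ops/J

advantage = brain_ops_per_joule / gpu_ops_per_joule
print(f"brain advantage: ~{advantage:,.0f}x ops per joule")  # ~3,500x with these inputs
```

With these particular inputs the ratio lands around 3,500x; pushing the synaptic-throughput estimate higher is what produces headline figures like 225,000x, which is exactly why "by some estimates" is doing a lot of work in such claims.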
Understanding whether and how the brain implements something like backpropagation is vital for security, for energy policy, and even for framing the ethics of assigning agency or consciousness to these AI tools.
Why should we believe that the intention behind these space-based data centers is to conserve energy, advance science, or even help us connect with one another, when all the evidence suggests the exact opposite?
I just had a conversation with the CEO of Starcloud - another startup-based initiative that aims to ostensibly bypass the criticisms of AI and data centers by sending them into space:
What concerns me is not only that the project is implausible from multiple angles (heat dissipation alone would be a nightmare), but that sending data centers into space gives organizations yet another way to evade accountability for how they use our data, because very few laws extend beyond Earth.
Beyond the criticisms of cost, plausibility, and complexity, there is little evidence that diverting further resources into scaling up these systems will yield any real benefit at all. In fact, AI is already hitting scaling limits, and anthropologists like Joseph Tainter have pointed out that scaling up technology (cybernetic control loops) to stabilize institutions reaches a point of diminishing returns:
[The Collapse of Complex Societies]
We are uncritically and unskeptically taking their word for it that the end result of this venture will be a utopian future with UBI - sacrificing everything along the way - despite all evidence pointing to the opposite intentions: the stripping of social safety nets and the diversion of investment away from what people actually need, like housing, healthcare, childcare, and manufacturing. We will allegedly all have "universal high income" with nothing left to spend it on that we actually need.
We are now betting 80% of all stock market gains on this turning a profit, when future customers will be preoccupied with everything that was neglected along the way. The CEO asked me what would change my mind, and my response was that I would need one piece of evidence:
Now I am reading that there are attempts to replace scientists with AI. How can you claim anything about nature when you are running a simulation built on assumptions you coded into the model, while ruling out the possibility that nature itself can falsify you?