AI is Not Conscious and the So-Called "Technological Singularity" is Us
- Trevor Alexander Nestor
- Jun 19
- 4 min read
Updated: Jul 7
I have recently been drafting some papers on the idea of AGSI (artificial general superintelligence) as well as the so-called technological singularity described by Sam Altman, and in my view, most people I have seen working on the subject are approaching it from the wrong angle.
Keep in mind that this is a preprint, so while the main ideas outlined here are likely to hold up during peer review, small discrepancies may still be resolved.
Most commentary around the “technological singularity” frames it as an inevitable, limitless uptick in raw computational power that will enslave us all, or as an exponential curve that finally crosses some mystical threshold of general intelligence. Sam Altman’s recent formulations, for instance, focus squarely on algorithmic scale and model size. But if we step back and treat Artificial General Superintelligence (AGSI) not simply as bigger neural nets (which only model the upper layers of how human agents process information) but as an institutional instrument, a very different picture emerges.
AI is just a tool of surveillance, information control, and plausible deniability, with the added benefit of helping people search through the information they are allowed to see and synthesize plausible generated content that sounds correct. What things mean is ultimately socially constructed and up to us; the AI is supposed to reflect that meaning, not dictate it, since we are the interpreters in the loop.
The way I see Artificial General Superintelligence (AGSI) in late-stage societies is as a surveillance and information control loop that is appropriated by central elites or central planners to maintain institutional stability. As entropy accrues in social and economic institutions over time (as described by Luhmann's systems theory) and requires ever more precise instruments to manage, I argue a thermodynamic limit is eventually reached at which lateral information sharing between agents supersedes the value created by further scaling up AI infrastructure - a point of diminishing returns (the so-called "technological singularity").

The challenge is that at this catastrophe point, the possibility of information cascades threatens institutional stability. When information transfer is deferred to a central AI (recent research from Microsoft indicates, for example, that employees are using it in place of lateral information transfer with other employees, and the same applies to any other group of human agents), alienation between agents increases until a tipping point is reached at which value creation from scaling up AI hits diminishing returns - a hypothesis in line with the diminishing returns on complexity described by sociologists like Joseph Tainter. As a surveillance and information control loop, AI also requires feedback from agents in the loop to interpret its outputs.
Central elites or planners will appropriate these systems to maintain the coherence of sprawling bureaucracies, counteracting organizational entropy and the cognitive limits of individual human agents (the Dunbar limit). As each layer of regulation, audit, and reporting compounds, institutions demand ever more precise feedback mechanisms to maintain stability. Yet this drive toward precision runs into thermodynamic and economic limits. Past a certain point, “buying” incremental control with more compute (we are now seeing plans for trillions of dollars of investment in nuclear-powered data centers while most Americans live paycheck-to-paycheck and socioeconomic anxiety is a leading reported reason for falling fertility rates) yields vanishing returns: the true singularity is not a technological leap forward but a turning point where lateral, human-to-human information exchange becomes more valuable than additional centralized processing.
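To make the crossover claim concrete, here is a toy numerical sketch of my own (not a model from the preprint): assume, purely for illustration, that the value of centralized compute grows logarithmically while the value of lateral exchange grows with the number of possible links between agents. The functions and constants are arbitrary stand-ins chosen only to show that such a crossover point must eventually appear.

```python
# Toy sketch (illustrative assumptions only, not a model from the preprint):
# value from centralized compute grows logarithmically (diminishing returns),
# value from lateral exchange grows with the number of possible agent-to-agent links.
import math

def centralized_value(compute_units: float, a: float = 10.0) -> float:
    """Hypothetical value of scaling centralized AI: logarithmic, i.e. diminishing returns."""
    return a * math.log1p(compute_units)

def lateral_value(agents: int, b: float = 0.02) -> float:
    """Hypothetical value of lateral exchange: proportional to possible links n(n-1)/2."""
    return b * agents * (agents - 1) / 2

# Sweep a coupled growth path where compute and connected agents both increase.
for step in range(0, 101, 10):
    compute = 1 + 10 * step            # arbitrary compute growth schedule
    agents = 5 + step                  # arbitrary growth in connected agents
    c, l = centralized_value(compute), lateral_value(agents)
    marker = "<- lateral exchange now dominates" if l > c else ""
    print(f"step={step:3d}  centralized={c:7.2f}  lateral={l:7.2f}  {marker}")
```

The specific numbers are meaningless; the point is only that any diminishing-returns curve is eventually overtaken by a curve that scales with network connectivity.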
According to Niklas Luhmann in Social Systems (1995, p. 2), the fundamental operation of any social system is the reduction of complexity. Without reducing the complexity of the environment, a system cannot make decisions.
Noncommutative geometry has been widely used as a mathematical framework for modeling nested hierarchies such as complex adaptive systems or bureaucracies. Consequently, in the field of complexity economics, noncommutative geometry and spectral triples have been appropriated to formulate a spectral theory of value, attributing value to information flows in social networks (social capital) that is diminished by centralization. Information is stored collectively across agents within social networks and scales exponentially with their connectivity; as a mirror of collective communications, AI needs to feed on it (for example, the usefulness of understanding the word "car" depends on a collective interpretation of the object - if the meaning were discernible only to an individual, the AI would seem to produce nonsense). Under Luhmann's systems theory, agents tapping into this inter-agent connectivity facilitate flows of information between social and economic institutions, where their socioeconomic status can be measured by computational complexity, and where the American dream can be seen as a spectral energy gap of exponential complexity, by design - an infinitely deferred promise that keeps the economic engine afloat. In this view, AI keeps human agents trapped in an infinite staircase, a final surveillance and control loop in our late-stage society.
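For readers unfamiliar with the terminology, the standard objects I am borrowing are sketched below; the definitions are textbook noncommutative geometry, while the mapping onto social networks and value is the speculative part of my framing.

```latex
% A spectral triple (A, H, D), the basic object of noncommutative geometry:
%   A - an involutive algebra of "observables", represented on H
%   H - a Hilbert space carrying that representation
%   D - a self-adjoint operator with compact resolvent (a generalized Dirac operator)
(\mathcal{A}, \mathcal{H}, D), \qquad [D, a] \ \text{bounded for all } a \in \mathcal{A}

% The "spectral (energy) gap" alluded to above is the separation between the
% lowest eigenvalues of D^2; crossing it requires a corresponding input of energy.
\Delta \;=\; \lambda_1(D^2) - \lambda_0(D^2)
```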
Pushing the metaphor further, we can map this framework onto the Orchestrated Objective Reduction (Orch-OR) theory of consciousness. Orch-OR posits that microtubule quantum processes solve the “binding problem,” unifying disparate sensory and cognitive features into a coherent percept. If we view societal information exchange as an analogous binding process - merging individual insights into shared meaning - then consciousness itself becomes a model for robust, distributed intelligence.
When applied to the brain, noncommutative geometry allows for a model that can accommodate the physics implicated by Orch-OR theory - one controversial theory of consciousness that accommodates the subtle difference between intelligence and consciousness. In one of my papers I discuss how this might be used to attack lattice cryptography, relating the shortest vector problem to the binding problem and casting consciousness as an NP-hard problem. This framing also accounts for why the brain is vastly more energy efficient than current AI systems, how information processing in the brain differs fundamentally, and how it allows for inter-agent collective intelligence that scales much faster than current AI infrastructure can model.
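For context, the shortest vector problem (SVP) I am relating to the binding problem is stated below; the problem statement and its hardness are standard results in lattice cryptography, while the link to consciousness is my own speculative mapping.

```latex
% Shortest Vector Problem (SVP): given a basis B of a lattice
% L(B) = { Bx : x in Z^n }, find the length of the shortest nonzero lattice vector.
\lambda_1(\mathcal{L}(B)) \;=\; \min_{x \in \mathbb{Z}^n \setminus \{0\}} \lVert Bx \rVert

% Exact SVP is NP-hard under randomized reductions (Ajtai, 1998), and even
% approximating it to within small factors is believed to be intractable,
% which is why lattice problems underpin post-quantum cryptography.
```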