Microsoft's Study on Jobs AI Can Replace is a White Paper Disguised as Research

  • Writer: Trevor Alexander Nestor
  • 5 hours ago
  • 5 min read
Microsoft's new "research" paper reads more like a motivated wish list or white paper with a thin guise of academic objectivity than a reputable study.
We no longer need people who investigate powerful interests because, trust me bro, the bots will handle that. (Hint: the bots are controlled by the powerful interests to manufacture public perceptions.)

I've taken a look at Microsoft's new research study, which ranks the jobs most applicable for replacement with AI, and I have to say I'm disappointed that folks are so easily manipulated by little charts and graphs into accepting what they are told (really, it shouldn't be much of a surprise, since one of Bill Gates' favorite books listed on GatesNotes was "How to Lie with Statistics"). Nor should it be all that surprising, given that Microsoft researchers previously made wrongful claims about their Majorana One quantum computing chip.



The way this "research" works is that executives and central-planning elites would like to replace certain jobs with their AI tools, so, through motivated reasoning, they arrive at their desired conclusions.


In fact, one might argue that all forms of intelligence must be grounded in some sort of motivated reasoning to be comprehensible (there have been philosophical discussions about this regarding the is/ought problem):



We certainly can replace those jobs with AI tools; there is no reason we can't. But whether they are applicable for replacement, or ought to be replaced, is a value judgment - not a statement of objective or even empirical reality (an NP-hard problem). The main problem really worth considering here is how power is distributed in interpreting what we understand to be objective or valuable - that power should rest with the human interpreters in the loop to maintain value alignment.


Even if you consider only the argument from authority, you have Nobel laureates in physics like Dr. Roger Penrose, the father of modern linguistics Noam Chomsky, and Fields Medalist Terence Tao, all of whom soundly reject the idea.





Two examples I can see just from a cursory look are the claims that AI can replace mathematicians and proofreaders. In this study, apparently, mathematics is applicable toward replacement with LLM tools, even though the insights mathematicians glean often take the form of concisely stated solutions that would nonetheless take an intractable amount of time to find by brute force on classical computers.
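
To make the point concrete, here is a minimal sketch (my own illustration, not anything from Microsoft's study) of that gap for an NP problem like subset sum: verifying a claimed solution is fast, while brute-force search over all subsets blows up exponentially.

```python
# Toy illustration of the verification-vs-search gap for subset sum:
# checking a claimed answer is cheap, but exhaustive search tries up
# to 2**n subsets.
from itertools import combinations

def verify(nums, target, certificate):
    # Verifying a proposed solution takes roughly linear time.
    remaining = list(nums)
    for x in certificate:
        if x not in remaining:
            return False
        remaining.remove(x)
    return sum(certificate) == target

def brute_force(nums, target):
    # Exhaustive search examines up to 2**len(nums) subsets.
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(brute_force(nums, 9))     # (4, 5), found only by searching
print(verify(nums, 9, (4, 5)))  # True, checked in a single pass
```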

Mathematicians apparently can be replaced by LLMs according to the study.
Just trust me bro, the same AIs that generate the garbage can proofread their own garbage; no need to think critically, or at least not too critically, about it. We have reached AGI after all, right?

The main fallacy I see in talk of "artificial general superintelligence," the "technological singularity," and the idea that AI models have "emergent consciousness" is that these models are just reflections of us and of what we have digitized. I've already written articles on why these models do not exhibit consciousness:



While human behaviors are nondeterministic and nonlinear, these models are simply mass surveillance and information control tools - much like Google Search or autocomplete - and they are fundamentally based on linear algebra.
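
To see what "based on linear algebra" means in practice, here is a toy sketch (my own illustration, not any vendor's actual code) of a single self-attention head: strip away the scale, and it is matrix products plus a softmax normalization.

```python
# One self-attention head reduced to its core: deterministic matrix
# arithmetic over token embeddings.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # linear projections
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise similarity matrix
    return softmax(scores) @ V                # weighted average of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # 4 tokens, 8-dim embeddings
W = [rng.normal(size=(8, 8)) for _ in range(3)]
print(attention_head(X, *W).shape)            # (4, 8)
```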


Sure, the more data you surveil, the more knowledge you have to work with and search. But ultimately, the insights at the fringes and edge cases that move our civilization forward are often unpredictable, and they only carry meaning when tied to the fundamental goals and primary motivations that orient us. At the end of the day, the data also has to be both created and interpreted with humans in the loop to remain perceptible and comprehensible, and to stay aligned with our collective and individual needs and desires. Research has already discovered scaling limits in these models, and has shown that models repeatedly trained on their own outputs without human interpreters in the loop begin to produce garbled nonsense, like a copy machine repeatedly copying the same page.
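
The copy-machine analogy is easy to simulate. Here is a toy sketch (an assumed setup for illustration, not the cited research) that repeatedly refits a Gaussian to its own samples; without fresh outside data, the fitted spread tends to collapse across generations.

```python
# Cartoon of model collapse: each generation "trains" only on samples
# drawn from the previous generation's fitted distribution.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.0, 1.0
for generation in range(1, 31):
    samples = rng.normal(mu, sigma, size=50)   # outputs of the last model
    mu, sigma = samples.mean(), samples.std()  # refit to those outputs
    if generation % 10 == 0:
        print(f"gen {generation}: mu={mu:+.3f}, sigma={sigma:.3f}")
# sigma tends to shrink across generations: diversity is lost when no
# outside (human-generated) data enters the loop.
```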


These scaling limits are not limits of the AI per se; the AIs are simply big mirrors - a collective reflection. In essence, the technological singularity is just us, and whether the AI is conscious just depends on whether we wake up to how we are being manipulated. As our society becomes more complex, it eventually reaches a saturation point beyond which folks are tapped out, because not enough attention has been paid to material and social conditions for most people and too much entropy has accrued within social and economic systems to maintain them. Scholars like Joseph Tainter have even framed the fall of overextended empires like Rome in these terms. At some point, action must be taken.


By overselling the promises of AI and underdelivering, tech elites and central planners retain enough plausible deniability to get away with gaslighting and shortchanging the public. The AI is much like a new religion: just abstract away and defer your trust to the small band of elites controlling the infrastructure and the almighty black box, and it will tell you exactly what you need - with all the guardrails installed on it to keep you "safe" and "secure" from reality that might be "harmful" to you (though of course those who control or own the AI infrastructure don't have to deal with these guardrails).


I'm sure the chatbot AI is going to be great at performing sting operations on wealthy billionaires, at doing journalism like Snowden - or at discovering the solution to the Riemann hypothesis (I can tell you that when I've asked ChatGPT to do so, it just claims it hasn't been solved yet), or at making breakthroughs in physics (please note the sarcasm).


GPT-5 was worse because they first wanted to get you on board with ChatGPT, so that once you were sold on the vision they could surveil you, later install more guardrails, and maintain their positions as the clergy of their new religion.

I hate to say it, and it's taken me over a decade and a half to reluctantly come to this conclusion, but I think the solution is that we need to start bringing ridicule (and possibly some soft bullying) back toward these sorts of people and call them out in the most blunt and unapologetic terms. I say that as a nerd myself who has worked at many of these top tech companies in high-level roles, earned a 4.66 GPA, attended the world's top public university, worked on rocket engine controllers with a TS/SCI w/FSP, worked with the NSA, has been pursuing an MBA, and has published peer-reviewed work cited by leading scientists. The tone policing has got to stop.


If we don't, I'm not sure that anything we value will remain. Beyond the scaling limits is the realm of human agency.