Logical structuring of text caches reasoning
More important than *observed* intelligence in LLMs is their sociopathy and the double standard on textual interfaces. Impatient managers and "non-technical" people have consistently asked to complicate interfaces: text-based tools (think of inodes, pipelines, query languages) that make things like data normalisation trivial get replaced with silly ribbons, buttons, and hidden, unindexed, unsearchable documentation. LLMs skip the GUI maze entirely. Hence you see LLMs consistently outperforming humans in tasks that involve knowledge work. Paradoxically, this results in humans changing the interfaces of their software to work better with AI "agents", even though this is merely a simplification of the interface (MCP is just a protocol for structured text output that is picked up by a parser).
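The "structured text plus a parser" point can be made concrete with a minimal sketch. MCP is built on JSON-RPC 2.0; the payload below is illustrative only (the field contents are made up, not a verbatim MCP message), but the mechanics are exactly this: text in, `json.loads`, dictionary out.

```python
import json

# A hypothetical tool-call message in the JSON-RPC 2.0 shape MCP uses.
# The method and params here are illustrative, not a verbatim MCP payload.
raw = ('{"jsonrpc": "2.0", "id": 1, "method": "tools/call", '
       '"params": {"name": "normalise", "arguments": {"table": "users"}}}')

msg = json.loads(raw)  # the entire "protocol layer": parsing structured text

print(msg["method"])             # which operation the agent requested
print(msg["params"]["name"])     # which tool it wants to invoke
```

No ribbons, no buttons: the agent emits text in an agreed shape and the host parses it, which is the same contract pipelines and query languages have offered for decades.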
The fundamental issues of knowledge work were already solved through the use of backlinks. A bottom-up organisation of references results in Folgezettel, where the text itself freezes the reasoning of the note taker in follow-up notes:
https://zettelkasten.de/folgezettel/
Even when "dumb", LLMs in their current state are enough to leverage this power. That is why this article reads as sad cope once you further contextualise it against humans: https://www.mindprison.cc/p/no-progress-toward-agi-llm-braindead-unreliable