In AI, intelligence can be simulated but not instantiated. That is the conclusion of "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness", a very interesting paper. The paper frames the problem logically in terms of physical systems and their states, mapping physical states to abstract symbolic states and tracing the causality between them. Here are some pertinent definitions from the paper (a short code sketch formalizing them follows the list):
1. The Physical States (𝑝):
These are the symbols (the vehicle). They are objective physical entities (e.g., voltage gradients), possessing zero intrinsic semantic content.
2. The Abstract States (𝐴):
These are concepts (the content). As established, these are grounded physiological states existing exclusively within the mapmaker who holds the computation’s meaning.
3. The Mapping Function (𝑓):
This is the alphabetization. It represents the assigned association held in the mapmaker's mind, actively bridging the machine's blind physics (𝑝) to the mapmaker's grounded concepts (𝐴).
4. Simulation:
The syntactic manipulation of physical vehicles (𝑝) to track the abstract relationship between concepts (𝐴).
5. Instantiation:
The replication of the intrinsic, constitutive dynamics (𝑃) of the process itself.
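To make the framing concrete, here is a minimal sketch in Python (my own illustration, not code from the paper; the voltages, the 2.5 V threshold, and the labels are all hypothetical):

```python
# A minimal sketch (not from the paper) of the p/A/f triad: the machine's
# dynamics operate only on physical states (p); the mapping f to abstract
# states (A) exists solely in the mapmaker's interpretation.

# Physical states (p): raw voltages with zero intrinsic semantic content.
p_states = [0.1, 4.9, 5.0, 0.2]  # hypothetical voltage readings

def f(voltage):
    """Mapping function (f): the mapmaker's assigned association p -> A.
    The 2.5 V threshold and the labels are the mapmaker's choices."""
    return "TRUE" if voltage > 2.5 else "FALSE"

# Abstract states (A): concepts held by the mapmaker, not by the machine.
A_states = [f(p) for p in p_states]
print(A_states)  # ['FALSE', 'TRUE', 'TRUE', 'FALSE']

# Swapping the labels changes A but leaves p, and the machine's physical
# evolution, completely untouched: the interpretation is external.
```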
Instead of a review of the paper, I will provide some pertinent quotes from it.
"The limits of physical simulation elsewhere in biology make the point explicit. A GPU that simulates photosynthesis may accurately model the abstract transformation from sunlight, water, and carbon dioxide (𝐴) to oxygen and glucose (𝐴′), but it will not synthesize a single molecule of glucose or release oxygen. So while perfectly simulating the process, it lacks the causal capacity
to perform the underlying biochemical work. To suggest that simulating the “software” of the brain avoids this physical constraint introduces a category error (Searle,1980). It conflates the algorithmic description of a process with the intrinsic physics required to instantiate it"
"In a digital simulation, the causal chain is driven entirely by the vehicle (𝑝). The logic gate does not switch because it ‘hurts’ (content causality driven by 𝐴). Instead, it switches because the voltage crosses a defined physical threshold (vehicle causality driven by 𝑝). The physical state of
the system alone determines its evolution. The semantic content of the symbol (𝐴) plays no causal role, since the machine would perform the same physical operations even if the symbol referred to nothing at all. Assuming otherwise would mean to fall victim to the abstraction fallacy."
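The quote's point about vehicle causality is easy to make concrete. In the sketch below (again my illustration; the gate, threshold, and semantic labels are assumptions, not the paper's), the same physical operation runs identically under any interpretation, or under none:

```python
# Vehicle causality: the gate switches because the voltage crosses a
# physical threshold, never because of what the signal "means".

def logic_gate(voltage, threshold=2.5):
    """Output is determined by the physical state p alone."""
    return voltage >= threshold

signal = 3.3  # volts: the vehicle (p)

# Two incompatible interpretations (A) assigned by two different mapmakers:
meaning_a = "pain"
meaning_b = "stock price"

# The physical operation is identical under either reading, or under none:
print(logic_gate(signal))  # True, regardless of the symbol's content
```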
"Moving from concepts to symbols is not a step in abstraction. It is a lateral act of assignment where a mapmaker forcibly binds a physical token to a mental concept. It is precisely this unbridgeable lateral step that exposes the causality gap, permanently cutting off any intrinsic path leading back from the symbol to the original experience."
Instead of appealing to intuitions about what is absent, we examine how abstraction arises in the first place. If computation depends on a mapmaker who extracts invariants from experience and assigns symbols, then the dependency is built into the structure. Any computational map presupposes an experiencing agent who performs the alphabetization. Making the algorithm more complex does not undo this order of dependence. No increase in scale allows the map to generate the subject whose activity is required for computation to count as such at all.
### Sergio
We not only highlight the existence of the mapmaker's role (which in the article is also, in part, that of the 'active' observer), but go further and include it explicitly, definitively clarifying its role as the recognizer of its own meanings.
In the article, the active role is that of mapping computational symbols onto meanings, which nonetheless seem to be assumed as objective.
What it calls "continuous physics" is, for us, the concept, which cannot be expressed directly; one can only express the conditions for its verification. The type-G concept is the 'enliving' of a continuous interacting toward potentiality (which is not physical).
The Nyquist-Shannon theorem is about sampling: reconstructing a continuous signal from discrete measurements (a short numerical sketch follows this list). That's what happens in:
- Microphones capturing audio
- Cameras capturing video from the real world
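For concreteness, here is a quick numerical sketch of what the theorem governs (my addition; the 7 Hz signal and the two sample rates are arbitrary choices): a sine sampled above the Nyquist rate keeps its identity, while sampling below it folds the tone down to a false, aliased frequency.

```python
# Reconstruction is only guaranteed when sampling above twice the signal's
# highest frequency (the Nyquist rate).

f_signal = 7.0   # Hz: frequency of the underlying continuous sine
fs_good = 20.0   # Hz: above the Nyquist rate (2 * 7 = 14 Hz)
fs_bad = 10.0    # Hz: below the Nyquist rate, so the tone aliases

for fs in (fs_good, fs_bad):
    # The apparent frequency of a sampled sine folds back around fs / 2:
    alias = abs(f_signal - fs * round(f_signal / fs))
    print(f"sampled at {fs:4.1f} Hz -> apparent frequency {alias:.1f} Hz")

# sampled at 20.0 Hz -> apparent frequency 7.0 Hz   (faithful)
# sampled at 10.0 Hz -> apparent frequency 3.0 Hz   (aliased)
```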
But in video games, frames are generated, not sampled. There's no underlying continuous signal being measured. The GPU is constructing each frame from scratch — geometry, shaders, lighting calculations, etc.
So the analogy breaks down fundamentally. The question of "how many frames per second" in games is purely about:
- How fast the hardware can complete the rendering pipeline for each frame
- Perception thresholds: what frame rates the human visual system perceives as smooth motion (which is a psychophysics question, not an information theory one)
- Game loop timing: how game logic, physics, and input polling are scheduled (a minimal loop sketch follows this list)
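As referenced above, here is a minimal fixed-timestep game loop (an illustrative sketch, not any real engine's API; every function here is a placeholder): game logic advances at a fixed rate while each frame is constructed from the current state, not sampled from any continuous signal.

```python
import time

TICK = 1 / 120  # hypothetical fixed timestep for game logic (120 Hz)

def poll_input():
    return {}  # placeholder: read keyboard/controller state

def update(state, dt):
    state["t"] = state.get("t", 0.0) + dt  # placeholder: advance logic/physics
    return state

def render(state):
    pass  # placeholder: construct the frame (geometry, shaders, lighting)

state, accumulator, prev = {}, 0.0, time.perf_counter()
for _ in range(3):  # a few iterations for illustration
    now = time.perf_counter()
    accumulator += now - prev
    prev = now
    poll_input()
    while accumulator >= TICK:  # logic runs at a fixed rate...
        state = update(state, TICK)
        accumulator -= TICK
    render(state)  # ...while frames render as fast as the pipeline allows
```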
There's no theorem from information theory that governs this in a meaningful way. Shannon's work simply doesn't apply to the generation side.
The smoothness of high frame rates (60 vs. 120 vs. 240 fps) is better explained by human visual persistence, motion blur perception, and input latency, none of which are Shannon's domain.
### Sergio
What it says about video games is correct; the difference is that our 'code' is not implementing laws of physics to generate frames of a modeled reality.
It is generating a movement that allows the observer to recognize a continuity of meaning, corroborated by the possibility of making cuts and analyzing individual frames.