Non-player World

All images by WAGWA

10.13.2025

Y7

Definitions

Non-Player World is a speculative term for contemporary socio-technical configurations—environments, systems, phenomena—whose purposes or processes are no longer primarily calibrated for the human (or the player). These configurations are characterised by forms and functions that optimise away from the human, reconfigured at systemic and infrastructural levels through operations that are no longer easily recognisable as originating from—or being directed by—human intention. 

Non-Player World describes the ways virtual spaces reshape the physical, and how our experience of the physical is affected by the virtual—what we might term the devirtualisation of virtualisation. Culturally, it manifests in the uncanny sensation of the offline feeling inauthentic, in a deepfake realism that is the afterimage of online experiences, where the emergence of content as a category reshapes how we navigate and engage with both virtual and physical environments. Through digital media's acceleration in both volume and tempo—instantiated via contemporary platform architectures and their internal dynamics—we are experiencing the onset of semioblitz, which intimately ties Non-Player World to computation, to artificial intelligence, and to the ways they abstract and intervene in the physical world and our experience of it. 

Non-Player World is anthropogenic—of human origin—yet optimises according to its internal dynamics, reconfiguring foundational architectures, systemic hierarchies and value systems as it does so. Through the proliferation of neural media, meaning itself is being restructured via probabilistic operations that produce the appearance of coherence without underlying comprehension. This redistribution of language tokens into predictive, pattern-based operations by LLMs causes tectonic shifts in the semiotic fields through which humans communicate and categorise experience. 

Despite being anthropogenic, Non-Player World paradoxically feels less 'for us' than the natural world. Nature’s perceived indifference arises from having never been designed with humans in mind—on the contrary, humans are part of nature and have continually adapted to it by interpreting, extracting, and transforming in order to survive and flourish. Non-Player World is indifferent in a more radical way: it is designed by humans, yet operates according to priorities that drift from recognisably human purposes. 

Non-Player World emerges from assemblages of interfaces, mediations, automations, and synthetic substrates that produce interventions between player and environment. These interventions reorient spaces, processes and methods of communication towards algorithmic and quantitative legibility. High-frequency trading algorithms, subsea server farms, recommendation algorithms, LLMs, and the hyper-rationalised gridwork of automated fulfilment centres all produce the sense that across many scales we are shifting to non-human operational logics. What follows outlines examples of Non-Player World and traces the broader systemic tendencies that generate them. 

In Relation to Exocapitalism

During the development of this text, we have engaged closely with the work of Roberto Alonso Trillo and Marek Poliks, particularly their theory of Exocapitalism, their lecture on Non-Player Dynamics, Marek’s recent talk in London, and their appearance on the New Models podcast. There are many evident similarities between Non-Player World and Exocapitalism, and we are more than happy for this text to be understood as a response to their work in this area, and even at times simply as a rearticulation—a mapping of the same contemporary condition from a different perspective. Later in the text, we pull more directly from their ideas, but we would encourage anyone who finds value in Non-Player World to engage with Trillo and Poliks’s work directly, as we have found it to be incredibly enriching.  

Etymology

The term is a play on Non-Player Characters, or NPCs, which are the characters in video games* not controlled by human players. Their behaviours are often pre-scripted, modelled by code or AI to produce constrained, system-driven behaviours. Since 2018, the term has been used online as a pejorative for people perceived as lacking independent thought, often amplified in political contexts. 

NPCs contribute to game world ambience, storytelling and gameplay, and are governed by the same systems that generate the environment. An NPC appears as an individual entity—emulating agency through the execution of underlying system protocols. In this sense, NPCs are unit-level instantiations of the logic that structures the environment as a whole—the distinction lies only in scale. The NPC does not exist independently of the environment; it is continuous with it. What reads as meaning, intent or agency is only the player's hallucination, projected onto structural, procedural generations. Non-Player World is a theory operating on the understanding that only the environment exists, and that this environment no longer serves the players. Contemporary socio-technical spaces execute protocols optimised for algorithmic legibility rather than human comprehension, creating the uncanny experience of inhabiting worlds that simulate responsiveness while remaining fundamentally indifferent to human agency.  
* NPCs also exist in table-top games, although these are often controlled by the Game Master, and/or by chance through the use of dice or the random selection of cards. 
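
A toy sketch of this relation, in code of our own devising (no actual game engine is assumed): the NPC below holds no logic of its own; its apparent decisions are simply the environment's ruleset evaluated at one position and one moment.

```python
class World:
    """The only thing that actually exists: a global ruleset."""

    def rule(self, position: int, tick: int) -> str:
        return "wave" if (position + tick) % 2 == 0 else "wander"


class NPC:
    """A unit-level instantiation of World's logic -- no inner life."""

    def __init__(self, world: World, position: int):
        self.world = world
        self.position = position

    def act(self, tick: int) -> str:
        # The NPC's 'decision' is the environment's rule, sampled at
        # one location; any agency is hallucinated by the player.
        return self.world.rule(self.position, tick)


villager = NPC(World(), position=3)
print([villager.act(t) for t in range(4)])
# ['wander', 'wave', 'wander', 'wave'] -- behaviour without an agent
```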

Origins

A useful starting point for establishing a throughline toward contemporary examples of Non-Player World lies in the socio-technical spaces that emerged during the Industrial Revolution. The standardisation of production and components, as well as of timings and schedules, drove the optimisation and rationalisation of space for mass production and throughput. This drive towards standardisation enabled the mechanisation of tasks once undertaken by artisans, reducing the reliance on individual skill and variability in production. 

As machinery and assembly-line methods began to spread, work became increasingly repetitive and highly regulated, with tasks mapped onto the rhythms and logics of machines rather than through the worker’s judgment or discretion. As Marx observed:  

"…the worker becomes ever more dependent on a very one-sided, machine-like labour [...] he is depressed spiritually and physically to the condition of a machine [...] being [human] becomes an abstract activity."  - Karl Marx, Wage Labour and Capital, 1849

This theory of alienation arose from witnessing how industrial standardisation estranged workers from their labour and environment in favour of predictability and scale:  

"Owing to the extensive use of machinery [the worker] becomes an appendage of the machine, and it is only the most simple, most monotonous, and most easily acquired knack, that is required [of them]." - Friedrich Engels & Karl Marx, The Communist Manifesto, 1848

Where Marx identified the alienating effects of industrial standardisation as they first emerged, the early 20th century saw the same logics formalised and actively prescribed. Frederick Winslow Taylor’s ‘Principles of Scientific Management’* explicitly insisted on the rigorous standardisation of work methods to maximise efficiency, declaring that “in the past man has been first; in the future the system must be first.”  
* Frederick Winslow Taylor - Principles of Scientific Management, 1911.  

Exiting The Grid

More than a century later, this vision has been realised in advanced industrial infrastructures: intensified forms of systemic optimisation that have sharply reduced on-site workforces. For instance, at the core of online grocery specialist Ocado's customer fulfilment centres lies ‘The Hive’: a storage and retrieval system relying on thousands of robots moving across a three-dimensional grid, orchestrated by an AI traffic-control system. Similarly, Amazon’s fulfilment centres deploy a variety of machines to automate tasks like order picking and sorting, as well as transporting shelving units around their warehouses. A notable development in the past year is the launch of Proteus, an autonomous mobile robot which no longer relies upon grids of encoded floor markers like its predecessors, instead moving freely amongst human employees using “safety, perception, and navigation technology” developed by Amazon. This programme of automation has come with explicit upskilling tracks—such as the Mechatronics & Robotics Apprenticeship—that reorient workers toward oversight, maintenance, and exception-handling. In effect, humans are moved further “down the stack”: maintaining interfaces rather than interfacing directly.

Lights Out

The same pursuit of efficiency has also pushed manufacturing toward production lines that minimise the need for human presence altogether. Earlier this year, a video of an automated, darkened factory line in China went viral. It was an example of “lights-out” manufacturing—production lines that can run without workers and thus without lights. It is important to distinguish here between lights-out and fully autonomous: the former still requires on-site workers for supervision and exception handling, whereas the latter would be an entirely automated factory, something not yet fully realised. A useful distinction is to understand automation as a technical function, and Non-Player World as the socio-technical condition produced when such functions scale up to shape whole environments. 

There are notable precedents for such environments dating back to the early 2000s. One of the most cited examples is FANUC’s 40,000-square-foot robot factory in Japan, where production lines are capable of running autonomously for extended periods. Robots manufacture around 50 new units every 24 hours, continuing without human intervention for as long as 30 days at a time. As FANUC vice president Gary Zywiol remarked: “Not only is it lights-out, we turn off the air conditioning and heat too.” 

Although lights-out manufacturing has been technically feasible for decades, what has changed in recent years is the scale and sophistication of such systems. Increasingly, entire factories—not just individual lines—are being designed to minimise human involvement. Recently, the Wall Street Journal reported that Chinese electric-vehicle maker Zeekr now produces 300,000 cars annually in facilities that rely heavily on automation, with only limited human oversight.  

Data Centres

This logic extends beyond manufacturing into data infrastructure—a critical component of Non-Player World. Increasingly, data centres are being designed for unmanned operation. Industry commentators note that conventional facilities waste space and energy when maintaining conditions for humans—lighting, climate control, safety features, bathrooms, even oxygen. Without these requirements, servers could be packed more densely, run at higher temperatures, and operate in low-oxygen environments that would suppress fire and corrosion. 

Experiments in lights-out data centres show that facilities can operate for extended periods without physical visits, creating bubbles of fully machine-optimised environments. Microsoft’s Project Natick demonstrated this in 2018, when twelve server racks were sealed in a pressure vessel and sunk off the coast of Scotland. For two years the subsea centre ran without site visits, communicating only via power and network cables, and recorded lower failure rates than comparable land-based servers. This model is now being scaled in China, where an 18-metre subsea module went into operation in early 2025. For now, though, data centres remain dependent on the human labour that sustains them. As Steven Gonzalez Monserrate explains in The Cloud Is Material, data centres are currently as much about their “caretakers” as they are about servers, cables, and air conditioners. His ethnographic research documents how technicians live in constant fear of downtime, with one manager attributing his hypertension to the stress of avoiding catastrophic outages, and another seeking weekly therapy for “server stress.” Under these conditions, workers often counterintuitively follow distinctly human “tried and true” intuitions to handle edge cases and exceptions (such as runaway overheating) that computational models haven't yet been programmed to effectively manage. 

This vignette complicates Marx’s idea of industrial alienation, in which the worker is reduced to an appendage of the machine. Here, a tendency more in keeping with pre-Industrial Revolution labour emerges: technicians working through hunches and craft-like practices. Far from ‘becoming machinic’, they more accurately function as artisanal maintainers of infrastructure which presents itself as automated and abstract. But these odd, janky roles that humans adopt are just blips in a wider, ongoing process of the optimisation and rationalisation of space. One could confidently assume that data centres will continue to optimise away from the human caretaker altogether—lights-out facilities and subsea servers gesture toward an infrastructural future where such artisanal interventions are no longer necessary.  

The Agentic Web 

This reorientation and rationalisation is happening not only in physical spaces, but in the virtual structures that organise users’ online navigation. This text treats the internet, the metaverse—whatever name we give to this extended space—as architecturally operative: as affecting its users, shaping how they exist and behave in other spaces. The New Design Congress’s Para-Real Manifesto is a notable example of recent work examining this relation between the digital and the physical, which they frame as a hybrid, ephemeral third space. 

In 2025, the idea of an “agentic web” entered mainstream discussion. It is a vision of the internet where AI agents act directly on behalf of users or organisations, navigating and transacting across services autonomously. At its annual developer conference, Microsoft committed to building an “open agentic web,” stressing the need for interoperable infrastructure to prevent uncontrolled “agent sprawl,” manage fleets of agents on complex tasks, and create new interfaces for machine-to-machine interaction. The upshot is a gradual reorientation of human roles, with Microsoft already “dogfooding” the shift internally—mandating AI use, upskilling staff, embedding agentic practices into performance reviews, and reconfiguring roles such as traditional sales into more technically proficient “solutions engineers.” 

This agentic shift intervenes in virtualised, architecturally operative spaces at the infrastructural level, signalling the end of search engines as the primary mode of navigation. To support this shift, the browser-era web must be modified and optimised for agentic legibility, creating a new substrate of personalised mediation within existing architectures. In this post-browser, post-search landscape, the internet is spatially and semantically reoriented—re-rationalised not for human users, but for fleets of proxy agents operating on their behalf. In turn, the agentic web could engender an acceleration in ambient integrations—wearables, voice activation, environmental sensors—that reduce friction points between the player and the virtual. 

High-Frequency Trading

High-frequency trading (HFT) brings together the optimisation of both the physical and the virtual traced in the preceding sections. It has long been an arena for unbridled, escalatory optimisation, with the physical world re-engineered for hardware advantage, and the virtual compressed into abstract models and temporalities that operate outside of human bandwidth. What emerges is a spatial and temporal order rationalised not for players, but for machinic performance itself. 

HFT systems are built to exploit arbitrage opportunities by capturing minuscule price differences in microseconds. Their performance depends heavily on physical infrastructure: the length and efficiency of fibre-optic routes, the volume of traffic they carry, the location of microwave towers—all are subject to relentless optimisation, with each marginal latency reduction translating into competitive advantage. This optimisation has extended beyond the actions of individual firms to reshape market infrastructure itself. Exchanges now rent out server space within their own data centres to provide traders with microsecond advantages, while new order types and fee structures cater to algorithmic strategies operating below human perceptual thresholds. The exchange becomes an infrastructure unburdened by the latency of human temporal or cognitive capabilities. 
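
As a minimal sketch of this arbitrage logic (the venues, prices, and thresholds here are invented for illustration; production systems run the same race in specialised hardware at microsecond timescales):

```python
from dataclasses import dataclass


@dataclass
class Quote:
    venue: str
    bid: float  # best price a buyer at this venue will pay
    ask: float  # best price a seller at this venue will accept


def find_arbitrage(quotes, min_edge=0.0001):
    """Return (buy_at, sell_at, edge) when one venue's bid exceeds another
    venue's ask by more than fees: a fleeting cross-venue mispricing."""
    cheapest = min(quotes, key=lambda q: q.ask)
    dearest = max(quotes, key=lambda q: q.bid)
    edge = dearest.bid - cheapest.ask
    if dearest.venue != cheapest.venue and edge > min_edge:
        return cheapest.venue, dearest.venue, edge
    return None


# Whichever firm detects the mispricing first captures it, so the contest
# is decided by latency: cable routes, microwave towers, co-located servers.
print(find_arbitrage([Quote("Exchange A", bid=100.02, ask=100.03),
                      Quote("Exchange B", bid=100.00, ask=100.01)]))
```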

Because the traders who manage these systems cannot intervene directly in buying and selling as it happens, their roles are limited to configuring parameters in line with market behaviours. Whilst they execute trades manually when market conditions fall outside algorithmic parameters, they cannot participate in decision-making at machine timescales. Similar to the intuition of data centre technicians, the skills that traders deploy in the maintenance and guidance of complex systems require decision-making and reasoning abilities that are yet to be algorithmically replicated, but will logically be edged out as the environment optimises away from (and because of) the affordances they offer. 

Ultimately, the profits generated in HFT systems can be seen as outcomes, not as immediate operational goals—which are to exploit market inefficiencies. Feedback loops between trading bots run on this premise can generate emergent behaviours opaque to humans, as seen in the 2010 Flash Crash. Markets no longer solely operate as arenas of human exchange but as environments restructured by machinic performance where unexpected outcomes emerge from abstract, protocol-driven interactions.*
* Another worthwhile example of machinic logic diverging from human expectation is AlphaGo’s ‘Move 37’, played during its 2016 match against world champion Lee Sedol and seen as a turning point in demonstrating the unpredictability of AI strategy. 

Exocapitalism

This algorithmic autonomy of financial markets is a crystallisation of what the theorists Roberto Alonso Trillo and Marek Poliks have termed Exocapitalism: a theory that takes a longer-term perspective of capitalism, arguing that the creation of internal inefficiencies and abstractions is its defining characteristic—and the mechanism through which it sustains itself. Capital, they outline, is in a constant process of causing what they call lift, a move “away from price determination in favor of price generation [...] away from the fixed cost world of humans and physical matter.” This is evidenced in the minuscule price differences exploited in high-frequency trading, which have no relationship to underlying economic production, instead existing purely as abstract artifacts of the market's architecture. 

When this “lifted system” intersects with human life it creates a friction that the authors term drag. We see this with high-frequency traders adopting custodial roles within systems that operate in temporalities too fast for them to participate in, instead undertaking infrastructural maintenance tasks in slow, inefficient, biological timescales—a pain point waiting to be optimised. 

Where capitalism streamlines in one instance, it will also disrupt and mediate in another—as Trillo and Poliks observe, "capitalism has never pursued efficiency or human needs; it thrives on arbitrage [by] layering services." This layering is part of capitalism’s more general tendency toward abstraction, which is felt acutely in networked culture, in architecturally operative social media platforms and through personal communicative technology. Here, culture is prescribed by the algorithmic logic of recommendation systems optimising for engagement, engendering escalatory attention capture wars that give rise to content as a distinct organising principle subsuming cultural production and social interaction. 

The forms of the media we consume mutate under the demands and affordances of the system. The speed at which they are fed to us—and the sheer amount of information we have access to—exceeds human cognitive capacities. The immeasurable scale (and implied vastness) of digital networks engenders a parallax feeling, a megalophobia, a warping of frames of reference: a fundamental derealisation and estrangement. This is where Exocapitalism's economic abstraction becomes a tangible condition—the lifted system extracts value from human activity, whilst also reshaping the sensory and cognitive substrates through which this activity is conducted. 

Both Exocapitalism and Non-Player World identify a contemporary condition in which humans find themselves in shifting agential relationships with systems that appear responsive but operate according to internal logics. They map dimensions of the same underlying transformation: the optimisation of socio-technical systems away from human agency and toward algorithmic performance. Yet one distinction between the two theories lies in the figure of the NPC itself. In Exocapitalism, humans become NPCs within game-like systems; in Non-Player World, the purpose of the game world is moving away from the player, operating along lines of algorithmic—rather than human—performance, with NPCs merely a structural propagation of the system onto which players project meaning. Despite this difference in emphasis, both frameworks converge on a shared diagnosis of contemporary experience, which Trillo and Poliks capture perfectly:

"...the human moves from the pulsating heart of the game—as a subjective node in cybernetic feedback loops—to the periphery where it assumes the guise of a spectral onlooker drifting into the void. [The human becomes] an observer at the edge of a self-sustaining, impersonating, mirroring, and mocking ecosystem. As the matrix of game dynamics surrounding us expands, and the internal logics of the system become increasingly elusive, we recede into NPC roles, [becoming] mere spectators in alien landscapes. This points, we believe, to a critical phase shift in human history."

The Quantitative Turn

Part of the operating logic of Exocapitalism is what Trillo and Poliks call “quantisation hype,” whereby the gamification of human activity through fitness trackers, productivity tools, and interaction metrics drives willing participation in economies of attention, while at the same time reorienting the criteria for cultural evaluation. They observe that "agency, once tied to will and responsibility, is reduced to quantifiable metrics," creating an “agential gap”. This erosion of human agency represents another form of lift, where complex cultural judgments are abstracted and compressed into computational metrics that can be algorithmically optimised. 

Art critic Ben Davis identifies this cultural logic as “Quantitative Aesthetics,” his term for the way data increasingly stands in as a proxy for artistic and cultural value. Streaming statistics, box office grosses, and social media metrics come to replace artistic merit, aesthetic judgment, and cultural status. Davis points to the convergence of fandom and metricisation, where fans spam-play their favourite artists’ releases not for personal enjoyment but to boost numbers—an example of how quantitative logic reshapes both cultural production and consumption. 

Davis identifies a framework for understanding this underlying ideology through the "McNamara Fallacy"—named after US Secretary of Defense Robert McNamara's obsession with body counts as metrics of success during the Vietnam War: 

1. Measure whatever can be easily measured.

2. Disregard what can't be measured or assign it arbitrary quantitative value. 

3. Presume unmeasurable things aren't important.

4. Deny that unmeasurable things exist.

Applied to culture, this creates a systematic flattening where "what is good is popular and what is popular is good," treating platforms with underlying algorithmic biases as neutral arbiters of cultural worth. Content in turn becomes calibrated for the statistical likelihood of appealing to large audiences, resulting in a middening of cultural production. This bears a distinct resemblance to the Moneyball approach in sports—guided by probabilistic models and trackability that, like AlphaGo, eventually changes the very way the game is played. The toy simulation below sketches this flattening dynamic.
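
Below is a simulation of our own devising (a simple preferential-attachment loop, not any platform's actual ranking code) showing how allocating exposure by measured engagement makes early metric leads self-fulfilling:

```python
import random

random.seed(1)
engagement = {f"item_{i}": 1.0 for i in range(5)}  # all content starts equal

for _ in range(10_000):
    items = list(engagement)
    # The platform surfaces items in proportion to their current metrics...
    shown = random.choices(items, weights=list(engagement.values()))[0]
    # ...and every impression nudges the metric of whatever was shown.
    engagement[shown] += 1.0

# A uniform start collapses into a winner-take-most distribution:
# "what is popular is good" becomes self-fulfilling.
for item, score in sorted(engagement.items(), key=lambda kv: -kv[1]):
    print(item, round(score))
```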

The quantitative turn marks a distinct shift toward the rationalisation of architecturally operative online spaces, birthing environments where algorithmic optimisation becomes the organising principle of cultural experience. In this sense, Quantitative Aesthetics functions as both an ideology and an apparatus within Non-Player World—an invisible hand of the model. Cultural spaces come to feel simultaneously responsive (generating constant metrics, rankings, recommendations) and fundamentally alien, optimised for engagement algorithms rather than human meaning-making. Spotify Wrapped exemplifies this dynamic: an engagement metric disguised as cultural participation.
  

Microdosing Paris Syndrome

While Davis’s analysis shows how statistical methodologies steer cultural production toward maximum viewership (a form of lift)—producing bland, regurgitative content—there are also inverse forms that emerge as by-products of drag: the friction between system throughput and human processing capacities. Prolonged exposure to saturated, high-frequency media environments, designed to farm dopamine through brute-force tactics, elicits a specific kind of user response. On platforms reliant on user-generated content, this response is memetically fed back into the system by creators through post-ironic, semioblitzed forms such as the #corecore trend on TikTok. These forms function as meta-content, attempting to process overwhelming network scale by interpolating platform logic into creative methodology. 

The significance of such phenomena for Non-Player World hinges on whether we treat virtual and physical experience as separate domains, or whether we recognise how their convergence produces socio-technical environments reshaping how we navigate reality. A historical precedent for this kind of mediated disorientation can be found in Susan Sontag’s 1977 analysis of photography. In ‘In Plato’s Cave’, Sontag argued that photography creates “a chronic voyeuristic relation to the world which levels the meaning of all events,” turning experience into image-collection and reducing reality to “a series of unrelated, free-standing particles.” She also positioned this as a genuine socio-technical intervention in which “having an experience becomes identical with taking a photograph of it.” That dynamic intensified exponentially with the arrival of the camera phone, and again as the image took on a new role within social media, circulating not just as a document, but as content within economies of attention. 

We can note another precedent in Paris Syndrome, a psychological condition diagnosed in the 1980s by psychiatrist Hiroaki Ota, which provides a clear—if bounded—illustration of a type of drag between mediation and cognition, between the virtual and the physical. Ota coined the term for Japanese tourists who experienced acute derealisation, delusions, and hallucinations when their idealised, media-constructed impressions of Paris failed to align with reality—symptoms severe enough that the Japanese embassy established a 24-hour counselling hotline. In the context of Non-Player World, Paris Syndrome can be read as an early, localised pathology of mediation that is more generalised today. Clinical and ethnographic studies increasingly frame dissociation and derealisation as ordinary features of networked media use, with phenomena such as phantom phone vibrations, scrolling without awareness, and telepresence now seen as characteristic responses to always-on, high-frequency environments. 

In Non-Player World, what was once a bounded pathology tied to specific cultural fantasies has diffused into everyday life. Where Paris Syndrome manifested as vertigo at the gap between virtualised ideals and physical realities, we now inhabit environments where the two are integrated through ambient interfaces—touchscreens, wearables, voice activation, biometric sensing. This ongoing integration reduces obvious friction points while generating subtler forms of disorientation, as the boundaries between the physical and the virtual blur into a single perceptual field. 

A large factor in this disorientation is what we described earlier as deepfake realism: the ubiquity of generative AI introduces a baseline uncertainty about authenticity, eroding trust in visual and textual sources and making it increasingly difficult to distinguish human from machine-generated content. States of vertigo and parallax emerge, produced not by stark contrasts between the virtual and the physical, but by their uncanny fusion in environments optimised for algorithmic rather than human navigation, where even the methods for categorising the real are eroded. 

The Dream Game

The shift to the agentic web, and the rise of AI agents and LLMs more broadly, marks a seismic structural change in how we interface with reality. Language is not simply a tool we use; it’s a technology we exist within, influencing how we categorise experience and direct attention. It is reflexive and adaptive, constantly reshaped by its users. The integration of LLMs into daily life is significant because they appear to lack semantic comprehension (in the human sense). When a prompt is given to a model, it sets in motion a probabilistic process: the model analyses the sequence of tokens it has been given and calculates, from its training distribution, the most likely token to follow. This selection is repeated again and again until an output string is assembled. At no stage does the model demonstrate understanding of what it is saying; coherence arises from the recursive stacking of probabilities, not from semantic comprehension. Meaning comes from the user's interpretation of patterns that are, in fact, statistical recombinations of past data. In this sense, prompting an LLM is less like asking a question to be answered, and more like initiating a constrained game of prediction whose apparent fluency is an effect of scale. Non-Player World thus becomes one big Dream Game: 

1. In the Dream Game, Player A tells Player B they had a dream, and Player B must reconstruct it by asking only yes-or-no questions. 

2. Player A secretly follows a simple rule: they answer “yes” if the last letter of Player B’s question falls between A and K; otherwise they answer “no.” 

3. The exchange continues until a coherent dream of Player B’s inadvertent creation takes shape.

LLMs work in the same way: the system produces fluent outputs based on probabilities, and coherence arises only when an interpreter applies meaning to the patterns, not from the model holding any fixed meanings of its own. The sketch below plays out the game's rule. 
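
Here is the Dream Game's answering rule in a few lines of Python (the questions are invented examples of our own):

```python
def player_a(question: str) -> str:
    """Answer 'yes' if the question's last letter falls between A and K."""
    last_letter = [c for c in question.lower() if c.isalpha()][-1]
    return "yes" if last_letter <= "k" else "no"


questions = [
    "Were you flying",          # ends in 'g' -> yes
    "Was it over a city",       # ends in 'y' -> no
    "Was it over the sea",      # ends in 'a' -> yes
    "Did the water look dark",  # ends in 'k' -> yes
]

for q in questions:
    print(f"{q}? {player_a(q)}")

# Player B walks away with a dream of flying over dark seas. The coherence
# is supplied entirely by the questioner; the answerer only executes a rule.
```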

This dynamic is central to Non-Player World: socio-technical environments appear responsive to human input while operating according to hidden logics. The fluency of LLM outputs creates an illusion of communication onto which users project meaning, while the underlying process remains fundamentally algorithmic. This projection extends to the model itself, as users instinctively anthropomorphise the source of their interpreted meaning to make sense of the process. But the relationship is not unidirectional. While users anthropomorphise these systems by attributing understanding where none exists, they simultaneously undergo technopomorphisation, incrementally adopting the clinical syntax and probabilistic logic that characterises machine-generated text. What the proliferation of images was to social media, the proliferation of probabilistic language is to Non-Player World. Both reshape fundamental substrates of communication, giving rise to new semantic forms that blur the boundaries between human and machine production, authentic and hallucinated meaning. 
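
To make that underlying process concrete, here is a minimal sketch of the decoding loop over a toy bigram distribution (the vocabulary and probabilities are invented; real models condition on an entire context window with billions of parameters):

```python
import random

# Toy "training distribution": for each token, the likelihood of what follows.
BIGRAMS = {
    "<s>":    {"the": 0.6, "a": 0.4},
    "the":    {"world": 0.5, "player": 0.3, "system": 0.2},
    "a":      {"game": 0.7, "dream": 0.3},
    "world":  {"</s>": 1.0}, "player": {"</s>": 1.0},
    "system": {"</s>": 1.0}, "game": {"</s>": 1.0}, "dream": {"</s>": 1.0},
}


def generate(seed: int) -> str:
    """Repeatedly sample a weighted next token until the stop token.
    No 'meaning' is ever consulted -- only conditional frequency."""
    random.seed(seed)
    token, output = "<s>", []
    while token != "</s>":
        dist = BIGRAMS[token]
        token = random.choices(list(dist), weights=list(dist.values()))[0]
        if token != "</s>":
            output.append(token)
    return " ".join(output)


print(generate(7))  # fluent-looking output, assembled purely statistically
```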

Ghost Flights

As much as Non-Player World manifests in industrial automation and spatial optimisation, it is equally felt in recommendation algorithms and the integration of probabilistic systems into channels of meaning-making. These categories are not discrete domains but a process of continuous integration between the virtual and the physical, the ideological and the material. As Exocapitalism outlines, capital lifts and optimises the material into the abstract, unburdening itself from human processing times wherever possible. But this process also embeds ideology into regulatory and institutional frameworks that execute with procedural inevitability. In doing so, a virtualised, quasi-computational market logic is formalised in environmental protocols with material outcomes.

The boundaries between technological apparatus and systemic protocol blur as regulatory frameworks function like algorithms, market mechanisms behave like automated systems, and human meaning-making becomes synthesised with probabilistic optimisation. Non-Player World emerges from this convergence, where ideology operates through embedded defaults and structural dynamics that channel behaviour as effectively as physical constraints.
 

The ghost flights that occurred during COVID served as the original catalyst for this essay. Between 2019 and 2021, airlines flew over 5,000 near-empty planes to and from the UK, burning fuel to preserve airport slots under "use-it-or-lose-it" regulations. The system's internal logic made flying empty planes economically rational regardless of passenger demand or environmental cost. Ghost flights emerged from a systemically driven logic—an implicit function of profit accumulation brought about through emergent structural effects rather than explicit programming. 
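
A schematic sketch of that internal logic (the numbers are illustrative, and the threshold echoes the "80/20" usage rules that typically govern slot retention; no airline's actual model is implied):

```python
def should_fly_empty(slots_used: int, slots_allocated: int,
                     cost_of_empty_flight: float, slot_value: float) -> bool:
    """Fly an empty plane whenever usage would otherwise dip below the
    retention threshold: passenger demand never enters the equation."""
    USAGE_THRESHOLD = 0.8  # "use it or lose it"
    slot_at_risk = slots_used / slots_allocated < USAGE_THRESHOLD
    return slot_at_risk and cost_of_empty_flight < slot_value


# Locally rational for the airline, systemically irrational in fuel burned.
print(should_fly_empty(70, 100,
                       cost_of_empty_flight=30_000,
                       slot_value=10_000_000))  # True
```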

Within this system, humans function as stewards of processes they cannot meaningfully direct, short of systemic transformation. Airline executives, regulatory bodies, and pilots all act with a local rationality that scales non-linearly into systemic irrationality. They tune parameters, ensure compliance, and handle exceptions, but the protocol effectively sets the objectives and throughlines. This pattern extends across the socio-technical environments that make up Non-Player World: social media platforms optimise for engagement over meaningful communication, financial algorithms generate abstract value detached from economic production, and the system plays itself—its objectives drifting away from the human.  

Special Thanks

With thanks to Simon DeDeo for his thoughts on AI and meaning, as well as his elucidation of the Dream Game; to POSTPOSTPOST for their insights on probabilistic language in relation to LLMs; to Anna Rose Kerr for her reflections on the future of internet interfaces; to Idil Galip for her references and observations regarding data centres; and to Cream Projects for their research into non-human spatial typologies. The contributions of all were invaluable to the writing of this text.