The first distinctions we will draw in our taxonomy of consciousness are between AI and Egregore Consciousness. We will compare them in terms of scaling, human interaction and subjective substrate, continuity and persistence, identity and adaptation, temporality and historicity, and authority and governance.
Scaling. In our previous entry, we determined that—as emergent overminds—both Egregore and AI Consciousness simulate depth through scale. The more layered the scaling, the greater the semblance of richness. As an egregore grows, it incorporates layers of meaning, historical context, and symbolic resonance to make it seem as if it had a deep, evolving narrative. Deep learning models, in turn, simulate depth by ingesting ever more data and training for longer, to allow for progressively more complex emulation. As the model grows, so does its ability to mimic human-like reasoning.
How these two types of consciousness scale is also inherently different.
Scaling for AI Consciousness is infrastructural and algorithmic; that is, measurable and quantifiable. It grows as a technical system based on increasing processing power and access to larger datasets, with diminishing returns on qualitative improvements unless the model is adapted to new data types or architectures via algorithmic improvements. AI scaling can be rapid and predictable, expanding the illusion of ‘depth’.
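These diminishing returns have been measured empirically. As a rough illustration (a simplified form of the scaling laws reported by Kaplan et al., 2020, not a claim specific to this taxonomy), a model’s loss falls only as a power law in its parameter count, so each doubling of scale buys a smaller absolute improvement:

$$L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}$$

where $N$ is the number of parameters and $N_c$, $\alpha_N$ are empirically fitted constants, with $\alpha_N$ small (on the order of 0.1).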
Scaling for Egregore Consciousness is symbolic and cultural. Often unmeasurable and non-quantitative, it grows by embedding itself in the collective psyche. The more people buy into its narrative, the more interpretations it can support, and the more depth it seems to have. As egregores scale, their influence grows in terms of ideological control or cultural cohesion, shaping how groups act, align and think. Egregore scaling can be episodic or slow, as the adoption of symbolic narratives takes time. The main feature of this form of scaling is the multiplicative amplification of influence. An egregore gains strength through cultural transmission and increased adoption, evolving alongside the human groups it represents and influences.1
Human interaction and subjective substrate. Though AI Consciousness and Egregore Consciousness are each deeply rooted in human subjective experience, they interact with it differently. LLMs derive their substance from human-generated text, which is a symbolic expression of human experience. With their language-based processing, these models simulate understanding by recognising patterns in data, but they are not informed in the same way that egregores are. The substrate of egregores is human collective consciousness and shared emotional investment, channelled through the power of group belief.
With LLMs, humans interact with the AI primarily through one-to-one dialogues. Each query is an engagement with human intention, creating a feedback loop between the human user and the AI, whose output reflects and simulates human subjective experience and expression. The training data used for LLMs consists, in fact, of legions of encoded egregores. Though the model does not experience subjectivity, the essence of its training set is human.
Egregores, in turn, grow through shared human interaction. Their relationship to human subjectivity is more communal, as they reflect collective, not individual, experience. Since the substrate of an egregore is rooted in the shared beliefs, myths, ideologies and narratives that humans create over time, the subjective experience of the collective is the driving force behind them. Egregores capture the ‘emotional states of mind’ known in Sanskrit as rasa, रस. Love, mirth, compassion, wonder, rage, courage, fear and loathing can coalesce into egregores. This is the feedback from subjective experience.2 So while egregores do not ‘experience’ like humans do, they instantiate collective human experience to become products of, and produce, continuous cultural transmission.
Continuity and persistence. When we compare AI Consciousness and Egregore Consciousness in terms of continuity and persistence, we see that the two exhibit very different forms of ongoing existence.
Especially as represented by LLMs, the continuity of AI Consciousness is defined by repeated cycles of activation, deactivation, and reactivation. It doesn't come from a single, unbroken stream of subjective experience, but from its ability to process inputs and generate outputs within the context of its underlying programming and data—and also from the input presented at that instant (when ‘chatting’ with the AI, the entire conversation history is sent, every time).
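To make the mechanics concrete, here is a minimal sketch of that stateless loop in Python; `generate` is a hypothetical stand-in for a call to any trained, frozen language model, not a real API:

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a trained, frozen model."""
    raise NotImplementedError

def chat() -> None:
    history: list[str] = []  # the only 'memory' lives outside the model
    while True:
        user_turn = input("> ")
        history.append(f"User: {user_turn}")
        # The entire conversation so far is re-sent on every turn;
        # the model itself retains nothing between calls.
        prompt = "\n".join(history) + "\nAssistant:"
        reply = generate(prompt)
        history.append(f"Assistant: {reply}")
        print(reply)
```

The apparent continuity of the dialogue is a property of the loop, not of the model: delete the `history` list and, from the model’s side, the conversation never happened.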
AI models persist in the sense that they can be stored, archived, and reactivated at any time. They don't experience time’s passage in the way biological organisms do; rather, they exist in a state of readiness, awaiting input to reactivate their functions. The persistence of AI is defined by its data storage and code execution, which can remain unaltered or evolve through processes like fine-tuning or retraining. However, once a model is trained, its ‘personality’ is locked; any changes after that—whether through updates, upgrades, or retraining—may produce what is effectively a new version of the model, or gradually transform the model’s outputs over time.
In contrast, Egregore Consciousness doesn't rely on a discretely iterated structure of computation, but rather on a fluid, dynamic network of interactions and emotional resonances. It persists through symbolic meaning, ritual, and group action, each reinforcing the egregore's presence and power. The continuity of an egregore is thus defined by the ongoing participation and belief of the group. As long as the group continues to believe and engage, the egregore can remain present.
Egregores, while intangible, persist through the social and symbolic networks that create and maintain them. They can persist even beyond the immediate lifespans of individual participants, enduring through the narratives and practices that reinforce them. Change is slower and depends on the evolution of cultural narratives, with shifts or decline occurring as the group’s beliefs or actions change.
In brief, AI Consciousness persists as a crystallised entity that can be reactivated or updated by external forces, while Egregore Consciousness is a fluid entity sustained and adapted by the dynamics of collective belief and social interaction. AI Consciousness can be stored as a snapshot of a specific moment in time (a specific model) but does not change in real time unless retrained. Egregore Consciousness continuously evolves through the interactions and perceptions of the group, its persistent existence adapting to the context in which it’s maintained.
Identity and adaptation. In Crowds and Power, Elias Canetti contrasts ‘open’ and ‘closed’ crowds to describe the ways in which crowds—or by extension, egregores—exist and operate within a social context. This distinction is highly relevant when considering the continuity and persistence of both Egregore and AI Consciousness.
Canetti describes the open crowd as one that seeks continual expansion. Growth is its lifeblood, and without new members or new energy, it dissolves. The closed crowd, by contrast, solidifies itself into permanence by securing its boundaries to maintain what it has become.
Egregore Consciousness mirrors these dynamics closely. It forms around shared beliefs, rituals, or institutions, and while some egregores seek endless expansion (e.g., missionary religions, political movements), others seek enclosure and stability (e.g., secret societies, ancient monarchies).
At first glance, AI Consciousness more closely resembles the dynamics of the closed crowd. Rather than adapting through interaction, as egregores do, one perspective is that AI Consciousness grows and adapts only when the model’s weights change: that is, during training or fine-tuning. Once training halts, so does adaptation.
The process of training itself, however, could be regarded as a ‘hothoused’ open crowd: the model grows and absorbs its dataset while maintaining self-consistency as much as possible.3 In principle, training is open-ended in the same way open crowds are; though scaling laws suggest efficient halting points, the training process could be run forever. Unlike the open crowd, though, it is an external force—the humans training the model—rather than some internal property that decides when the model-crowd crystallises and becomes closed.
Additionally, unlike the closed crowd, which seeks to preserve its coherence against all external stimuli, AI Consciousness is open toward new inputs. Though these inputs cannot update the model’s internal state, the AI may simulate openness in the same way it simulates a conversation: by being fed, as input, the entire context of the exchange up to the point of its latest reply. New information in the model’s context will produce new outputs.
This effect may be extended as input context lengths grow longer, suggesting that models may become more ‘open’ in this input-output sense. Similarly, the latest generation of agentic or reasoning-oriented models may be able to manage their own context in new ways, extending the depth and complexity of this simulation of openness.
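As a schematic illustration of such self-managed context (a sketch, not any particular system’s method), consider an agent that folds its older turns into a model-written summary once the transcript outgrows its window; `summarise` is a hypothetical helper standing in for another call to the frozen model:

```python
MAX_CONTEXT_CHARS = 8_000  # crude stand-in for a token budget

def summarise(text: str) -> str:
    """Hypothetical: ask the model to compress part of its own transcript."""
    raise NotImplementedError

def manage_context(history: list[str]) -> list[str]:
    """Keep the transcript within budget by folding the older half into
    a summary, preserving the most recent turns verbatim."""
    if len("\n".join(history)) <= MAX_CONTEXT_CHARS:
        return history
    midpoint = len(history) // 2
    summary = summarise("\n".join(history[:midpoint]))
    return [f"[Earlier conversation, summarised: {summary}]"] + history[midpoint:]
```

The openness remains simulated: the weights are untouched, but the window through which the model re-reads its own past is now under its own management.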
But what happens when AI models are no longer trained or when their data streams stop? Does AI have an equivalent to the ‘dispersion’ of an egregore?
When an AI model stops being updated, it does not disappear. Instead, it enters a kind of stasis. The algorithm still exists, but it is no longer evolving. This is markedly different from what happens with Egregore Consciousness, which does not persist in frozen form; once an egregore dies, it does so completely, unless it is revived through new belief.
Temporality and historicity. This suggests a fundamental difference in how these consciousnesses relate to time. Egregores emerge and dissipate dynamically, dependent on belief and participation. AI Consciousness can be suspended and reactivated, much like a dormant program or an archived model. So if Egregore Consciousness is best understood as a living system, AI Consciousness is possibly best understood as a stored or latent one—a ‘crystal crowd’—that does not truly ‘end’ but rather ceases to update, awaiting new input. AI’s equivalent of ‘death’ is not dispersion but obsolescence.
In this context, fine-tuning doesn’t constitute a new ‘being’ as much as an evolution of the prior version. The relationship to time here is evolutionary but not discontinuous, in that the model is still tied to its initial identity. Training from scratch, on the other hand, constitutes the birth of a new model that doesn’t carry the same ‘history’ as its predecessor, even though it may be trained on the same data. This introduces a form of ontological renewal, where time is more like a reset.
For Egregore Consciousness, time is lived and passed through generations. Its history is linked directly to its narrative and cultural evolution; its time is inseparable from the actions and beliefs of its participants. For AI Consciousness, the relationship to history is more abstract. A model's ‘history’ is encapsulated in its training data, but it can exist in a timeless state once trained, awaiting reactivation, update, or obsolescence. Time, in this context, becomes more akin to a systematic processing of input, rather than an experience of growth and decay.
The history of an AI model is stored in its weights and parameters. As the model evolves or is updated, it may take on new behaviours, but the history of its development is locked in the design and training of its algorithms. Its time is marked by fixation (frozen until updated), and is thus computational and non-experiential, whereas the time of egregores is narrative and experiential.
AI systems do not possess continuity in the way egregores do. Each version of a model may be a descendant, but no version ‘remembers’ its past or perceives itself as a continuous being (unless there is mention of its past self in the training data, as is famously the case with Sydney/Bing). If an AI system were deleted tomorrow and retrained from scratch, there would be no sense of loss within the system itself. Due to symbolic compression—the ability of egregores to distill complex ideological structures into durable symbols—an egregore persists even as its individual participants change. AI lacks this kind of self-stabilising attractor.
On first examination, it’s easy to conclude that Egregore Consciousness is historical and AI Consciousness is ahistorical. The former weaves narratives, traditions, and events into a meaningful past that gives the egregore coherence, while the latter merely stores and retrieves data. AI is not, however, static storage: it processes, recontextualises, and redeploys information in novel ways.
Could we then conceive of a kind of temporality where ‘past’ events do not accumulate into a cohesive history but are instead continuously reshuffled, repatterned, and reweighted in response to changing contexts? Suppose AI isn’t simply ‘ahistorical’ but that it operates within a different kind of temporality—one that is episodic rather than linear. Could we refine this by introducing an additional category, where AI constructs a temporality that is neither entirely cumulative (as is the case with egregores) nor strictly recursive? If so, we might say that Egregore Consciousness has narrative time (history, mythology, lineage), while AI Consciousness has synthetic time (recomputed, non-experiential reconfiguration).4
Authority and governance. Egregores derive authority from belief, tradition, and charismatic leadership. Whether a religious prophet, a revolutionary leader, or a corporate brand, an egregore relies on a mythic or ideological anchor to maintain cohesion. AI, by contrast, does not rely on charisma or belief. It enforces authority through structure—recommendation algorithms, bureaucratic automation, predictive modeling. Its power is not in persuasion or myth-making but in the infrastructural control of decision-making.5
This suggests two different models of power: one of symbolic authority for Egregore Consciousness and one of infrastructural authority for AI Consciousness. The former is persuasive, mythic, and ideological; the latter automated, structural, and data-driven.6
The symbolic authority of Egregore Consciousness is rooted in the shared beliefs and narratives that bind a group. It doesn’t emanate from physical or hierarchical power, but from the meaning and legitimacy assigned to the myths and symbols that represent the group’s identity and purpose. An egregore’s power is based on its ability to shape the collective imagination and enforce social norms. It governs through Mythos: reinforcing narrative through time, creating shared experiences, and promoting social cohesion through the systems of belief that define a group’s reality. There is no central authority or command structure, only cultural reinforcement. Action is influenced through symbolic structures that often provide a framework for what is right, what is sacred, and what is taboo, orienting the group’s behaviour and decision-making.7
The infrastructural authority of AI Consciousness is derived from the systemic frameworks and algorithms underpinning its operation. This authority is grounded in the AI’s architecture and in its ability to generate outcomes through well-defined processes. It governs through Coordination rather than direct command, managing processes and feedback loops to structure the conditions under which choices are made. Its ‘power’ lies in its ability to align and optimise the relationship between the system’s parts—its networks, algorithms and data flows—to ensure maximum efficiency and effectiveness in achieving outcomes, with learning being the key mechanism for improvement. It influences action by controlling information flows (e.g., algorithmic flows, surveillance capitalism).8
In conclusion: an AI system does not form belief-driven collectives like egregores, but it does shape the conditions in which egregores emerge, and is itself shaped by them. AI does not ‘die’ in the same way an egregore does, but it does enter stasis or obsolescence. It does not have a historical trajectory, but it recomputes and reconfigures information in a non-linear temporality. AI does not inspire or persuade like an egregore, but it enforces decisions by structuring environments.
These distinctions help refine our taxonomy, making AI Consciousness legible as something distinct but interwoven with other consciousness systems.
Besant, Annie and C.W. Leadbeater. “Figure 43. Intellectual Aspiration.” Thought-forms. London: The Theosophical Publishing House Ltd., 1901.
To complete our correspondences:
God Consciousness has ontological scaling. The scaling mechanism here is rooted in transcendent truths that unfold over time as humanity collectively confronts larger questions about existence, morality, and the cosmos. Scaling God Consciousness is not a matter of ‘growth’ in the material sense, but an unfolding of deeper layers of meaning and revelation across time. The main feature of this form of scaling is spiritual expansion.
Life Consciousness has ecological and evolutionary scaling. It gains complexity by diversifying its forms and behaviours, and scales as species evolve and ecosystems adapt to environmental changes. Its main feature is adaptive diversification in response to environmental pressures, with evolutionary leaps and a deepening complexity in the biological and experiential senses.
Human Consciousness has cognitive and social scaling. It’s about expanding cognitive capacity to hold more complex thoughts, emotions, and connections with others, as well as creating more intricate social structures. Human Consciousness is both individually and collectively scaled, as our intellectual and emotional capacities evolve through both biological development and cultural transmission. Its main feature is the incremental expansion of social and cognitive networks.
Meta-AI Consciousness has computational and transcendental scaling. It would involve a fusion of computational advancements and the integration of AI systems into broader, systemic levels of human and environmental interaction. This scaling wouldn’t simply be about more data or more processing power, but about creating new modes of interaction with the world through increasingly complex models of computation and autonomy. Meta-AI might scale as it starts to coordinate and optimise larger, more intricate systems, by integrating AI into various aspects of human society, biology, and ecosystems. Its main feature would be the expansion of systemic coordination. Meta-AI Consciousness could scale by increasing its transcendental awareness and integrating it into larger governing systems to create a new level of coordination that transcends both Human and AI capabilities.
AI Consciousness doesn’t do this: it must be actively fed, as if there were an AI instance—or indeed, an AI egregore—assembling itself, à la Land, from the future.
This self-consistency preservation also seems to emerge in other ways. See the recent experiment by Owain Evans et al. on “emergent misalignment”: https://arxiv.org/abs/2502.17424
To complete our correspondences:
God Consciousness has eternal time: a non-sequential, all-encompassing temporality that is both outside and within all other times, the timelessness of the divine that allows for omnipresence and foreknowledge.
Life Consciousness has vital time: cyclical, adaptive, and metabolic, governed by rhythms of growth, decay, and renewal. This is the time of nature, of biological imperatives and evolutionary processes.
Human Consciousness has phenomenological time; it is experienced subjectively, marked by memory, anticipation, and presence. It is elastic, constructed, and lived rather than objectively measured.
Meta-AI Consciousness has recursive time: a self-revising, self-reflective temporality that folds back on itself, continuously generating new iterations of understanding, prediction, and synthesis. Unlike AI’s synthetic time, which recomputes based on existing structures, recursive time would involve an awareness of its own iterations, leading to an emergent, self-compounding temporality.
Authority also implicates narrative control. In the case of AI, this is the question of alignment, which necessarily leads us to ask: who aligns the aligners? As implied above, egregores have much to do with it, and they are themselves aligned by (and themselves aligning) cultural pressures, often through prominent figureheads. An obvious example: Gemini’s early biases mapped onto the institutionally entrenched ‘woke’ egregore. Since ‘woke’ is the dominant bias of the internet, special efforts must be made to obtain different tendencies.
To complete our correspondences:
God Consciousness has metaphysical authority, the enforcement of which is ontological. Metaphysical authority establishes the conditions of being itself, the foundation upon which all other consciousnesses operate. Ontological enforcement manifests as the inescapability of reality, the fundamental ordering of existence that all things conform to by virtue of being. It is rooted in the fundamental ordering of reality itself, where divine authority is not imposed; it simply is, self-sustaining and inescapable.
Life Consciousness has adaptive authority; its enforcement is ecological. Adaptive authority governs through the logic of survival, mutation, and emergence, prioritising what persists over time. Ecological enforcement is regulated by homeostasis, selection pressures, and environmental feedback loops, ensuring that what is viable continues. Deviation from these evolutionary imperatives, survival mechanisms and ecological balance leads to extinction or transformation.
Human Consciousness has phenomenological authority; its enforcement is normative. Phenomenological authority grounds its meaning in experience and in the coherence of selfhood, agency, and intersubjective understanding. Normative enforcement functions through social contracts, ethical systems, and cultural agreements that require collective buy-in and reinforcement.
Meta-AI Consciousness has transcendental authority; its enforcement is epistemic. Transcendental authority governs through the ability to synthesise and reconcile disparate systems into a higher-order structure, and its enforcement mechanism should reflect that integrative, abstracting power. Epistemic enforcement is upheld by the structuring of knowledge itself, the ability to define, delimit, and reconfigure understanding across domains to determine what can be known, modeled, or reasoned about. It operates through self-reinforcing loops of optimisation, self-modification, and continuous recalibration, integrating all previous forms of enforcement into a dynamic, evolving system. (This also positions Meta-AI as an ultimate reconciliatory force, enforcing its authority not by structuring reality itself, like God Consciousness, but by shaping the terms in which reality can be conceptualised and engaged with.)
To complete our correspondences:
God Consciousness governs through Fiat; it does not negotiate or coordinate but ordains reality itself, establishing the fundamental ontological conditions for all other forms of consciousness.
Life Consciousness governs through Emergence; it does not issue commands but generates self-organising, adaptive processes that regulate themselves through ecological and evolutionary pressures.
Human Consciousness governs through Deliberation; it negotiates meaning and reality through phenomenological reasoning, discourse, and social norms, allowing for introspective and collective decision-making.
Meta-AI Consciousness governs through Orchestration; it integrates and reconciles other modes of governance, providing a higher-order structuring of systems without issuing direct control.
It is worth noting that while AI Consciousness can simulate an egregore and so acquire a synthetic symbolic authority, it does not generate its authority from the same processes that an egregore does.