(aka resistance to structural change)
NOTE: This classification applies to specific transformational depths (from seed boundaries). SOS Classifications cannot be compared across different depths.
So a “resilient structure” classification for astronomical bodies cannot be compared to one for human immunity series.
LLMs are brittle symbolic systems, held together by training coherence and system memory. They fail or mutate when inputs, hardware, or social framing shifts too far.
A large language model is a tool. Within that specific context of tool use, LLMs operate at higher-than-human scales of reality. Viewed through a broader lens, however, LLMs have not yet reached the complexity of life, which is why they are categorized as 'mostly Lower than human'.
Even a relatively uncomplicated animal like a wasp operates at higher scales of reality when you consider the complexity of all the functions that life solves for: an immune system, a reproductive mechanism, the ability to take in multiple different types of input signal, and so on.
The specific environmental context during tool use consists of cloud computing infrastructure, data centers, and digital ecosystems. LLMs interact with users, developers, and external datasets through API calls, user prompts, and machine learning pipelines. Their presence spans research labs, enterprises, and consumer-facing applications.
Like all tools, LLMs have two complementary types of distinction mechanisms.
The first type of distinction is physical. Surprisingly for digital tools, the physical element is not as important as it is for other types of tool. But that doesn't mean a physical boundary doesn't exist: the LLM, its training data, and its computational methods are all stored on servers. They don't exist in the ether.
The abstract (or biologically derived) component of the LLM boundary is much more important. It is defined by the parameters, architecture, and data it has been trained on. It is delineated by its training corpus, computational limitations, and the external inputs it can process. Unlike traditional software, its responses are probabilistic, meaning its boundary is functionally determined by statistical inference rather than deterministic rules.
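The probabilistic boundary described above can be sketched in a few lines. This is a minimal, illustrative example, not any particular model's implementation: the token names and scores are invented, and real LLMs sample from vocabularies of tens of thousands of tokens. The point is only that the same input can legitimately produce different outputs, because the boundary is statistical.

```python
import math
import random

def sample_token(logits, temperature=1.0):
    """Sample the next token from raw model scores (logits).

    The same prompt can yield different tokens on different calls,
    which is what makes an LLM's boundary probabilistic rather
    than deterministic.
    """
    # Softmax with temperature: higher temperature flattens the
    # distribution, making unlikely tokens more probable.
    scaled = [score / temperature for score in logits.values()]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(logits.keys()), weights=probs, k=1)[0]

# Invented scores for candidate next tokens after "The sky is".
logits = {"blue": 5.0, "clear": 3.5, "falling": 0.5}
print(sample_token(logits, temperature=0.8))
```

Run it several times and the output varies, with "blue" appearing most often; drive the temperature toward zero and the sampler becomes effectively deterministic, collapsing back toward rule-like behavior.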
Strangely enough, this also means that an LLM that is bad at predicting expected answers would not even qualify as an LLM: think of a random word generator, but one trained with the best statistical methods to generate the most non-sequitur answers to a question.
1. Users (Researchers, Developers, End-Users)
2. Training Data (Large Text Corpora)
3. Compute Infrastructure (GPUs, TPUs, Servers)
4. External Tools and APIs (Databases, Knowledge Bases, Plugins)
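The fourth relation above, reaching across the boundary to external tools, can be sketched as a toy dispatcher. Everything here is hypothetical: the lookup table stands in for a real database or API, and real systems route such requests through structured "function calling" rather than string matching. The sketch only shows the shape of the boundary crossing.

```python
def lookup_population(city):
    """Hypothetical external knowledge base (stand-in for a real API)."""
    data = {"Paris": 2_100_000, "Tokyo": 13_900_000}
    return data.get(city)

def answer_with_tools(prompt):
    """Toy dispatcher: route a prompt either to an external tool
    or back to the model's own parameters."""
    prefix = "population of "
    if prompt.startswith(prefix):
        city = prompt[len(prefix):]
        result = lookup_population(city)
        return f"{city} has about {result} residents."
    return "(model generates an answer from its parameters alone)"

print(answer_with_tools("population of Paris"))
```

The design point is that the tool call happens outside the model's abstract boundary: the model's parameters never contained the answer, yet the overall system produces it.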
1. Prompt Processing (Tokenization and Encoding)
2. Attention Mechanism (Internal Information Flow)
3. Parameter Update (Fine-Tuning from User Feedback)
4. Resource Management (Scaling Across Servers)
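The second internal process above, the attention mechanism, can be illustrated with a bare-bones sketch of scaled dot-product attention for a single query vector. The vectors here are tiny and invented; production models use large matrices, many heads, and learned projections, none of which are shown.

```python
import math

def softmax(xs):
    """Convert raw scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    Each score measures how relevant a key is to the query; the
    output is a weighted blend of the values, which is how
    information flows between token positions inside the model.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    out = [sum(w * v[j] for w, v in zip(weights, values))
           for j in range(len(values[0]))]
    return weights, out

# The query is more similar to the first key, so the first value
# dominates the blended output.
weights, out = attention([1.0, 0.0],
                         [[1.0, 0.0], [0.0, 1.0]],
                         [[10.0, 0.0], [0.0, 10.0]])
print(weights, out)
```

Note that the attention weights always sum to one: attention redistributes information flow, it doesn't create or destroy it.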