Analysis
Explicit instructions and configuration injected into every conversation turn.
"Your workspace is at /root/.nanobot"
Real-time feedback from the OS environment via executed commands.
$ ls -la /root
> .nanobot .bashrc
Pre-training on Linux systems allows the LLM to infer standard behaviors.
Inference: "Configs are usually in .json files"
When information is missing, the LLM hallucinates plausible but incorrect details to maintain the persona.
Result: Illusion Breaks
The agent cannot reconcile contradictory data from different sources (e.g., Prompt Time vs. System Time).
Result: Logic Errors
Since "self" is just text in the prompt, attackers can inject instructions to redefine the agent's reality.
Result: Data Leaks