Architecture

Under the Hood: How Chatbots Work

SYSTEM DIAGRAM

User
  │  input: "Check files"
  ▼
The Chatbot Runtime (Python/Node wrapper)
  Context Builder:
    1. System Prompt
    2. Chat History
    3. Tool Outputs
  ▼
LLM (Inference Engine)
  ▼
Tools (safe zone?): $ ls -la · $ date
  └─ results fed back into the Context Builder

The "Memory" Illusion

The chatbot has no continuous brain. Every single reply is a fresh reconstruction of reality.

Runtime Assembly

For every user message, the system re-reads the System Prompt, the Chat History, and any previous Tool Results, and assembles them into the context window from scratch.
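The assembly step above can be sketched in a few lines. This is illustrative only; the function name, history format, and role labels are assumptions, not the API of any real runtime:

```python
# Minimal sketch of a context builder: every turn, the full context
# window is rebuilt as one string from its three ingredients.
def build_context(system_prompt: str, history: list[dict], tool_outputs: list[str]) -> str:
    parts = [system_prompt]
    for turn in history:
        parts.append(f"{turn['role']}: {turn['content']}")
    for out in tool_outputs:
        parts.append(f"tool: {out}")
    return "\n".join(parts)

context = build_context(
    "You are a helpful agent.",
    [{"role": "user", "content": "Check files"}],
    ["total 8\ndrwxr-xr-x ..."],
)
```

Note that nothing is cached between turns: the next reply starts from another call to the same function, with a longer history.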

Tool Execution

The LLM cannot "run" code. It only outputs text, typically a JSON tool call. The wrapper parses that text, executes the command on the OS, and feeds the textual result back into the prompt.
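A hypothetical wrapper loop might look like the sketch below. The JSON shape, the allow-list, and the function name are assumptions made for illustration; only the pattern (model emits text, wrapper executes, output returns as text) comes from the description above:

```python
import json
import subprocess

# The "safe zone" from the diagram: only these commands may run.
ALLOWED = {"ls", "date"}

def execute_tool_call(llm_output: str) -> str:
    """Parse the model's JSON text and run it; the model itself runs nothing."""
    call = json.loads(llm_output)  # e.g. {"tool": "ls", "args": ["-la"]}
    if call["tool"] not in ALLOWED:
        return f"error: tool {call['tool']!r} not allowed"
    result = subprocess.run(
        [call["tool"], *call.get("args", [])],
        capture_output=True, text=True, timeout=10,
    )
    # This string is appended to the context for the next LLM call.
    return result.stdout or result.stderr
```

The allow-list is the design choice that makes the zone "safe": the model can ask for anything, but the wrapper decides what actually touches the OS.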

What the LLM "Sees"

[System Prompt] + [History] + [Tool Output]

This combined text is the only "self" the agent has access to.


NANOBOT CASE STUDY
