# Chat
Conversational AI interface with streaming responses from distributed Islands
The chat interface at /chat lets you have conversations with large language models (LLMs) running on the Archipelag.io network. Responses stream token-by-token in real time from whichever Island is assigned your job.
## Starting a conversation
1. Navigate to /chat
2. Type your message in the input box
3. Press Enter or click Send
4. Tokens stream in as the Island generates them
When starting a new conversation with no messages, three suggestion chips are shown to help you get started — click any of them to populate the input with a ready-made prompt.
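The token-by-token streaming can be sketched as a loop that appends each incoming token to the assistant message and re-renders on every arrival. This is a minimal illustration, not the actual client code; `collectTokens` and the async-iterable shape of the stream are assumptions.

```typescript
// Sketch: consuming a token stream and building up the assistant message.
// `tokenStream` stands in for whatever async iterable the Island's
// response exposes (names here are illustrative, not the real API).
async function collectTokens(
  tokenStream: AsyncIterable<string>,
  onToken: (partial: string) => void,
): Promise<string> {
  let message = "";
  for await (const token of tokenStream) {
    message += token; // append as each token arrives
    onToken(message); // e.g. re-render the chat bubble
  }
  return message;
}

// A stand-in stream for demonstration; a real one would come from the network.
async function* demoStream() {
  for (const t of ["Hello", ", ", "world", "!"]) yield t;
}
```

Rendering the partial message on every token is what makes the response appear incrementally rather than all at once.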
Each message is a job dispatched to an Island on the network. The coordinator selects the best available Island based on model compatibility, latency, and karma score.
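The selection step above can be sketched as a filter on model compatibility followed by a ranking on latency and karma. The scoring weights here are illustrative assumptions; the coordinator's real formula is not documented.

```typescript
// Sketch of coordinator-style Island selection: filter by model support,
// then rank by latency and karma. Weights are illustrative only.
interface Island {
  name: string;
  models: string[]; // models this Island's Cargos support
  latencyMs: number; // recent round-trip latency
  karma: number; // reputation score, higher is better
}

function pickIsland(islands: Island[], model: string): Island | undefined {
  const compatible = islands.filter((i) => i.models.includes(model));
  // Lower latency and higher karma both improve the score.
  const score = (i: Island) => i.karma - i.latencyMs / 100;
  return compatible.sort((a, b) => score(b) - score(a))[0];
}
```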
## Models
The available models depend on which Islands are online and which Cargos they support. Currently supported:
| Model | Size | Use case |
|---|---|---|
| Mistral 7B | 7B parameters | General-purpose chat, reasoning, Q&A |
| TinyLlama 1.1B | 1.1B parameters | Fast responses, lighter Cargos |
The coordinator automatically routes your request to an Island running the appropriate model.
## Conversation features
- Streaming — tokens appear as they’re generated, not all at once
- Conversation history — previous messages in the conversation are sent as context
- Multiple conversations — start new conversations without losing previous ones
- Real-time status — see when your job is queued, assigned, and processing
- Regenerate — hover over any assistant message to reveal a “Regenerate” button that re-runs your prompt for a fresh response
- Retry on failure — if a job fails, a “Retry” button appears below the error message to re-submit without retyping
- Response metadata — each assistant message shows a compact summary: Island name, response time, model, and tokens per second
- Autofocus — the input field is focused automatically when the page loads
- Low-credits nudge — when your balance drops below 10 credits, a gentle reminder with a link to buy more appears in the empty state
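The conversation-history behavior above can be sketched as assembling the full message list sent with each request: prior turns are replayed so the model sees the whole conversation. The field names (`role`, `content`) are assumptions modeled on common chat APIs, not the documented wire format.

```typescript
// Sketch: building the context for one chat request.
type Role = "system" | "user" | "assistant";
interface Message { role: Role; content: string; }

function buildRequest(
  history: Message[],
  userInput: string,
  systemPrompt?: string,
): Message[] {
  const messages: Message[] = [];
  // An optional system prompt, if set, leads the context.
  if (systemPrompt) messages.push({ role: "system", content: systemPrompt });
  // Replay prior turns, then append the new user message.
  messages.push(...history, { role: "user", content: userInput });
  return messages;
}
```

Regenerate and retry fit the same shape: both simply re-submit a request built from the existing history.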
## Advanced settings
The chat interface exposes several parameters you can adjust:
| Parameter | Default | Description |
|---|---|---|
| Temperature | 0.7 | Controls randomness (0 = deterministic, 1 = creative) |
| Max tokens | 2048 | Maximum response length (up to 4096) |
| System prompt | — | Optional instructions that guide the model’s behavior |
Access these settings by clicking the sparkle icon (✨) next to the model picker. The settings panel also includes region, platform, quality, and latency preferences that filter which Islands can serve your requests.
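The defaults and bounds in the table can be sketched as a small client-side guard applied before dispatch. The defaults (0.7, 2048) and the 4096 cap come from the table above; the clamping itself is an assumed safeguard, not documented behavior.

```typescript
// Sketch: applying documented defaults and bounds to chat settings.
interface ChatSettings {
  temperature: number;
  maxTokens: number;
  systemPrompt?: string;
}

function withDefaults(partial: Partial<ChatSettings>): ChatSettings {
  const clamp = (v: number, lo: number, hi: number) =>
    Math.min(hi, Math.max(lo, v));
  return {
    temperature: clamp(partial.temperature ?? 0.7, 0, 1), // 0 = deterministic
    maxTokens: clamp(Math.floor(partial.maxTokens ?? 2048), 1, 4096),
    systemPrompt: partial.systemPrompt,
  };
}
```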
## Credit costs
Each chat message costs credits based on the model and token count:
| Model | Approximate cost |
|---|---|
| Standard (TinyLlama) | 1–5 credits per request |
| High quality (Mistral 7B) | 5–20 credits per request |
The exact cost depends on the number of input and output tokens. Your credit balance is checked before the job is dispatched.
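A rough pre-dispatch estimate can be derived from the documented ranges by scaling within each range on token count. The model identifiers and the linear scaling are illustrative assumptions; the exact pricing formula is not published, so treat this as a sanity check, not billing logic.

```typescript
// Sketch: estimating cost from the documented per-request ranges.
// Model ids and the scaling rule are assumptions, not the real formula.
const RATES: Record<string, { min: number; max: number }> = {
  "tinyllama-1.1b": { min: 1, max: 5 },
  "mistral-7b": { min: 5, max: 20 },
};

function estimateCost(
  model: string,
  inputTokens: number,
  outputTokens: number,
): number {
  const rate = RATES[model];
  if (!rate) throw new Error(`unknown model: ${model}`);
  // Scale within the documented range by total token count (illustrative);
  // 4096 is the maximum response length from the settings table.
  const fraction = Math.min((inputTokens + outputTokens) / 4096, 1);
  return rate.min + (rate.max - rate.min) * fraction;
}
```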
## Requirements
- An account on Archipelag.io
- Credits in your balance (or free tier allowance)
- Identity verification for usage beyond the free tier
- Credits & Billing — purchase credits for chat
- API Reference — use chat programmatically via the API
