Open beta: Archipelag.io is in open beta until June 2026. All credits and earnings are virtual.

Chat

Conversational AI interface with streaming responses from distributed Islands

The chat interface at /chat lets you have conversations with LLM models running on the Archipelag.io network. Responses stream token-by-token in real time from whichever Island is assigned your job.

Starting a conversation

  1. Navigate to /chat
  2. Type your message in the input box
  3. Press Enter or click Send
  4. Tokens stream in as the Island generates them

When starting a new conversation with no messages, three suggestion chips are shown to help you get started — click any of them to populate the input with a ready-made prompt.

Each message is a job dispatched to an Island on the network. The coordinator selects the best available Island based on model compatibility, latency, and karma score.

Models

The available models depend on which Islands are online and which Cargos they support. Currently supported:

| Model | Size | Use case |
| --- | --- | --- |
| Mistral 7B | 7B parameters | General-purpose chat, reasoning, Q&A |
| TinyLlama 1.1B | 1.1B parameters | Fast responses, lighter Cargos |

The coordinator automatically routes your request to an Island running the appropriate model.

Conversation features

  • Streaming — tokens appear as they’re generated, not all at once
  • Conversation history — previous messages in the conversation are sent as context
  • Multiple conversations — start new conversations without losing previous ones
  • Real-time status — see when your job is queued, assigned, and processing
  • Regenerate — hover over any assistant message to reveal a “Regenerate” button that re-runs your prompt for a fresh response
  • Retry on failure — if a job fails, a “Retry” button appears below the error message to re-submit without retyping
  • Response metadata — each assistant message shows a compact summary: Island name, response time, model, and tokens per second
  • Autofocus — the input field is focused automatically when the page loads
  • Low-credits nudge — when your balance drops below 10 credits, a gentle reminder with a link to buy more appears in the empty state
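
Because previous messages are sent as context, each request carries the full conversation so far. A minimal sketch of assembling such a request, assuming a hypothetical JSON payload shape (`model`, `messages`, `stream` are illustrative field names, not the documented API):

```python
import json


def build_chat_payload(history: list[dict], user_message: str,
                       model: str = "mistral-7b") -> str:
    """Assemble a streaming chat request: prior turns go first,
    so the model sees the whole conversation, not just the latest message."""
    messages = history + [{"role": "user", "content": user_message}]
    return json.dumps({"model": model, "messages": messages, "stream": True})
```

Setting `stream` would correspond to the token-by-token delivery described above; see the API Reference for the real request format.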

Advanced settings

The chat interface exposes several parameters you can adjust:

| Parameter | Default | Description |
| --- | --- | --- |
| Temperature | 0.7 | Controls randomness (0 = deterministic, 1 = creative) |
| Max tokens | 2048 | Maximum response length (up to 4096) |
| System prompt | (none) | Optional instructions that guide the model's behavior |

Access these settings by clicking the sparkle icon (✨) next to the model picker. The settings panel also includes region, platform, quality, and latency preferences that filter which Islands can serve your requests.

Credit costs

Each chat message costs credits based on the model and token count:

| Model | Approximate cost |
| --- | --- |
| Standard (TinyLlama) | 1–5 credits per request |
| High quality (Mistral 7B) | 5–20 credits per request |

The exact cost depends on the number of input and output tokens. Your credit balance is checked before the job is dispatched.
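A token-based cost model consistent with the ranges above could look like this. The per-token rates and model identifiers are invented for illustration; real pricing is whatever the platform charges at dispatch time:

```python
# (base credits, credits per token) -- hypothetical rates chosen to land
# inside the documented ranges for typical request sizes.
RATES = {
    "tinyllama-1.1b": (1.0, 0.001),
    "mistral-7b": (5.0, 0.005),
}


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate credits for a request: a flat base plus a per-token charge
    covering both the prompt and the generated response."""
    base, per_token = RATES[model]
    return base + per_token * (input_tokens + output_tokens)
```

With these made-up rates, a Mistral 7B request with a 200-token prompt and a 1000-token reply would cost about 11 credits, inside the documented 5–20 range.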

Requirements

Free tier
Depending on platform configuration, a small amount of chat usage (up to 5 credits) may be allowed without identity verification. This lets you try the platform before completing KYC.
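The free-tier gate described above amounts to a simple check before dispatch. This is a sketch under stated assumptions: the 5-credit cap is a configurable platform value, and the function and argument names are hypothetical:

```python
FREE_TIER_CREDITS = 5.0  # hypothetical platform configuration value


def can_dispatch(verified: bool, free_credits_used: float,
                 cost: float) -> bool:
    """Allow unverified users a small trial budget before requiring KYC."""
    if verified:
        return True
    return free_credits_used + cost <= FREE_TIER_CREDITS
```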

{% card(title="Image Generation", href="/platform/image-generation/") %} Generate images from text prompts

Credits & Billing

Purchase credits for chat

API Reference

Use chat programmatically via the API

{% end %}