Browser Island
Run AI inference directly in your browser and earn credits
The Browser Island at /host lets you contribute compute power to the network directly from your browser — no downloads or installation required.
How it works
When you visit /host, your browser:
- Registers as an Island with the coordinator
- Downloads the TinyLlama 1.1B Chat model (~700MB, cached after first load)
- Runs inference using llama.cpp compiled to WebAssembly (via wllama)
- Streams results back through the coordinator to the requesting consumer
Your browser acts as a full Island on the network, receiving job assignments and executing them locally.
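The registration step can be pictured as a small handshake message sent to the coordinator. The message shape and field names below are illustrative assumptions for this sketch, not the coordinator's actual wire format.

```typescript
// Hypothetical registration payload a Browser Island might send on connect.
// Field names (type, role, runtime, models) are assumptions for illustration.

interface RegisterMessage {
  type: "register";
  role: "island";
  runtime: "wasm";
  models: string[];
}

function buildRegisterMessage(models: string[]): RegisterMessage {
  return { type: "register", role: "island", runtime: "wasm", models };
}

// Example: register with the default TinyLlama model enabled.
const msg = buildRegisterMessage(["tinyllama-1.1b-chat"]);
console.log(JSON.stringify(msg));
```

After this handshake, the coordinator knows the Island is WASM-based and which models it can serve, and can start assigning matching jobs.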
Dashboard
The Island page shows real-time statistics:
| Metric | Description |
|---|---|
| Status | Current state: initializing → loading model → ready → processing job |
| Model progress | Download and loading progress bar |
| Active job | Details of the currently executing job |
| Tokens generated | Running count of tokens produced |
| Tokens/sec | Current inference speed |
| Karma | Your accumulated reputation score |
| Compute minutes | Total time spent processing jobs |
| Jobs completed | Successful job count |
| Jobs failed | Failed job count |
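The Tokens/sec metric can be computed with a sliding window over recent token timestamps. The window size and class below are illustrative assumptions, not the dashboard's actual implementation.

```typescript
// Minimal sketch of a tokens-per-second meter: count tokens observed
// inside a fixed sliding window. The 5-second window is an assumption.

class TokenRateMeter {
  private timestamps: number[] = [];
  constructor(private windowMs = 5000) {}

  // Call once per generated token, with a millisecond timestamp.
  record(now: number): void {
    this.timestamps.push(now);
    const cutoff = now - this.windowMs;
    // Drop timestamps that have fallen out of the window.
    while (this.timestamps.length && this.timestamps[0] < cutoff) {
      this.timestamps.shift();
    }
  }

  tokensPerSec(now: number): number {
    const cutoff = now - this.windowMs;
    const recent = this.timestamps.filter((t) => t >= cutoff);
    return (recent.length * 1000) / this.windowMs;
  }
}
```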
Requirements
- Modern browser with WebAssembly support (Chrome 90+, Firefox 89+, Safari 15+, Edge 90+)
- Stable internet connection for receiving jobs and streaming results
- ~1GB free memory for the model and inference
- Tab must stay open — inference stops if you close or navigate away
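A page meeting these requirements can verify WebAssembly support up front. Validating the smallest legal module (the 8-byte magic-number and version header) confirms the engine actually works, not just that the global exists; the function name is ours, not part of the product.

```typescript
// Capability check before starting an Island: validate the minimal
// WebAssembly module (magic number "\0asm" + version 1, no sections).

function supportsWasm(): boolean {
  try {
    if (typeof WebAssembly !== "object") return false;
    const header = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
    return WebAssembly.validate(header);
  } catch {
    return false;
  }
}
```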
Earnings
You earn credits for every job your Browser Island completes successfully:
- Credits accumulate as you complete jobs
- Your karma score builds over time with successful completions
- Higher karma means you’re prioritized for more jobs
- View your earnings in settings under “Your Islands”
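Karma-based prioritization can be sketched as an ordering over candidate Islands: higher karma first. The tie-breaker (fewer active jobs) and the record shape are assumptions for illustration, not the scheduler's actual rule.

```typescript
// Illustrative sketch: when several Islands can serve a job, try
// higher-karma Islands first; break ties by current load (assumption).

interface IslandInfo {
  id: string;
  karma: number;
  activeJobs: number;
}

function prioritize(islands: IslandInfo[]): IslandInfo[] {
  return [...islands].sort(
    (a, b) => b.karma - a.karma || a.activeJobs - b.activeJobs,
  );
}
```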
Choosing Which Models to Serve
The Models panel on the Island page gives you full control over which Cargos your Island accepts:
Browsing Models
Models are grouped by category using filter tabs:
| Category | Examples |
|---|---|
| LLM | TinyLlama, Mistral 7B, Qwen3.5, Llama 3.1 |
| Vision | YOLOv8 detection, BLIP captioning, depth estimation |
| Audio | Whisper speech-to-text, Kokoro TTS, VibeVoice |
| Text | Sentiment analysis, embeddings, NER, QA, translation |
| Utility | PDF conversion, image resize, video transcode |
Each model shows its RAM and VRAM requirements, estimated speed on your hardware, and a fit score (Perfect / Good / Marginal).
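One way a fit score could be derived is by comparing a model's memory requirement to what the device has available. The thresholds below are illustrative assumptions about how Perfect / Good / Marginal might be computed, not the page's actual formula.

```typescript
// Hedged sketch of a fit score: ratio of required to available memory,
// bucketed into the three labels shown in the Models panel. Thresholds
// (0.5 and 0.8) are assumptions for illustration.

type Fit = "Perfect" | "Good" | "Marginal";

function fitScore(requiredGb: number, availableGb: number): Fit {
  const ratio = requiredGb / availableGb;
  if (ratio <= 0.5) return "Perfect"; // plenty of headroom
  if (ratio <= 0.8) return "Good"; // fits with some margin
  return "Marginal"; // tight fit, likely slower
}
```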
Enabling Models
- Tap a model to toggle it on or off
- Enable All activates every model compatible with your hardware
- None disables all models
- Enabled models appear with a teal checkbox and highlighted border
When you enable a model:
- Your Island’s `supported_runtimes` updates automatically
- The coordinator starts dispatching matching jobs to you
- Models are downloaded on-demand when the first job arrives — no upfront download needed
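The toggle behavior above can be sketched as a set of enabled model IDs plus the update pushed to the coordinator. The `supported_runtimes` field name comes from this page; the class and payload shape are assumptions.

```typescript
// Sketch of the Models panel's enable/disable logic. The update payload
// shape is an illustrative assumption.

class ModelSelection {
  private enabled = new Set<string>();

  // Tap a model to toggle it on or off.
  toggle(modelId: string): void {
    if (this.enabled.has(modelId)) this.enabled.delete(modelId);
    else this.enabled.add(modelId);
  }

  // "Enable All" activates every hardware-compatible model.
  enableAll(compatible: string[]): void {
    compatible.forEach((id) => this.enabled.add(id));
  }

  // "None" disables all models.
  disableAll(): void {
    this.enabled.clear();
  }

  // Update the Island would push to the coordinator after a change.
  toUpdate(): { supported_runtimes: string[] } {
    return { supported_runtimes: [...this.enabled].sort() };
  }
}
```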
What Happens When a Job Arrives
- Coordinator checks your enabled models and dispatches a matching job
- If the model isn’t cached yet, it downloads automatically (first job is slower)
- Inference runs on your device
- Results stream back to the consumer in real time
- You earn credits based on the Cargo’s pricing
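The steps above can be condensed into a small sketch: check the job against the enabled models, lazily fetch the model on first use, then credit the Island. The model cache and pricing field are illustrative assumptions.

```typescript
// Condensed sketch of the job-arrival flow. Pricing and cache details
// are assumptions for illustration.

interface Job {
  model: string;
  pricePerJob: number;
}

class BrowserIsland {
  credits = 0;
  private cache = new Set<string>();

  constructor(private enabledModels: Set<string>) {}

  handleJob(job: Job): boolean {
    if (!this.enabledModels.has(job.model)) return false; // not serving this Cargo
    if (!this.cache.has(job.model)) {
      this.cache.add(job.model); // first job triggers the on-demand download
    }
    // ...run inference and stream results back to the consumer...
    this.credits += job.pricePerJob;
    return true;
  }
}
```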
Lifecycle
The Browser Island follows this lifecycle:
- Initializing — Registering with the coordinator
- Model selection — Choose which models to serve via the Models panel
- Ready — Waiting for job assignments, sending heartbeats
- Processing — Executing an assigned job, streaming tokens
- Ready — Job complete, waiting for the next assignment
The cycle continues as long as the tab remains open. The Island sends regular heartbeats to the coordinator to maintain its online status.
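The lifecycle above can be written as a small state machine. State names mirror the list; the event names and transition table are an illustrative sketch.

```typescript
// The Browser Island lifecycle as a state machine. Event names are
// assumptions; states come from the list above.

type State = "initializing" | "model-selection" | "ready" | "processing";
type Event = "registered" | "models-chosen" | "job-assigned" | "job-complete";

const transitions: Record<State, Partial<Record<Event, State>>> = {
  "initializing": { "registered": "model-selection" },
  "model-selection": { "models-chosen": "ready" },
  "ready": { "job-assigned": "processing" },
  // Job complete: back to ready, and the cycle repeats while the tab is open.
  "processing": { "job-complete": "ready" },
};

function next(state: State, event: Event): State {
  return transitions[state][event] ?? state; // ignore events that don't apply
}
```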
