Cargos
Cargo architecture — runtime types, Island requirements, trust levels, pricing, and the I/O protocol
A Cargo is a signed, versioned unit of computation that runs on Islands. “Cargo” is the Archipelag.io term for a workload blueprint — it defines what to execute, what resources are needed, how much it costs, and what security constraints apply. The coordinator matches jobs to capable Islands, and the Island software executes the Cargo in a sandbox.
Runtime Types
Every Cargo has a runtime_type that determines how it is executed on the Island:
| Runtime | Execution | Use Cases | Count |
|---|---|---|---|
| container | Docker container via Bollard | Document processing, media conversion, general compute | 37 |
| wasm | Wasmtime sandbox | Lightweight tasks (hashing, JSON, CSV, regex) | 21 |
| llmcpp | llama.cpp (native GGUF) | LLM chat, code generation — token-by-token streaming | 15 |
| onnx | ONNX Runtime (native) | Classification, detection, embeddings, ASR, TTS, OCR | 45 |
| diffusers | candle (native Stable Diffusion) | Image generation, video generation | 12 |
| coreml | Core ML on Apple devices | iOS on-device LLM, ASR, TTS | 3 |
Platform Compatibility
| Runtime | Platforms | Requirements |
|---|---|---|
| container | desktop (Linux, macOS) | Docker installed |
| wasm | Any | Built into Island binary |
| llmcpp | desktop, ios, browser | Built with --features gguf |
| onnx | desktop, android | Built with --features onnx |
| diffusers | desktop | Built with --features diffusers, GPU recommended |
| coreml | ios | Apple Neural Engine |
The coordinator checks platform compatibility during Island selection — a coreml Cargo will never be dispatched to a Linux Island, and an onnx Cargo won’t go to an Island that doesn’t report "onnx" in its supported_runtimes.
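That compatibility check can be sketched as a pure filter over Island records. The platform tokens mirror the table above, but the dict shape and field names (`platform`, `supported_runtimes`) are assumptions for illustration:

```python
# Hypothetical sketch of the coordinator's runtime-compatibility filter.
# Platform tokens follow the table above; the Island record shape is assumed.

RUNTIME_PLATFORMS = {
    "container": {"desktop"},
    "wasm": {"desktop", "ios", "android", "browser"},  # built into the binary
    "llmcpp": {"desktop", "ios", "browser"},
    "onnx": {"desktop", "android"},
    "diffusers": {"desktop"},
    "coreml": {"ios"},
}

def is_compatible(island: dict, runtime_type: str) -> bool:
    """An Island qualifies only if its platform supports the runtime
    AND it self-reports the runtime in supported_runtimes."""
    return (
        island["platform"] in RUNTIME_PLATFORMS.get(runtime_type, set())
        and runtime_type in island["supported_runtimes"]
    )

desktop_island = {"platform": "desktop",
                  "supported_runtimes": ["container", "wasm", "onnx"]}
print(is_compatible(desktop_island, "coreml"))  # False: coreml is iOS-only
print(is_compatible(desktop_island, "onnx"))    # True
```

Both conditions matter: a Linux Island could in principle run onnx, but it is skipped unless its build actually reports the runtime.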
Model Resolution
Native runtimes (llmcpp, onnx, diffusers) use HuggingFace URIs to reference models:
hf://TheBloke/Mistral-7B-Instruct-v0.2-GGUF → auto-discovers best .gguf file
hf://sentence-transformers/all-MiniLM-L6-v2 → auto-discovers model.onnx
hf://runwayml/stable-diffusion-v1-5 → downloads pipeline components
hf://TheBloke/Mistral-7B-GGUF:mistral-7b.Q4_K_M.gguf → specific file
Models are cached on the Island at ~/.island/model-cache/ with LRU eviction. The Island preloads a starter set of models at startup based on hardware capabilities.
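Under the URI formats shown above, resolution starts by splitting the repo from an optional explicit filename. The colon-delimited split is inferred from the examples; `parse_hf_uri` is an illustrative name, not a real API:

```python
# Sketch of hf:// URI parsing, inferred from the example URIs above.

def parse_hf_uri(uri: str):
    """Return (repo_id, filename) for an hf:// model URI.
    filename is None when the Cargo relies on auto-discovery."""
    if not uri.startswith("hf://"):
        raise ValueError(f"not a HuggingFace URI: {uri}")
    rest = uri[len("hf://"):]
    repo, _, filename = rest.partition(":")   # optional ":file" suffix
    return repo, filename or None

print(parse_hf_uri("hf://TheBloke/Mistral-7B-GGUF:mistral-7b.Q4_K_M.gguf"))
# ('TheBloke/Mistral-7B-GGUF', 'mistral-7b.Q4_K_M.gguf')
print(parse_hf_uri("hf://runwayml/stable-diffusion-v1-5"))
# ('runwayml/stable-diffusion-v1-5', None)
```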
Island Requirements Matching
Cargos declare minimum hardware requirements. The coordinator only dispatches jobs to Islands that meet all requirements:
| Requirement | Field | Example |
|---|---|---|
| GPU memory | required_vram_mb | 6144 (6 GB for Mistral 7B) |
| CPU cores | required_cpu_cores | 4 |
| System RAM | required_ram_mb | 8192 |
Job submitted (workload: llm-chat)
│
▼
Requirements: 6GB VRAM, 4 CPU, 8GB RAM
│
▼
Island selection query
│
├── Filter: meets requirements?
├── Prefer: same region
├── Prefer: warm container (already cached)
├── Prefer: high reputation score
├── Prefer: low active_jobs count
│
▼
Best Island selected ──► Job dispatched
Islands that have recently run the same Cargo get a placement bonus — the container image is already cached, eliminating pull latency.
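The filter-then-prefer pipeline above can be sketched as a hard requirements filter followed by a scoring pass. The weights and Island record fields here are invented for illustration; only the preference criteria come from the flow above:

```python
# Illustrative Island selection: filter on hard requirements,
# then score by the preferences listed above. Weights are assumptions.

def meets_requirements(island, req):
    return (island["vram_mb"] >= req["required_vram_mb"]
            and island["cpu_cores"] >= req["required_cpu_cores"]
            and island["ram_mb"] >= req["required_ram_mb"])

def score(island, job_region, cargo_slug):
    s = 0.0
    if island["region"] == job_region:
        s += 2.0                        # prefer same region
    if cargo_slug in island["cached_cargos"]:
        s += 3.0                        # warm cache: no image pull latency
    s += island["reputation"]           # 0.0 .. 1.0
    s -= 0.5 * island["active_jobs"]    # prefer lightly loaded Islands
    return s

req = {"required_vram_mb": 6144, "required_cpu_cores": 4, "required_ram_mb": 8192}
islands = [
    {"id": "a", "vram_mb": 8192, "cpu_cores": 8, "ram_mb": 16384, "region": "eu",
     "cached_cargos": {"llm-chat"}, "reputation": 0.9, "active_jobs": 1},
    {"id": "b", "vram_mb": 4096, "cpu_cores": 8, "ram_mb": 16384, "region": "eu",
     "cached_cargos": set(), "reputation": 1.0, "active_jobs": 0},
]
eligible = [i for i in islands if meets_requirements(i, req)]
best = max(eligible, key=lambda i: score(i, "eu", "llm-chat"))
print(best["id"])  # "a": island b fails the 6 GB VRAM requirement outright
```

Note that requirements are a hard gate: island b has the best reputation and zero load, but it never reaches the scoring stage.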
Trust Levels
Every Cargo has a trust level (0-3) that determines the security posture applied during execution:
| Level | Name | Sandbox Tier | Signature Required | Description |
|---|---|---|---|---|
| 0 | Untrusted | restricted | No | New or unverified Cargos. Minimal resources, no network. |
| 1 | Basic | standard | No | Reviewed Cargos with basic security scan passed. |
| 2 | Verified | standard | Yes (cosign) | Signed by a verified publisher. Full security scan passed. |
| 3 | Official | elevated | Yes (cosign) | First-party or audited Cargos. GPU and network access allowed. |
Sandbox Tier Mapping
Each sandbox tier applies resource limits and a seccomp syscall filter:
| Tier | Memory | Timeout | Network | CPUs | Seccomp |
|---|---|---|---|---|---|
| restricted | 256 MB | 60s | Disabled | 1 | Minimal (~10 syscalls) |
| standard | 1 GB | 300s | Disabled | 2 | Default (~140 syscalls) |
| elevated | 8 GB | 600s | Enabled | 4 | GPU or Network profile |
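Taken together, the two tables amount to a lookup from trust level to concrete limits. A minimal sketch, with values copied from the tables above and key names assumed:

```python
# Trust level -> sandbox tier -> limits, per the tables above.
# Dict key names are assumptions; the values come from the docs.

TRUST_TO_TIER = {0: "restricted", 1: "standard", 2: "standard", 3: "elevated"}

TIER_LIMITS = {
    "restricted": {"memory_mb": 256,  "timeout_s": 60,  "network": False, "cpus": 1},
    "standard":   {"memory_mb": 1024, "timeout_s": 300, "network": False, "cpus": 2},
    "elevated":   {"memory_mb": 8192, "timeout_s": 600, "network": True,  "cpus": 4},
}

def limits_for(trust_level: int) -> dict:
    """Resolve the sandbox limits applied to a Cargo at a given trust level."""
    return TIER_LIMITS[TRUST_TO_TIER[trust_level]]

print(limits_for(2)["memory_mb"])  # 1024: Verified Cargos run in the standard tier
```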
Pricing Models
Cargos support three pricing models, used individually or in combination:
| Model | Field | When Used |
|---|---|---|
| Per-job | price_per_job | Fixed price per job submission (e.g., image generation) |
| Per-token | price_per_token | Metered by token count (LLM Cargos) |
| Per-second | price_per_second | Metered by compute duration (long-running tasks) |
The coordinator charges consumers atomically using UPDATE ... WHERE credits >= price to prevent race conditions. If a consumer doesn’t have enough credits, the job is rejected before dispatch.
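A minimal sketch of that atomic debit, using SQLite in place of whatever database the coordinator actually runs. The guard lives in the WHERE clause, so two concurrent submissions can never both succeed against a balance that only covers one:

```python
# Atomic credit debit, per the UPDATE ... WHERE credits >= price pattern above.
# SQLite stands in for the production database; table/column names are assumed.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE consumers (id TEXT PRIMARY KEY, credits INTEGER)")
conn.execute("INSERT INTO consumers VALUES ('alice', 100)")

def charge(consumer_id: str, price: int) -> bool:
    """Debit atomically; the WHERE clause guarantees the balance
    never goes negative, even under concurrent submissions."""
    cur = conn.execute(
        "UPDATE consumers SET credits = credits - ? "
        "WHERE id = ? AND credits >= ?",
        (price, consumer_id, price),
    )
    conn.commit()
    return cur.rowcount == 1   # 0 rows updated -> insufficient credits

print(charge("alice", 60))   # True  - balance is now 40
print(charge("alice", 60))   # False - job rejected before dispatch
```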
Island Payouts
When an Island completes a job, the platform calculates the Island payout:
- Base payout = Cargo price - 20% platform fee
- Hardware tier multiplier applied (higher-spec hardware earns more)
- Payout only processed if the Island meets the karma monetization threshold (+10 karma)
Islands below the karma threshold still execute jobs (to build karma) but do not receive payouts.
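A worked example of those payout rules. The 20% fee and +10 karma threshold come from the list above; the multiplier value and function name are assumptions:

```python
# Worked example of the Island payout rules described above.
# The tier multiplier value (1.25) is a made-up illustration.

PLATFORM_FEE = 0.20
KARMA_THRESHOLD = 10

def island_payout(price: float, tier_multiplier: float, karma: int) -> float:
    """Return the credit payout for a completed job, or 0.0 if the
    Island has not yet reached the monetization threshold."""
    if karma < KARMA_THRESHOLD:
        return 0.0                      # job still runs, builds karma only
    base = price * (1 - PLATFORM_FEE)   # platform takes 20%
    return base * tier_multiplier       # higher-spec hardware earns more

print(island_payout(10.0, 1.25, karma=42))  # 10.0
print(island_payout(10.0, 1.25, karma=3))   # 0.0
```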
I/O Protocol
Cargos receive input as JSON on stdin and emit output as JSON Lines on stdout. This protocol is consistent across all runtime types.
Input
The coordinator sends a JSON object on stdin when the container starts. The schema depends on the Cargo type:
LLM Chat:
{
"prompt": "Explain quantum computing",
"max_tokens": 512,
"temperature": 0.7
}
Image Generation:
{
"prompt": "A sunset over mountains",
"width": 512,
"height": 512,
"steps": 20,
"seed": 42
}
Output
Cargos emit JSON Lines on stdout. Each line is a message with a type field:
| Type | Fields | Purpose |
|---|---|---|
| status | message | Informational (e.g., “Loading model…”) |
| token | content | Single text token for streaming LLM output |
| progress | step, total | Step progress for multi-step operations |
| image | data, format, width, height | Completed image (base64-encoded) |
| done | usage (optional), seed (optional) | Signals successful completion |
| error | message | Error — job will be marked failed |
Cargo stdout
│
▼
Agent parses each JSON line
│
├── token ──► NATS host.{id}.output ──► Coordinator ──► WebSocket ──► User
├── progress ──► NATS host.{id}.progress ──► Coordinator ──► User
├── image ──► NATS host.{id}.image ──► Coordinator ──► User
├── done ──► NATS host.{id}.status (succeeded)
└── error ──► NATS host.{id}.status (failed)
Stderr is captured for debugging but not streamed to consumers.
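Putting the protocol together, a minimal Cargo skeleton might look like the following. The hard-coded token list stands in for real inference; a real Cargo would read `sys.stdin` and write `sys.stdout` directly:

```python
# Minimal Cargo skeleton for the stdin/stdout protocol above:
# read one JSON object, emit JSON Lines messages with a "type" field.
import io
import json

def run_cargo(stdin, stdout):
    def emit(msg: dict):
        stdout.write(json.dumps(msg) + "\n")
        stdout.flush()                      # flush per line so tokens stream

    job = json.load(stdin)                  # e.g. {"prompt": ..., "max_tokens": ...}
    emit({"type": "status", "message": "Loading model..."})
    for word in ["Quantum", " computing", " uses", " qubits."]:
        emit({"type": "token", "content": word})   # stand-in for real inference
    emit({"type": "done", "usage": {"tokens": 4}})

# Simulated invocation (a real Cargo would use sys.stdin / sys.stdout):
out = io.StringIO()
run_cargo(io.StringIO('{"prompt": "Explain quantum computing", "max_tokens": 512}'), out)
print(out.getvalue().splitlines()[0])
# {"type": "status", "message": "Loading model..."}
```

Flushing after every line matters: without it, buffered stdout would batch tokens and defeat streaming.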
Cargo Catalog
The coordinator maintains a catalog of approved Cargos. Each Cargo entry includes:
| Field | Purpose |
|---|---|
| name / slug | Human-readable name and URL-safe identifier |
| description | What the Cargo does |
| runtime_type | Execution environment (container, wasm, llmcpp, onnx, diffusers, coreml) |
| container_image | Docker image reference |
| image_digest | SHA-256 digest for integrity verification |
| requirements | Minimum VRAM, CPU cores, and RAM |
| trust_level | Security tier (0-3) |
| sandbox_tier | Resource limits and seccomp profile |
| pricing | Per-job, per-token, or per-second rates |
| reputation_score | Aggregate quality score from job outcomes |
| total_jobs / successful_jobs / failed_jobs | Usage statistics |
Signature Requirements
Cargos at trust level 2 or higher must be signed using cosign (part of the Sigstore project). The signature fields tracked are:
- cosign_signature — The signature value
- cosign_certificate — The signing certificate
- cosign_log_index — Rekor transparency log entry index
- signature_verified_at — When verification last succeeded
- signature_verified_by — Who performed the verification
The Island software verifies signatures before execution using public keys fetched from the coordinator.
Reputation Tracking
Each Cargo tracks its execution history:
- reputation_score — Composite quality score (0.0 to 1.0)
- total_jobs, successful_jobs, failed_jobs — Raw counts
- avg_execution_time_ms — Average completion time
The ReputationWorker runs hourly to auto-suspend Cargos with reputation below 0.5 or success rate below 90% (after 100+ jobs), and flags Cargos with excessive complaints for manual review.
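The suspension rule can be sketched as a pure predicate. Reading the 100-job minimum as gating both checks is an interpretation of the sentence above, and the function name is invented:

```python
# Sketch of the hourly auto-suspend check described above.
# Assumption: the 100-job minimum gates both thresholds.

def should_suspend(cargo: dict) -> bool:
    total = cargo["total_jobs"]
    if total < 100:
        return False                        # not enough history yet
    success_rate = cargo["successful_jobs"] / total
    return cargo["reputation_score"] < 0.5 or success_rate < 0.90

print(should_suspend({"total_jobs": 200, "successful_jobs": 170,
                      "reputation_score": 0.8}))   # True: 85% success rate
print(should_suspend({"total_jobs": 50, "successful_jobs": 30,
                      "reputation_score": 0.2}))   # False: under 100 jobs
```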
Next Steps
Trust Levels
Deep dive into the trust level system and what each level unlocks.
Island
See how the Island software executes Cargos on Islands.
