
Cargos

Cargo architecture — runtime types, Island requirements, trust levels, pricing, and the I/O protocol

A Cargo is a signed, versioned unit of computation that runs on Islands. “Cargo” is the Archipelag.io term for a workload blueprint — it defines what to execute, what resources are needed, how much it costs, and what security constraints apply. The coordinator matches jobs to capable Islands, and the Island software executes the Cargo in a sandbox.

Runtime Types

Every Cargo has a runtime_type that determines how it is executed on the Island:

| Runtime | Execution | Use Cases | Count |
|---------|-----------|-----------|-------|
| `container` | Docker container via Bollard | Document processing, media conversion, general compute | 37 |
| `wasm` | Wasmtime sandbox | Lightweight tasks (hashing, JSON, CSV, regex) | 21 |
| `llmcpp` | llama.cpp (native GGUF) | LLM chat, code generation with token-by-token streaming | 15 |
| `onnx` | ONNX Runtime (native) | Classification, detection, embeddings, ASR, TTS, OCR | 45 |
| `diffusers` | candle (native Stable Diffusion) | Image generation, video generation | 12 |
| `coreml` | Core ML on Apple devices | iOS on-device LLM, ASR, TTS | 3 |
Container is the default
When no `runtime_type` is specified, the system defaults to `container`. Native runtimes (`llmcpp`, `onnx`, `diffusers`) execute models directly on the Island without Docker overhead, downloading them from HuggingFace at runtime. See the [Native ML Runtimes guide](/guides/native-runtimes/) for details.

Platform Compatibility

| Runtime | Platforms | Requirements |
|---------|-----------|--------------|
| `container` | desktop (Linux, macOS) | Docker installed |
| `wasm` | Any | Built into the Island binary |
| `llmcpp` | desktop, ios, browser | Built with `--features gguf` |
| `onnx` | desktop, android | Built with `--features onnx` |
| `diffusers` | desktop | Built with `--features diffusers`; GPU recommended |
| `coreml` | ios | Apple Neural Engine |

The coordinator checks platform compatibility during Island selection — a coreml Cargo will never be dispatched to a Linux Island, and an onnx Cargo won’t go to an Island that doesn’t report "onnx" in its supported_runtimes.
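The compatibility check can be sketched as a small predicate. This is an illustrative model, not the coordinator's actual code: the `PLATFORM_SUPPORT` map is transcribed from the table above, and the function name and signature are hypothetical.

```python
# Hypothetical sketch of the coordinator's platform-compatibility filter.
# PLATFORM_SUPPORT mirrors the Platform Compatibility table above.
PLATFORM_SUPPORT = {
    "container": {"desktop"},
    "wasm": {"desktop", "ios", "android", "browser"},  # built into the binary
    "llmcpp": {"desktop", "ios", "browser"},
    "onnx": {"desktop", "android"},
    "diffusers": {"desktop"},
    "coreml": {"ios"},
}

def is_compatible(cargo_runtime: str, island_platform: str,
                  supported_runtimes: set[str]) -> bool:
    """An Island qualifies only if its platform can host the runtime
    AND it advertises that runtime in supported_runtimes."""
    return (island_platform in PLATFORM_SUPPORT.get(cargo_runtime, set())
            and cargo_runtime in supported_runtimes)

# A coreml Cargo is never dispatched to a desktop (e.g. Linux) Island:
is_compatible("coreml", "desktop", {"container", "wasm"})  # False
```

Note that both conditions matter: an Island may run a capable platform yet have been built without a given feature flag, in which case the runtime is absent from its `supported_runtimes` report.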

Model Resolution

Native runtimes (llmcpp, onnx, diffusers) use HuggingFace URIs to reference models:

```text
hf://TheBloke/Mistral-7B-Instruct-v0.2-GGUF          → auto-discovers best .gguf file
hf://sentence-transformers/all-MiniLM-L6-v2          → auto-discovers model.onnx
hf://runwayml/stable-diffusion-v1-5                  → downloads pipeline components
hf://TheBloke/Mistral-7B-GGUF:mistral-7b.Q4_K_M.gguf → specific file
```

Models are cached on the Island at `~/.island/model-cache/` with LRU eviction. The Island preloads a starter set of models at startup based on hardware capabilities.

Island Requirements Matching

Cargos declare minimum hardware requirements. The coordinator only dispatches jobs to Islands that meet all requirements:

| Requirement | Field | Example |
|-------------|-------|---------|
| GPU memory | `required_vram_mb` | `6144` (6 GB for Mistral 7B) |
| CPU cores | `required_cpu_cores` | `4` |
| System RAM | `required_ram_mb` | `8192` |

```text
Job submitted (workload: llm-chat)
        │
        ▼
  Requirements: 6 GB VRAM, 4 CPU cores, 8 GB RAM
        │
        ▼
  Island selection query
        │
        ├── Filter: meets requirements?
        ├── Prefer: same region
        ├── Prefer: warm container (already cached)
        ├── Prefer: high reputation score
        ├── Prefer: low active_jobs count
        │
        ▼
  Best Island selected ──► Job dispatched
```

Islands that have recently run the same Cargo get a placement bonus — the container image is already cached, eliminating pull latency.
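The flow above amounts to a hard filter followed by a soft ranking. A sketch of that two-phase selection, with made-up weights (the real scoring function and its weights are not documented here):

```python
from dataclasses import dataclass

@dataclass
class Island:
    vram_mb: int
    cpu_cores: int
    ram_mb: int
    region: str
    has_cached_image: bool   # "warm container" placement bonus
    reputation: float        # 0.0 .. 1.0
    active_jobs: int

def select_island(islands, req, job_region):
    """Phase 1: drop Islands that miss any hard requirement.
    Phase 2: rank survivors by the soft preferences; weights are illustrative."""
    capable = [i for i in islands
               if i.vram_mb >= req["vram_mb"]
               and i.cpu_cores >= req["cpu_cores"]
               and i.ram_mb >= req["ram_mb"]]
    if not capable:
        return None  # no Island meets the requirements

    def score(i):
        return ((i.region == job_region) * 2.0   # prefer same region
                + i.has_cached_image * 1.5       # prefer warm container
                + i.reputation                   # prefer high reputation
                - 0.1 * i.active_jobs)           # prefer lightly loaded

    return max(capable, key=score)
```

The key property is that requirements are never traded off against preferences: an Island with a cached image but too little VRAM is filtered out before ranking begins.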

Trust Levels

Every Cargo has a trust level (0-3) that determines the security posture applied during execution:

| Level | Name | Sandbox Tier | Signature Required | Description |
|-------|------|--------------|--------------------|-------------|
| 0 | Untrusted | restricted | No | New or unverified Cargos. Minimal resources, no network. |
| 1 | Basic | standard | No | Reviewed Cargos that passed a basic security scan. |
| 2 | Verified | standard | Yes (cosign) | Signed by a verified publisher. Full security scan passed. |
| 3 | Official | elevated | Yes (cosign) | First-party or audited Cargos. GPU and network access allowed. |

Sandbox Tier Mapping

Each sandbox tier applies resource limits and a seccomp syscall filter:

| Tier | Memory | Timeout | Network | CPUs | Seccomp |
|------|--------|---------|---------|------|---------|
| restricted | 256 MB | 60 s | Disabled | 1 | Minimal (~10 syscalls) |
| standard | 1 GB | 300 s | Disabled | 2 | Default (~140 syscalls) |
| elevated | 8 GB | 600 s | Enabled | 4 | GPU or Network profile |

Trust level determines capability
A Cargo at trust level 0 cannot access the network or GPU, regardless of Island capabilities. Publishers must pass security scanning and verification to unlock higher trust levels.
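Putting the two tables together, the trust level fully determines the sandbox limits. A sketch of that lookup, with the tier values transcribed from the tables above (the dictionary names are illustrative):

```python
# Resource limits per sandbox tier, as listed in the Sandbox Tier Mapping table.
SANDBOX_TIERS = {
    "restricted": {"memory_mb": 256,  "timeout_s": 60,  "network": False, "cpus": 1},
    "standard":   {"memory_mb": 1024, "timeout_s": 300, "network": False, "cpus": 2},
    "elevated":   {"memory_mb": 8192, "timeout_s": 600, "network": True,  "cpus": 4},
}

# Trust level -> sandbox tier, as listed in the Trust Levels table.
TRUST_TO_TIER = {0: "restricted", 1: "standard", 2: "standard", 3: "elevated"}

def limits_for(trust_level: int) -> dict:
    """Resolve a Cargo's trust level to its concrete sandbox limits."""
    return SANDBOX_TIERS[TRUST_TO_TIER[trust_level]]

limits_for(0)["network"]  # False: untrusted Cargos never get network access
```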

Pricing Models

Cargos support three pricing models, used individually or in combination:

| Model | Field | When Used |
|-------|-------|-----------|
| Per-job | `price_per_job` | Fixed price per job submission (e.g., image generation) |
| Per-token | `price_per_token` | Metered by token count (LLM Cargos) |
| Per-second | `price_per_second` | Metered by compute duration (long-running tasks) |

The coordinator charges consumers atomically using `UPDATE ... WHERE credits >= price` to prevent race conditions. If a consumer doesn't have enough credits, the job is rejected before dispatch.
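The atomic-debit pattern can be demonstrated with any SQL database; here is a self-contained SQLite sketch (table and function names are illustrative, not the coordinator's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE consumers (id INTEGER PRIMARY KEY, credits INTEGER)")
conn.execute("INSERT INTO consumers VALUES (1, 100)")

def charge(conn, consumer_id: int, price: int) -> bool:
    """Debit credits only if the balance covers the price, in a single
    UPDATE, so two concurrent jobs cannot both spend the same credits."""
    cur = conn.execute(
        "UPDATE consumers SET credits = credits - ? "
        "WHERE id = ? AND credits >= ?",
        (price, consumer_id, price),
    )
    return cur.rowcount == 1  # 0 rows updated means insufficient credits

charge(conn, 1, 80)   # True, balance drops from 100 to 20
charge(conn, 1, 80)   # False, job would be rejected before dispatch
```

Because the balance check and the debit happen in one statement, there is no window between "read balance" and "write balance" for a second job to slip through.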

Island Payouts

When an Island completes a job, the platform calculates the Island payout:

  1. Base payout = Cargo price - 20% platform fee
  2. Hardware tier multiplier applied (higher-spec hardware earns more)
  3. Payout only processed if the Island meets the karma monetization threshold (+10 karma)

Islands below the karma threshold still execute jobs (to build karma) but do not receive payouts.
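The three payout steps reduce to simple arithmetic. In this sketch the 20% fee and +10 karma threshold come from the text above; the function name and the example multiplier are illustrative:

```python
def island_payout(price: float, tier_multiplier: float, karma: int,
                  platform_fee: float = 0.20, karma_threshold: int = 10) -> float:
    """Steps 1-3 above: fee deduction, hardware multiplier, karma gate."""
    if karma < karma_threshold:
        return 0.0  # job still executes (to build karma) but earns nothing
    base = price * (1.0 - platform_fee)   # step 1: subtract platform fee
    return base * tier_multiplier         # step 2: hardware tier multiplier

island_payout(price=10.0, tier_multiplier=1.5, karma=25)  # 12.0
island_payout(price=10.0, tier_multiplier=1.5, karma=5)   # 0.0, below threshold
```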

I/O Protocol

Cargos receive input as JSON on stdin and emit output as JSON Lines on stdout. This protocol is consistent across all runtime types.

Input

The coordinator sends a JSON object on stdin when the container starts. The schema depends on the Cargo type:

LLM Chat:

```json
{
  "prompt": "Explain quantum computing",
  "max_tokens": 512,
  "temperature": 0.7
}
```

Image Generation:

```json
{
  "prompt": "A sunset over mountains",
  "width": 512,
  "height": 512,
  "steps": 20,
  "seed": 42
}
```

Output

Cargos emit JSON Lines on stdout. Each line is a message with a type field:

| Type | Fields | Purpose |
|------|--------|---------|
| `status` | `message` | Informational (e.g., "Loading model…") |
| `token` | `content` | Single text token for streaming LLM output |
| `progress` | `step`, `total` | Step progress for multi-step operations |
| `image` | `data`, `format`, `width`, `height` | Completed image (base64-encoded) |
| `done` | `usage` (optional), `seed` (optional) | Signals successful completion |
| `error` | `message` | Error; the job will be marked failed |

```text
Cargo stdout
        │
        ▼
  Agent parses each JSON line
        │
        ├── token ──► NATS host.{id}.output ──► Coordinator ──► WebSocket ──► User
        ├── progress ──► NATS host.{id}.progress ──► Coordinator ──► User
        ├── image ──► NATS host.{id}.image ──► Coordinator ──► User
        ├── done ──► NATS host.{id}.status (succeeded)
        └── error ──► NATS host.{id}.status (failed)
```

Stderr is captured for debugging but not streamed to consumers.

Cargo Catalog

The coordinator maintains a catalog of approved Cargos. Each Cargo entry includes:

| Field | Purpose |
|-------|---------|
| `name` / `slug` | Human-readable name and URL-safe identifier |
| `description` | What the Cargo does |
| `runtime_type` | Execution environment (see Runtime Types above) |
| `container_image` | Docker image reference |
| `image_digest` | SHA-256 digest for integrity verification |
| `requirements` | Minimum VRAM, CPU, RAM |
| `trust_level` | Security tier (0-3) |
| `sandbox_tier` | Resource limits and seccomp profile |
| `pricing` | Per-job, per-token, or per-second rates |
| `reputation_score` | Aggregate quality score from job outcomes |
| `total_jobs` / `successful_jobs` / `failed_jobs` | Usage statistics |

Signature Requirements

Cargos at trust level 2 or higher must be signed using cosign (part of the Sigstore project). The signature fields tracked are:

- `cosign_signature` — the signature value
- `cosign_certificate` — the signing certificate
- `cosign_log_index` — Rekor transparency log entry index
- `signature_verified_at` — when verification last succeeded
- `signature_verified_by` — who performed the verification

The Island software verifies signatures before execution using public keys fetched from the coordinator.

Reputation Tracking

Each Cargo tracks its execution history:

- `reputation_score` — composite quality score (0.0 to 1.0)
- `total_jobs`, `successful_jobs`, `failed_jobs` — raw counts
- `avg_execution_time_ms` — average completion time

The ReputationWorker runs hourly to auto-suspend Cargos with reputation below 0.5 or success rate below 90% (after 100+ jobs), and flags Cargos with excessive complaints for manual review.
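The auto-suspension rule can be written as a small predicate. A sketch with the thresholds from the text; it assumes the 100-job floor applies only to the success-rate check, and the function name is illustrative:

```python
def should_suspend(reputation: float, total: int, successful: int,
                   min_reputation: float = 0.5,
                   min_success_rate: float = 0.90,
                   min_jobs: int = 100) -> bool:
    """ReputationWorker-style check: suspend on low reputation, or on a low
    success rate once enough jobs have run to make the rate meaningful."""
    if reputation < min_reputation:
        return True
    if total >= min_jobs and successful / total < min_success_rate:
        return True
    return False

should_suspend(0.8, total=200, successful=170)  # True: 85% success rate
should_suspend(0.8, total=50, successful=40)    # False: too few jobs to judge
```

Gating the rate check behind a minimum job count avoids suspending a new Cargo over a single unlucky failure.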

Next Steps

{% card(title="Publishing Guide", href="/guides/publishing/") %}
Learn how to submit, scan, sign, and publish your own Cargos.
{% end %}

Trust Levels

Deep dive into the trust level system and what each level unlocks.

Island

See how the Island software executes Cargos on Islands.