Open Beta: Archipelag.io is in open beta until June 2026. All credits and earnings are virtual.


API Endpoints

Base URL: https://api.archipelag.io/api/v1

All requests require authentication via the Authorization header:

Authorization: Bearer ak_your_api_key
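As a sketch of what an authenticated call could look like, the helper below builds the Authorization header and issues a GET using only the Python standard library (the `api_get` helper and its error handling are illustrative, not part of any official SDK):

```python
import json
import urllib.request

BASE_URL = "https://api.archipelag.io/api/v1"

def auth_headers(api_key: str) -> dict:
    # Every authenticated endpoint expects this exact header.
    return {"Authorization": f"Bearer {api_key}"}

def api_get(path: str, api_key: str) -> dict:
    # Minimal GET helper; raises on HTTP errors.
    req = urllib.request.Request(BASE_URL + path, headers=auth_headers(api_key))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```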

Health

Check Health

GET /health

No authentication required. Returns server status.

Response

{
  "status": "ok",
  "version": "1.0.0"
}

Jobs

Submit Job

POST /jobs

Create and submit a compute job.

Request Body

{
  "workload": "llm-chat",
  "input": {
    "prompt": "Hello!",
    "max_tokens": 100,
    "temperature": 0.7
  },
  "priority": 0
}

| Field | Type | Required | Description |
|---|---|---|---|
| workload | string | Yes | Cargo slug (e.g., "llm-chat", "sdxl") |
| input | object | Yes | Cargo-specific input parameters |
| priority | integer | No | Job priority (0-100, default 0) |
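The request body above can be assembled and range-checked client-side before sending. A minimal sketch (the `build_job_payload` helper is hypothetical, not part of any official SDK):

```python
import json

def build_job_payload(workload: str, job_input: dict, priority: int = 0) -> str:
    """Serialize a POST /jobs body, enforcing the documented priority range."""
    if not 0 <= priority <= 100:
        raise ValueError("priority must be between 0 and 100")
    return json.dumps({
        "workload": workload,
        "input": job_input,
        "priority": priority,
    })
```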

Response

{
  "data": {
    "id": "job_abc123",
    "workload_id": 1,
    "workload_slug": "llm-chat",
    "status": "submitted",
    "input": { "prompt": "Hello!" },
    "created_at": "2026-01-26T12:00:00Z"
  }
}

List Jobs

GET /jobs?limit=20&offset=0

List your recent jobs.

| Parameter | Type | Default | Description |
|---|---|---|---|
| limit | integer | 20 | Max results (1-100) |
| offset | integer | 0 | Pagination offset |

Response

{
  "data": [
    {
      "id": "job_abc123",
      "status": "completed",
      "created_at": "2026-01-26T12:00:00Z",
      "completed_at": "2026-01-26T12:00:05Z"
    }
  ],
  "meta": {
    "total": 42,
    "limit": 20,
    "offset": 0
  }
}
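To fetch more than one page, advance `offset` by `limit` until `meta.total` is exhausted. A sketch of that loop, written against any callable that returns the `{"data": [...], "meta": {...}}` envelope shown above (the `fetch_page` callable is an assumption standing in for your HTTP client):

```python
def iter_jobs(fetch_page, limit: int = 20):
    """Yield every job across GET /jobs pages using limit/offset pagination.

    `fetch_page(limit, offset)` must return the documented envelope:
    {"data": [...], "meta": {"total": N, "limit": ..., "offset": ...}}.
    """
    offset = 0
    while True:
        page = fetch_page(limit, offset)
        yield from page["data"]
        offset += limit
        if offset >= page["meta"]["total"]:
            break
```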

Get Job

GET /jobs/:id

Get details of a specific job.

Response

{
  "data": {
    "id": "job_abc123",
    "workload_slug": "llm-chat",
    "status": "completed",
    "input": { "prompt": "Hello!" },
    "output": "Hi there! How can I help?",
    "usage": {
      "prompt_tokens": 5,
      "completion_tokens": 8,
      "total_tokens": 13,
      "credits_used": 0.0013
    },
    "created_at": "2026-01-26T12:00:00Z",
    "started_at": "2026-01-26T12:00:01Z",
    "completed_at": "2026-01-26T12:00:05Z",
    "duration_ms": 4000
  }
}

Job Status Values

  • pending - Job created, awaiting an Island
  • queued - Waiting in queue
  • running - Currently executing
  • completed - Successfully finished
  • failed - Execution failed
  • cancelled - User cancelled
  • timeout - Exceeded time limit
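Of the statuses above, only the last four are terminal, so a client typically polls GET /jobs/:id until one of them appears. A minimal polling sketch (the `get_job` callable stands in for your HTTP client; the interval and budget are arbitrary):

```python
import time

# Statuses after which the job will never change again.
TERMINAL_STATES = {"completed", "failed", "cancelled", "timeout"}

def wait_for_job(get_job, poll_interval: float = 1.0, max_polls: int = 300) -> dict:
    """Poll until the job record reaches a terminal status."""
    for _ in range(max_polls):
        job = get_job()
        if job["status"] in TERMINAL_STATES:
            return job
        time.sleep(poll_interval)
    raise TimeoutError("job did not settle within the polling budget")
```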

Cancel Job

DELETE /jobs/:id

Cancel a pending or running job.

Response

{
  "data": {
    "id": "job_abc123",
    "status": "cancelled"
  }
}

Stream Job Output

GET /jobs/:id/stream

Stream job output in real-time using Server-Sent Events (SSE).

Events

event: token
data: {"content": "Hello"}

event: progress
data: {"step": 10, "total": 30}

event: status
data: {"state": "streaming"}

event: image
data: {"data": "base64...", "format": "png"}

event: done
data: {"usage": {"total_tokens": 42}}

event: error
data: {"error": "Something went wrong"}
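Each event above is an `event:` line plus a `data:` line, separated from the next event by a blank line. A small parser sketch for that wire format (helper name and approach are illustrative; a production client would read the stream incrementally rather than from a complete string):

```python
import json

def parse_sse(stream_text: str):
    """Yield (event_name, payload_dict) pairs from an SSE transcript."""
    for block in stream_text.strip().split("\n\n"):
        event, data = None, None
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data = line[len("data:"):].strip()
        if event and data:
            yield event, json.loads(data)
```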

Chat Completions (OpenAI-Compatible)

Create Chat Completion

POST /chat/completions

OpenAI-compatible chat completion endpoint.

Request Body

{
  "model": "llm-chat",
  "messages": [
    {"role": "system", "content": "You are helpful"},
    {"role": "user", "content": "Hello!"}
  ],
  "max_tokens": 100,
  "temperature": 0.7,
  "stream": false
}
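A request body in this shape can be built with a small helper; the sketch below (a hypothetical convenience function, not an official client) assembles the messages list from a prompt and an optional system message:

```python
import json
from typing import Optional

def chat_request(prompt: str, system: Optional[str] = None, stream: bool = False,
                 max_tokens: int = 100, temperature: float = 0.7) -> str:
    """Assemble an OpenAI-style POST /chat/completions body."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return json.dumps({
        "model": "llm-chat",
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "stream": stream,
    })
```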

Response (non-streaming)

{
  "id": "job_abc123",
  "object": "chat.completion",
  "created": 1706270400,
  "model": "llm-chat",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hi there! How can I help?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 8,
    "total_tokens": 18
  }
}

Response (streaming)

When stream: true, returns SSE with OpenAI-format chunks:

data: {"choices":[{"delta":{"content":"Hi"}}]}

data: {"choices":[{"delta":{"content":" there"}}]}

data: [DONE]
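Reassembling the full reply means concatenating each chunk's `delta.content` until the `[DONE]` sentinel. A sketch of that accumulation over already-received `data:` lines (helper name is illustrative):

```python
import json

def collect_stream(lines) -> str:
    """Join OpenAI-format delta chunks into the final assistant text."""
    parts = []
    for line in lines:
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        parts.append(delta.get("content", ""))
    return "".join(parts)
```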

Cargos

List Cargos (Catalog)

GET /cargos

No authentication required. Lists all enabled and approved Cargos.

Query Parameters

| Parameter | Type | Description |
|---|---|---|
| runtime_type | string | Filter by runtime: container, wasm, onnx, llmcpp, diffusers, coreml |
| category | string | Filter by use-case: llm, image, audio, vision, text, utility |

Examples

# All LLM models
GET /cargos?category=llm

# All ONNX models
GET /cargos?runtime_type=onnx

# Image generators only
GET /cargos?category=image

Response

{
  "cargos": [
    {
      "slug": "gguf-mistral-7b",
      "name": "Mistral 7B Instruct",
      "description": "Mistral 7B Instruct v0.2 (Q4_K_M)",
      "runtime_type": "llmcpp",
      "onnx_task_type": null,
      "price_per_job": 1.0,
      "required_vram_mb": 6000,
      "required_ram_mb": 8000
    }
  ],
  "count": 15
}
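Because the catalog includes `required_vram_mb` and `required_ram_mb`, a client can filter it down to what its own hardware can serve. A client-side sketch using the field names from the response above (the helper itself is hypothetical):

```python
def servable_cargos(cargos, vram_mb: int, ram_mb: int):
    """Keep catalog entries this machine could run, cheapest first."""
    fits = [c for c in cargos
            if c["required_vram_mb"] <= vram_mb and c["required_ram_mb"] <= ram_mb]
    return sorted(fits, key=lambda c: c["price_per_job"])
```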

Categories

| Category | Includes |
|---|---|
| llm | GGUF LLMs + container LLM Cargos |
| image | Diffusers image gen + container image gen |
| audio | ONNX ASR (Whisper, Voxtral) + TTS (Kokoro, Chatterbox) |
| vision | ONNX detection, segmentation, captioning, depth, OCR |
| text | ONNX classification, embeddings, NER, QA, translation, summarization |
| utility | WASM + container utilities (PDF, video, image processing) |

Account

Get Account

GET /account

Get your account information.

Response

{
  "data": {
    "id": "user_abc123",
    "email": "user@example.com",
    "credits": 100.50,
    "created_at": "2026-01-01T00:00:00Z"
  }
}

API Keys

List API Keys

GET /api-keys

List your API keys.

Response

{
  "data": [
    {
      "id": "key_abc123",
      "name": "Production",
      "prefix": "ak_prod_",
      "created_at": "2026-01-01T00:00:00Z",
      "last_used_at": "2026-01-26T12:00:00Z"
    }
  ]
}

Create API Key

POST /api-keys

Create a new API key.

Request Body

{
  "name": "Development"
}

Response

{
  "data": {
    "id": "key_abc123",
    "name": "Development",
    "prefix": "ak_dev_"
  },
  "key": "ak_dev_full_key_only_shown_once"
}
Important: The full API key is returned only once. Store it securely!

Delete API Key

DELETE /api-keys/:id

Revoke an API key.

Island Recommendations

Get Preload Recommendations

GET /island/recommendations?host_id=YOUR_ISLAND_ID

No authentication required. Returns Cargos your Island should preload based on current network demand, filtered by your hardware capabilities.

Query Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| host_id | string (UUID) | Yes | Your Island's host ID |

Response

{
  "host_id": "df1745b5-...",
  "recommendations": [
    {
      "workload_slug": "gguf-qwen3.5-4b",
      "model_url": "hf://unsloth/Qwen3.5-4B-GGUF",
      "model_hash": null,
      "runtime_type": "llmcpp",
      "estimated_earnings_per_job": "0.5",
      "queued_demand": 3,
      "demand_score": 30
    }
  ]
}

| Field | Description |
|---|---|
| workload_slug | Cargo identifier |
| model_url | HuggingFace or direct download URL |
| runtime_type | Runtime needed (llmcpp, container, wasm, onnx, diffusers) |
| queued_demand | Number of jobs currently queued for this Cargo |
| demand_score | Composite score: queued * 10 + recent_completions * 2 |
| estimated_earnings_per_job | Credits earned per completed job |

Results are sorted by demand_score descending (highest demand first). Only Cargos your Island’s hardware can serve are included. Islands call this automatically every 15 minutes when coordinator.api_url is configured.
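The documented scoring formula is simple enough to reproduce locally, e.g. to sanity-check or re-rank recommendations client-side:

```python
def demand_score(queued: int, recent_completions: int) -> int:
    """Composite demand score as documented: queued * 10 + recent_completions * 2."""
    return queued * 10 + recent_completions * 2
```

With `queued_demand` of 3 and no recent completions this gives 30, matching the example response above.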

Error Responses

All errors follow this format:

{
  "error": {
    "message": "Description of the error",
    "code": "error_code"
  }
}
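A client can map this envelope onto a typed exception so callers branch on `code` rather than string-matching messages. A sketch (the `ApiError` class and helper are hypothetical):

```python
class ApiError(Exception):
    """Carries the documented error envelope fields plus the HTTP status."""
    def __init__(self, message: str, code: str, status: int):
        super().__init__(f"{status} {code}: {message}")
        self.code = code
        self.status = status

def raise_for_error(status: int, body: dict) -> None:
    """Raise ApiError for any non-2xx/3xx response using the documented shape."""
    if status < 400:
        return
    err = body.get("error", {})
    raise ApiError(err.get("message", "unknown error"),
                   err.get("code", "unknown"), status)
```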

HTTP Status Codes

| Code | Meaning |
|---|---|
| 200 | Success |
| 201 | Created |
| 400 | Bad request |
| 401 | Authentication required |
| 402 | Insufficient credits |
| 404 | Not found |
| 422 | Validation error |
| 429 | Rate limited |
| 500 | Server error |

Rate Limiting

When rate limited, the response includes:

HTTP/1.1 429 Too Many Requests
Retry-After: 30
{
  "error": {
    "message": "Rate limit exceeded",
    "code": "rate_limited",
    "retry_after": 30
  }
}
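A well-behaved client sleeps for the `Retry-After` value before retrying. A retry-loop sketch, written against any callable returning `(status_code, headers, body)` (the `send` callable stands in for your HTTP layer; the exponential fallback when the header is absent is an assumption):

```python
import time

def request_with_retry(send, max_attempts: int = 5):
    """Retry on 429, honoring the Retry-After header when present."""
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return status, body
        # Fall back to exponential backoff if Retry-After is missing.
        time.sleep(float(headers.get("Retry-After", 2 ** attempt)))
    raise RuntimeError("still rate limited after all retry attempts")
```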

Quotas

| Resource | Limit |
|---|---|
| Requests per minute | 100 |
| Concurrent jobs | 10 |
| Concurrent streams | 5 |