Integration Guide

Implement a coda-node executor for your own QPU.

This guide shows how to connect your own QPU or simulator backend to Coda with coda-node.

Your integration is a Python package that exposes an executor. coda-node loads that executor, sends it compiled jobs, and handles the cloud-facing runtime around it.

Prerequisites

  • Python 3.11 or newer
  • A QPU registered in Coda
  • A one-time node token from Coda for first startup
  • Network access from the node host to your hardware backend
  • OpenVPN installed if your node token uses VPN mode
  • A Python package for your hardware integration

After first startup, coda-node persists JWT credentials and reconnects automatically. You only need a fresh node token after resetting the node or rotating credentials.

Create an executor package

Create a Python package for your backend. It should depend on coda-node and any vendor SDKs or internal control libraries needed by your device.

For local development, install coda-node from the repository:

$ git clone https://github.com/conductorquantum/coda-node.git
$ cd coda-node
$ uv sync --dev

For a deployable integration, add coda-node as a dependency of your backend package.
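For example, the dependency section of your backend package's pyproject.toml might look like the sketch below. The acme-qpu name and version are placeholders; pyyaml and pydantic are included only because the factory example later in this guide uses them:

```toml
[project]
name = "acme-qpu"
version = "0.1.0"
dependencies = [
    "coda-node",
    "pyyaml",
    "pydantic",
]
```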

Installing the package provides two equivalent CLI entry points, coda-node and coda. These docs use uv run coda-node ... throughout so the commands work from a local checkout without requiring a global install.

Implement run

Your executor must provide an async run(ir, shots) method. It receives a validated NativeGateIR and returns an ExecutionResult.

from coda_node.server.executor import ExecutionResult
from coda_node.server.ir import NativeGateIR


class AcmeQpuExecutor:
    def __init__(self, host: str, port: int) -> None:
        self.host = host
        self.port = port

    async def run(self, ir: NativeGateIR, shots: int) -> ExecutionResult:
        program = compile_native_ir_for_acme(ir)
        raw_counts, elapsed_ms = await submit_to_acme_hardware(
            host=self.host,
            port=self.port,
            program=program,
            shots=shots,
        )

        return ExecutionResult(
            counts=raw_counts,
            execution_time_ms=elapsed_ms,
            shots_completed=sum(raw_counts.values()),
        )

The counts dictionary maps measured bitstrings to shot counts, for example {"00": 512, "11": 512}. If your backend executes fewer shots than requested, set shots_completed to the actual number.
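If your control stack returns one bitstring per shot rather than aggregated counts, collapsing them into the counts mapping is a one-liner. This helper is purely illustrative and not part of coda-node:

```python
from collections import Counter


def shots_to_counts(per_shot_bitstrings: list[str]) -> dict[str, int]:
    """Collapse per-shot measurement bitstrings into a counts mapping."""
    return dict(Counter(per_shot_bitstrings))


counts = shots_to_counts(["00", "11", "00", "11"])
# counts == {"00": 2, "11": 2}
```

Summing the values of the result also gives you shots_completed for free.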

Expose an executor factory

coda-node loads your backend through a factory. The recommended convention is:

<package>.executor_factory:create_executor

Create an executor_factory.py module in your package:

import yaml
from pydantic import BaseModel

from coda_node.server.config import Settings

from acme_qpu.executor import AcmeQpuExecutor


class AcmeDeviceConfig(BaseModel):
    host: str
    port: int


def create_executor(settings: Settings) -> AcmeQpuExecutor:
    with open(settings.device_config) as config_file:
        config = AcmeDeviceConfig.model_validate(yaml.safe_load(config_file))

    return AcmeQpuExecutor(host=config.host, port=config.port)

Factories can either accept the Settings object or accept no arguments. They can also return a prebuilt object that already has a run method.
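A zero-argument factory can source its configuration however it likes, for example from environment variables. The sketch below is hypothetical: the ACME_QPU_HOST and ACME_QPU_PORT variable names and the defaults are not part of coda-node:

```python
import os


class AcmeQpuExecutor:
    """Stand-in for the executor class defined earlier in this guide."""

    def __init__(self, host: str, port: int) -> None:
        self.host = host
        self.port = port


def create_executor() -> AcmeQpuExecutor:
    # Read device settings directly from the environment instead of Settings.
    host = os.environ.get("ACME_QPU_HOST", "localhost")
    port = int(os.environ.get("ACME_QPU_PORT", "9095"))
    return AcmeQpuExecutor(host=host, port=port)
```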

Define the device config

Point CODA_DEVICE_CONFIG at a YAML file that your factory knows how to read. If ./site/device.yaml exists, coda-node uses it automatically.

executor_factory: acme_qpu.executor_factory:create_executor
target: cz
num_qubits: 5
host: 192.168.1.120
port: 9095

coda-node itself reads only the optional top-level executor_factory key. The rest of the schema belongs to your backend package: use it for hardware addresses, calibration files, credentials, topology, or any other device-specific settings.

If you set CODA_EXECUTOR_FACTORY, it takes priority over the executor_factory value in the YAML file.
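For example, overriding the factory from the shell (the path matches the example package above):

```shell
export CODA_EXECUTOR_FACTORY=acme_qpu.executor_factory:create_executor
```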

Start the node

On first startup, provide the node token from Coda:

$ export CODA_DEVICE_CONFIG=./site/device.yaml
$ uv run coda-node start --token <node-token>

You can also set the token with an environment variable:

$ export CODA_NODE_TOKEN=<node-token>
$ uv run coda-node start

After provisioning succeeds, coda-node writes its runtime state and private key to disk. Subsequent restarts can omit the token:

$ uv run coda-node start

To run the node as a background daemon:

$ uv run coda-node start --daemon
$ uv run coda-node status
$ uv run coda-node logs -n 100

Verify readiness

The runtime exposes two local health endpoints:

$ curl http://localhost:8080/health
$ curl http://localhost:8080/ready

/health confirms the process is alive. /ready checks runtime dependencies and returns component status for VPN, Redis, and the current job. It returns 503 when a required dependency is degraded.
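A deployment script can wait for the node to become ready by polling /ready until it returns 200. The helper below is a sketch: the probe argument is any zero-argument callable returning an HTTP status code, for example a urllib request against http://localhost:8080/ready.

```python
import time
from typing import Callable


def wait_for_ready(
    probe: Callable[[], int],
    timeout_s: float = 30.0,
    interval_s: float = 1.0,
) -> bool:
    """Poll the readiness probe until it returns 200 or the timeout elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe() == 200:
            return True
        time.sleep(interval_s)
    return False
```

Because /ready returns 503 while a dependency is degraded, the loop naturally waits out VPN or Redis startup.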

Use the CLI for a local diagnostic summary:

$ uv run coda-node doctor

In Coda, the QPU should come online after the node connects and starts sending heartbeats. Missed heartbeats eventually mark the QPU offline.

Optional executor hooks

For hardware that supports batching, add batch_run(jobs):

async def batch_run(
    self,
    jobs: list[tuple[NativeGateIR, int]],
) -> list[ExecutionResult]:
    ...

Return one ExecutionResult for each input job in the same order. coda-node detects this method automatically.
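If your hardware has no native batching but you still want the hook in place, a sequential fallback that delegates to run() preserves ordering. The mixin below is a sketch, not part of coda-node, and the EchoExecutor exists only to demonstrate it:

```python
import asyncio


class SequentialBatchMixin:
    """batch_run fallback that executes jobs one at a time via run()."""

    async def batch_run(self, jobs):
        # Results must come back in the same order as the input jobs.
        return [await self.run(ir, shots) for ir, shots in jobs]


class EchoExecutor(SequentialBatchMixin):
    """Toy executor used to demonstrate the mixin."""

    async def run(self, ir, shots):
        return (ir, shots)


results = asyncio.run(EchoExecutor().batch_run([("prog-a", 100), ("prog-b", 200)]))
# results == [("prog-a", 100), ("prog-b", 200)]
```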

For hardware that can cancel an in-flight job, add cancel_current_job():

def cancel_current_job(self) -> None:
    ...

When Coda cancels a job that is already running, coda-node calls this hook before cancelling the in-process task.
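One common pattern is a cancellation flag that the run loop checks between chunks of shots. The sketch below uses threading.Event because cancel_current_job is synchronous; the class and method names beyond the hook itself are illustrative:

```python
import threading


class CancellableExecutor:
    """Executor skeleton with a cooperative cancellation flag."""

    def __init__(self) -> None:
        self._cancelled = threading.Event()

    def cancel_current_job(self) -> None:
        # Called by coda-node when Coda cancels a job that is already running.
        self._cancelled.set()

    def _should_stop(self) -> bool:
        # The run loop polls this between chunks and aborts cleanly when set.
        return self._cancelled.is_set()
```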

Reset or rotate the node

To stop the daemon, stop the managed VPN, and remove persisted credentials:

$ uv run coda-node reset

After a reset, start again with a fresh node token. Use this flow when moving the node to a new machine, rotating credentials, or clearing broken local state.