Integration Guide
Implement a coda-node executor for your own QPU.
This guide shows how to connect your own QPU or simulator backend to Coda with coda-node.
Your integration is a Python package that exposes an executor. coda-node loads that executor, sends it compiled jobs, and handles the cloud-facing runtime around it.
Prerequisites
- Python 3.11 or newer
- A QPU registered in Coda
- A one-time node token from Coda for first startup
- Network access from the node host to your hardware backend
- OpenVPN installed if your node token uses VPN mode
- A Python package for your hardware integration
After first startup, coda-node persists JWT credentials and reconnects automatically. You only need a fresh node token after resetting the node or rotating credentials.
Create an executor package
Create a Python package for your backend. It should depend on coda-node and any vendor SDKs or internal control libraries needed by your device.
For local development, install coda-node from the repository:
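For example, from a local checkout (the path here is illustrative; use wherever you cloned the repository):

```shell
# Editable install from a local checkout of coda-node (path is an assumption).
uv pip install -e ../coda-node
```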
For a deployable integration, add coda-node as a dependency of your backend package.
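In pyproject.toml, that could look like this (package name and extra dependencies are illustrative):

```toml
[project]
name = "my-backend"
dependencies = [
    "coda-node",
    # plus your vendor SDK or internal control libraries
]
```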
Installing coda-node provides both coda-node and coda as equivalent CLI entry points. These docs use uv run coda-node ... consistently so commands work from a local checkout without requiring a global install.
Implement run
Your executor must provide an async run(ir, shots) method. It receives a validated NativeGateIR and returns an ExecutionResult.
The counts dictionary maps measured bitstrings to shot counts, for example {"00": 512, "11": 512}. If your backend executes fewer shots than requested, set shots_completed to the actual number.
Expose an executor factory
coda-node loads your backend through a factory. The recommended convention is:
Create an executor_factory.py module in your package:
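A sketch of such a module. The executor class, the config attribute name, and the default address are illustrative; the real Settings object comes from coda-node:

```python
# executor_factory.py — illustrative; your executor class will differ.

class MyQpuExecutor:
    """Minimal object exposing the async run(ir, shots) method coda-node expects."""

    def __init__(self, address: str):
        self.address = address

    async def run(self, ir, shots):
        raise NotImplementedError  # real hardware submission goes here

def executor_factory(settings=None):
    # `settings` is the coda-node Settings object when provided; factories
    # that take no arguments are also accepted. The attribute name
    # `device_address` is an assumption about your own device YAML schema.
    address = "tcp://localhost:5555"
    if settings is not None:
        address = getattr(settings, "device_address", address)
    return MyQpuExecutor(address)
```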
A factory may accept the Settings object or take no arguments. It may also return a prebuilt object that already has a run method.
Define the device config
Point CODA_DEVICE_CONFIG at a YAML file that your factory knows how to read. If ./site/device.yaml exists, coda-node uses it automatically.
coda-node only reads the optional top-level executor_factory key. The remaining schema belongs to your backend package. Use it for hardware addresses, calibration files, credentials, topology, or any other device-specific settings.
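For example, a device.yaml might look like this. Everything except executor_factory is schema you define yourself, and the dotted-path format shown for the factory is an assumption:

```yaml
# Only executor_factory is read by coda-node; the rest belongs to your package.
executor_factory: my_backend.executor_factory:executor_factory

# Backend-specific settings (names are illustrative):
device_address: tcp://10.0.0.12:5555
calibration_file: ./site/calibration.json
```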
If you set CODA_EXECUTOR_FACTORY, it takes priority over the executor_factory value in the YAML file.
Start the node
On first startup, provide the node token from Coda:
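For example (the start subcommand and --token flag names are assumptions; substitute the token you copied from Coda):

```shell
uv run coda-node start --token "<node-token-from-coda>"
```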
You can also set the token with an environment variable:
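A sketch, assuming the variable is named CODA_NODE_TOKEN, consistent with the CODA_-prefixed variables elsewhere in this guide:

```shell
export CODA_NODE_TOKEN="<node-token-from-coda>"
uv run coda-node start
```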
After provisioning succeeds, coda-node writes its runtime state and private key to disk. Subsequent restarts can omit the token:
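So a later restart is just (subcommand name assumed as above):

```shell
uv run coda-node start
```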
To run the node as a background daemon:
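For example (the flag name is an assumption; check your install's help output):

```shell
uv run coda-node start --daemon
```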
Verify readiness
The runtime exposes two local health endpoints:
/health confirms the process is alive. /ready checks runtime dependencies and returns component status for VPN, Redis, and the current job. It returns 503 when a required dependency is degraded.
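For example, assuming the endpoints are served on local port 8080 (the port is an assumption; check your node's configuration):

```shell
curl -fsS http://localhost:8080/health
curl -fsS http://localhost:8080/ready
```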
Use the CLI for a local diagnostic summary:
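Something like (the status subcommand name is an assumption):

```shell
uv run coda-node status
```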
In Coda, the QPU should move online after the node connects and starts sending heartbeats. Missed heartbeats eventually mark the QPU offline.
Optional executor hooks
For hardware that supports batching, add batch_run(jobs):
Return one ExecutionResult for each input job in the same order. coda-node detects this method automatically.
For hardware that can cancel an in-flight job, add cancel_current_job():
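A sketch of the hook, shown here as an async method (whether it is sync or async, and how you track the in-flight job, are assumptions; the vendor abort call is elided):

```python
import asyncio

class MyQpuExecutor:
    def __init__(self):
        self._current_handle = None  # vendor job handle, illustrative

    async def run(self, ir, shots):
        self._current_handle = object()   # stand-in for a submitted job
        try:
            await asyncio.sleep(0)        # real execution happens here
        finally:
            self._current_handle = None

    async def cancel_current_job(self):
        # Ask the hardware/control system to abort the in-flight job.
        if self._current_handle is not None:
            pass  # vendor-specific abort call goes here
        self._current_handle = None
```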
When Coda cancels a job that is already running, coda-node calls this hook before cancelling the in-process task.
Reset or rotate the node
To stop the daemon, stop the managed VPN, and remove persisted credentials:
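For example (the reset subcommand name is an assumption):

```shell
uv run coda-node reset
```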
After a reset, start again with a fresh node token. Use this flow when moving the node to a new machine, rotating credentials, or clearing broken local state.