# @dcprotocol/relay
WebSocket relay server that enables secure communication between remote agents (via proxy) and local vaults. Self-host your own relay or use the public one at relay.dcp.1ly.store.
## Installation

```sh
npm install @dcprotocol/relay
```

## Quick Start
### Using Public Relay
Most users don't need to run their own relay. The public relay at relay.dcp.1ly.store is free and handles all the infrastructure for you.
```sh
# Agents automatically use public relay
# No configuration needed!

# Generate pairing token (uses public relay)
dcp pairing generate --name my-bot

# On VPS
npx @dcprotocol/proxy --token dcp_pair_...
# Connects to relay.dcp.1ly.store automatically
```
### Self-Hosting Relay
Self-host for privacy, compliance, or custom deployment requirements.
```sh
# Start relay server
npx @dcprotocol/relay

# Output:
# ✓ Relay server running
#   WebSocket: ws://0.0.0.0:8421
#   HTTP: http://0.0.0.0:8421
#
# Use wss://your-domain.com in client/proxy config
```
## How It Works
```
┌─────────────────┐              ┌─────────────────┐
│  Desktop Vault  │              │    VPS Proxy    │
│ (Your Machine)  │              │     (Cloud)     │
└────────┬────────┘              └────────┬────────┘
         │                                │
         │ 1. Connect WSS                 │
         │    + Register vault_id         │
         │                                │
         ▼                                │
┌─────────────────────────────────────┐   │
│            Relay Server             │   │
│  ┌──────────────────────────────┐   │   │
│  │     Connection Registry      │   │   │
│  │  vault_xyz → WebSocket A     │   │   │
│  │  agent_abc → WebSocket B     │   │   │
│  └──────────────────────────────┘   │   │
└─────────────────────────────────────┘   │
         ▲                                │
         │ 2. Connect WSS                 │
         │    + Pair with vault_xyz       │
         │                                │
         └────────────────────────────────┘
           3. Route encrypted messages
              (HPKE end-to-end)
```
**Security Properties:**

- ✓ Relay cannot decrypt messages (HPKE)
- ✓ Relay only routes based on `vault_id`
- ✓ No vault data stored on relay
- ✓ Rate limiting per vault (60 req/min)
- ✓ Connection timeout (5 min idle)

## Architecture
### Components
- **WebSocket Server**: Fastify + `@fastify/websocket`
- **HTTP Endpoints**: Health checks and metrics
- **Connection Registry**: Maps vault IDs to WebSocket connections
- **Rate Limiter**: Per-vault request throttling
- **Message Router**: Forwards encrypted payloads
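A rough sketch of how the Connection Registry and Message Router could fit together (an illustration only, not the relay's actual code; the `Socket` interface here is a stand-in for a WebSocket connection):

```typescript
// Minimal sketch of a connection registry plus message router.
// `Socket` stands in for a WebSocket connection.
interface Socket {
  send(data: string): void;
}

class ConnectionRegistry {
  private connections = new Map<string, Socket>();

  // Store a vault's connection under its vault_id
  register(vaultId: string, socket: Socket): void {
    this.connections.set(vaultId, socket);
  }

  unregister(vaultId: string): void {
    this.connections.delete(vaultId);
  }

  // Forward an opaque (encrypted) payload to the target vault, if connected.
  // The relay never inspects the payload; it only looks up the vault_id.
  route(vaultId: string, payload: string): boolean {
    const target = this.connections.get(vaultId);
    if (!target) return false;
    target.send(payload);
    return true;
  }
}
```

Note that `route` treats the payload as an opaque string, which is the whole point: routing needs only the `vault_id`, never the plaintext.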
### Message Flow
1. Vault connects: `{ type: "register", vault_id: "xyz" }`
2. Relay stores: `registry.set("xyz", websocket)`
3. Proxy connects: `{ type: "pair", vault_id: "xyz", token: "..." }`
4. Relay validates token and pairs connections
5. Agent → Proxy → Relay → Vault (encrypted)
6. Vault → Relay → Proxy → Agent (encrypted)

## Configuration
### Command-Line Options
| Option | Default | Description |
|---|---|---|
| `--port` | `8421` | Server port |
| `--host` | `0.0.0.0` | Bind address |
| `--rate-limit` | `60` | Requests per minute per vault |
| `--timeout` | `300000` | Connection timeout (ms) |
| `--log-level` | `info` | Logging level |
### Environment Variables
| Variable | Default | Description |
|---|---|---|
| `PORT` | `8421` | Server port |
| `HOST` | `0.0.0.0` | Bind address |
| `DCP_RATE_LIMIT` | `60` | Requests per minute |
| `DCP_TIMEOUT` | `300000` | Connection timeout (ms) |
| `LOG_LEVEL` | `info` | `debug`, `info`, `warn`, `error` |
## Production Deployment
### Fly.io (Recommended)
```toml
# fly.toml
app = "dcp-relay"

[build]
builder = "paketobuildpacks/builder:base"
buildpacks = ["gcr.io/paketo-buildpacks/nodejs"]

[env]
PORT = "8421"
LOG_LEVEL = "info"

[[services]]
internal_port = 8421
protocol = "tcp"

  [[services.ports]]
  port = 443
  handlers = ["tls", "http"]

  [[services.ports]]
  port = 80
  handlers = ["http"]

  [services.concurrency]
  type = "connections"
  hard_limit = 1000
  soft_limit = 800
```

```sh
# Deploy
fly launch
fly deploy

# Monitor
fly logs
fly status
```

### Railway
```toml
# railway.toml
[build]
builder = "nixpacks"

[deploy]
startCommand = "npx @dcprotocol/relay"
healthcheckPath = "/health"
healthcheckTimeout = 10
restartPolicyType = "always"

[env]
PORT = { value = "8421" }
```

```sh
# Deploy via Railway CLI
railway up

# Or connect GitHub repo
# Railway auto-deploys on push
```

### Docker + Nginx
```dockerfile
# Dockerfile
FROM node:18-alpine
WORKDIR /app
RUN npm install -g @dcprotocol/relay
EXPOSE 8421
CMD ["dcp-relay"]
```
```yaml
# docker-compose.yml
version: '3.8'
services:
  relay:
    build: .
    ports:
      - "127.0.0.1:8421:8421"
    environment:
      - LOG_LEVEL=info
      - DCP_RATE_LIMIT=60
    restart: always

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - relay
```
```nginx
# nginx.conf
upstream relay {
    server relay:8421;
}

server {
    listen 443 ssl http2;
    server_name relay.yourdomain.com;

    ssl_certificate     /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;

    location / {
        proxy_pass http://relay;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

```sh
# Start
docker-compose up -d
```

### systemd Service
```ini
# /etc/systemd/system/dcp-relay.service
[Unit]
Description=DCP Relay Server
After=network.target

[Service]
Type=simple
User=dcp
WorkingDirectory=/opt/dcp-relay
ExecStart=/usr/bin/npx @dcprotocol/relay
Restart=always
Environment="PORT=8421"
Environment="LOG_LEVEL=info"

[Install]
WantedBy=multi-user.target
```

```sh
# Enable and start
sudo systemctl daemon-reload
sudo systemctl enable dcp-relay
sudo systemctl start dcp-relay

# Status and logs
sudo systemctl status dcp-relay
sudo journalctl -u dcp-relay -f
```
## SSL/TLS Configuration
Production deployments must use WSS (WebSocket Secure). Use Let's Encrypt or cloud provider SSL.
### Let's Encrypt with Certbot
```sh
# Install certbot
sudo apt install certbot python3-certbot-nginx

# Get certificate
sudo certbot --nginx -d relay.yourdomain.com

# Auto-renewal (already set up by certbot)
sudo certbot renew --dry-run
```
### Cloudflare
```
# Point DNS to your server
relay.yourdomain.com → A → YOUR_SERVER_IP

# In Cloudflare dashboard:
# 1. SSL/TLS → Full (strict)
# 2. WebSocket support is automatic
# 3. DDoS protection enabled

# Relay receives requests via Cloudflare proxy
# SSL termination handled by Cloudflare
```
## Monitoring
### Health Check Endpoint
```
GET /health
```

Response:

```json
{
  "status": "ok",
  "connections": 42,
  "uptime": 86400,
  "version": "0.2.0"
}
```
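For automated monitoring, the response can also be evaluated programmatically. A minimal sketch (the `Health` shape mirrors the example response above; `isHealthy` and the connection budget are illustrative assumptions, not part of the relay API):

```typescript
// Evaluate a /health response body (shape taken from the example above).
interface Health {
  status: string;
  connections: number;
  uptime: number;
  version: string;
}

// Consider the relay healthy when it reports "ok" and is not
// saturated relative to an operator-chosen connection budget.
function isHealthy(h: Health, maxConnections = 10_000): boolean {
  return h.status === "ok" && h.connections < maxConnections;
}

// Usage against a live relay (Node 18+ has global fetch):
//   const h: Health = await (await fetch("https://relay.yourdomain.com/health")).json();
//   if (!isHealthy(h)) notifyOperator();
```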
```sh
# Monitor with curl
watch -n 5 'curl -s https://relay.yourdomain.com/health | jq'
```

### Metrics Endpoint
```
GET /metrics
```

Response:

```json
{
  "total_connections": 1234,
  "active_connections": 42,
  "total_messages": 567890,
  "errors": 12,
  "uptime_seconds": 86400,
  "rate_limits_hit": 5
}
```

### Prometheus Integration
```yaml
# prometheus.yml
scrape_configs:
  - job_name: 'dcp-relay'
    static_configs:
      - targets: ['relay.yourdomain.com:8421']
    metrics_path: '/metrics'
    scheme: https
```

```
# Grafana dashboard queries
# Active connections: dcp_active_connections
# Message rate: rate(dcp_total_messages[5m])
# Error rate: rate(dcp_errors[5m])
```

## Security Considerations
- **End-to-end encryption**: Relay sees encrypted payloads but cannot decrypt them
- **No data persistence**: Messages are routed, never stored
- **Rate limiting**: Prevents abuse (60 req/min per vault)
- **Connection timeout**: Idle connections dropped after 5 minutes
- **No authentication at the relay**: Pairing tokens are validated by the vault, not the relay
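A per-vault limit like the 60 req/min above is commonly implemented as a token bucket or fixed window. A minimal token-bucket sketch (an illustration; the relay's actual limiter may differ):

```typescript
// Token-bucket rate limiter, one bucket per vault_id.
// capacity = burst size; refillPerMs = sustained rate in tokens per ms.
class RateLimiter {
  private buckets = new Map<string, { tokens: number; last: number }>();

  constructor(
    private capacity = 60,              // 60 requests...
    private refillPerMs = 60 / 60_000,  // ...per minute
  ) {}

  // Returns true if the request is allowed; `now` is injectable for testing.
  allow(vaultId: string, now = Date.now()): boolean {
    const b = this.buckets.get(vaultId) ?? { tokens: this.capacity, last: now };
    // Refill proportionally to elapsed time, capped at capacity
    b.tokens = Math.min(this.capacity, b.tokens + (now - b.last) * this.refillPerMs);
    b.last = now;
    this.buckets.set(vaultId, b);
    if (b.tokens < 1) return false;
    b.tokens -= 1;
    return true;
  }
}
```

A token bucket allows short bursts up to `capacity` while enforcing the average rate, which suits bursty agent traffic better than a hard fixed window.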
### Threat Model
| Attack | Mitigation |
|---|---|
| MitM on relay | End-to-end HPKE encryption (relay cannot decrypt) |
| DDoS | Rate limiting + Cloudflare protection |
| Connection hijacking | Pairing token validated by vault |
| Metadata leakage | Vault IDs are random UUIDs |
### Privacy
**What the relay server can see:**
- Connection timestamps
- Message sizes (encrypted payloads)
- Vault IDs (random UUIDs)
- IP addresses (can be hidden with VPN)
**What the relay server cannot see:**
- Vault contents (encrypted)
- Private keys (never transmitted)
- Transaction details (encrypted)
- Wallet addresses (encrypted in messages)
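These guarantees follow from end-to-end encryption: the two endpoints derive a shared key, and the relay only ever handles ciphertext. An illustrative sketch using X25519 + HKDF + AES-256-GCM from Node's built-in crypto module (a stand-in for the protocol's actual HPKE construction, which combines the same kinds of primitives):

```typescript
// Why the relay cannot read payloads: only public keys and ciphertext
// ever cross it. (Demo primitives, not the protocol's real HPKE code.)
import {
  generateKeyPairSync, diffieHellman, createCipheriv,
  createDecipheriv, hkdfSync, randomBytes, KeyObject,
} from "node:crypto";

// Each side has an X25519 key pair; private keys never leave the machine.
const vault = generateKeyPairSync("x25519");
const agent = generateKeyPairSync("x25519");

// Both sides derive the same shared secret, then an AES key via HKDF.
function deriveKey(priv: KeyObject, pub: KeyObject): Buffer {
  const secret = diffieHellman({ privateKey: priv, publicKey: pub });
  return Buffer.from(hkdfSync("sha256", secret, Buffer.alloc(0), "dcp-demo", 32));
}

// Agent encrypts; the relay routes only (iv, ciphertext, tag).
const key = deriveKey(agent.privateKey, vault.publicKey);
const iv = randomBytes(12);
const cipher = createCipheriv("aes-256-gcm", key, iv);
const ciphertext = Buffer.concat([cipher.update("send 1 ETH"), cipher.final()]);
const tag = cipher.getAuthTag();

// Vault derives the identical key and decrypts on its side.
const vaultKey = deriveKey(vault.privateKey, agent.publicKey);
const decipher = createDecipheriv("aes-256-gcm", vaultKey, iv);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString();
```

The GCM auth tag also means a relay that tampers with a message causes decryption to fail, rather than delivering altered plaintext.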
## Scaling
### Single Server Limits
| Metric | Limit | Notes |
|---|---|---|
| Concurrent connections | ~10,000 | Per server (depends on RAM) |
| Messages/sec | ~5,000 | Per server |
| Bandwidth | ~1 Gbps | Typical VPS limit |
### Horizontal Scaling
For high availability or >10k users, run multiple relay instances with sticky sessions:
```nginx
# Load balancer config (Nginx)
upstream relay_cluster {
    ip_hash;  # Sticky sessions
    server relay1.internal:8421;
    server relay2.internal:8421;
    server relay3.internal:8421;
}

server {
    listen 443 ssl;
    server_name relay.yourdomain.com;

    location / {
        proxy_pass http://relay_cluster;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

## Using Custom Relay
### Client Configuration
```typescript
// Point client to custom relay
import { createDCPClient } from '@dcprotocol/client';

const client = await createDCPClient({
  agentName: 'my-bot',
  relayUrl: 'wss://relay.mycompany.com'
});
```

### Proxy Configuration
```sh
# Point proxy to custom relay
npx @dcprotocol/proxy \
  --token dcp_pair_... \
  --relay wss://relay.mycompany.com
```
### CLI Configuration
```sh
# Generate pairing token for custom relay
export DCP_RELAY_URL=wss://relay.mycompany.com
dcp pairing generate --name my-bot
```
## Troubleshooting
### Connection Refused
```sh
# Check if relay is running
curl https://relay.yourdomain.com/health

# Check WebSocket upgrade
curl -i -N \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  https://relay.yourdomain.com/
```
### High Error Rate
```sh
# Check logs for patterns
journalctl -u dcp-relay | grep ERROR

# Common causes:
# - Rate limit exceeded (increase limit)
# - Invalid pairing tokens (client issue)
# - Network timeouts (check latency)

# Increase rate limit
# (an exported variable does not reach a systemd service; set
#  Environment="DCP_RATE_LIMIT=120" in the unit file instead)
sudo systemctl restart dcp-relay
```
### Memory Leak
```sh
# Monitor memory usage
watch -n 1 'ps aux | grep dcp-relay'

# If memory grows over time, restart periodically (add to cron):
# 0 3 * * * systemctl restart dcp-relay

# Or use a process manager's auto-restart
# (PM2 or a systemd watchdog)
```
## Cost Estimation
### Public Relay (relay.dcp.1ly.store)
Free - No cost for users
### Self-Hosted
| Provider | Specs | Cost/Month | Capacity |
|---|---|---|---|
| Fly.io | Shared CPU, 256MB | $0-5 | ~100 connections |
| Railway | Shared, 512MB | $5-10 | ~200 connections |
| DigitalOcean | 1 vCPU, 1GB | $6 | ~500 connections |
| AWS Fargate | 0.25 vCPU, 512MB | $8-12 | ~200 connections |
| Hetzner | 2 vCPU, 4GB | $5 | ~2000 connections |
## See Also
- `@dcprotocol/proxy` - VPS proxy client
- `@dcprotocol/client` - SDK for agents
- Architecture Overview - System design
- GitHub Repository