Cloudflare Docs

WebSocket adapter

Stream audio and video between WebRTC tracks and WebSocket endpoints. Supports ingesting audio from WebSocket sources and sending WebRTC audio and video to WebSocket consumers. Video egress is supported as JPEG at approximately 1 FPS.

What you can build

  • AI services with WebSocket APIs for audio processing
  • Custom audio processing pipelines
  • Legacy system bridges
  • Server-side audio generation and consumption
  • Video snapshotting and thumbnails
  • Computer vision ingestion (low FPS)

How it works

Create WebRTC tracks from external audio

Ingest audio from external sources via WebSocket to create WebRTC tracks for distribution.

graph LR
    A[External System] -->|Audio Data| B[WebSocket Endpoint]
    B -->|Adapter| C[Realtime SFU]
    C -->|New Session| D[WebRTC Track]
    D -->|WebRTC| E[WebRTC Clients]

Use cases:

  • AI text-to-speech generation streaming into WebRTC
  • Audio from backend services or databases
  • Live audio feeds from external systems

Key characteristics:

  • Creates a new session ID automatically
  • Uses buffer mode for chunked audio transmission
  • Maximum 32 KB per WebSocket message

API reference

Create adapter

POST /v1/apps/{appId}/adapters/websocket/new

Request body

{
  "tracks": [
    {
      "location": "local",
      "trackName": "string",
      "endpoint": "wss://...",
      "inputCodec": "pcm",
      "mode": "buffer"
    }
  ]
}

Parameters

Parameter     Type      Description
location      string    Required. Must be "local" for ingesting audio
trackName     string    Required. Name for the new WebRTC track to create
endpoint      string    Required. WebSocket URL to receive audio from
inputCodec    string    Required. Codec of incoming audio. Currently only "pcm"
mode          string    Required. Must be "buffer" for local mode

Response

{
  "tracks": [
    {
      "trackName": "string",
      "adapterId": "string",
      "sessionId": "string", // New session ID generated
      "endpoint": "string"   // Echo of the requested endpoint
    }
  ]
}
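As an illustrative sketch, the create call can be wrapped in a small request builder. The base URL and bearer-token auth below follow the usual Cloudflare Realtime API pattern, and `APP_ID`, `API_TOKEN`, the track name, and the `wss://` endpoint are placeholders, not values from this page:

```typescript
// Sketch: build the create-adapter request body and (commented out) POST it.
// APP_ID, API_TOKEN, and the wss:// endpoint are placeholder values.
function buildCreateAdapterRequest(trackName: string, endpoint: string) {
  return {
    tracks: [
      {
        location: "local", // required: "local" for ingesting audio
        trackName,         // name of the new WebRTC track to create
        endpoint,          // WebSocket URL the adapter pulls audio from
        inputCodec: "pcm", // currently the only supported input codec
        mode: "buffer",    // required for local mode
      },
    ],
  };
}

// const res = await fetch(
//   `https://rtc.live.cloudflare.com/v1/apps/${APP_ID}/adapters/websocket/new`,
//   {
//     method: "POST",
//     headers: {
//       Authorization: `Bearer ${API_TOKEN}`,
//       "Content-Type": "application/json",
//     },
//     body: JSON.stringify(
//       buildCreateAdapterRequest("tts-audio", "wss://example.com/audio"),
//     ),
//   },
// );
```

Hold on to the `adapterId` values in the response; they are needed later to close the adapter.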

Close adapter

POST /v1/apps/{appId}/adapters/websocket/close

Request body

{
  "tracks": [
    {
      "adapterId": "string"
    }
  ]
}
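The close call takes the `adapterId` values returned by the create call. A minimal body builder, for illustration:

```typescript
// Sketch: build the close-adapter request body from one or more adapterId
// values returned by the create call.
function buildCloseAdapterRequest(adapterIds: string[]) {
  return { tracks: adapterIds.map((adapterId) => ({ adapterId })) };
}
```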

Media formats

WebRTC tracks

  • Codec: Opus
  • Sample rate: 48 kHz
  • Channels: Stereo

WebSocket binary format

Media uses Protocol Buffers. Audio uses PCM payloads; video uses JPEG payloads:

  • 16-bit signed little-endian PCM
  • 48 kHz sample rate
  • Stereo (left/right interleaved)
  • Video: JPEG image payload (one frame per message)

message Packet {
  uint32 sequenceNumber = 1; // Used in Stream mode only
  uint32 timestamp = 2;      // Used in Stream mode only
  bytes payload = 5;         // Media data
}

Ingest mode (buffer): Only the payload field is used, containing chunks of audio data.
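In buffer mode the Packet message reduces to a single length-delimited `payload` field (field number 5), which is simple enough to encode by hand. A hand-rolled encoder, as a sketch (a protobuf library works just as well):

```typescript
// Sketch: encode a buffer-mode Packet that sets only the `payload` field
// (field number 5, wire type 2 = length-delimited) in protobuf wire format.
function encodeVarint(value: number): number[] {
  const out: number[] = [];
  do {
    let byte = value & 0x7f;
    value >>>= 7;
    if (value !== 0) byte |= 0x80; // continuation bit
    out.push(byte);
  } while (value !== 0);
  return out;
}

function encodePacket(payload: Uint8Array): Uint8Array {
  const tag = (5 << 3) | 2; // field 5, length-delimited => 0x2a
  const header = [tag, ...encodeVarint(payload.length)];
  const out = new Uint8Array(header.length + payload.length);
  out.set(header, 0);
  out.set(payload, header.length);
  return out;
}
```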

Stream mode (egress):

  • For audio frames:
    • sequenceNumber: Incremental packet counter
    • timestamp: Timestamp for synchronization
    • payload: Individual PCM audio frame data
  • For video frames (JPEG):
    • timestamp: Timestamp for synchronization
    • payload: JPEG image data (one frame per message)
    • Note: sequenceNumber may be unset for video frames

Video (JPEG)

  • Supported WebRTC input codecs: H264, H265, VP8, VP9
  • Output over WebSocket: JPEG images at approximately 1 FPS

Connection protocol

Connects to your WebSocket endpoint:

  1. WebSocket upgrade handshake
  2. Secure connection for wss:// URLs
  3. Media streaming begins

Message format

Buffer mode (ingest)

  • Binary messages: PCM audio data in chunks
  • Maximum message size: 32 KB per WebSocket message
  • Important: Account for serialization overhead when chunking audio buffers
  • Send audio in small, frequent chunks rather than large batches
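A sketch of splitting raw PCM bytes so that each WebSocket message stays under the 32 KB limit after framing. The 64-byte headroom for serialization overhead is a conservative assumption, not a documented value:

```typescript
// Sketch: split raw PCM bytes into chunks small enough that, after protobuf
// framing, each WebSocket message stays under the 32 KB limit.
// The 64-byte headroom for serialization overhead is an assumed value.
const MAX_WS_MESSAGE = 32 * 1024;
const FRAMING_HEADROOM = 64;
const CHUNK_SIZE = MAX_WS_MESSAGE - FRAMING_HEADROOM;

function chunkPcm(pcm: Uint8Array, chunkSize: number = CHUNK_SIZE): Uint8Array[] {
  const chunks: Uint8Array[] = [];
  for (let offset = 0; offset < pcm.length; offset += chunkSize) {
    chunks.push(pcm.subarray(offset, Math.min(offset + chunkSize, pcm.length)));
  }
  return chunks;
}
```

Sending these chunks as they are produced, rather than batching, keeps end-to-end latency low.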

Stream mode (egress)

  • Binary messages: Individual frames with metadata (audio or video)
  • Audio frames include:
    • Timestamp information
    • Sequence number
    • PCM audio frame data
  • Video frames include:
    • Timestamp information
    • JPEG image data
    • Note: Sequence number may be unset for video frames
  • Frames are sent individually as they arrive from the WebRTC track
  • Video frames are emitted at approximately 1 FPS
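Consumers can decode stream-mode frames with a small hand-rolled reader for the three fields this adapter uses. This is a sketch for illustration; a protobuf library is equally valid:

```typescript
// Sketch: decode a stream-mode Packet (field 1 = sequenceNumber varint,
// field 2 = timestamp varint, field 5 = payload bytes).
interface DecodedPacket {
  sequenceNumber?: number;
  timestamp?: number;
  payload?: Uint8Array;
}

function decodePacket(buf: Uint8Array): DecodedPacket {
  const pkt: DecodedPacket = {};
  let i = 0;
  const readVarint = (): number => {
    let result = 0;
    let shift = 0;
    for (;;) {
      const b = buf[i++];
      result |= (b & 0x7f) << shift;
      if ((b & 0x80) === 0) return result >>> 0;
      shift += 7;
    }
  };
  while (i < buf.length) {
    const tag = readVarint();
    const field = tag >>> 3;
    const wireType = tag & 0x7;
    if (wireType === 0) {
      const value = readVarint();
      if (field === 1) pkt.sequenceNumber = value;
      else if (field === 2) pkt.timestamp = value;
    } else if (wireType === 2) {
      const len = readVarint();
      if (field === 5) pkt.payload = buf.subarray(i, i + len);
      i += len;
    } else {
      break; // wire types this adapter does not emit; stop rather than mis-parse
    }
  }
  return pkt;
}
```

For video frames, check for a missing `sequenceNumber` rather than assuming it is set.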

Connection lifecycle

  1. Connects to the WebSocket endpoint
  2. Audio streaming begins
  3. Video streaming begins (if configured)
  4. Connection closes when the adapter is closed or an error occurs

Pricing

Currently in beta and free to use.

Once generally available, billing will follow standard Cloudflare Realtime pricing at $0.05 per GB egress. Only traffic originating from Cloudflare towards WebSocket endpoints incurs charges. Traffic ingested from WebSocket endpoints into Cloudflare incurs no charge.

Usage counts towards your Cloudflare Realtime free tier of 1,000 GB.

Best practices

Connection management

  • Closing an already-closed instance returns success
  • Close when sessions end
  • Implement reconnection logic for network failures
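Since a dropped connection requires recreating the adapter rather than resuming it, reconnection logic usually pairs the create call with an exponential backoff. The base and cap values below are illustrative, not prescribed:

```typescript
// Sketch: exponential backoff schedule for recreating an adapter after a
// dropped WebSocket connection. Base delay and cap are illustrative values.
function backoffDelayMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}
```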

Performance

  • Deploy WebSocket endpoints close to Cloudflare edge
  • Use appropriate buffer sizes
  • Monitor connection quality

Security

  • Secure WebSocket endpoints with authentication
  • Use wss:// for production
  • Implement rate limiting

Limitations

  • WebSocket payloads: PCM (audio) for ingest and stream; JPEG (video) for stream
  • Beta status: API may change in future releases
  • Video support: Egress only (JPEG)
  • Video frame rate: Approximately 1 FPS (beta; not configurable)
  • Unidirectional flow: Each instance handles one direction

Error handling

Error Code    Description
400           Invalid request parameters
404           Session or track not found
503           Adapter not found (for close operations)

Reference implementations

Migration from custom bridges

  1. Replace custom signaling with adapter API calls
  2. Update WebSocket endpoints to handle PCM format
  3. Implement adapter lifecycle management
  4. Remove custom STUN/TURN configuration

FAQ

Q: Can I use the same adapter for bidirectional audio? A: No, each instance is unidirectional. Create separate adapters for send and receive.

Q: What happens if the WebSocket connection drops? A: The adapter closes and must be recreated. Implement reconnection logic in your app.

Q: Is there a limit on concurrent adapters? A: Limits follow standard Cloudflare Realtime quotas. Contact support for specific requirements.

Q: Can I change the audio format after creating an adapter? A: No, audio format is fixed at creation time. Create a new adapter for different formats.