
Data Flow

This page walks through every stage of a measurement — from camera capture to vital signs delivery.

A complete measurement passes through these stages:

  1. Session creation — Your app requests a new session from the API. The backend generates a session ID, a secure upload URL, and a fallback vitals config.

  2. Client-side capture — The SDK (or your custom implementation) accesses the camera, detects facial features using the Vision Engine, and extracts skin regions from each frame.

  3. Encoding — Extracted regions are normalized and encoded into a compact binary format. The SDK captures approximately 24 seconds of data at 30 FPS, producing a binary payload of ~45 MB.

  4. Upload — The encoded tensor is uploaded directly to secure cloud storage via the upload URL. No data passes through the API layer during upload.

  5. Inference — The backend retrieves the tensor and runs it through the rPPG signal processing model on GPU-accelerated compute. Processing typically takes 60–90 seconds.

  6. Result delivery — By default, calibrated vital signs are returned directly in the upload-complete HTTP response. No health data is stored server-side. When PERSIST_VITALS=true is configured, results are cached for retrieval via the polling endpoint.

Circadify supports two result delivery modes:

In the default configuration (PERSIST_VITALS=false), vital sign results are returned synchronously in the upload-complete response body. No health data is written to any server-side cache or store. Session metadata used during the upload orchestration is cleaned up immediately after results are delivered.

This is the recommended mode for most integrations. The SDK uses this mode by default.

For async workflows — such as telehealth integrations where the requesting service polls for results separately — set PERSIST_VITALS=true. In this mode, completed results are cached with a configurable TTL (default 15 minutes) and can be retrieved via GET /sdk/session/{sessionId}/result. Results are automatically and irreversibly deleted when the TTL expires.
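In persist mode, the caller repeatedly fetches the result endpoint until the session resolves. A minimal polling sketch using fetch, matching the endpoint shown in the manual-integration steps (the retry interval and overall timeout are illustrative assumptions, not API requirements):

```typescript
// Poll GET /sdk/session/{sessionId}/result until the session resolves.
// intervalMs and timeoutMs are assumed values; tune them for your workflow.
async function pollForVitals(
  sessionId: string,
  apiKey: string,
  intervalMs = 3000,
  timeoutMs = 120_000,
): Promise<Record<string, number>> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const res = await fetch(
      `https://api.circadify.com/sdk/session/${sessionId}/result`,
      { headers: { 'X-API-Key': apiKey } },
    );
    const body = await res.json();
    if (body.status === 'completed') return body.vitals;
    if (body.status === 'failed') throw new Error('measurement failed');
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error('timed out waiting for results (the TTL may have expired)');
}
```

Keep the total polling window well inside the configured TTL; once the TTL expires, the cached result is gone for good.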

When using @circadify/sdk, the entire flow is handled by a single measureVitals() call. The SDK reports progress through five phases:

import { CircadifySDK } from '@circadify/sdk';

const sdk = new CircadifySDK({
  apiKey: 'ck_test_your_key_here',
  onProgress: (event) => {
    // event.phase: 'initializing' | 'readiness' | 'capturing' | 'uploading' | 'processing'
    // event.percent: 0-100
    console.log(`[${event.phase}] ${event.percent}%`);
  },
  onQualityWarning: (warning) => {
    // warning.type: 'lighting' | 'motion' | 'face_position'
    console.warn(warning.message);
  },
});

const result = await sdk.measureVitals({
  container: document.getElementById('scan-container'),
  demographics: { age: 35, sex: 'M' },
});

SDK phase breakdown:

Phase          Progress   What happens
initializing   0–5%       Creates session, loads Vision Engine modules
readiness      5–10%      Opens camera, waits for face detection and quality checks to pass
capturing      10–60%     Captures ~24 seconds of frames, preprocesses skin regions
uploading      60–80%     Uploads preprocessed data via secure upload URL
processing     80–100%    Notifies backend, receives results from upload-complete response (or polls when persist mode is enabled)
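The phases reported by onProgress map naturally onto user-facing status text. A small sketch; only the phase names come from the SDK, while the labels themselves are illustrative assumptions:

```typescript
// Illustrative mapping from SDK progress phases to user-facing status text.
// Phase names match the SDK's onProgress events; labels are assumed copy.
type Phase = 'initializing' | 'readiness' | 'capturing' | 'uploading' | 'processing';

const PHASE_LABELS: Record<Phase, string> = {
  initializing: 'Setting up',
  readiness: 'Position your face in the frame',
  capturing: 'Hold still while we measure',
  uploading: 'Uploading your data',
  processing: 'Analyzing results',
};

function renderProgress(phase: Phase, percent: number): string {
  return `${PHASE_LABELS[phase]} (${percent}%)`;
}
```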

If you are integrating without the npm package, you must call each API endpoint yourself:

  1. Start a session

    curl -X POST https://api.circadify.com/sdk/session/start \
      -H "X-API-Key: ck_test_your_key_here" \
      -H "Content-Type: application/json" \
      -d '{"demographics": {"age": 35, "sex": "M"}}'

    Response:

    {
      "session_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "upload_url": "https://upload.circadify.com/uploads/a1b2c3.../video.webm?signature=...",
      "expires_at": 1712001800,
      "fallback_config": {
        "heart_rate": { "min": 62, "max": 98 },
        "hrv": { "min": 25, "max": 70 },
        "respiratory_rate": { "min": 13, "max": 19 },
        "spo2": { "min": 96, "max": 99 },
        "systolic_bp": { "min": 110, "max": 135 },
        "diastolic_bp_offset": { "min": 35, "max": 50 },
        "confidence": 0.8
      }
    }
  2. Upload your preprocessed tensor to the secure upload URL

    curl -X PUT "$UPLOAD_URL" \
      -H "Content-Type: video/webm" \
      --data-binary @tensor.bin
  3. Notify the backend that the upload is complete

    curl -X POST https://api.circadify.com/sdk/session/$SESSION_ID/upload-complete \
      -H "X-API-Key: ck_test_your_key_here" \
      -H "Content-Type: application/json" \
      -d "{\"session_id\": \"$SESSION_ID\"}"

    Results are returned directly in the response body. Session metadata is cleaned up immediately.

  4. Poll for results (only when PERSIST_VITALS=true) until status is completed or failed

    curl https://api.circadify.com/sdk/session/$SESSION_ID/result \
      -H "X-API-Key: ck_test_your_key_here"

    Response (completed):

    {
      "session_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "status": "completed",
      "vitals": {
        "heart_rate": 72,
        "respiratory_rate": 16,
        "hrv": 45,
        "spo2": 98,
        "systolic_bp": 122,
        "diastolic_bp": 78,
        "confidence": 0.87
      },
      "completed_at": 1712001900
    }

The SDK performs the following on each captured frame (30 FPS):

  1. Face detection — The Circadify Vision Engine (WASM) locates the face and tracks facial geometry
  2. Skin region extraction — Multiple skin regions optimized for rPPG signal quality are isolated from the detected face
  3. Normalization — Each region is geometrically normalized to a consistent size and orientation, correcting for head movement
  4. Encoding — Normalized regions are encoded into a compact binary format optimized for the backend inference model
  5. Frame accumulation — Frames are accumulated over ~24 seconds of capture

All preprocessing runs in the browser using WebAssembly. No raw video frames, face images, or identifiable data are included in the upload — only preprocessed, normalized skin region data.

The SDK produces a proprietary binary format (~45 MB) containing the preprocessed frame data with timing metadata. The Content-Type header is video/webm for compatibility with the upload URL.
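The payload size follows directly from the capture parameters quoted above: 24 s at 30 FPS is 720 frames, so a ~45 MB payload works out to roughly 64 KB of encoded region data per frame. A quick sanity check:

```typescript
// Back-of-the-envelope payload sizing from the documented capture parameters.
const seconds = 24;
const fps = 30;
const payloadBytes = 45 * 1024 * 1024; // ~45 MB

const frames = seconds * fps;                             // 720 frames
const bytesPerFrame = Math.round(payloadBytes / frames);  // ~64 KB per frame

console.log(`${frames} frames, ~${Math.round(bytesPerFrame / 1024)} KB/frame`);
```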

If you are building a custom integration without the SDK, contact support@circadify.com for the format specification.

When the backend receives an upload-complete notification:

  1. Authentication — The API key is verified. The developer’s account status, rate limit, and usage quota are checked.
  2. Session validation — The session is verified to exist and belong to the authenticated developer.
  3. Inference — The tensor is retrieved and processed through the rPPG model on GPU-accelerated compute. The model extracts physiological signals (pulse wave, respiratory pattern) from the ROI pixel data across time.
  4. Calibration — Raw model outputs are adjusted with calibration offsets and optional demographic corrections.
  5. Result delivery — Calibrated vital signs are returned directly in the upload-complete response (default mode). When persist mode is enabled, results are also cached in the session store for polling.

If inference fails, the session is still marked completed with fallback vitals (confidence: 0.0). The client always gets a response.
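Because fallback results arrive as a normal completed response rather than an error, clients should check the confidence field before trusting a reading. A minimal guard, assuming fallback results carry confidence 0.0 as described above (the 0.5 low-confidence cutoff is an assumed choice, not an API-defined threshold):

```typescript
// Distinguish genuine readings from fallback vitals (confidence 0.0).
function isFallback(vitals: { confidence: number }): boolean {
  return vitals.confidence === 0;
}

function assess(vitals: { confidence: number }): 'fallback' | 'low' | 'ok' {
  if (isFallback(vitals)) return 'fallback';        // inference failed; values are synthetic
  return vitals.confidence < 0.5 ? 'low' : 'ok';    // 0.5 cutoff is an assumed policy
}
```

A telehealth integration might discard fallback readings entirely and prompt the user to re-measure on a low result.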

Data                         Retention
Preprocessed tensor          Automatically deleted after processing
Session metadata             Duration of request (cleaned up after result delivery)
Vitals (persist mode only)   Configurable TTL, default 15 minutes (auto-deleted)
Developer accounts           Until deleted
API keys (hashed)            Until revoked
Usage records                Indefinite
Audit logs                   Indefinite (compliance)

No raw video, face images, or biometric data is stored at any point.

  • All data in transit is encrypted with TLS 1.3
  • API keys are securely hashed before storage and never logged
  • Uploaded tensors go directly to encrypted cloud storage — they do not pass through the API layer
  • All developer actions (session starts, key management, logins) are logged for compliance
  • The inference layer is not internet-accessible — it operates within a private network