Data Flow

This page walks through every stage of a measurement — from camera capture to vital signs delivery.

A complete measurement passes through these stages:

  1. Session creation — Your app requests a new session from the API. The backend generates a session ID, a secure upload URL, and a fallback vitals config.

  2. Client-side capture — The SDK (or your custom implementation) accesses the camera, detects facial features using the Vision Engine, and extracts skin regions from each frame.

  3. Encoding — Extracted regions are normalized and encoded into a compact binary format. The SDK captures approximately 24 seconds of data at 30 FPS, producing a binary payload of ~45 MB.

  4. Upload — The encoded tensor is uploaded directly to secure cloud storage via the upload URL. No data passes through the API layer during upload.

  5. Inference — Our inference engine retrieves the tensor and runs it through the rPPG signal processing model. Processing typically takes 60–90 seconds.

  6. Result delivery — Calibrated vital signs are returned directly in the upload-complete HTTP response. The tensor is discarded after inference and no health data is stored on our side. The only artifact retained from a scan is a usage record (one scan credit consumed) for billing and quota.

Vital sign results are returned synchronously in the upload-complete response body. They are not written to any server-side cache or store. Session metadata used during upload orchestration is cleaned up immediately after results are delivered. If your application needs to retain results, persist them in your own database when you receive the response.
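
Since nothing is retained server-side, persistence is your application's job. A minimal sketch of that hand-off, assuming the response shape shown later on this page — the in-memory `Map` is a stand-in for your own database:

```typescript
// Shape of the upload-complete response body (mirrors the example later on this page).
interface VitalsResult {
  session_id: string;
  status: "completed";
  vitals: {
    heart_rate: number;
    respiratory_rate: number;
    hrv: number;
    spo2: number;
    systolic_bp: number;
    diastolic_bp: number;
    confidence: number;
  };
  processing_time_ms: number;
}

// Stand-in for your own datastore — Circadify does not retain results.
const resultStore = new Map<string, VitalsResult>();

// Persist the result as soon as the response arrives; it cannot be re-fetched.
function persistResult(result: VitalsResult): void {
  resultStore.set(result.session_id, result);
}
```
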

When using @circadify/web-sdk, the entire flow is handled by a single measureVitals() call. The SDK reports progress through five phases:

import { CircadifySDK } from '@circadify/web-sdk';

const sdk = new CircadifySDK({
  apiKey: 'ck_live_your_key_here',
  onProgress: (event) => {
    // event.phase: 'initializing' | 'readiness' | 'capturing' | 'uploading' | 'processing'
    // event.percent: 0-100
    console.log(`[${event.phase}] ${event.percent}%`);
  },
  onQualityWarning: (warning) => {
    // warning.type: 'lighting' | 'motion' | 'face_position'
    console.warn(warning.message);
  },
});

const result = await sdk.measureVitals({
  videoElement: document.getElementById('preview') as HTMLVideoElement,
  demographics: { age: 35, sex: 'M' },
});
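
Because inference failures still produce a response populated from the fallback ranges (always with confidence: 0.0), you should guard before trusting a result. A small sketch — the 0.5 threshold is an illustrative application-level choice, not an SDK constant:

```typescript
// Fallback results are delivered with confidence exactly 0.0.
// MIN_CONFIDENCE is an illustrative application-level threshold, not an SDK value.
const MIN_CONFIDENCE = 0.5;

function isUsableResult(vitals: { confidence: number }): boolean {
  // Reject fallback results (confidence 0.0) and anything below our own bar.
  return vitals.confidence > 0 && vitals.confidence >= MIN_CONFIDENCE;
}
```
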

SDK phase breakdown:

Phase          Progress   What happens
initializing   0–5%       Creates session, loads Vision Engine modules
readiness      5–10%      Opens camera, waits for face detection and quality checks to pass
capturing      10–60%     Captures ~24 seconds of frames, preprocesses skin regions
uploading      60–80%     Uploads preprocessed data via secure upload URL
processing     80–100%    Notifies backend; receives results from the upload-complete response
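
For UI feedback, the phases in the table map naturally to user-facing status text. A sketch of one way to wire this into onProgress — the label wording is only a suggestion:

```typescript
// The five SDK progress phases, as reported by onProgress events.
type Phase = "initializing" | "readiness" | "capturing" | "uploading" | "processing";

// Illustrative user-facing labels; wording is up to your application.
const PHASE_LABELS: Record<Phase, string> = {
  initializing: "Preparing measurement...",
  readiness: "Position your face in the frame...",
  capturing: "Measuring, hold still...",
  uploading: "Uploading data...",
  processing: "Analyzing...",
};

function statusLine(phase: Phase, percent: number): string {
  // event.percent is already an overall 0-100 value, so no rescaling is needed.
  return `${PHASE_LABELS[phase]} (${percent}%)`;
}
```
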

If you are integrating without the npm package, you must call each API endpoint yourself:

  1. Start a session

    curl -X POST https://api.circadify.com/sdk/session/start \
    -H "X-API-Key: ck_live_your_key_here" \
    -H "Content-Type: application/json" \
    -d '{"demographics": {"age": 35, "sex": "M"}}'

    Response:

    {
      "session_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "upload_url": "https://upload.circadify.com/uploads/a1b2c3.../video.webm?signature=...",
      "expires_at": 1712001800,
      "fallback_config": { /* ranges used only on inference failure */ }
    }

    The fallback_config block contains numeric ranges the SDK uses to populate a result if backend inference fails. Any result derived from these fallbacks is returned with confidence: 0.0 so your application can detect and discard it. See On-Device Processing → Fallback Behavior.

  2. Upload your preprocessed tensor to the secure upload URL

    curl -X PUT "$UPLOAD_URL" \
    -H "Content-Type: video/webm" \
    --data-binary @tensor.bin
  3. Notify the backend that the upload is complete

    curl -X POST https://api.circadify.com/sdk/session/$SESSION_ID/upload-complete \
    -H "X-API-Key: ck_live_your_key_here" \
    -H "Content-Type: application/json" \
    -d "{\"session_id\": \"$SESSION_ID\"}"

    Response (completed):

    {
      "session_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
      "status": "completed",
      "vitals": {
        "heart_rate": 72,
        "respiratory_rate": 16,
        "hrv": 45,
        "spo2": 98,
        "systolic_bp": 122,
        "diastolic_bp": 78,
        "confidence": 0.87
      },
      "processing_time_ms": 74200
    }

    Results are returned directly in this response. The tensor and the result are not retained on Circadify’s side after the response is sent.

The SDK performs the following on each captured frame (30 FPS):

  1. Face detection — The Circadify Vision Engine (WASM) locates the face and tracks facial geometry
  2. Skin region extraction — Multiple skin regions optimized for rPPG signal quality are isolated from the detected face
  3. Normalization — Each region is geometrically normalized to a consistent size and orientation, correcting for head movement
  4. Encoding — Normalized regions are encoded into a compact binary format optimized for the backend inference model
  5. Frame accumulation — Frames are accumulated over ~24 seconds of capture
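
At 30 FPS over ~24 seconds, the accumulator collects roughly 720 frames before upload. A simplified sketch of the accumulation step — the `EncodedFrame` shape is a placeholder, not the proprietary format:

```typescript
const FPS = 30;
const CAPTURE_SECONDS = 24;
const TARGET_FRAMES = FPS * CAPTURE_SECONDS; // 720 frames per measurement

// Placeholder for one frame's preprocessed skin-region data plus timing metadata.
type EncodedFrame = { timestampMs: number; regions: Uint8Array };

const frameBuffer: EncodedFrame[] = [];

// Accumulate frames until the ~24-second capture window is filled.
// Returns true once the buffer is ready to encode and upload.
function accumulate(frame: EncodedFrame): boolean {
  frameBuffer.push(frame);
  return frameBuffer.length >= TARGET_FRAMES;
}
```
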

All preprocessing runs in the browser using WebAssembly. No raw video frames, face images, or identifiable data are included in the upload — only preprocessed, normalized skin region data.

The SDK produces a proprietary binary format (~45 MB) containing the preprocessed frame data with timing metadata. The Content-Type header is video/webm for compatibility with the upload URL.

If you are building a custom integration without the SDK, contact support@circadify.com for the format specification.

When the inference engine receives an upload-complete notification:

  1. Authentication — The API key is verified. The developer’s account status, rate limit, and usage quota are checked.
  2. Session validation — The session is verified to exist and belong to the authenticated developer.
  3. Inference — The tensor is retrieved and processed through our rPPG model. The model extracts physiological signals (pulse wave, respiratory pattern) from the ROI pixel data across time.
  4. Calibration — Raw model outputs are adjusted with calibration offsets and optional demographic corrections.
  5. Result delivery — Calibrated vital signs are returned directly in the upload-complete response. The tensor and the result are not retained on our side.

If inference fails, the session is still marked completed with fallback vitals (confidence: 0.0). The client always gets a response.

Data                         Retention
Raw video / camera frames    Never uploaded — stay on the device
Preprocessed RGB tensor      Discarded after inference
Vital sign results           Returned in the API response; not stored on our side
Developer accounts           Until deleted
API keys (hashed)            Until revoked
Usage records                Duration of contract (one entry per scan; no health data)
Audit logs                   6 years (HIPAA); no health data

No raw video, face images, vital sign results, or RGB tensors are stored at any point.

  • All data in transit is encrypted with TLS
  • API keys are securely hashed before storage and never logged
  • Uploaded tensors go directly to encrypted upload storage — they do not pass through the API layer
  • All developer actions (session starts, key management, logins) are logged for compliance
  • The inference layer is not internet-accessible — it operates within a private network