Data Flow
This page walks through every stage of a measurement — from camera capture to vital signs delivery.
End-to-End Flow
A complete measurement passes through these stages:

1. Session creation — Your app requests a new session from the API. The backend generates a session ID, a secure upload URL, and a fallback vitals config.
2. Client-side capture — The SDK (or your custom implementation) accesses the camera, detects facial features using the Vision Engine, and extracts skin regions from each frame.
3. Encoding — Extracted regions are normalized and encoded into a compact binary format. The SDK captures approximately 24 seconds of data at 30 FPS, producing a binary payload of ~45 MB.
4. Upload — The encoded tensor is uploaded directly to secure cloud storage via the upload URL. No data passes through the API layer during upload.
5. Inference — Our inference engine retrieves the tensor and runs it through the rPPG signal processing model. Processing typically takes 60–90 seconds.
6. Result delivery — Calibrated vital signs are returned directly in the `upload-complete` HTTP response. The tensor is discarded after inference, and no health data is stored on our side. The only artifact retained from a scan is a usage record (one scan credit consumed) for billing and quota.
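The six stages form a strict linear progression. The sketch below models them as an ordered list with a transition helper; the stage names and `nextStage` function are purely illustrative, not part of any Circadify API.

```typescript
// Illustrative stage names for the documented measurement flow.
const stages = ['session', 'capture', 'encode', 'upload', 'inference', 'result'] as const;
type MeasurementStage = (typeof stages)[number];

// Each stage advances to exactly one successor; 'result' is terminal.
function nextStage(current: MeasurementStage): MeasurementStage | null {
  const i = stages.indexOf(current);
  return i < stages.length - 1 ? stages[i + 1] : null;
}
```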
Result Delivery
Vital sign results are returned synchronously in the `upload-complete` response body. They are not written to any server-side cache or store. Session metadata used during upload orchestration is cleaned up immediately after results are delivered. If your application needs to retain results, persist them in your own database when you receive the response.
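Because the response body is the only copy of the results, the durable record is whatever you store. A minimal sketch, using an in-memory `Map` as a stand-in for your own database (the `Vitals` type and `persistResult` helper are assumptions, not SDK exports):

```typescript
// Shape of the vitals object, based on the upload-complete response documented on this page.
interface Vitals { heart_rate: number; confidence: number; [k: string]: number; }

// Hypothetical in-memory store standing in for your database.
const store = new Map<string, { vitals: Vitals; receivedAt: number }>();

// Persist results the moment the upload-complete response arrives;
// there is no server-side copy to re-fetch later.
function persistResult(sessionId: string, vitals: Vitals, now = Date.now()): void {
  store.set(sessionId, { vitals, receivedAt: now });
}
```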
With the SDK
When using `@circadify/web-sdk`, the entire flow is handled by a single `measureVitals()` call. The SDK reports progress through five phases:

```ts
import { CircadifySDK } from '@circadify/web-sdk';

const sdk = new CircadifySDK({
  apiKey: 'ck_live_your_key_here',
  onProgress: (event) => {
    // event.phase: 'initializing' | 'readiness' | 'capturing' | 'uploading' | 'processing'
    // event.percent: 0-100
    console.log(`[${event.phase}] ${event.percent}%`);
  },
  onQualityWarning: (warning) => {
    // warning.type: 'lighting' | 'motion' | 'face_position'
    console.warn(warning.message);
  },
});

const result = await sdk.measureVitals({
  videoElement: document.getElementById('preview') as HTMLVideoElement,
  demographics: { age: 35, sex: 'M' },
});
```

SDK phase breakdown:
| Phase | Progress | What happens |
|---|---|---|
| `initializing` | 0–5% | Creates session, loads Vision Engine modules |
| `readiness` | 5–10% | Opens camera, waits for face detection and quality checks to pass |
| `capturing` | 10–60% | Captures ~24 seconds of frames, preprocesses skin regions |
| `uploading` | 60–80% | Uploads preprocessed data via secure upload URL |
| `processing` | 80–100% | Notifies backend; receives results from the `upload-complete` response |
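Since `event.percent` is already cumulative across phases (0–100), a single progress bar needs no re-scaling. A minimal handler, assuming the event shape shown above (the `phaseLabels` map and `renderProgress` helper are illustrative, not part of the SDK):

```typescript
type Phase = 'initializing' | 'readiness' | 'capturing' | 'uploading' | 'processing';

// Hypothetical user-facing labels for each SDK phase.
const phaseLabels: Record<Phase, string> = {
  initializing: 'Preparing session…',
  readiness: 'Checking camera and lighting…',
  capturing: 'Measuring, hold still…',
  uploading: 'Uploading…',
  processing: 'Analyzing…',
};

// Maps a progress event to a display string; percent is overall, not per-phase.
function renderProgress(event: { phase: Phase; percent: number }): string {
  return `${phaseLabels[event.phase]} ${event.percent}%`;
}
```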
Without the SDK
If you are integrating without the npm package, you must call each API endpoint yourself:
1. Start a session

   ```sh
   curl -X POST https://api.circadify.com/sdk/session/start \
     -H "X-API-Key: ck_live_your_key_here" \
     -H "Content-Type: application/json" \
     -d '{"demographics": {"age": 35, "sex": "M"}}'
   ```

   Response:

   ```json
   {
     "session_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
     "upload_url": "https://upload.circadify.com/uploads/a1b2c3.../video.webm?signature=...",
     "expires_at": 1712001800,
     "fallback_config": { /* ranges used only on inference failure */ }
   }
   ```

   The `fallback_config` block contains numeric ranges the SDK uses to populate a result if backend inference fails. Any result derived from these fallbacks is returned with `confidence: 0.0` so your application can detect and discard it. See On-Device Processing → Fallback Behavior.

2. Upload your preprocessed tensor to the secure upload URL

   ```sh
   curl -X PUT "$UPLOAD_URL" \
     -H "Content-Type: video/webm" \
     --data-binary @tensor.bin
   ```

3. Notify the backend that the upload is complete

   ```sh
   curl -X POST https://api.circadify.com/sdk/session/$SESSION_ID/upload-complete \
     -H "X-API-Key: ck_live_your_key_here" \
     -H "Content-Type: application/json" \
     -d "{\"session_id\": \"$SESSION_ID\"}"
   ```

   Response (completed):

   ```json
   {
     "session_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
     "status": "completed",
     "vitals": {
       "heart_rate": 72,
       "respiratory_rate": 16,
       "hrv": 45,
       "spo2": 98,
       "systolic_bp": 122,
       "diastolic_bp": 78,
       "confidence": 0.87
     },
     "processing_time_ms": 74200
   }
   ```

   Results are returned directly in this response. The tensor and the result are not retained on Circadify's side after the response is sent.
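Because fallback results are flagged only by `confidence: 0.0`, custom integrations should check that field before trusting a `completed` status. A minimal guard, with a `VitalsResponse` type inferred from the response shown above (the type and `acceptVitals` helper are assumptions, not published API types):

```typescript
// Assumed shape of the upload-complete response body.
interface VitalsResponse {
  session_id: string;
  status: string;
  vitals: { heart_rate: number; confidence: number; [k: string]: number };
  processing_time_ms: number;
}

// Returns the vitals only when they came from real inference;
// fallback-populated results always carry confidence 0.0.
function acceptVitals(res: VitalsResponse): VitalsResponse['vitals'] | null {
  if (res.status !== 'completed') return null;
  if (res.vitals.confidence === 0.0) return null; // fallback result: discard
  return res.vitals;
}
```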
Client-Side Preprocessing
The SDK performs the following on each captured frame (30 FPS):
- Face detection — The Circadify Vision Engine (WASM) locates the face and tracks facial geometry
- Skin region extraction — Multiple skin regions optimized for rPPG signal quality are isolated from the detected face
- Normalization — Each region is geometrically normalized to a consistent size and orientation, correcting for head movement
- Encoding — Normalized regions are encoded into a compact binary format optimized for the backend inference model
- Frame accumulation — Frames are accumulated over ~24 seconds of capture
All preprocessing runs in the browser using WebAssembly. No raw video frames, face images, or identifiable data are included in the upload — only preprocessed, normalized skin region data.
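The stages above can be read as a small pipeline: extract regions per frame, normalize, and concatenate into one binary payload. The sketch below mirrors that shape; all types and function names are illustrative, since the real Vision Engine runs in WASM and its API is not public.

```typescript
// Illustrative types only; the real Vision Engine API is not public.
interface Frame { width: number; height: number; pixels: Uint8Array; }
interface SkinRegion { region: Uint8Array; }

// Hypothetical pipeline mirroring the documented stages: extraction,
// normalization (stubbed here), and encoding into one binary payload.
function preprocess(frames: Frame[], extract: (f: Frame) => SkinRegion[]): Uint8Array {
  const regions = frames.flatMap(extract);          // face detection + skin region extraction
  const normalized = regions.map((r) => r.region);  // normalization step, stubbed as identity
  // Encoding: concatenate normalized regions into a single buffer.
  const total = normalized.reduce((n, r) => n + r.length, 0);
  const out = new Uint8Array(total);
  let offset = 0;
  for (const r of normalized) { out.set(r, offset); offset += r.length; }
  return out;
}
```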
Upload Format
The SDK produces a proprietary binary format (~45 MB) containing the preprocessed frame data with timing metadata. The `Content-Type` header is `video/webm` for compatibility with the upload URL.
If you are building a custom integration without the SDK, contact support@circadify.com for the format specification.
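The capture duration, frame rate, and payload size quoted on this page imply a rough per-frame budget. A quick back-of-the-envelope check (the 24 s, 30 FPS, and ~45 MB figures come from this page; the per-frame number is derived, not specified):

```typescript
const seconds = 24;    // capture duration documented above
const fps = 30;        // frame rate documented above
const payloadMB = 45;  // approximate payload size documented above

const frames = seconds * fps;                   // 720 frames per scan
const kbPerFrame = (payloadMB * 1024) / frames; // roughly 64 KB of encoded data per frame
console.log(`${frames} frames, ~${kbPerFrame.toFixed(0)} KB/frame`);
```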
Server-Side Pipeline
When the inference engine receives an `upload-complete` notification:
- Authentication — The API key is verified. The developer’s account status, rate limit, and usage quota are checked.
- Session validation — The session is verified to exist and belong to the authenticated developer.
- Inference — The tensor is retrieved and processed through our rPPG model. The model extracts physiological signals (pulse wave, respiratory pattern) from the ROI pixel data across time.
- Calibration — Raw model outputs are adjusted with calibration offsets and optional demographic corrections.
- Result delivery — Calibrated vital signs are returned directly in the `upload-complete` response. The tensor and the result are not retained on our side.
If inference fails, the session is still marked completed with fallback vitals (confidence: 0.0). The client always gets a response.
Data Retention
| Data | Retention |
|---|---|
| Raw video / camera frames | Never uploaded — stay on the device |
| Preprocessed RGB tensor | Discarded after inference |
| Vital sign results | Returned in the API response; not stored on our side |
| Developer accounts | Until deleted |
| API keys (hashed) | Until revoked |
| Usage records | Duration of contract (one entry per scan; no health data) |
| Audit logs | 6 years (HIPAA); no health data |
No raw video, face images, or vital sign results are ever stored; uploaded tensors exist only transiently and are discarded as soon as inference completes.
Security
- All data in transit is encrypted with TLS
- API keys are securely hashed before storage and never logged
- Uploaded tensors go directly to encrypted upload storage — they do not pass through the API layer
- All developer actions (session starts, key management, logins) are logged for compliance
- The inference layer is not internet-accessible — it operates within a private network
Next Steps
- On-Device Processing — Deep dive into the client-side preprocessing pipeline
- Encryption — Encryption standards and protocols
- Rate Limits — Rate limiting behavior and headers