# PhotoLab User Guide
Internal diagnostic tool for validating Enobarbus trackers across all tracking modes and backends.
## Overview
PhotoLab is a developer-facing tool for testing and validating the computer vision tracking pipeline. It provides live camera overlays for all tracker types, raw data inspection, and backend selection (Vision vs. ARKit). It performs no audio or MIDI processing; it is purely a tracking diagnostic.
## Getting Started
1. Launch PhotoLab on an iOS device (a camera is required)
2. Select a tracking mode from the top bar: Hands, Body, Face, Relationships, or All
3. Grant camera permission; the live camera feed appears with tracking overlays
4. Use the data panel at the bottom of the screen to inspect raw source values
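The permission step uses the standard AVFoundation request. A minimal sketch (the completion behavior is illustrative, not PhotoLab's actual startup code):

```swift
import AVFoundation

// Request camera access; the preview and tracking overlays should only
// start once access is granted.
AVCaptureDevice.requestAccess(for: .video) { granted in
    DispatchQueue.main.async {
        if granted {
            // Start the capture session and show overlays here.
        } else {
            // Surface a message directing the user to Settings.
        }
    }
}
```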
## Tracking Modes
| Mode | Sources | Backend | Camera |
|---|---|---|---|
| Hands | Up to 4 hands (wrist + finger count + chirality) | Vision | Front or rear |
| Body | 19 joints with confidence | Vision or ARKit (A12+) | Rear preferred |
| Face | 11 base sources (3 positions + 5 expressions + 3 rotations) | Vision or ARKit (TrueDepth) | Front |
| Relationships | 7 derived metrics (distances, angles, deltas between joints) | Vision | Depends on source |
| All | Combined: one ARKit configuration plus a Vision fallback for the remaining sources | Mixed | Front or rear |
## Backend Selection
PhotoLab auto-selects the best available backend:
- Face tracking: ARKit (52 blend shapes) when TrueDepth camera available, otherwise Vision (11 sources)
- Body tracking: ARKit (91 joints) on A12+ devices, otherwise Vision (19 joints)
- Hand tracking: always Vision (ARKit on iOS does not provide hand tracking)
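The auto-selection rules above map directly onto ARKit's capability checks. A minimal sketch, assuming enum and function names that are illustrative rather than PhotoLab's actual API:

```swift
import ARKit

enum FaceBackend { case arKit, vision }
enum BodyBackend { case arKit, vision }

// ARFaceTrackingConfiguration requires a TrueDepth camera;
// ARBodyTrackingConfiguration requires an A12 chip or later.
// Both facts are encapsulated by the frameworks' isSupported checks.
func selectFaceBackend() -> FaceBackend {
    ARFaceTrackingConfiguration.isSupported ? .arKit : .vision
}

func selectBodyBackend() -> BodyBackend {
    ARBodyTrackingConfiguration.isSupported ? .arKit : .vision
}
```

Checking `isSupported` rather than hard-coding device models keeps the selection correct on future hardware.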
ARKit and AVCaptureSession cannot share the camera simultaneously, so PhotoLab tears down one session and restarts the other when switching modes.
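The teardown/restart described above can be sketched as follows. The coordinator type and method names are illustrative, not PhotoLab's actual API; the key constraint is that the outgoing session must release the camera before the incoming one starts:

```swift
import ARKit
import AVFoundation

final class SessionCoordinator {
    private let captureSession = AVCaptureSession()
    private let arSession = ARSession()

    func switchToARKit(_ configuration: ARConfiguration) {
        if captureSession.isRunning {
            captureSession.stopRunning()   // release the camera first
        }
        arSession.run(configuration,
                      options: [.resetTracking, .removeExistingAnchors])
    }

    func switchToVision() {
        arSession.pause()                  // ARSession has no stop; pause releases the camera
        if !captureSession.isRunning {
            captureSession.startRunning()  // Vision requests then run on these frames
        }
    }
}
```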
## Overlays
Each tracking mode has a dedicated overlay view that draws on top of the camera feed:
| Overlay | Display |
|---|---|
| Hand | Colored dots at wrist positions, finger count labels |
| Body | Joint dots with skeleton connections between related joints |
| Face | Landmark dots for eyes, nose, mouth, and computed expression labels |
| Relationship | Lines drawn between related joint pairs |
| Confidence | Color-coded badges (green/yellow/red) showing per-source tracking confidence |
All overlays share the same coordinate space as the camera preview layer.
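Sharing the preview layer's coordinate space means Vision's normalized results must be converted before drawing. A minimal sketch, assuming a portrait layout where the preview layer's orientation matches the capture buffer (rotation handling omitted):

```swift
import AVFoundation
import UIKit

// Vision reports normalized points with a bottom-left origin;
// AVFoundation's capture-device space uses a top-left origin.
func overlayPoint(forVisionPoint p: CGPoint,
                  in previewLayer: AVCaptureVideoPreviewLayer) -> CGPoint {
    let devicePoint = CGPoint(x: p.x, y: 1 - p.y)  // flip Y
    // layerPointConverted accounts for the layer's videoGravity
    // (e.g., aspect-fill cropping), so dots land on the visible pixels.
    return previewLayer.layerPointConverted(fromCaptureDevicePoint: devicePoint)
}
```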
## Controls & Toggles
| Toggle | Effect |
|---|---|
| Show Confidence | Display confidence indicators on tracked points |
| Show Data Panel | Toggle the scrollable source data panel at screen bottom |
| Show Stabilized | Apply PositionStabilizer (EMA smoothing + velocity clamping) to displayed positions |
| Show Calibration | Open the T-pose calibration sheet (body mode only) |
| Freeze Data | Pause data updates while keeping the camera feed live |
| Front/Rear Camera | Switch camera position (affects available tracking backends) |
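The "Show Stabilized" toggle applies PositionStabilizer before display. A hedged sketch of what EMA smoothing plus velocity clamping means in practice; the parameter values and struct layout are illustrative, not PhotoLab's actual implementation:

```swift
import CoreGraphics

struct PositionStabilizer {
    var alpha: CGFloat = 0.3      // EMA weight given to each new sample
    var maxDelta: CGFloat = 0.05  // max normalized movement per frame
    private var last: CGPoint?

    mutating func smooth(_ p: CGPoint) -> CGPoint {
        guard let prev = last else { last = p; return p }
        // Exponential moving average toward the new sample.
        let ex = prev.x + alpha * (p.x - prev.x)
        let ey = prev.y + alpha * (p.y - prev.y)
        // Velocity clamp: cap the per-frame displacement to suppress spikes.
        let out = CGPoint(
            x: prev.x + min(max(ex - prev.x, -maxDelta), maxDelta),
            y: prev.y + min(max(ey - prev.y, -maxDelta), maxDelta))
        last = out
        return out
    }
}
```

Comparing raw and stabilized positions with the toggle reveals the trade-off: higher `alpha` tracks fast motion better, lower `alpha` smooths jitter more.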
## Data Panel
The scrollable data panel at the bottom of the screen shows all active sources in real time:
- Source ID — stable identifier (e.g., `body.leftShoulder`, `face.mouthOpen`, `hand.right.wrist`)
- Position — normalized x/y coordinates (0–1)
- Intensity — source-specific value (finger count for hands, expression coefficient for face, confidence for body)
- Active — whether the source is currently tracked

Use the data panel to verify:

- Source ID naming matches expectations
- Position values respond correctly to movement
- Expression coefficients scale appropriately
- Stabilizer smoothing behaves as expected (toggle "Show Stabilized" to compare)
## Calibration (Body Mode)
The T-pose calibration flow normalizes body joint positions relative to the user's proportions:
1. Switch to Body mode
2. Tap Show Calibration
3. Stand in a T-pose (arms extended horizontally)
4. Hold the pose until calibration completes
5. Subsequent body tracking positions are normalized to the calibrated frame
Calibration data persists via BodyCalibration and is used by RelationshipProcessor for consistent metric derivation.
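One common way such a calibrated frame works is to rescale joints against a reference length and origin captured during the T-pose. This is a hedged sketch under that assumption; the field names are illustrative and not BodyCalibration's real interface:

```swift
import CoreGraphics

// Hypothetical calibrated frame: joints are re-expressed relative to a
// body origin and a per-user unit of length, so the same pose yields
// comparable values regardless of the user's size or distance.
struct CalibratedBodyFrame {
    let hipCenter: CGPoint     // origin captured at calibration
    let shoulderSpan: CGFloat  // unit length captured at calibration

    func normalize(_ joint: CGPoint) -> CGPoint {
        CGPoint(x: (joint.x - hipCenter.x) / shoulderSpan,
                y: (joint.y - hipCenter.y) / shoulderSpan)
    }
}
```

Normalizing this way is what lets RelationshipProcessor derive distance and angle metrics that stay consistent across users.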