# Custom UI

Build your own face capture interface using the low-level SDK.

## Overview

If the pre-built `<FaceLogin>` and `<FaceRegister>` components don't fit your design, you can use the low-level SDK functions to build a fully custom face capture experience.
## Architecture

```text
Your UI Layer
      │
      ▼
@facesmash/sdk
├── createFaceSmash()      → Create client
├── client.init()          → Load neural networks
├── client.analyzeFace()   → Detect + quality score
├── client.login()         → Match against stored faces
├── client.register()      → Register new user
│
└── Low-level exports:
    ├── extractDescriptor()     → Get 128-d vector
    ├── analyzeFace()           → Full quality analysis
    ├── calculateSimilarity()   → Compare two descriptors
    ├── enhancedMatch()         → Adaptive threshold matching
    └── multiTemplateMatch()    → Match against templates
```

## Step 1: Set Up Camera
Use the browser's `getUserMedia` API to access the camera:

```typescript
async function setupCamera(videoElement: HTMLVideoElement) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: {
      facingMode: 'user',
      width: { ideal: 640 },
      height: { ideal: 480 },
    },
  });
  videoElement.srcObject = stream;
  await videoElement.play();
}

function captureFrame(video: HTMLVideoElement): string {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext('2d')!;
  ctx.drawImage(video, 0, 0);
  return canvas.toDataURL('image/jpeg', 0.9);
}
```

## Step 2: Initialize and Analyze
```typescript
import { createFaceSmash } from '@facesmash/sdk';

const client = createFaceSmash({
  apiUrl: 'https://api.facesmash.app',
  debug: true,
});

await client.init((progress) => {
  console.log(`Loading models: ${progress}%`);
});
```

### Real-time quality feedback
Run analysis on captured frames to give the user feedback before login/registration:

```typescript
async function analyzeCurrentFrame(video: HTMLVideoElement) {
  const frame = captureFrame(video);
  const analysis = await client.analyzeFace(frame);

  if (analysis) {
    updateUI({
      quality: analysis.qualityScore,
      lighting: analysis.lightingScore,
      headPose: analysis.headPose,
      isFrontal: analysis.headPose.isFrontal,
      rejection: analysis.rejectionReason,
    });
  } else {
    showMessage('No face detected');
  }
}

// Run every 300ms; remember to clearInterval(interval) when the user leaves the page
const interval = setInterval(() => analyzeCurrentFrame(video), 300);
```

## Step 3: Capture Multiple Frames and Authenticate
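What counts as "good quality" before capturing is up to your UI. A minimal gate over the analysis fields shown in Step 2 might look like the sketch below; the thresholds are illustrative assumptions, not SDK defaults, so tune them for your own UX.

```typescript
// Subset of the fields we rely on from client.analyzeFace()
// (field names match the analysis result used in Step 2).
interface FrameAnalysis {
  qualityScore: number;   // 0-1
  lightingScore: number;  // 0-1
  headPose: { isFrontal: boolean };
}

// Illustrative thresholds: require decent quality, usable lighting,
// and a roughly frontal pose before allowing capture.
function isCaptureReady(analysis: FrameAnalysis | null): boolean {
  if (!analysis) return false; // no face detected in the frame
  return (
    analysis.qualityScore >= 0.6 &&
    analysis.lightingScore >= 0.4 &&
    analysis.headPose.isFrontal
  );
}
```

A gate like this pairs naturally with the polling loop above: only enable the "capture" button (or auto-trigger capture) once `isCaptureReady` has returned `true` for a few consecutive frames.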
When quality is good, capture several frames and log in:

```typescript
async function captureAndLogin(video: HTMLVideoElement) {
  const images: string[] = [];
  for (let i = 0; i < 3; i++) {
    images.push(captureFrame(video));
    await new Promise((r) => setTimeout(r, 500));
  }

  const result = await client.login(images);
  if (result.success) {
    console.log('Welcome back,', result.user.name);
    console.log('Similarity:', result.similarity);
  } else {
    console.error('Login failed:', result.error);
  }
}
```

## Step 4: Low-Level Descriptor Matching
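Under the hood, each face is a 128-dimensional descriptor (see the architecture overview), and `calculateSimilarity` returns a 0-1 score. The SDK's exact metric is internal; the cosine-similarity sketch below is an illustrative assumption, not the SDK's implementation, but it conveys the general idea of descriptor comparison.

```typescript
// Cosine similarity between two descriptors, remapped from -1..1 to 0..1.
// Illustrative stand-in for the SDK's internal comparison metric.
function cosineSimilarity01(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('descriptor length mismatch');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const cos = dot / (Math.sqrt(normA) * Math.sqrt(normB)); // -1..1
  return (cos + 1) / 2; // 1 = same direction, 0.5 = orthogonal
}
```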
For full control without using PocketBase, use the exported utility functions directly:

```typescript
import {
  extractDescriptor,
  calculateSimilarity,
  enhancedMatch,
  multiTemplateMatch,
} from '@facesmash/sdk';

// Extract a descriptor from an image
const config = client.config; // or build your own ResolvedConfig
const descriptor = await extractDescriptor(base64Image, config);

// Compare two descriptors
const similarity = calculateSimilarity(descriptor1, descriptor2);
// 0-1 (1 = identical)

// Adaptive matching with lighting adjustment
const match = enhancedMatch(descriptor1, descriptor2, 0.45, 0, lightingScore);
// { isMatch: boolean, similarity: number, adaptedThreshold: number }

// Match against multiple stored templates
const templates = [
  { descriptor: template1, quality: 0.9, weight: 1.5 },
  { descriptor: template2, quality: 0.7, weight: 1.0 },
];
const result = multiTemplateMatch(descriptor, templates, 0.45, lightingScore);
// { isMatch, bestSimilarity, avgSimilarity, matchCount }
```

## Custom Overlay Example
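Before drawing, it can help to factor the quality-to-color mapping into a small pure helper that is easy to unit-test; the thresholds here mirror the ternary in the overlay code below.

```typescript
// Map a 0-1 quality score to a traffic-light color.
// Thresholds match the overlay example: >0.6 green, >0.35 yellow, else red.
function qualityColor(quality: number): string {
  if (quality > 0.6) return '#22c55e';  // green
  if (quality > 0.35) return '#eab308'; // yellow
  return '#ef4444';                     // red
}
```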
Draw a face quality indicator on a canvas overlay:

```typescript
function drawQualityOverlay(
  ctx: CanvasRenderingContext2D,
  quality: number,
  isFrontal: boolean
) {
  const color =
    quality > 0.6 ? '#22c55e'    // green
    : quality > 0.35 ? '#eab308' // yellow
    : '#ef4444';                 // red

  // Draw quality bar
  ctx.fillStyle = color;
  ctx.fillRect(10, 10, quality * 200, 8);

  // Draw label
  ctx.fillStyle = color;
  ctx.font = '14px Inter, sans-serif';
  ctx.fillText(
    `Quality: ${(quality * 100).toFixed(0)}%${isFrontal ? '' : ' — face the camera'}`,
    10,
    36
  );
}
```

## React Custom UI
In React, use the hooks for a custom UI instead of the drop-in components:

```typescript
import { useRef } from 'react';
import { useFaceSmash, useFaceLogin, useFaceAnalysis } from '@facesmash/sdk/react';

function CustomFaceLogin() {
  const { isReady } = useFaceSmash();
  const { login, isScanning, result } = useFaceLogin();
  const { analyze, analysis } = useFaceAnalysis();
  const videoRef = useRef<HTMLVideoElement>(null);

  // Your custom camera setup, quality overlay, and capture logic here
  // Call analyze(frame) for real-time quality feedback
  // Call login(images) when ready to authenticate
}
```

## Tips
- **Debounce analysis** — Run `analyzeFace()` every 200-300ms, not on every frame, to save CPU/GPU.
- **Pre-warm models** — Call `client.init()` early (e.g., on app load) so models are ready when the user reaches the login page.
- **Capture multiple frames** — Pass 3-5 images to `login()` for more reliable matching.
- **Show quality feedback** — Use `analyzeFace()` results to guide users to improve lighting, face angle, and distance.
- **Listen to events** — Use `client.on()` for real-time event feedback in non-React apps.
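The "debounce analysis" tip can be implemented with a plain timestamp-based throttle rather than `setInterval`, which plays nicely with a `requestAnimationFrame` loop. This is a framework-agnostic sketch; the wrapper name and shape are our own, not part of the SDK.

```typescript
// Wrap a function so it runs at most once per `ms` milliseconds;
// calls arriving sooner than that are silently dropped.
function throttle<T extends unknown[]>(
  fn: (...args: T) => void,
  ms: number
): (...args: T) => void {
  let last = 0;
  return (...args: T) => {
    const now = Date.now();
    if (now - last >= ms) {
      last = now;
      fn(...args);
    }
  };
}
```

For example, `const throttledAnalyze = throttle(() => analyzeCurrentFrame(video), 300);` can then be called from every animation frame while only running analysis at the 300ms cadence used in Step 2.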