# Vanilla JavaScript

Framework-agnostic integration for any web application.

The core entry point (`@facesmash/sdk`) provides a framework-agnostic client that works with vanilla JS, Vue, Svelte, Angular, or any other framework. No React dependency is required.
## Installation

```bash
npm install @facesmash/sdk
```

## FaceSmashClient API
The SDK exposes two ways to create a client:

```ts
// Factory function (recommended)
import { createFaceSmash } from '@facesmash/sdk';

const client = createFaceSmash(config);
```

```ts
// Class constructor (equivalent)
import { FaceSmashClient } from '@facesmash/sdk';

const client = new FaceSmashClient(config);
```

Both produce an identical `FaceSmashClient` instance. The factory function is simply a convenience wrapper.
### client.init(onProgress?)

Initializes the TF.js WebGL backend and loads all five neural networks. Must be called once before any face operations. Subsequent calls are no-ops that resolve to `true` immediately.

```ts
init(onProgress?: (progress: number) => void): Promise<boolean>
```

| Parameter | Type | Description |
|---|---|---|
| `onProgress` | `(progress: number) => void` | Optional callback fired with 0–100 progress values |

**Returns:** `true` if models loaded successfully, `false` on failure.
```ts
import { createFaceSmash } from '@facesmash/sdk';

const client = createFaceSmash({
  apiUrl: 'https://api.facesmash.app',
  debug: true,
});

const success = await client.init((progress) => {
  document.getElementById('loader').textContent = `Loading: ${progress}%`;
});

if (!success) {
  console.error('Failed to load models — check WebGL support');
}
```

**What happens during init:**

- The TF.js backend is set to `webgl` with optimization flags (`CANVAS2D_WILL_READ_FREQUENTLY`, `WEBGL_EXP_CONV`)
- Five models load in parallel from the configured `modelUrl`:
  - `ssdMobilenetv1` — primary face detector (~5.4 MB)
  - `tinyFaceDetector` — fallback detector (~190 KB)
  - `faceLandmark68Net` — landmark detection (~350 KB)
  - `faceRecognitionNet` — descriptor extraction (~6.2 MB)
  - `faceExpressionNet` — expression classification (~310 KB)
- Progress jumps from 0 → 10 (TF.js init) → 100 (all models loaded)
### client.isReady

```ts
readonly isReady: boolean
```

Returns `true` after `init()` completes successfully. Use this to gate UI interactions:

```ts
if (!client.isReady) {
  alert('Face recognition is still loading');
  return;
}
```

### client.login(images)
Authenticate a user by comparing captured face images against all registered profiles.
```ts
login(images: string[]): Promise<LoginResult>
```

| Parameter | Type | Description |
|---|---|---|
| `images` | `string[]` | Array of base64 data URLs (`data:image/jpeg;base64,...`) captured from the webcam |

**Returns:** `LoginResult`
```ts
interface LoginResult {
  success: boolean;
  user?: UserProfile;  // Present on success
  similarity?: number; // 0–1 match score (present on success or partial match)
  error?: string;      // Present on failure
}

interface UserProfile {
  id: string;               // PocketBase record ID (e.g., "a1b2c3d4e5f6")
  name: string;             // Display name
  email: string;            // Email address
  face_embedding: number[]; // 128-dimensional descriptor as a plain array
  created: string;          // ISO date (e.g., "2026-03-07 20:15:00.000Z")
  updated: string;          // ISO date
}
```

**Login flow in detail:**
- Each image is analyzed with `analyzeFace()` — the highest quality non-rejected result is selected
- If no face is detected or quality is below `minQualityScore`, login fails immediately
- All `user_profiles` are fetched from PocketBase
- For each profile:
  - `enhancedMatch()` compares the new descriptor against the stored `face_embedding`
  - If the profile has `face_templates`, `multiTemplateMatch()` is also run; the better similarity wins
  - The threshold adapts based on lighting score (more lenient in poor lighting)
- If a match is found, the SDK:
  - Creates a `sign_in_logs` entry
  - Creates a `face_scans` entry
  - If quality > 0.5: updates the stored embedding via weighted average (adaptive learning)
  - If quality > 0.6: stores a new face template (evicting the oldest if at `maxTemplatesPerUser`)
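The adaptive-learning update (quality > 0.5) can be pictured as a simple weighted blend. This is an illustrative sketch only: `blendEmbedding` is a hypothetical helper, and the normalized weight `w` in [0, 1] is an assumption — the SDK derives its actual weighting via `calculateLearningWeight`.

```typescript
// Sketch: blend a fresh descriptor into the stored embedding.
// w = 0 keeps the stored embedding; w = 1 replaces it entirely.
function blendEmbedding(stored: number[], fresh: Float32Array, w: number): number[] {
  return stored.map((v, i) => v * (1 - w) + fresh[i] * w);
}
```

Higher-quality scans would use a larger `w`, letting the stored embedding track gradual changes in appearance without being dominated by any single scan.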
```ts
const result = await client.login(images);

if (result.success) {
  console.log('Welcome back,', result.user.name);
  console.log('Email:', result.user.email);
  console.log('User ID:', result.user.id);
  console.log('Similarity:', result.similarity); // e.g., 0.78
} else {
  console.error('Login failed:', result.error);
  // Possible errors:
  // - "No face detected in any image"
  // - "Face quality too low. Improve lighting and face the camera directly."
  // - "No registered users found"
  // - "Face not recognized."
  // - "Face partially matched but did not meet security threshold."
}
```

### client.register(name, images, email?)
Register a new user with facial biometrics.
```ts
register(name: string, images: string[], email?: string): Promise<RegisterResult>
```

| Parameter | Type | Description |
|---|---|---|
| `name` | `string` | Display name for the new user |
| `images` | `string[]` | Array of base64 data URLs |
| `email` | `string` (optional) | Email address. If omitted, auto-generated as `name.lowercase@facesmash.app` |

**Returns:** `RegisterResult`
```ts
interface RegisterResult {
  success: boolean;
  user?: UserProfile; // Present on success
  error?: string;     // Present on failure
}
```

**Registration flow in detail:**
- Each image is analyzed; the highest quality result is selected
- Quality must meet `minQualityScore` (default: 0.2)
- All existing `user_profiles` are checked for duplicate faces (similarity ≥ 0.75)
- If no duplicate is found, the SDK creates:
  - A `user_profiles` record with `name`, `email`, and `face_embedding`
  - A `face_templates` record labeled `"registration"` with the descriptor and quality score
  - A `face_scans` record of type `"registration"`
```ts
const result = await client.register('Jane Doe', images, 'jane@example.com');

if (result.success) {
  console.log('Registered:', result.user.name);
  console.log('User ID:', result.user.id);
  console.log('Email:', result.user.email);
} else {
  console.error('Registration failed:', result.error);
  // Possible errors:
  // - "No face detected in any image"
  // - "Face quality too low for registration."
  // - "This face is already registered to John Doe"
}
```

### client.analyzeFace(imageData)
Analyze a single image for face quality without performing login or registration. Useful for building preview UIs or quality feedback indicators.
```ts
analyzeFace(imageData: string): Promise<FaceAnalysis | null>
```

| Parameter | Type | Description |
|---|---|---|
| `imageData` | `string` | Base64 data URL of a face image |

**Returns:** a `FaceAnalysis` object, or `null` if no face is detected. Also emits `face-detected` or `face-lost` events.
```ts
interface FaceAnalysis {
  descriptor: Float32Array;           // Raw 128-D face embedding
  normalizedDescriptor: Float32Array; // L2-normalized version
  confidence: number;                 // SSD detection confidence (0–1)
  qualityScore: number;               // Composite quality (0–1)
  lightingScore: number;              // Lighting quality (0–1)
  headPose: HeadPose;                 // Yaw, pitch, roll, isFrontal
  faceSizeCheck: FaceSizeCheck;       // Size validation
  eyeAspectRatio: number;             // Eye openness (~0.2–0.35 for open eyes)
  rejectionReason?: string;           // Present if face should not be used
}

interface HeadPose {
  yaw: number;        // Left/right (-1 to 1, 0 = center)
  pitch: number;      // Up/down (-1 to 1, 0 = center)
  roll: number;       // Tilt (radians)
  isFrontal: boolean; // true if |yaw| < 0.35, |pitch| < 0.4, |roll| < 0.25
}

interface FaceSizeCheck {
  isValid: boolean; // true if face is 2–65% of frame and >= 80×80px
  ratio: number;    // face area / frame area
  reason?: string;  // "Face too far from camera", "Face too close", etc.
}
```

**Quality score calculation:**
The quality score is a composite of several factors:

- Starts at the raw SSD detection confidence (0–1)
- Multiplied by a lighting factor: `0.7 + lightingScore * 0.3`
- Multiplied by a size factor: `0.8 + sizeRatio * 0.2` (where `sizeRatio` normalizes to 30% max coverage)
- Multiplied by a pose penalty if the head is not frontal: `max(0.5, 1 - (|yaw| + |pitch|) * 0.3)`
- Clamped to [0, 1]
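The composite above can be sketched as a standalone function. This is an illustrative reconstruction from the listed factors, not the SDK's internal code; in particular, capping the size ratio at 0.3 for normalization is an assumption based on the "30% max coverage" note.

```typescript
interface PoseInput {
  yaw: number;
  pitch: number;
  isFrontal: boolean;
}

// Illustrative reconstruction of the composite quality score described above.
function compositeQuality(
  confidence: number,    // raw SSD detection confidence, 0–1
  lightingScore: number, // lighting quality, 0–1
  sizeRatio: number,     // face area / frame area
  pose: PoseInput
): number {
  let q = confidence;
  q *= 0.7 + lightingScore * 0.3;                // lighting factor
  q *= 0.8 + Math.min(sizeRatio / 0.3, 1) * 0.2; // size factor (normalized to 30% coverage)
  if (!pose.isFrontal) {
    // pose penalty, floored at 0.5
    q *= Math.max(0.5, 1 - (Math.abs(pose.yaw) + Math.abs(pose.pitch)) * 0.3);
  }
  return Math.min(1, Math.max(0, q));            // clamp to [0, 1]
}
```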
```ts
const analysis = await client.analyzeFace(base64Image);

if (!analysis) {
  console.log('No face detected in image');
} else if (analysis.rejectionReason) {
  console.log('Face rejected:', analysis.rejectionReason);
} else {
  console.log('Quality:', (analysis.qualityScore * 100).toFixed(0) + '%');
  console.log('Confidence:', (analysis.confidence * 100).toFixed(0) + '%');
  console.log('Lighting:', (analysis.lightingScore * 100).toFixed(0) + '%');
  console.log('Frontal:', analysis.headPose.isFrontal);
  console.log('Eyes open:', analysis.eyeAspectRatio > 0.2);
  console.log('Face size valid:', analysis.faceSizeCheck.isValid);
}
```

### client.on(listener)
Subscribe to all SDK events. Returns an unsubscribe function.
```ts
on(listener: FaceSmashEventListener): () => void
```

You can register multiple listeners. Listener errors are caught internally and do not break the SDK.

```ts
const unsub = client.on((event) => {
  console.log('[FaceSmash]', event.type, event);
});

// Later: stop listening
unsub();
```

### client.config
Access the resolved configuration (all defaults applied):
```ts
readonly config: ResolvedConfig
```

```ts
console.log(client.config.apiUrl);                 // "https://api.facesmash.app"
console.log(client.config.modelUrl);               // "https://cdn.jsdelivr.net/npm/@vladmandic/face-api/model"
console.log(client.config.matchThreshold);         // 0.45
console.log(client.config.minQualityScore);        // 0.2
console.log(client.config.minDetectionConfidence); // 0.3
console.log(client.config.maxTemplatesPerUser);    // 10
console.log(client.config.debug);                  // false
```

### client.pb
Direct access to the underlying PocketBase client instance. Use this for custom queries against the database.
```ts
readonly pb: PocketBase
```

```ts
// Query all registered users
const users = await client.pb.collection('user_profiles').getFullList();
console.log('Registered users:', users.length);

// Get a specific user's templates
const templates = await client.pb.collection('face_templates').getList(1, 50, {
  filter: 'user_email="jane@example.com"',
  sort: '-quality_score',
});
console.log('Templates:', templates.items.length);

// Get recent sign-in logs
const logs = await client.pb.collection('sign_in_logs').getList(1, 20, {
  sort: '-created',
});
```

## Event System
The SDK emits typed events for every significant action. Events are delivered synchronously to all registered listeners.
### All Event Types

| Event Type | Payload | When |
|---|---|---|
| `models-loading` | `{ progress: number }` | During `init()` — progress 0–100 |
| `models-loaded` | `{}` | Models loaded successfully |
| `models-error` | `{ error: string }` | Model loading failed |
| `face-detected` | `{ analysis: FaceAnalysis }` | `analyzeFace()` found a face |
| `face-lost` | `{}` | `analyzeFace()` found no face |
| `login-start` | `{}` | `login()` called |
| `login-success` | `{ user: UserProfile, similarity: number }` | Match found |
| `login-failed` | `{ error: string, bestSimilarity?: number }` | No match found |
| `register-start` | `{}` | `register()` called |
| `register-success` | `{ user: UserProfile }` | New user created |
| `register-failed` | `{ error: string }` | Registration failed |
### TypeScript Event Type

```ts
type FaceSmashEvent =
  | { type: 'models-loading'; progress: number }
  | { type: 'models-loaded' }
  | { type: 'models-error'; error: string }
  | { type: 'face-detected'; analysis: FaceAnalysis }
  | { type: 'face-lost' }
  | { type: 'login-start' }
  | { type: 'login-success'; user: UserProfile; similarity: number }
  | { type: 'login-failed'; error: string; bestSimilarity?: number }
  | { type: 'register-start' }
  | { type: 'register-success'; user: UserProfile }
  | { type: 'register-failed'; error: string };

type FaceSmashEventListener = (event: FaceSmashEvent) => void;
```

### Example: Progress Bar + Status Updates
```ts
const statusEl = document.getElementById('status');
const progressEl = document.getElementById('progress');

client.on((event) => {
  switch (event.type) {
    case 'models-loading':
      progressEl.style.width = `${event.progress}%`;
      statusEl.textContent = 'Loading face recognition...';
      break;
    case 'models-loaded':
      statusEl.textContent = 'Ready — look at the camera';
      break;
    case 'models-error':
      statusEl.textContent = `Error: ${event.error}`;
      break;
    case 'login-start':
      statusEl.textContent = 'Analyzing your face...';
      break;
    case 'login-success':
      statusEl.textContent = `Welcome back, ${event.user.name}! (${(event.similarity * 100).toFixed(0)}% match)`;
      break;
    case 'login-failed':
      statusEl.textContent = event.bestSimilarity
        ? `Not recognized (best match: ${(event.bestSimilarity * 100).toFixed(0)}%)`
        : `Not recognized: ${event.error}`;
      break;
    case 'register-success':
      statusEl.textContent = `Registered as ${event.user.name}!`;
      break;
    case 'register-failed':
      statusEl.textContent = `Registration failed: ${event.error}`;
      break;
  }
});
```

## Low-Level API
For full control over the face recognition pipeline, import individual detection and matching functions. These are the same functions the FaceSmashClient uses internally.
### Detection Functions

```ts
import {
  loadModels,
  areModelsLoaded,
  extractDescriptor,
  analyzeFace,
  processImages,
  normalizeDescriptor,
} from '@facesmash/sdk';
```

#### loadModels(config, onProgress?)
Load neural networks directly (without creating a client).
```ts
loadModels(config: ResolvedConfig, onProgress?: (p: number) => void): Promise<boolean>
```

```ts
import { loadModels } from '@facesmash/sdk';

const config = {
  apiUrl: 'https://api.facesmash.app',
  modelUrl: 'https://cdn.jsdelivr.net/npm/@vladmandic/face-api/model',
  minDetectionConfidence: 0.3,
  matchThreshold: 0.45,
  minQualityScore: 0.2,
  maxTemplatesPerUser: 10,
  debug: false,
};

await loadModels(config, (p) => console.log(`${p}%`));
```

#### areModelsLoaded()
```ts
areModelsLoaded(): boolean
```

#### extractDescriptor(input, config)
Extract a 128-D face descriptor from any supported input.
```ts
extractDescriptor(
  input: string | HTMLVideoElement | HTMLCanvasElement | HTMLImageElement,
  config: ResolvedConfig
): Promise<Float32Array | null>
```

Uses SSD MobileNet as the primary detector and falls back to TinyFaceDetector if SSD finds nothing.
```ts
import { extractDescriptor } from '@facesmash/sdk';

// From a base64 string
const desc1 = await extractDescriptor('data:image/jpeg;base64,...', config);

// From a video element (live webcam)
const desc2 = await extractDescriptor(videoElement, config);

// From a canvas
const desc3 = await extractDescriptor(canvasElement, config);

// Each call returns Float32Array(128), or null if no face is found
```

#### analyzeFace(imageData, config)
Full quality analysis (same function client.analyzeFace() uses internally).
```ts
analyzeFace(imageData: string, config: ResolvedConfig): Promise<FaceAnalysis | null>
```

#### processImages(images, config)
Extract descriptors from multiple images and average them.
```ts
processImages(images: string[], config: ResolvedConfig): Promise<Float32Array | null>
```

If a single image is provided, its descriptor is returned directly. For multiple images, all successfully extracted descriptors are averaged element-wise.
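The merge step can be sketched as follows. This is an illustrative sketch of the averaging behavior described above; `averageDescriptors` is a hypothetical helper, not an SDK export.

```typescript
// Average several 128-D descriptors element-wise (sketch of the merge step).
function averageDescriptors(descriptors: Float32Array[]): Float32Array | null {
  if (descriptors.length === 0) return null;
  if (descriptors.length === 1) return descriptors[0]; // single image: returned directly

  const out = new Float32Array(descriptors[0].length);
  for (const d of descriptors) {
    for (let i = 0; i < out.length; i++) out[i] += d[i];
  }
  for (let i = 0; i < out.length; i++) out[i] /= descriptors.length;
  return out;
}
```

Averaging several captures smooths out per-frame noise (blinks, motion blur), which is why the high-level `login()` and `register()` flows ask for multiple images.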
#### normalizeDescriptor(descriptor)

L2-normalize a face descriptor.

```ts
normalizeDescriptor(descriptor: Float32Array): Float32Array
```

### Matching Functions
```ts
import {
  calculateSimilarity,
  facesMatch,
  enhancedMatch,
  multiTemplateMatch,
  calculateLearningWeight,
} from '@facesmash/sdk';
```

#### calculateSimilarity(d1, d2)
```ts
calculateSimilarity(d1: Float32Array, d2: Float32Array): number
```

Returns `1 - euclideanDistance(d1, d2)`. Range: 0 (completely different) to 1 (identical).

```ts
const similarity = calculateSimilarity(descriptor1, descriptor2);
console.log(`${(similarity * 100).toFixed(1)}% similar`);
```

#### facesMatch(d1, d2, threshold?)
```ts
facesMatch(d1: Float32Array, d2: Float32Array, threshold?: number): boolean
```

A simple boolean check: `calculateSimilarity(d1, d2) >= threshold`. Default threshold: 0.45.
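Given the documented formula, the two functions behave like this sketch (an illustrative reimplementation for clarity, not the SDK source):

```typescript
// similarity = 1 - Euclidean distance between the two descriptors
function euclideanDistance(a: Float32Array, b: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const diff = a[i] - b[i];
    sum += diff * diff;
  }
  return Math.sqrt(sum);
}

function similaritySketch(d1: Float32Array, d2: Float32Array): number {
  return 1 - euclideanDistance(d1, d2);
}

// Boolean check against a threshold (documented default: 0.45)
function matchSketch(d1: Float32Array, d2: Float32Array, threshold = 0.45): boolean {
  return similaritySketch(d1, d2) >= threshold;
}
```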
#### enhancedMatch(d1, d2, baseThreshold?, confidenceBoost?, lightingScore?)

Adaptive matching that adjusts the threshold based on conditions.

```ts
enhancedMatch(
  d1: Float32Array,
  d2: Float32Array,
  baseThreshold?: number,   // default: 0.45
  confidenceBoost?: number, // default: 0
  lightingScore?: number    // default: 0.5
): MatchResult
```

**Threshold adaptation rules:**

- Poor lighting (score < 0.4): threshold drops by 0.05 (more lenient)
- Good lighting (score > 0.8): threshold increases by 0.02 (more strict)
- A confidence boost reduces the threshold by `confidenceBoost * 0.05`
- The threshold is clamped to a minimum of 0.35
```ts
interface MatchResult {
  isMatch: boolean;
  similarity: number;
  adaptedThreshold: number;
}
```

```ts
const result = enhancedMatch(newDescriptor, storedDescriptor, 0.45, 0, 0.7);
console.log(result);
// { isMatch: true, similarity: 0.72, adaptedThreshold: 0.47 }
```

#### multiTemplateMatch(descriptor, templates, baseThreshold, lightingScore?)
Match against multiple stored templates with quality-weighted averaging.
```ts
multiTemplateMatch(
  newDescriptor: Float32Array,
  templates: { descriptor: Float32Array; quality: number; weight: number }[],
  baseThreshold: number,
  lightingScore?: number
): MultiTemplateMatchResult
```

A match is declared if the best similarity meets the threshold, OR if 60%+ of the templates match.
```ts
interface MultiTemplateMatchResult {
  isMatch: boolean;
  bestSimilarity: number;
  avgSimilarity: number; // Quality-weighted average
  matchCount: number;    // How many templates matched
}
```

```ts
const templates = [
  { descriptor: template1, quality: 0.9, weight: 1.5 },
  { descriptor: template2, quality: 0.7, weight: 1.0 },
  { descriptor: template3, quality: 0.85, weight: 1.2 },
];

const result = multiTemplateMatch(newDescriptor, templates, 0.45, 0.8);
console.log(result);
// { isMatch: true, bestSimilarity: 0.78, avgSimilarity: 0.71, matchCount: 3 }
```

#### calculateLearningWeight(qualityScore, lightingScore, confidence)
Calculate a weight for adaptive learning (used internally to decide how much a new scan should influence the stored embedding).
```ts
calculateLearningWeight(
  qualityScore: number,
  lightingScore: number,
  confidence: number
): number // 0.1 to 3.0
```

**Weight factors:**

- Quality > 0.8 → ×1.5 | Quality > 0.6 → ×1.2 | Quality < 0.4 → ×0.5
- Lighting > 0.7 → ×1.3 | Lighting < 0.4 → ×0.7
- Confidence > 0.8 → ×1.2 | Confidence < 0.5 → ×0.8
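Combining those factors, the calculation behaves roughly like this sketch. The base weight of 1.0, the if/else ordering, and the final clamp are assumptions drawn from the table and the documented 0.1–3.0 range, not confirmed SDK internals.

```typescript
// Sketch of the adaptive-learning weight from the documented factor table.
function learningWeightSketch(quality: number, lighting: number, confidence: number): number {
  let w = 1.0; // assumed base weight

  if (quality > 0.8) w *= 1.5;
  else if (quality > 0.6) w *= 1.2;
  else if (quality < 0.4) w *= 0.5;

  if (lighting > 0.7) w *= 1.3;
  else if (lighting < 0.4) w *= 0.7;

  if (confidence > 0.8) w *= 1.2;
  else if (confidence < 0.5) w *= 0.8;

  return Math.min(3.0, Math.max(0.1, w)); // clamp to the documented 0.1–3.0 range
}
```

A sharp, well-lit, confident scan thus influences the stored embedding several times more than a dim, low-quality one.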
## Capturing Images from Webcam

The SDK expects base64 data URLs (`data:image/jpeg;base64,...`). Here are complete examples for capturing webcam frames.
Minimal Capture Function
async function captureFromWebcam(count = 3, delay = 500) {
const stream = await navigator.mediaDevices.getUserMedia({
video: { width: 640, height: 480, facingMode: 'user' },
});
const video = document.createElement('video');
video.srcObject = stream;
video.autoplay = true;
video.playsInline = true;
await new Promise((r) => video.addEventListener('playing', r, { once: true }));
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
const ctx = canvas.getContext('2d');
const images = [];
for (let i = 0; i < count; i++) {
ctx.drawImage(video, 0, 0);
images.push(canvas.toDataURL('image/jpeg', 0.9));
if (i < count - 1) await new Promise((r) => setTimeout(r, delay));
}
stream.getTracks().forEach((t) => t.stop());
return images;
}Full Login Flow with Visible Camera
```html
<div id="camera-container">
  <video id="camera" autoplay playsinline muted></video>
  <div id="status">Initializing...</div>
  <div id="progress-bar" style="width: 0%; height: 4px; background: #10b981;"></div>
  <button id="login-btn" disabled>Sign In</button>
</div>

<script type="module">
  import { createFaceSmash } from '@facesmash/sdk';

  const client = createFaceSmash({
    apiUrl: 'https://api.facesmash.app',
    debug: true,
  });

  const video = document.getElementById('camera');
  const status = document.getElementById('status');
  const progress = document.getElementById('progress-bar');
  const loginBtn = document.getElementById('login-btn');

  // Track events
  client.on((event) => {
    switch (event.type) {
      case 'models-loading':
        progress.style.width = `${event.progress}%`;
        break;
      case 'models-loaded':
        status.textContent = 'Ready — click Sign In';
        break;
      case 'login-start':
        status.textContent = 'Scanning...';
        loginBtn.disabled = true;
        break;
      case 'login-success':
        status.textContent = `Welcome, ${event.user.name}!`;
        break;
      case 'login-failed':
        status.textContent = `Failed: ${event.error}`;
        loginBtn.disabled = false;
        break;
    }
  });

  // Init models
  await client.init();

  // Start camera
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { width: 640, height: 480, facingMode: 'user' },
  });
  video.srcObject = stream;
  loginBtn.disabled = false;

  // Capture + login on click
  loginBtn.addEventListener('click', async () => {
    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    const ctx = canvas.getContext('2d');

    const images = [];
    for (let i = 0; i < 3; i++) {
      ctx.drawImage(video, 0, 0);
      images.push(canvas.toDataURL('image/jpeg', 0.9));
      await new Promise((r) => setTimeout(r, 500));
    }

    const result = await client.login(images);
    if (result.success) {
      // Redirect or update UI
      stream.getTracks().forEach((t) => t.stop());
    }
  });
</script>
```

## Error Handling
All client methods catch errors internally and return structured results rather than throwing. The one exception: calling a face method before `init()` throws an error:

```ts
try {
  // This throws if models aren't loaded
  const result = await client.login(images);
} catch (err) {
  // "FaceSmash models not loaded. Call client.init() first."
  console.error(err.message);
}
```

**Best practice:** Always check `client.isReady` before calling `login()`, `register()`, or `analyzeFace()`.
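That check is easy to centralize in a small guard. This is a sketch: `safeLogin` and the `FaceClient` structural type are hypothetical names, not SDK exports.

```typescript
// Minimal structural type for the parts of the client this guard uses (sketch).
type FaceClient = {
  isReady: boolean;
  login(images: string[]): Promise<{ success: boolean; error?: string }>;
};

// Hypothetical wrapper: never reaches login() before the models are ready.
async function safeLogin(client: FaceClient, images: string[]) {
  if (!client.isReady) {
    return { success: false, error: 'Models still loading; try again shortly' };
  }
  return client.login(images);
}
```

Because the guard returns the same structured shape as `login()`, calling code handles both cases with one branch instead of a try/catch.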
## Using with Other Frameworks

### Vue 3

```vue
<script setup lang="ts">
import { ref, onMounted, onUnmounted } from 'vue';
import { createFaceSmash } from '@facesmash/sdk';

const client = createFaceSmash({ apiUrl: 'https://api.facesmash.app' });
const isReady = ref(false);
const progress = ref(0);

onMounted(async () => {
  await client.init((p) => { progress.value = p; });
  isReady.value = true;
});

onUnmounted(() => {
  // Cleanup if needed
});
</script>
```

### Svelte
```svelte
<script lang="ts">
  import { onMount } from 'svelte';
  import { createFaceSmash } from '@facesmash/sdk';

  const client = createFaceSmash({ apiUrl: 'https://api.facesmash.app' });
  let isReady = false;
  let progress = 0;

  onMount(async () => {
    await client.init((p) => { progress = p; });
    isReady = true;
  });
</script>
```

### Angular
```ts
import { Component, OnInit } from '@angular/core';
import { createFaceSmash } from '@facesmash/sdk';

@Component({ selector: 'app-face-auth', template: '...' })
export class FaceAuthComponent implements OnInit {
  client = createFaceSmash({ apiUrl: 'https://api.facesmash.app' });
  isReady = false;
  progress = 0;

  async ngOnInit() {
    await this.client.init((p) => { this.progress = p; });
    this.isReady = true;
  }
}
```