# Quickstart

Get FaceSmash running in your app in 5 minutes.
Add face login to your web application in under 5 minutes. This guide walks you through installation, setup, and your first working face authentication flow.
## Prerequisites
| Requirement | Details |
|---|---|
| Browser | Chrome 80+, Firefox 78+, Safari 14+, or Edge 80+ with WebGL |
| Camera | Webcam, phone camera, or tablet camera |
| HTTPS | Required for getUserMedia — localhost works for development |
| Node.js | 18+ (for npm install) |
| React (optional) | 17+ if using the React bindings |
## Step 1: Install the SDK
```bash
npm install @facesmash/sdk
```

This installs both the core client and the React bindings. The package is ~200 KB (excluding peer dependencies). The neural network models (~12.5 MB) are loaded from a CDN at runtime and cached by the browser.
Using yarn or pnpm? Those work too: `yarn add @facesmash/sdk` or `pnpm add @facesmash/sdk`.
## Step 2: Choose Your Integration
### Option A: React (Recommended)
The fastest path. Three components give you a complete face auth flow:
```tsx
import { useState } from 'react';
import { FaceSmashProvider, FaceLogin, FaceRegister } from '@facesmash/sdk/react';

function App() {
  const [user, setUser] = useState<{ name: string; email: string } | null>(null);
  const [mode, setMode] = useState<'login' | 'register'>('login');

  if (user) {
    return (
      <div>
        <h1>Welcome, {user.name}!</h1>
        <p>Email: {user.email}</p>
        <button onClick={() => setUser(null)}>Sign Out</button>
      </div>
    );
  }

  return (
    <FaceSmashProvider
      config={{ apiUrl: 'https://api.facesmash.app' }}
      onReady={() => console.log('Face recognition models loaded')}
      onError={(err) => console.error('Failed to load models:', err)}
    >
      <div style={{ maxWidth: 480, margin: '0 auto', padding: 20 }}>
        <h1>FaceSmash Demo</h1>

        {/* Toggle between login and register */}
        <div style={{ display: 'flex', gap: 8, marginBottom: 16 }}>
          <button
            onClick={() => setMode('login')}
            style={{ fontWeight: mode === 'login' ? 'bold' : 'normal' }}
          >
            Sign In
          </button>
          <button
            onClick={() => setMode('register')}
            style={{ fontWeight: mode === 'register' ? 'bold' : 'normal' }}
          >
            Register
          </button>
        </div>

        {mode === 'login' ? (
          <FaceLogin
            onResult={(result) => {
              if (result.success) {
                setUser(result.user);
              } else {
                alert(`Login failed: ${result.error}`);
              }
            }}
            className="w-full h-72 rounded-xl overflow-hidden bg-black"
          />
        ) : (
          <FaceRegister
            name="New User"
            onResult={(result) => {
              if (result.success) {
                alert(`Registered as ${result.user.name}! Now sign in.`);
                setMode('login');
              } else {
                alert(`Registration failed: ${result.error}`);
              }
            }}
            className="w-full h-72 rounded-xl overflow-hidden bg-black"
          />
        )}
      </div>
    </FaceSmashProvider>
  );
}

export default App;
```

What happens:
- `<FaceSmashProvider>` creates the SDK client and loads 5 neural networks (~12.5 MB, cached after first load)
- `<FaceLogin>` opens the webcam, waits 2 seconds, captures 3 frames, and runs face matching against all registered users
- `<FaceRegister>` opens the webcam, captures 3 frames, checks for duplicate faces, and creates a new user profile
- Results are delivered via `onResult` callbacks with typed `LoginResult` or `RegisterResult` objects
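For reference, the result objects passed to `onResult` might be shaped roughly like this. This is a sketch inferred from the examples in this guide; any field beyond `success`, `user`, `similarity`, and `error` (and the exact optionality of each) is an assumption, not the SDK's published types.

```typescript
// Hypothetical shapes, inferred from the quickstart examples — check the
// SDK's exported types for the authoritative definitions.
interface FaceUser {
  name: string;
  email: string;
}

interface LoginResult {
  success: boolean;
  user?: FaceUser;      // present on success
  similarity?: number;  // match score, as used in the vanilla example
  error?: string;       // present on failure
}

interface RegisterResult {
  success: boolean;
  user?: FaceUser;
  error?: string;
}
```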
### Option B: Vanilla JavaScript
Works with any framework or no framework at all:
```html
<!DOCTYPE html>
<html>
  <head>
    <title>FaceSmash Demo</title>
  </head>
  <body>
    <h1>FaceSmash Login</h1>
    <video id="camera" autoplay playsinline muted width="640" height="480"></video>
    <p id="status">Loading face recognition...</p>
    <button id="login-btn" disabled>Sign In with Face</button>
    <button id="register-btn" disabled>Register New Face</button>

    <script type="module">
      // Bare package imports like this need a bundler (e.g. Vite) or an import map.
      import { createFaceSmash } from '@facesmash/sdk';

      const client = createFaceSmash({
        apiUrl: 'https://api.facesmash.app',
        debug: true,
      });

      const video = document.getElementById('camera');
      const status = document.getElementById('status');
      const loginBtn = document.getElementById('login-btn');
      const registerBtn = document.getElementById('register-btn');

      // Step 1: Load models
      const success = await client.init((progress) => {
        status.textContent = `Loading face recognition: ${progress}%`;
      });
      if (!success) {
        status.textContent = 'Failed to load — check WebGL support';
        throw new Error('Model loading failed');
      }

      // Step 2: Start camera
      const stream = await navigator.mediaDevices.getUserMedia({
        video: { width: 640, height: 480, facingMode: 'user' },
      });
      video.srcObject = stream;
      status.textContent = 'Ready — click a button below';
      loginBtn.disabled = false;
      registerBtn.disabled = false;

      // Helper: capture frames from the webcam as JPEG data URLs
      async function captureFrames(count = 3, delay = 500) {
        const canvas = document.createElement('canvas');
        canvas.width = video.videoWidth;
        canvas.height = video.videoHeight;
        const ctx = canvas.getContext('2d');
        const images = [];
        for (let i = 0; i < count; i++) {
          ctx.drawImage(video, 0, 0);
          images.push(canvas.toDataURL('image/jpeg', 0.9));
          if (i < count - 1) await new Promise(r => setTimeout(r, delay));
        }
        return images;
      }

      // Step 3: Login
      loginBtn.addEventListener('click', async () => {
        status.textContent = 'Scanning your face...';
        loginBtn.disabled = true;
        const images = await captureFrames();
        const result = await client.login(images);
        if (result.success) {
          status.textContent = `Welcome back, ${result.user.name}! (${(result.similarity * 100).toFixed(0)}% match)`;
        } else {
          status.textContent = `Login failed: ${result.error}`;
        }
        loginBtn.disabled = false;
      });

      // Step 4: Register
      registerBtn.addEventListener('click', async () => {
        const name = prompt('Enter your name:');
        if (!name) return;
        status.textContent = 'Capturing your face for registration...';
        registerBtn.disabled = true;
        const images = await captureFrames(5, 400);
        const result = await client.register(name, images);
        if (result.success) {
          status.textContent = `Registered as ${result.user.name}! You can now sign in.`;
        } else {
          status.textContent = `Registration failed: ${result.error}`;
        }
        registerBtn.disabled = false;
      });
    </script>
  </body>
</html>
```

## Step 3: Test It
1. Start your dev server — `npm run dev` (Vite, CRA, Next.js, etc.)
2. Open the page — Navigate to `http://localhost:5173` (or your dev server URL)
3. Allow camera access — The browser will prompt for webcam permission
4. Register a face — Click Register, enter a name, and look at the camera
5. Sign in — Click Sign In and look at the camera — authentication happens in 3–5 seconds
First load is slower. The SDK downloads ~12.5 MB of neural network weights on first use. Subsequent loads use the browser's HTTP cache and are much faster (typically 2–4 seconds).
## What Just Happened?
When you registered:
- The SDK captured 3 webcam frames at 500ms intervals
- Each frame was analyzed for face quality (lighting, pose, size, confidence)
- The best frame's 128-dimensional face descriptor was extracted
- The SDK checked for duplicate faces among existing users
- A new `user_profiles` record was created in PocketBase with the face embedding
- An initial `face_templates` record was stored for future multi-template matching
When you logged in:
- The SDK captured 3 frames and picked the best-quality face
- It fetched all registered user profiles from PocketBase
- For each user, it compared the new face descriptor against the stored embedding
- If the user had multiple templates, multi-template matching was also performed
- The best match above the threshold (default: 0.45) was returned
- A `sign_in_logs` entry was created and the stored embedding was updated (adaptive learning)
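As a rough illustration of the matching step, the comparison can be sketched as a cosine similarity between descriptor vectors, checked against the default 0.45 threshold. This is not the SDK's internal code; the vector representation, the use of cosine similarity, and the helper names are assumptions for illustration.

```typescript
// Illustrative sketch only — not the SDK's actual matcher.
const THRESHOLD = 0.45; // default acceptance threshold mentioned above

// Cosine similarity between two descriptor vectors (e.g. 128-dimensional).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the best match above the threshold, or null if no user qualifies.
function bestMatch(
  probe: number[],
  users: { name: string; embedding: number[] }[],
): { name: string; similarity: number } | null {
  let best: { name: string; similarity: number } | null = null;
  for (const { name, embedding } of users) {
    const similarity = cosineSimilarity(probe, embedding);
    if (similarity >= THRESHOLD && (best === null || similarity > best.similarity)) {
      best = { name, similarity };
    }
  }
  return best;
}
```

Raising `THRESHOLD` trades convenience for stricter matching; the Configuration page covers how to tune the real setting.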
## Common Issues
### "No face detected"
- Face the camera directly — The SDK requires a frontal face
- Improve lighting — Avoid backlighting; face a light source
- Move closer — Your face should be at least 80×80 pixels in the frame
- Stay still — Quick movements cause motion blur
### "Face quality too low"
- Better lighting — The quality score factors in brightness and evenness
- Face the camera straight on — Head tilts reduce the quality score
- Open your eyes — Eye aspect ratio is part of the quality check
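To make the quality factors above concrete, here is a purely illustrative scoring function. The SDK's real formula is not documented here; the equal weighting, the brightness curve, and the reuse of the 80-pixel minimum from the previous section are all assumptions.

```typescript
// Illustrative only — not the SDK's actual quality formula.
interface FrameStats {
  brightness: number;  // mean pixel brightness, 0..1
  faceWidth: number;   // detected face box width in pixels
  faceHeight: number;  // detected face box height in pixels
  confidence: number;  // detector confidence, 0..1
}

// Combine brightness, face size, and confidence into a 0..1 quality score.
function frameQuality({ brightness, faceWidth, faceHeight, confidence }: FrameStats): number {
  // Mid-range brightness scores highest; dark or blown-out frames score low.
  const brightnessScore = 1 - Math.min(1, Math.abs(brightness - 0.5) * 2);
  // Faces under 80 px on either side score 0 (matches the minimum above).
  const side = Math.min(faceWidth, faceHeight);
  const sizeScore = side < 80 ? 0 : Math.min(1, side / 160);
  return (brightnessScore + sizeScore + confidence) / 3;
}
```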
### Camera not working
- HTTPS required — `getUserMedia` only works on `https://` or `localhost`
- Grant permission — Click "Allow" when the browser asks for camera access
- Check other apps — Another app might be using the camera
### Models loading slowly
- First load: ~12.5 MB download (2–10 seconds depending on connection)
- Subsequent loads: Cached by browser (2–4 seconds to initialize)
- Tip: Show a loading progress bar using `client.init(progress => ...)` or `useFaceSmash().loadProgress`
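On flaky connections, it can also help to retry model loading a few times before giving up. The wrapper below is a sketch; it assumes only that `init` takes a progress callback and resolves to a boolean, as in the vanilla example earlier.

```typescript
// Sketch: retry model loading with a short, growing delay between attempts.
// Assumes init(onProgress) resolves to true on success, false on failure.
async function initWithRetry(
  init: (onProgress: (p: number) => void) => Promise<boolean>,
  attempts = 3,
  onProgress: (p: number) => void = () => {},
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    if (await init(onProgress)) return true;
    // Wait 500 ms, then 1 s, then 1.5 s before the next attempt.
    await new Promise((resolve) => setTimeout(resolve, 500 * (i + 1)));
  }
  return false;
}
```

Usage with the vanilla client would look like `const ok = await initWithRetry((p) => client.init(p));`.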
## See the Full Flow

### Try the Live Demo
Visit facesmash.app to see FaceSmash in action with the full production UI:
- Click Register and follow the on-screen prompts
- Enter your name and look at the camera
- After registration, click Sign In — you'll be authenticated in seconds
- Check the Dashboard for login history, activity graphs, and face quality metrics
## Next Steps
- SDK Overview — Architecture, exports, neural networks, browser support
- React Components — `<FaceLogin>`, `<FaceRegister>`, provider, 4 hooks, full prop tables
- Vanilla JS — Complete client API, low-level utilities, Vue/Svelte/Angular examples
- Configuration — Every option explained, threshold tuning, event system, PocketBase schemas
- Security & Privacy — How biometric data is processed and stored