Database Configuration
HD Homey uses SQLite for data storage, managed by Drizzle ORM. This guide covers database setup, maintenance, backups, and troubleshooting.
Database Overview
Technology Stack
- Database: SQLite 3
- ORM: Drizzle ORM
- Migrations: Drizzle Kit
- Driver: better-sqlite3 (Node.js)
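To show how these pieces fit together, here is a minimal sketch that opens the SQLite file with better-sqlite3 and hands the connection to Drizzle. The environment variable, default path, and schema import follow this guide's conventions and are assumptions, not HD Homey's actual source.
// Minimal sketch: wiring better-sqlite3 and Drizzle ORM together.
// Paths and schema location are assumptions based on this guide, not HD Homey's real code.
import Database from 'better-sqlite3';
import { drizzle } from 'drizzle-orm/better-sqlite3';
import * as schema from './src/lib/database/schema';

const dbPath = process.env.HD_HOMEY_DB_PATH ?? './data/db/hd_homey.db';
const sqlite = new Database(dbPath);            // opens (or creates) the single database file
export const db = drizzle(sqlite, { schema });  // typed query layer over the same connection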
Why SQLite?
HD Homey chose SQLite for several key advantages:
- ✅ Zero configuration - No separate database server required
- ✅ Single file - Easy backups and migrations
- ✅ Excellent performance - Perfect for <100 concurrent users
- ✅ ACID compliant - Data integrity guarantees
- ✅ Portable - Works on any platform
- ✅ Reliable - Battle-tested for decades
Database Location
Default Path
The database file path is specified by HD_HOMEY_DB_PATH:
Default value: ./data/db/hd_homey.db
Path Change in v1.0.0-beta.3+
Starting in v1.0.0-beta.3, HD_HOMEY_DB_PATH specifies the full path to the database file (including filename), not just the directory. The parent directory is created automatically if it doesn't exist.
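A rough sketch of what that behaviour implies, using Node's fs and path modules (the helper function is illustrative, not HD Homey's actual code):
// Illustrative: treat HD_HOMEY_DB_PATH as a full file path and create its parent directory.
import { mkdirSync } from 'node:fs';
import { dirname, resolve } from 'node:path';

function resolveDbPath(): string {
  const dbPath = resolve(process.env.HD_HOMEY_DB_PATH ?? './data/db/hd_homey.db');
  mkdirSync(dirname(dbPath), { recursive: true }); // parent directory is created if missing
  return dbPath;
}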
Docker Volumes
Docker Compose automatically creates persistent volumes:
volumes:
- hd-homey-data:/app/data # Contains database
The database file is stored at:
/app/data/db/hd_homey.db
Custom Path
Set a custom database location via environment variable:
# .env
HD_HOMEY_DB_PATH=/custom/path/to/db/hd_homey.db
File Path Requirements
The database configuration requires:
- Full path including filename (e.g., /path/to/hd_homey.db)
- Parent directory must be writable by the HD Homey process
- Parent directory is created automatically if it doesn't exist
- File must persist across restarts (use Docker volumes)
- Regular backups recommended
Database Schema
Tables
HD Homey's database contains these tables:
Authentication Tables (Better-Auth)
user - User accounts with credentials and roles
- Fields: id, name, email, username, role, isActive, createdAt, etc.
session - User login sessions (JWT-based, this table is for future use)
- Fields: id, token, expiresAt, userId, ipAddress, userAgent
account - Authentication credentials and OAuth tokens
- Fields: id, accountId, providerId, password, accessToken, etc.
verification - Email verification and password reset tokens
- Fields: id, identifier, value, expiresAt
invitation - User invitation tokens
- Fields: id, email, role, token, expiresAt, invitedBy
Application Tables
tuners - HDHomeRun device configurations
- Fields: id, name, url, friendlyName, modelNumber, tunerCount, createdAt
channels - TV channel lineup
- Fields: id, tunerId, guideNumber, guideName, hd, url
settings - Application configuration
- Fields: key, value
Schema File
The complete schema is defined in:
src/lib/database/schema.ts
View the schema using Drizzle Studio (see Database Tools below).
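For orientation, a table definition in that file looks roughly like the sketch below. The column names follow the field lists above; the exact types, constraints, and column naming are assumptions.
// Rough Drizzle sketch of two of the application tables listed above.
// Exact types, defaults, and constraints live in src/lib/database/schema.ts and may differ.
import { sqliteTable, text, integer } from 'drizzle-orm/sqlite-core';

export const tuners = sqliteTable('tuners', {
  id: text('id').primaryKey(),
  name: text('name').notNull(),
  url: text('url').notNull(),
  friendlyName: text('friendly_name'),
  modelNumber: text('model_number'),
  tunerCount: integer('tuner_count'),
  createdAt: integer('created_at', { mode: 'timestamp' }),
});

export const channels = sqliteTable('channels', {
  id: text('id').primaryKey(),
  tunerId: text('tuner_id').references(() => tuners.id),
  guideNumber: text('guide_number'),
  guideName: text('guide_name'),
  hd: integer('hd', { mode: 'boolean' }),
  url: text('url'),
});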
Migrations
Automatic Migrations
HD Homey automatically runs database migrations on startup:
- Application starts
- Checks for pending migrations
- Applies migrations in order
- Continues startup
Docker: Migrations run automatically via docker-entrypoint.sh
From Source: Migrations run via npm run dev or npm start
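Under the hood this startup step corresponds to Drizzle's migrator API. A sketch of the pattern (the surrounding wiring is an assumption; only the migrations folder matches the layout shown below):
// Sketch: apply pending migrations on startup with Drizzle's better-sqlite3 migrator.
import Database from 'better-sqlite3';
import { drizzle } from 'drizzle-orm/better-sqlite3';
import { migrate } from 'drizzle-orm/better-sqlite3/migrator';

const sqlite = new Database(process.env.HD_HOMEY_DB_PATH ?? './data/db/hd_homey.db');
const db = drizzle(sqlite);

// Runs any SQL files in ./migrations not yet recorded in meta/_journal.json, in order;
// already-applied migrations are skipped, so this is safe to run on every startup.
migrate(db, { migrationsFolder: './migrations' });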
Migration Files
Migration SQL files are stored in:
migrations/
├── 0000_large_microchip.sql # Initial schema
├── 0001_plain_junta.sql # Better-Auth migration
└── meta/
├── _journal.json # Migration history
├── 0000_snapshot.json
└── 0001_snapshot.json
Manual Migration
To manually run migrations:
# Docker
docker exec hd-homey npm run db:migrate
# From source
npm run db:migrate
Creating New Migrations
If you're developing HD Homey and modifying the schema:
- Edit schema: Modify src/lib/database/schema.ts
- Generate migration: npm run db:generate
- Review migration: Check migrations/ for the new SQL file
- Apply migration: npm run db:migrate
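The db:generate and db:migrate scripts are driven by a Drizzle Kit config. For this guide's layout, a drizzle.config.ts would look roughly like the following hypothetical sketch; HD Homey's actual config may differ:
// Hypothetical drizzle.config.ts matching the paths used in this guide.
import { defineConfig } from 'drizzle-kit';

export default defineConfig({
  dialect: 'sqlite',
  schema: './src/lib/database/schema.ts', // table definitions
  out: './migrations',                    // generated SQL files and meta/ journal
  dbCredentials: {
    url: process.env.HD_HOMEY_DB_PATH ?? './data/db/hd_homey.db',
  },
});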
Schema Changes
Modifying the database schema requires understanding of SQL and database design. Always test schema changes in a development environment before applying to production.
Database Tools
Drizzle Studio
Drizzle Studio provides a web-based database browser:
# From source
npm run db:studio
# Opens at http://localhost:4983
Features:
- Browse all tables and data
- View relationships
- Execute queries (read-only recommended)
- Inspect schema
Docker Studio Access
Drizzle Studio requires filesystem access to the database. Run it from a source installation or use docker exec to access the database file directly.
SQLite CLI
Access the database directly using SQLite command-line:
# Docker
docker exec -it hd-homey sqlite3 /app/data/db/hd_homey.db
# From source
sqlite3 ./data/db/hd_homey.db
Common queries:
-- List all tables
.tables
-- View table schema
.schema user
-- Count users
SELECT COUNT(*) FROM user;
-- View tuners
SELECT id, name, url FROM tuners;
-- View channels by tuner
SELECT guideName, guideNumber FROM channels
WHERE tunerId = 'your-tuner-id';
Exit: Type .quit
Backups
Why Backup?
Regular backups protect against:
- Hardware failure
- Accidental data deletion
- Database corruption
- Version upgrade issues
Backup Methods
Method 1: File Copy (Recommended)
SQLite databases are single files - simply copy the file:
# Docker
docker exec hd-homey cp /app/data/db/hd_homey.db /app/data/db/hd_homey.db.backup
docker cp hd-homey:/app/data/db/hd_homey.db.backup ./backup/hd_homey-$(date +%Y%m%d).db
# From source
cp ./data/db/hd_homey.db ./backups/hd_homey-$(date +%Y%m%d).db
Method 2: SQLite Backup Command
Use SQLite's built-in backup:
# Docker
docker exec hd-homey sqlite3 /app/data/db/hd_homey.db ".backup /app/data/db/hd_homey.db.backup"
# From source
sqlite3 ./data/db/hd_homey.db ".backup ./backups/hd_homey-$(date +%Y%m%d).db"Method 3: Docker Volume Backup
Backup the entire data volume:
# Stop HD Homey
docker compose stop
# Backup volume
docker run --rm \
-v hd-homey_data:/data \
-v $(pwd)/backups:/backup \
alpine tar czf /backup/hd-homey-data-$(date +%Y%m%d).tar.gz /data
# Restart HD Homey
docker compose start
Automated Backups
Cron Job Example
#!/bin/bash
# /usr/local/bin/backup-hd-homey.sh
BACKUP_DIR="/backups/hd-homey"
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
# Backup database
docker exec hd-homey cp \
/app/data/db/hd_homey.db \
/app/data/db/hd_homey.db.backup
docker cp \
hd-homey:/app/data/db/hd_homey.db.backup \
"$BACKUP_DIR/hd_homey-$TIMESTAMP.db"
# Keep last 30 days of backups
find "$BACKUP_DIR" -name "hd_homey-*.db" -mtime +30 -delete
echo "Backup completed: $BACKUP_DIR/hd_homey-$TIMESTAMP.db"Add to crontab:
# Daily backup at 2 AM
0 2 * * * /usr/local/bin/backup-hd-homey.sh
Backup Verification
Verify backup integrity:
# Test backup can be opened
sqlite3 ./backups/hd_homey-20231115.db "PRAGMA integrity_check;"
# Expected output: ok
Restoring from Backup
Method 1: Replace Database File
# Stop HD Homey
docker compose stop
# Replace database
docker cp ./backups/hd_homey-20231115.db hd-homey:/app/data/db/hd_homey.db
# Start HD Homey
docker compose start
Method 2: Volume Restore
# Stop and remove container
docker compose down
# Remove current data volume
docker volume rm hd-homey_data
# Restore from backup
docker run --rm \
-v hd-homey_data:/data \
-v $(pwd)/backups:/backup \
alpine tar xzf /backup/hd-homey-data-20231115.tar.gz -C /
# Start HD Homey
docker compose up -d
From Source
# Stop application (Ctrl+C if running)
# Restore database
cp ./backups/hd_homey-20231115.db ./data/db/hd_homey.db
# Restart application
npm run dev
Database Maintenance
Optimize Database
Over time, SQLite databases can become fragmented. Optimize with VACUUM:
# Docker
docker exec hd-homey sqlite3 /app/data/db/hd_homey.db "VACUUM;"
# From source
sqlite3 ./data/db/hd_homey.db "VACUUM;"Benefits:
- Reclaims unused space
- Defragments data
- Improves query performance
When to run: Monthly or after deleting large amounts of data
Check Database Integrity
Verify database health:
# Docker
docker exec hd-homey sqlite3 /app/data/db/hd_homey.db "PRAGMA integrity_check;"
# From source
sqlite3 ./data/db/hd_homey.db "PRAGMA integrity_check;"Expected output: ok
If errors appear: Restore from a recent backup
View Database Stats
sqlite3 ./data/db/hd_homey.db << EOF
.print "Database Statistics"
.print "==================="
SELECT 'Database Size: ' || (page_count * page_size / 1024 / 1024) || ' MB' FROM pragma_page_count(), pragma_page_size();
SELECT 'Users: ' || COUNT(*) FROM user;
SELECT 'Tuners: ' || COUNT(*) FROM tuners;
SELECT 'Channels: ' || COUNT(*) FROM channels;
SELECT 'Invitations: ' || COUNT(*) FROM invitation;
EOF
Troubleshooting
Database Locked Error
Error: database is locked or SQLITE_BUSY
Causes:
- Multiple processes accessing database
- Long-running transaction
- Backup in progress
Solutions:
- Ensure only one HD Homey instance is running
- Increase SQLite timeout (done automatically by HD Homey; see the sketch after this list)
- Wait for ongoing operations to complete
- Check for stale lock files: ./data/db/hd_homey.db-wal
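The timeout mentioned above is typically applied as a SQLite pragma on the connection. A sketch of that pattern with better-sqlite3 follows; the 5-second value and WAL setting are illustrative, not necessarily HD Homey's defaults.
// Sketch: reduce SQLITE_BUSY errors by waiting for locks instead of failing immediately.
import Database from 'better-sqlite3';

const sqlite = new Database('./data/db/hd_homey.db');
sqlite.pragma('busy_timeout = 5000'); // wait up to 5 seconds for a competing writer
sqlite.pragma('journal_mode = WAL');  // WAL lets readers proceed while a single writer works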
Database Corruption
Error: database disk image is malformed or integrity check fails
Solutions:
- Stop HD Homey immediately
- Restore from backup (see Restoring from Backup)
- If no backup available:
# Attempt recovery (may lose recent data)
sqlite3 ./data/db/hd_homey.db ".dump" | sqlite3 recovered.db
Data Loss Risk
Database corruption can result in data loss. Always maintain regular backups.
Migration Fails
Error: Migration fails on startup
Solutions:
- Check logs for specific error: docker compose logs hd-homey
- Verify database is writable
- Ensure adequate disk space
- Roll back to previous backup
- Check GitHub issues for known migration problems
Database Not Found
Error: no such table: user or file not found
Solutions:
- Verify HD_HOMEY_DB_PATH is correct (full file path including filename)
- Ensure database directory exists and is writable
- Check Docker volume is mounted
- Allow HD Homey to create database on first run
Performance Issues
Symptoms: Slow queries, high CPU usage, delayed responses
Solutions:
- Run VACUUM to optimize the database
- Check database size (should be <100MB for normal use)
- Review indexes (already optimized in HD Homey)
- Consider hardware upgrade if >50 concurrent users
Database Limits
SQLite is ideal for HD Homey's use case, but has some limits:
| Aspect | Limit | HD Homey Typical |
|---|---|---|
| Database Size | 281 TB max | <100 MB |
| Concurrent Writers | 1 | 1 (single instance) |
| Concurrent Readers | Unlimited | <100 |
| Table Rows | 2^64 | <10,000 |
| String Length | 1 GB | <1 KB |
| BLOB Size | 2 GB | Not used |
Recommendation: If you exceed 100 concurrent users or need multiple HD Homey instances, consider migrating to PostgreSQL (not currently supported).
Next Steps
- Environment Variables - Configure database path
- Installation Guide - Setup HD Homey with persistent data
- Troubleshooting - Solve database-related issues
