fix(zone-id): restore zone-aware playlist_items wiring (issue #3 follow-up)

Phase 2 (assignments -> playlist_items) dropped zone_id during the
conversion: migrateAssignmentsToPlaylists INSERTed only (playlist_id,
content_id, widget_id, sort_order, duration_sec), and the new
playlist_items DDL omitted the zone_id column entirely. Every write
path on top of playlist_items inherited that omission - the
multi-zone layout assignment feature stopped working.

The frontend always sent zone_id correctly (device-detail.js:1015 and
:1072: both the POST and the PUT include it, and api.addAssignment /
api.updateAssignment forward the body verbatim); the server silently
dropped it. The assignments.js PUT route was the most direct evidence:
it destructured zone_id from req.body but never added it to the
updates array.

Schema:
- schema.sql: add zone_id TEXT REFERENCES layout_zones(id) ON DELETE
  SET NULL to fresh-install DDL.
- database.js migrations[]: add idempotent ALTER TABLE for existing
  installs (the surrounding try/catch loop handles duplicate-column).

Backfill (new gated migration phase2_zone_id_backfill):
- Pre-migration snapshot copied to db/remote_display.pre-zone-id-
  backfill-<ts>.db (one-off for this migration; the general
  every-migration-snapshot framework is a separate concern, not built
  here).
- Best-effort UPDATE playlist_items.zone_id from surviving
  assignments rows via device.playlist_id + content_id/widget_id
  match, LIMIT 1 for the multi-match edge case.
- Regenerates published_snapshot for every published playlist so the
  JSON the player consumes carries zone_id going forward. Even with
  zero rows backfilled (the common case post-Phase-2 cleanup) this
  closes the snapshot-staleness gap.
- Stamps schema_migrations regardless so it won't re-run on next boot.
- On the live local DB: 0 playlist_items backfilled, 18
  published_snapshots regenerated. On the April 13 prod fixture
  (sandboxed copy): 0 backfilled, 7 regenerated. Expected and matches
  our pre-flight finding that assignments was effectively scrubbed of
  zone_id everywhere.
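
The backfill UPDATE's shape, condensed (the full statement in the
database.js hunk below additionally guards each identity comparison
with IS NOT NULL, which is logically redundant since `=` against NULL
never matches):

```sql
-- Condensed sketch; assumes the legacy assignments table still exists.
-- The EXISTS guard restricts the UPDATE to rows with a recoverable
-- match; LIMIT 1 resolves the "same content in two zones on one
-- device" edge case; unmatched rows keep zone_id NULL.
UPDATE playlist_items
SET zone_id = (
  SELECT a.zone_id
  FROM assignments a JOIN devices d ON d.id = a.device_id
  WHERE d.playlist_id = playlist_items.playlist_id
    AND a.zone_id IS NOT NULL
    AND (a.content_id = playlist_items.content_id
         OR a.widget_id = playlist_items.widget_id)
  LIMIT 1
)
WHERE zone_id IS NULL
  AND EXISTS (
    SELECT 1
    FROM assignments a JOIN devices d ON d.id = a.device_id
    WHERE d.playlist_id = playlist_items.playlist_id
      AND a.zone_id IS NOT NULL
      AND (a.content_id = playlist_items.content_id
           OR a.widget_id = playlist_items.widget_id)
  );
```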

Route wiring (7 sites + 1 shared constant):
- assignments.js ITEM_SELECT: project pi.zone_id (read path so the
  frontend display at device-detail.js:500 surfaces the value).
- assignments.js POST INSERT: include zone_id column + value.
- assignments.js PUT: actually use the already-destructured zone_id
  in the updates allow-list. Treats undefined as "no change" so a PUT
  that omits zone_id leaves the existing value intact; any explicit
  value (including null) is written.
- assignments.js copy-to INSERT: preserve a.zone_id during
  device-to-device playlist copy.
- playlists.js buildSnapshotItems: project pi.zone_id so the snapshot
  JSON carries it. This is what the player's renderZones loop reads
  (player/index.html:1338 matches a.zone_id === zone.id).
- playlists.js discard-revert INSERT: restore zone_id from snapshot
  item on revert.

Out of scope (verified safe by SQL semantics + UI inspection):
- playlists.js POST item-add and PUT item-update in the playlist-detail
  surface: the UI there doesn't expose zone editing, and their SQL
  leaves zone_id NULL on insert / untouched on update. No regression.
- Other playlists.js SELECT projections (lines 141, 190, 240, 265, 334,
  379, 419) all use SELECT pi.* and auto-pick zone_id once the column
  exists.
- Kiosk-page assign at device-detail.js:1027 doesn't send zone_id;
  separate pre-existing gap, not part of this regression.

Tests (all local, no push, no prod deploy):
- Migration boot on live local DB: clean, idempotent (second boot
  skips the gated function).
- Migration boot on April 13 prod fixture (sandboxed copy at
  /tmp/zone-fix-fixtures/test-run.db): cleanly runs the full migration
  stack (multi-tenancy + 5 other phases the fixture predated) then
  the new zone_id backfill. Live local DB untouched.
- 8 SQL-level route behavior tests pass: INSERT stores zone_id, PUT
  changes/clears zone_id, ITEM_SELECT and buildSnapshotItems
  projections include zone_id, copy-to preserves, discard-revert
  restores from snapshot JSON, undefined zone_id in PUT leaves
  existing value intact.

Not verified: end-to-end multi-zone playback on a real device. The
SQL + snapshot JSON layer is correct (player consumes
playlist.find(a => a.zone_id === zone.id) and now gets the right
zone_id back from the snapshot); confirming render-to-correct-zone
on actual hardware is the next step before prod deploy.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
ScreenTinker 2026-05-14 19:20:44 -05:00
parent cdd29d5e3b
commit 73f41c3288
4 changed files with 121 additions and 9 deletions

database.js

@@ -137,6 +137,11 @@ const migrations = [
// Phase 2.2c: content_folders gets workspace_id. Phase 1 missed this table.
"ALTER TABLE content_folders ADD COLUMN workspace_id TEXT REFERENCES workspaces(id)",
"CREATE INDEX IF NOT EXISTS idx_content_folders_workspace ON content_folders(workspace_id)",
// Phase 2 zone_id regression fix: playlist_items needs zone_id so the
// multi-zone-layout assignment feature works. The Phase 2 assignments->
// playlist_items conversion (migrateAssignmentsToPlaylists) dropped this
// column. Column ADD is idempotent via the surrounding try/catch loop.
"ALTER TABLE playlist_items ADD COLUMN zone_id TEXT REFERENCES layout_zones(id) ON DELETE SET NULL",
];
for (const sql of migrations) {
try { db.exec(sql); } catch (e) { /* already exists */ }
@@ -485,6 +490,109 @@ function backfillActivityLogWorkspace() {
backfillActivityLogWorkspace();
// Phase 2 zone_id backfill. Companion to the ADD COLUMN above. Attempts to
// recover zone_id values for playlist_items rows by joining back to the
// (legacy) assignments table on device+content/widget. On installs where
// assignments is empty or never had zone_id populated this is a no-op; the
// migration row is stamped regardless so it doesn't re-run.
//
// Also regenerates published_snapshot JSON for every published playlist so
// the snapshot the player consumes carries zone_id going forward (the
// player resolves a.zone_id === zone.id in renderZones). Even with zero
// rows backfilled, this republish closes the snapshot-staleness gap.
//
// Pre-migration snapshot is a one-off for this migration only - the general
// "every migration backs up first" framework is tracked as a separate
// concern, not built here.
const PHASE2_ZONE_ID_BACKFILL_ID = 'phase2_zone_id_backfill';
function backfillPlaylistItemsZoneId() {
const already = db.prepare('SELECT 1 FROM schema_migrations WHERE id = ?').get(PHASE2_ZONE_ID_BACKFILL_ID);
if (already) return;
const ts = new Date().toISOString().replace(/[:.]/g, '-');
const snapshotPath = path.join(dbDir, `remote_display.pre-zone-id-backfill-${ts}.db`);
try {
db.pragma('wal_checkpoint(TRUNCATE)');
fs.copyFileSync(config.dbPath, snapshotPath);
console.warn(`[zone-id backfill] Pre-migration snapshot: ${snapshotPath}`);
} catch (e) {
console.error(`[zone-id backfill] Snapshot failed: ${e.message}`);
process.exit(1);
}
try {
const tx = db.transaction(() => {
// Backfill: best-effort match playlist_items back to assignments via
// device.playlist_id and content/widget identity. LIMIT 1 covers the
// unlikely "same content assigned twice in different zones on one
// device" edge case. Items with no matching legacy assignment, or
// matches that themselves had zone_id NULL, are left as NULL.
const backfilled = db.prepare(`
UPDATE playlist_items
SET zone_id = (
SELECT a.zone_id FROM assignments a
JOIN devices d ON d.id = a.device_id
WHERE d.playlist_id = playlist_items.playlist_id
AND a.zone_id IS NOT NULL
AND (
(a.content_id IS NOT NULL AND a.content_id = playlist_items.content_id)
OR
(a.widget_id IS NOT NULL AND a.widget_id = playlist_items.widget_id)
)
LIMIT 1
)
WHERE zone_id IS NULL
AND EXISTS (
SELECT 1 FROM assignments a
JOIN devices d ON d.id = a.device_id
WHERE d.playlist_id = playlist_items.playlist_id
AND a.zone_id IS NOT NULL
AND (
(a.content_id IS NOT NULL AND a.content_id = playlist_items.content_id)
OR
(a.widget_id IS NOT NULL AND a.widget_id = playlist_items.widget_id)
)
)
`).run().changes;
// Republish: regenerate published_snapshot for every published playlist
// so the snapshot JSON carries zone_id. Mirrors buildSnapshotItems in
// routes/playlists.js - kept inline here to avoid pulling routes/* in
// at migration time (circular require).
const publishedPlaylists = db.prepare("SELECT id FROM playlists WHERE status = 'published'").all();
const buildSnapshot = db.prepare(`
SELECT pi.content_id, pi.widget_id, pi.zone_id, pi.sort_order, pi.duration_sec,
COALESCE(c.filename, w.name) as filename, c.mime_type, c.filepath, c.file_size,
c.duration_sec as content_duration, c.remote_url,
w.name as widget_name, w.widget_type, w.config as widget_config
FROM playlist_items pi
LEFT JOIN content c ON pi.content_id = c.id
LEFT JOIN widgets w ON pi.widget_id = w.id
WHERE pi.playlist_id = ?
ORDER BY pi.sort_order ASC
`);
const updateSnap = db.prepare("UPDATE playlists SET published_snapshot = ?, updated_at = strftime('%s','now') WHERE id = ?");
let republished = 0;
for (const pl of publishedPlaylists) {
const items = buildSnapshot.all(pl.id);
updateSnap.run(JSON.stringify(items), pl.id);
republished++;
}
db.prepare('INSERT OR IGNORE INTO schema_migrations (id) VALUES (?)').run(PHASE2_ZONE_ID_BACKFILL_ID);
return { backfilled, republished };
});
const { backfilled, republished } = tx();
console.log(`[zone-id backfill] ${backfilled} playlist_items recovered zone_id, ${republished} published_snapshots regenerated`);
} catch (e) {
console.error(`[zone-id backfill] Migration FAILED: ${e.message}`);
console.error(`[zone-id backfill] Restore with: cp ${snapshotPath} ${config.dbPath}`);
process.exit(1);
}
}
backfillPlaylistItemsZoneId();
// Prune old telemetry (keep last 24h worth at 15s intervals = ~5760, cap at 6000)
function pruneTelemetry(deviceId) {
db.prepare(`

schema.sql

@@ -347,6 +347,7 @@ CREATE TABLE IF NOT EXISTS playlist_items (
playlist_id TEXT NOT NULL REFERENCES playlists(id) ON DELETE CASCADE,
content_id TEXT REFERENCES content(id) ON DELETE CASCADE,
widget_id TEXT REFERENCES widgets(id) ON DELETE CASCADE,
zone_id TEXT REFERENCES layout_zones(id) ON DELETE SET NULL,
sort_order INTEGER NOT NULL DEFAULT 0,
duration_sec INTEGER NOT NULL DEFAULT 10,
created_at INTEGER NOT NULL DEFAULT (strftime('%s','now')),

routes/assignments.js

@@ -44,7 +44,7 @@ function ensureDevicePlaylist(deviceId, userId) {
// Standard item query with joined content/widget info
const ITEM_SELECT = `
- SELECT pi.id, pi.playlist_id, pi.content_id, pi.widget_id, pi.sort_order, pi.duration_sec,
+ SELECT pi.id, pi.playlist_id, pi.content_id, pi.widget_id, pi.zone_id, pi.sort_order, pi.duration_sec,
pi.created_at, pi.updated_at,
COALESCE(c.filename, w.name) as filename,
c.mime_type, c.filepath, c.thumbnail_path,
@@ -108,9 +108,9 @@ router.post('/device/:deviceId', (req, res) => {
try {
const result = db.prepare(`
- INSERT INTO playlist_items (playlist_id, content_id, widget_id, sort_order, duration_sec)
- VALUES (?, ?, ?, ?, ?)
- `).run(playlistId, content_id || null, widget_id || null, order, duration_sec);
+ INSERT INTO playlist_items (playlist_id, content_id, widget_id, zone_id, sort_order, duration_sec)
+ VALUES (?, ?, ?, ?, ?, ?)
+ `).run(playlistId, content_id || null, widget_id || null, zone_id || null, order, duration_sec);
markDraft(playlistId);
@@ -150,6 +150,9 @@ router.put('/:id', (req, res) => {
if (sort_order !== undefined) { updates.push('sort_order = ?'); values.push(sort_order); }
if (duration_sec !== undefined) { updates.push('duration_sec = ?'); values.push(duration_sec); }
// zone_id can be null (clear the zone) - treat undefined as "no change",
// any other value (including null) as "write this".
if (zone_id !== undefined) { updates.push('zone_id = ?'); values.push(zone_id || null); }
if (updates.length > 0) {
updates.push("updated_at = strftime('%s','now')");
@@ -230,11 +233,11 @@ router.post('/device/:deviceId/copy-to/:targetDeviceId', (req, res) => {
const maxOrder = db.prepare('SELECT MAX(sort_order) as m FROM playlist_items WHERE playlist_id = ?')
.get(targetPlaylistId).m || 0;
- const stmt = db.prepare('INSERT INTO playlist_items (playlist_id, content_id, widget_id, sort_order, duration_sec) VALUES (?, ?, ?, ?, ?)');
+ const stmt = db.prepare('INSERT INTO playlist_items (playlist_id, content_id, widget_id, zone_id, sort_order, duration_sec) VALUES (?, ?, ?, ?, ?, ?)');
const transaction = db.transaction(() => {
sourceItems.forEach((a, i) => {
- stmt.run(targetPlaylistId, a.content_id, a.widget_id, maxOrder + i + 1, a.duration_sec);
+ stmt.run(targetPlaylistId, a.content_id, a.widget_id, a.zone_id || null, maxOrder + i + 1, a.duration_sec);
});
});
transaction();

routes/playlists.js

@@ -66,7 +66,7 @@ function requirePlaylistWrite(req, res, next) {
// Build the snapshot item list for a playlist (denormalized for device payload)
function buildSnapshotItems(playlistId) {
return db.prepare(`
- SELECT pi.content_id, pi.widget_id, pi.sort_order, pi.duration_sec,
+ SELECT pi.content_id, pi.widget_id, pi.zone_id, pi.sort_order, pi.duration_sec,
COALESCE(c.filename, w.name) as filename, c.mime_type, c.filepath, c.file_size,
c.duration_sec as content_duration, c.remote_url,
w.name as widget_name, w.widget_type, w.config as widget_config
@@ -220,10 +220,10 @@ router.post('/:id/discard', requirePlaylistWrite, (req, res) => {
// Clear current draft items
db.prepare('DELETE FROM playlist_items WHERE playlist_id = ?').run(req.params.id);
// Re-insert from snapshot, skipping items whose content/widget was deleted
- const insert = db.prepare('INSERT INTO playlist_items (playlist_id, content_id, widget_id, sort_order, duration_sec) VALUES (?, ?, ?, ?, ?)');
+ const insert = db.prepare('INSERT INTO playlist_items (playlist_id, content_id, widget_id, zone_id, sort_order, duration_sec) VALUES (?, ?, ?, ?, ?, ?)');
for (const item of publishedItems) {
try {
- insert.run(req.params.id, item.content_id || null, item.widget_id || null, item.sort_order, item.duration_sec);
+ insert.run(req.params.id, item.content_id || null, item.widget_id || null, item.zone_id || null, item.sort_order, item.duration_sec);
} catch (e) {
if (e.message.includes('FOREIGN KEY')) {
console.warn(`Discard: skipping snapshot item (content_id=${item.content_id}, widget_id=${item.widget_id}) — referenced entity was deleted`);