Scavenger Hunt in the Penn Museum

~ Into the Blue: case study ~ October 30, 2025


<link to demo/> <github repo/>

With its new exhibition Into the Blue, the Museum wanted a virtual companion. We built an offline-first web app that guides visitors to find blue artifacts, “cut” them in place, and keep + share a personal digital sticker collection.

01 — communicating with stakeholders

Continual communication with the Museum team kept goals, product details, and tech requirements aligned.

  • We explored the Museum to pick blue artifacts and map routes.
  • For a younger audience, we created a scavenger hunt to prompt exploration across different galleries and close looking at artifacts.

02 — my contribution: sticker cutout feature

The Museum asked that we not distort the appearance or cultural meaning of the artifacts. During initial team discussions, I proposed a cutout feature that preserves the original images while allowing playful physical collection, and outlined a technical approach built on HTML canvas and SVG masks.

  • worked with designers to generate SVG outlines and PNG overlays with matching viewBox dimensions
  • implemented dynamic routing driven by JSON data for modularity
  • aligned clipping and sizing across devices
  • the initial iteration broke on complex paths; switched to even-odd clipping to cut multiple shapes and holes in one pass
  • added a zoom feature to help users fit the artifact inside the cutout area
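The even-odd fix above boils down to a parity rule: a point is "inside" the clip if a ray from it crosses the combined outline an odd number of times, which is what lets one path cut several shapes and punch holes at once. A minimal sketch of that rule as a point test (the real feature applies it declaratively via SVG `clip-rule="evenodd"` or Canvas `ctx.clip(path, "evenodd")`; the polygons here are illustrative, not our artifact outlines):

```javascript
// Even-odd point-in-region test over a list of rings (polygons).
// Each ring is an array of [x, y] vertices. Crossing parity across
// ALL rings decides inside/outside, so a second ring acts as a hole.
function insideEvenOdd(x, y, rings) {
  let crossings = 0;
  for (const ring of rings) {
    for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
      const [xi, yi] = ring[i];
      const [xj, yj] = ring[j];
      // Does the horizontal ray from (x, y) cross edge (j -> i)?
      const crosses =
        yi > y !== yj > y &&
        x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
      if (crosses) crossings++;
    }
  }
  return crossings % 2 === 1; // odd = inside
}

const outer = [[0, 0], [10, 0], [10, 10], [0, 10]]; // sticker outline
const hole = [[3, 3], [7, 3], [7, 7], [3, 7]];      // punched-out hole
```

A point inside the hole crosses the outline twice (even), so it is clipped away, while a point between the two rings crosses once (odd) and survives.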

03 — my contribution: storage pipeline

Wi-Fi in the Museum is spotty, so we needed an offline-first way to store stickers reliably. I proposed IndexedDB for persistent local storage (stickers survive even a device restart!).

  • wrote to IndexedDB immediately (PNG + metadata) and updated the progress pages and stickerboard
  • converted each image to a Blob for efficient storage and retrieval
  • keyed records by artifact ID for easy lookup and to prevent duplicates
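The write path above can be sketched roughly as follows. This is a hedged, browser-only illustration of the standard IndexedDB API, not our production code: the database and store names ("into-the-blue", "stickers") are made up for the example, but the keying idea is the real one — with the artifact ID as the `keyPath`, a second capture of the same artifact `put()`s over the old record instead of duplicating it.

```javascript
// Pure helper: the record we persist, keyed by artifact ID.
function makeStickerRecord(artifactId, pngBlob) {
  return { id: artifactId, image: pngBlob, savedAt: Date.now() };
}

// Browser-only sketch of the immediate write (names are illustrative).
function saveSticker(artifactId, pngBlob) {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open("into-the-blue", 1);
    open.onupgradeneeded = () => {
      // keyPath "id" = artifact ID, so put() de-duplicates for free
      open.result.createObjectStore("stickers", { keyPath: "id" });
    };
    open.onsuccess = () => {
      const tx = open.result.transaction("stickers", "readwrite");
      tx.objectStore("stickers").put(makeStickerRecord(artifactId, pngBlob));
      tx.oncomplete = () => resolve();
      tx.onerror = () => reject(tx.error);
    };
    open.onerror = () => reject(open.error);
  });
}
```

Storing the image as a Blob (rather than a base64 data URL) keeps records compact and lets the browser hand the bytes back without re-decoding a string.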

04 — my co-contribution: stickerboard

A drag-and-drop board with custom stickers and an export feature.

  • collaborated with another dev on a custom drag-and-drop implementation after unsuccessful iterations with npm libraries
  • built modals for adding user-collected artifact stickers, with categories switchable by button tap or swipe gesture
  • exported the board to PNG using html2canvas
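The core of a custom drag-and-drop like ours is small: on each pointermove, turn the pointer position into a sticker position and clamp it so the sticker never leaves the board. A minimal sketch of that math plus the export call (sizes and element names are illustrative; `html2canvas(el)` is the library's real entry point and resolves to a canvas):

```javascript
// Clamp a pointer-derived position so the sticker stays on the board.
function clampToBoard(x, y, sticker, board) {
  return {
    x: Math.min(Math.max(x, 0), board.width - sticker.width),
    y: Math.min(Math.max(y, 0), board.height - sticker.height),
  };
}

// Browser-only export sketch: rasterize the board DOM node with
// html2canvas, then hand back a shareable PNG data URL.
function exportBoard(boardEl) {
  return html2canvas(boardEl).then((canvas) => canvas.toDataURL("image/png"));
}
```

Owning this loop (instead of an npm drag library) is what let us mix it with the tap/swipe category gestures without fighting someone else's event handling.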

lessons learned

  • translating designers' intent into measurable rules (shared viewBox, aspect ratios, error guards) that hold across edge cases and devices
  • one source of geometry: derive every size, transform, clip, and export from the same reference box (plus DPR + cover-fit) for pixel-perfect alignment
  • staging a dynamic flow (guide → capture → reveal) with subtle feedback (zoom, outline animation) that communicates clearly without distorting content
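The "one source of geometry" lesson can be made concrete with a single helper that every consumer (draw, clip, export) calls. This is a sketch under stated assumptions: cover-fit placement (fill the reference box while preserving aspect ratio, like CSS `object-fit: cover`) and a devicePixelRatio multiplier applied last, so all outputs agree in device pixels.

```javascript
// Derive scale and centering offsets from one reference box.
// srcW/srcH: source image size; boxW/boxH: reference box in CSS px;
// dpr: devicePixelRatio. Returns values in device pixels.
function coverFit(srcW, srcH, boxW, boxH, dpr = 1) {
  const scale = Math.max(boxW / srcW, boxH / srcH); // cover, not contain
  return {
    scale: scale * dpr,                  // device px per source px
    dx: ((boxW - srcW * scale) / 2) * dpr, // centering offset, may be < 0
    dy: ((boxH - srcH * scale) / 2) * dpr,
  };
}
```

Because the clip mask, the on-screen preview, and the exported PNG all consume the same `{scale, dx, dy}`, a device with dpr 2 or an oddly shaped image cannot push them out of alignment.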

thanks for reading :-)