  • howieyoung 10 hours ago

    Hi HN, I built this as a small browser experiment.

    How it works

    • Toggle “Eleven Mode”, grant camera permission, then show a palm toward the webcam
    • Gesture detection runs entirely on-device in the browser using [MediaPipe Hands]
    • Once triggered, I apply a lightweight set of DOM and visual effects to create the “Upside Down” transformation (a minimal sketch of the pipeline follows)
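    For reference, here is a rough sketch of what such a pipeline can look like with the classic @mediapipe/hands browser package. The open-palm heuristic, the thresholds, and the upside-down class name are my illustration, not necessarily what the site actually ships:

      import { Hands, Results, NormalizedLandmarkList } from '@mediapipe/hands';

      // Fingertip and PIP-joint landmark indices from the MediaPipe hand model
      const TIPS = [8, 12, 16, 20]; // index, middle, ring, pinky tips
      const PIPS = [6, 10, 14, 18]; // the joints below those tips

      // Rough "open palm" check (an assumption): every fingertip sits above
      // its PIP joint in normalized coords, where y grows downward
      function isOpenPalm(lm: NormalizedLandmarkList): boolean {
        return TIPS.every((tip, i) => lm[tip].y < lm[PIPS[i]].y);
      }

      const hands = new Hands({
        locateFile: (f) => `https://cdn.jsdelivr.net/npm/@mediapipe/hands/${f}`,
      });
      hands.setOptions({
        maxNumHands: 1,
        modelComplexity: 0, // lightest model, lowest latency
        minDetectionConfidence: 0.7,
        minTrackingConfidence: 0.5,
      });
      hands.onResults((results: Results) => {
        const lm = results.multiHandLandmarks?.[0];
        if (lm && isOpenPalm(lm)) {
          document.body.classList.add('upside-down'); // hypothetical effect hook
        }
      });

      // Feed webcam frames to the model; frames never leave the page
      async function start() {
        const stream = await navigator.mediaDevices.getUserMedia({ video: true });
        const video = document.createElement('video');
        video.srcObject = stream;
        await video.play();
        const loop = async () => {
          await hands.send({ image: video }); // resolves after onResults runs
          requestAnimationFrame(loop);
        };
        loop();
      }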

    Privacy

    • Camera is used for on-device gesture detection only
    • No recording, no upload

    What I’m exploring

    • The practical ceiling of on-device gesture recognition in real-world conditions (lighting, camera quality, background clutter), and what it takes to keep latency low enough for a “feels instant” UX (see the sketch after this list)
    • How far AI-assisted coding can take a real-time interactive web experience before manual performance work becomes the bottleneck
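    On the latency point, the pattern I would reach for (a sketch under my own assumptions, not necessarily what this site does) is to drop camera frames whenever inference is still in flight and log per-frame cost, so slow devices degrade to a lower effective frame rate instead of building up a queue:

      // Frame-skipping wrapper around any per-frame inference call, e.g. the
      // hands.send(...) loop above. The 100 ms budget is my assumption for
      // what still reads as "instant".
      function runThrottledLoop(
        video: HTMLVideoElement,
        infer: (v: HTMLVideoElement) => Promise<void>,
        budgetMs = 100,
      ) {
        let busy = false;
        const tick = () => {
          requestAnimationFrame(tick); // keep pace with display refresh
          if (busy) return;            // drop this frame; never queue work
          busy = true;
          const t0 = performance.now();
          infer(video)
            .then(() => {
              const dt = performance.now() - t0;
              if (dt > budgetMs) console.warn(`slow frame: ${dt.toFixed(1)} ms`);
            })
            .finally(() => { busy = false; });
        };
        tick();
      }

    Combined with the sketch above, usage would be runThrottledLoop(video, (v) => hands.send({ image: v })).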

    Feedback I’d love

    • OS + browser + device, and whether the gesture triggers reliably for you
    • Any performance issues or onboarding confusion
    • Did you find any easter eggs, and which one is your favorite?