I build interfaces that turn AI from answers → decisions.

Designing AI Systems People Can Understand, Control, and Experience

Multi-agent · Multimodal · Real-world interaction
Most AI interfaces hide how decisions are made. I design systems that make model behavior observable, comparable, and controllable for research teams.

Multi-agent reasoning, comparison, and real-time decision-making

Creative Technologist for Generative AI & XR Systems

I design and prototype systems that make generative AI outputs testable and production-ready. At Meta Superintelligence Lab, I defined evaluation frameworks for multimodal model behavior and partnered with Engineering, PMs, and UXRs to translate AI outputs into structured analysis. My independent React/JavaScript prototypes — Compare Mode, Focus Mode, real-time streaming — close the gaps I saw firsthand in how teams observe and control model behavior.

I’ve spent 18 years making complex systems observable — from spatial tracking at Microsoft MR to model behavior at Meta. I prototype the tools research teams need to see what’s happening inside the system.

FOCUS

Multi-agent systems • Multimodal AI • Human–AI Interaction • XR Interfaces

Previously worked across:
  • Meta — Multimodal AI Systems & Evaluation
  • Microsoft Mixed Reality — XR Systems & Interaction

These experiences shape my approach: high-fidelity execution, creative rigor, and bold future-forward experimentation.