1. Designing Serendipity
I created an AI-native app that re-imagines navigating Wikipedia with left and right swipes. With AI working subtly in the background, my app generates narrative arcs that entice users into new, unexplored territory.
React Native
Code-As-Design
AI-Native
UX Simulations
Right-swipes create an ever-emerging narrative
Right swipes work like this: 1) extract the URLs from the current page, 2) take the last three wiki pages visited, and 3) choose the URL that best continues the narrative.
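A minimal sketch of those three steps, with the LLM call injected as a plain function and all names hypothetical (this is not the app's actual code):

```typescript
// The LLM call is represented as an injected async function so the
// selection step stays testable.
type LLM = (prompt: string) => Promise<string>;

// Step 2 + 3: ask the model to pick the link that best continues the
// narrative formed by the last three pages visited.
function buildRightSwipePrompt(recentTitles: string[], candidateUrls: string[]): string {
  return [
    "The reader has just visited, in order:",
    ...recentTitles.slice(-3).map((t, i) => `${i + 1}. ${t}`),
    "From the links below, choose the ONE URL that best continues this narrative.",
    "Reply with the URL only.",
    ...candidateUrls,
  ].join("\n");
}

async function pickNextPage(recentTitles: string[], candidateUrls: string[], llm: LLM): Promise<string> {
  const answer = await llm(buildRightSwipePrompt(recentTitles, candidateUrls));
  // Guard against the model answering with a URL that wasn't offered.
  return candidateUrls.find((u) => answer.includes(u)) ?? candidateUrls[0];
}
```

Validating the model's answer against the candidate list matters in practice: an LLM asked for "the URL only" will still occasionally wrap it in prose or invent a link.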
Left swipes back out to broad perspectives
The user can swipe left to back out to broader, related subjects, and keep doing so until they decide to swipe right.
I didn't start with the problem. I started with curiosity.
At the outset of this project, I had a hunch about how the app would work, but nothing was clearly defined. Core functionality and subtle refinements emerged as I worked in rapid iterations in code.
LLM prompts were at the heart of the creative process
Early on, I created a Python-based simulation of the UX. The code was fairly straightforward: I wrote LLM prompts for left and right swipes, then Python to simulate the user swiping.
Coding is designing
I worked through most interaction design problems in code. Iterating on a design with a device in hand is unparalleled for designing novel interfaces.
Speed and performance are fundamental to good user experiences
My app needed to feel instantaneous to work. I created a React-based simulation of right and left swipes to help me figure out how to make preemptive LLM API calls asynchronously.
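One way to structure preemptive calls, sketched with illustrative names rather than the app's real API: as soon as a page is displayed, fire the LLM requests for both possible swipes and cache the promises, so the answer is usually already resolved by the time the gesture lands.

```typescript
type Fetcher = (page: string, direction: "left" | "right") => Promise<string>;

class SwipePrefetcher {
  private cache = new Map<string, Promise<string>>();
  constructor(private fetchNext: Fetcher) {}

  // Called as soon as a page is displayed: start both LLM calls.
  preload(page: string): void {
    for (const dir of ["left", "right"] as const) {
      const key = `${page}:${dir}`;
      if (!this.cache.has(key)) this.cache.set(key, this.fetchNext(page, dir));
    }
  }

  // Called when the gesture lands; resolves instantly if preloaded.
  next(page: string, dir: "left" | "right"): Promise<string> {
    this.preload(page);
    return this.cache.get(`${page}:${dir}`)!;
  }
}
```

Caching the promise (rather than the resolved value) means a swipe that arrives mid-flight simply awaits the in-progress request instead of issuing a duplicate one.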
2. Fusing Gestures, Speech, and AI
I'm fascinated by how emerging technology has the potential to transform how people interact with machines. This prototype explores the fusion of gestures and speech to interact with a locally-running LLM (gemma-2b).
React
Local LLM
Gesture recognition
Speech recognition
The space in front of the screen is part of the interface
My app enables users to summon an AI assistant with a two-step process: 1) raise a hand, 2) say something. Users can also close the chat with a "thumbs-up" gesture, which works like a long hold.
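The two-step summon and the thumbs-up dismissal can be modeled as a tiny state machine; this is a hypothetical sketch, with state and event names chosen for illustration:

```typescript
// "armed" means a hand has been raised and the app is waiting for speech.
type AppState = "idle" | "armed" | "chatting";
type GestureEvent = "hand_raised" | "speech" | "thumbs_up_held";

function transition(state: AppState, event: GestureEvent): AppState {
  if (state === "idle" && event === "hand_raised") return "armed";
  if (state === "armed" && event === "speech") return "chatting";
  if (state === "chatting" && event === "thumbs_up_held") return "idle";
  return state; // out-of-order events are ignored
}
```

Ignoring out-of-order events is the important design choice: gesture recognizers fire noisy, repeated detections, and a strict state machine keeps stray frames from toggling the UI.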
Gestures augment communication
I created subscribable JavaScript events in React on top of an open-source gesture recognition library from MediaPipe.
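The event layer alone might look like the sketch below, with the MediaPipe recognizer reduced to a callback source and all names illustrative; React components would subscribe inside `useEffect` and use the returned function for cleanup:

```typescript
type GestureName = string;
type Handler = (gesture: GestureName) => void;

class GestureEvents {
  private handlers = new Map<GestureName, Set<Handler>>();

  // Subscribe to a gesture; returns an unsubscribe function for
  // React effect cleanup.
  on(gesture: GestureName, handler: Handler): () => void {
    if (!this.handlers.has(gesture)) this.handlers.set(gesture, new Set());
    this.handlers.get(gesture)!.add(handler);
    return () => this.handlers.get(gesture)!.delete(handler);
  }

  // Wired to the recognizer's per-frame results.
  emit(gesture: GestureName): void {
    this.handlers.get(gesture)?.forEach((h) => h(gesture));
  }
}
```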
Conversational cadence and system feedback
My app provides system feedback as users speak: "I'm ready", "I'm listening", and "I'm processing what you said".
3. WikiPathfinder
An early exploration of fusing AI with Wikipedia, this project uses an LLM to enhance the browsing experience.
React
GPT-4o-mini
IxD exploration
In-context link previews
Enabled by Wikipedia's ultra-fast API, this app shows the top six links as the user scrolls, creating a dynamic sidebar with link-preview text.
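For reference, Wikipedia's public REST API exposes a per-page summary endpoint that returns preview text; the helper names below are illustrative, not the app's actual code:

```typescript
// Real public endpoint: /api/rest_v1/page/summary/{title}
function summaryUrl(title: string): string {
  return `https://en.wikipedia.org/api/rest_v1/page/summary/${encodeURIComponent(title)}`;
}

// The sidebar shows at most six links at a time.
function topLinks(visibleLinks: string[], limit = 6): string[] {
  return visibleLinks.slice(0, limit);
}

// Fetch preview extracts for the visible links in parallel
// (requires a fetch-capable runtime, e.g. a browser or Node 18+).
async function fetchPreviews(titles: string[]): Promise<Map<string, string>> {
  const previews = new Map<string, string>();
  await Promise.all(
    topLinks(titles).map(async (title) => {
      const res = await fetch(summaryUrl(title));
      const data = await res.json();
      previews.set(title, data.extract ?? "");
    }),
  );
  return previews;
}
```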
Augmented information foraging
A predecessor to my mobile swipe-based Wikipedia browser, this project uses an LLM to determine logical flows between wiki pages.
Reconsidering browser interaction patterns
This React-based prototype explores a side-scrolling navigation system, enabling previews of the next page on the right and the previously viewed page on the left.
Responsive ergonomics
I used Figma to design layouts that adapt significantly between phone and tablet screen sizes.
New ways to way-find
This app uses an LLM to interpret content and create a menu (right) containing related topics. These topics fit into three categories: related subject, broad perspective, and tangential subject.
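A sketch of that menu-building step, with prompt wording and parsing that are purely illustrative: ask the LLM to label each suggested topic with one of the three categories, then parse the labeled lines into the menu.

```typescript
interface TopicMenu {
  related: string[];
  broad: string[];
  tangential: string[];
}

function buildMenuPrompt(pageText: string): string {
  return [
    "Read the article excerpt below and suggest topics a curious reader might explore next.",
    "Return one topic per line, prefixed with its category:",
    "related: | broad: | tangential:",
    "",
    pageText.slice(0, 2000), // keep the prompt within a token budget
  ].join("\n");
}

// Parse the model's labeled lines; anything malformed is dropped.
function parseMenu(llmReply: string): TopicMenu {
  const menu: TopicMenu = { related: [], broad: [], tangential: [] };
  for (const line of llmReply.split("\n")) {
    const [category, topic] = line.split(":", 2).map((s) => s.trim());
    if (topic && category in menu) menu[category as keyof TopicMenu].push(topic);
  }
  return menu;
}
```

Asking for a constrained, line-per-topic format keeps the parser trivial and makes malformed model output easy to discard rather than display.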

Thanks for checking out my projects!

You can read more about me here.

© 2026 luminousfloatinghead.com