Build agents that run in the browser
Embed intelligence into your product's user experience by integrating LLMs with your UI components and client-side logic.
Intelligence where it helps. Nowhere it doesn't.
Generate UI, Not Just Text
With Hashbrown, LLMs compose real views from your components and stream them into the page. Interfaces stay on‑brand, context‑aware, and production‑ready.
Learn how to build Generative UI with tool calling
Turn Language into Data
Use Hashbrown to turn natural language into strongly typed data and build friendlier apps. Streaming primitives keep interactions fast, responsive, and type-safe.
Learn how to convert natural language into structured data
Instantly Predict the Next Action
Use Hashbrown to suggest the right next step from context—navigation, filling a form, or kicking off a task—so your users stay in the flow.
Learn to build predictive suggestions and shortcuts
Built in the open by the team at LiveLoveApp, a consultancy that specializes in designing and engineering joyful products for the web.
The Generative UI Framework for Engineers
Hashbrown gives developers full control over generative AI to build user interfaces that are predictable, high quality, and ready to ship.
Generative User Interfaces
Expose your React or Angular components and let Hashbrown use an LLM to serve dynamic views. You stay in control of the ingredients, deciding exactly what can and can't be generated.
Client-side Tool Calling
Hashbrown lets you define custom tools the LLM can use to fetch data or perform actions. While other AI SDKs stop at the server, Hashbrown runs tool calling in the browser so developers can expose app services and state directly.
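The pattern can be sketched in a few lines. This is an illustrative sketch of the client-side dispatch loop, not Hashbrown's actual API: the tool names (`getCartTotal`, `navigate`) and the `dispatchToolCall` helper are hypothetical, and a real model would choose which tool to call.

```typescript
// Illustrative sketch of browser-side tool calling (hypothetical names,
// not Hashbrown's real API). The model emits a tool call; the browser
// resolves it against app services and state, with no server round trip.

type ToolCall = { name: string; args: Record<string, unknown> };

// Tools close over client-side state and services.
const tools: Record<string, (args: any) => unknown> = {
  getCartTotal: () => 42.5, // e.g. read from a client-side store
  navigate: ({ path }: { path: string }) => `navigated to ${path}`,
};

// Dispatch a tool call the LLM produced and return the result to the model.
function dispatchToolCall(call: ToolCall): unknown {
  const tool = tools[call.name];
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  return tool(call.args);
}
```

Because the tools run in the page, they can read UI state or trigger navigation directly, which is what distinguishes this from server-only tool calling.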
Structured Data
Hashbrown comes with Skillet, a schema language that makes it simple to get structured data from LLMs. It is fully type safe and works for component props, structured outputs, and tool definitions, always served just right.
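To show the idea behind a schema language like Skillet, here is a generic, self-contained sketch (Skillet's actual API differs): a schema carries a natural-language description for the LLM and a parser that validates and type-narrows the model's JSON.

```typescript
// Generic sketch of schema-driven structured output (illustrative only;
// not Skillet's real API). Each schema pairs a description the LLM sees
// with a runtime check that yields a typed value.

type Schema<T> = { description: string; parse: (value: unknown) => T };

const string = (description: string): Schema<string> => ({
  description,
  parse: (v) => {
    if (typeof v !== "string") throw new Error(`expected string: ${description}`);
    return v;
  },
});

const object = <T extends Record<string, Schema<any>>>(
  description: string,
  fields: T
): Schema<{ [K in keyof T]: ReturnType<T[K]["parse"]> }> => ({
  description,
  parse: (v) => {
    if (typeof v !== "object" || v === null) throw new Error(`expected object: ${description}`);
    const out: any = {};
    for (const key in fields) out[key] = fields[key].parse((v as any)[key]);
    return out;
  },
});

// Natural language in, typed data out: validate the model's JSON response.
const LightSchema = object("a smart light", {
  id: string("the light's id"),
  name: string("a human-readable name"),
});
const light = LightSchema.parse(JSON.parse('{"id":"1","name":"Porch"}'));
```

The same schema object can serve three roles, as the copy above notes: describing component props, constraining structured outputs, and typing tool arguments.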
Streaming Responses
Hashbrown uses web standards to stream responses in common JavaScript runtimes like Node.js, Lambda, and Cloudflare Workers. A built-in JSON parser lets your app display results as fast as the LLM generates them.
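Conceptually, streaming means rendering each partial state as chunks arrive instead of waiting for the full response. A minimal sketch, with chunks simulated as strings (in practice they would come from a `fetch()` `ReadableStream`, and Hashbrown's parser additionally handles incomplete JSON):

```typescript
// Conceptual sketch of incremental rendering from a streamed response.
// Chunks are simulated here; a real stream would arrive over the network.

function consumeChunks(chunks: string[], render: (soFar: string) => void): string {
  let buffer = "";
  for (const chunk of chunks) {
    buffer += chunk;
    render(buffer); // the UI can paint each partial state immediately
  }
  return buffer;
}

const frames: string[] = [];
const full = consumeChunks(['{"title":"Hash', 'brown"}'], (s) => frames.push(s));

// Once the stream completes, the accumulated text parses as JSON.
const parsed = JSON.parse(full); // parsed.title === "Hashbrown"
```

The point of a streaming-aware JSON parser is that useful partial values can be surfaced even before the final `JSON.parse` would succeed.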
Bring Your Own Model
Hashbrown works with the LLM vendor of your choice, with built-in support for OpenAI, Azure, Google Gemini, Writer, Anthropic, and AWS Bedrock. Use open-weight models via Ollama.
JavaScript Runtime
Hashbrown includes a JavaScript runtime compiled to WebAssembly for executing AI-generated code. Create glue code to build graphs on the fly, stitch services together, ground mathematical operations, and more.
Model Context Protocol
Hashbrown integrates with the MCP Client SDK to call remote tools on an MCP server. This lets you connect your app to shared services, enterprise systems, and custom workflows through a standardized protocol.
Threads
Keep network calls small and lightweight while managing token consumption using Hashbrown's built-in threads support. Cache and recall LLM messages with minimal loading code.
Markdown Streaming
Stream and animate inline markdown with Magic Text, Hashbrown's headless markdown parser. Let LLMs cite their sources with full citation support.
Local Models
Hashbrown can connect with experimental local small language models in Chrome and Edge to generate completions and power lightweight chat experiences, with no roundtrip to the server.
Listen To & Generate Speech
Hashbrown pairs speech-to-text and text-to-speech models to make interfaces conversational. Use them together to build voice agents that listen to users, generate speech and UI, and interact with your web app.
Analyze Images & Documents
Scan images and documents with device cameras and turn them into structured data that connects your app to the physical world. Expose files to the JavaScript Runtime to let LLMs generate scripts for deeper analysis.
Hot Out of the Fryer
Our latest videos, podcasts, and more.
Web Dev Challenge - Spotify Game
September 9, 2025