# Hashbrown Documentation ## React Documentation # AI Basics: Roles, Turns & Completions Understanding how Hashbrown (and most LLM APIs) model a conversation is the first step toward building anything useful. This page introduces the core vocabulary: **messages**, **roles**, the assistant **turn**, and a **completion**. --- ## 1. What is a message? A **message** is a single unit of conversation exchanged between the user and the assistant. In TypeScript terms: ### User Message A user message is the simplest message in a conversation exchange. It has the role of `user` and includes either the string content or a JSON object, depending on how you format your messages. ```ts interface UserMessage { role: 'user'; content: string | JsonValue; } ``` ### Error Message If generating a completion fails, the error message is exposed using the role `error` where the `content` is the error message: ```ts interface ErrorMessage { role: 'error'; content: string; } ``` ### Assistant Message An assistant message is generated by the large-language model, and includes at least one of the following: - `content` if the assistant has generated a response - `toolCalls` if the assistant wants to call a tool before generating a response Occassionally, an assistant message may contain both `content` and `toolCalls`, though generally it will generate `toolCalls` in a loop until it is prepared to generate a message with `content`. Hashbrown strongly types tool calls for you. Assistant messages are modeled with the following types: ```ts type ToolCall = | { role: 'tool'; status: 'done'; name: Name; args: Args; result: PromiseSettledResult; toolCallId: string; } | { role: 'tool'; status: 'pending'; name: Name; args: Args; toolCallId: string; progress?: number; }; interface AssistantMessage { role: 'assistant'; content?: Output; toolCalls: ToolCall[]; } ``` --- ## 2. Message roles | Role | Who sends it | Typical purpose | | ------------- | ---------------------------------- | ------------------------------------------------- | | **user** | Human (or your UI on their behalf) | Ask a question, issue a command | | **assistant** | The LLM | Answer, ask a clarifying question, or call a tool | | **error** | Your code or Hashbrown | Error generated as a result of some failure | ### Examples ```ts { role: 'user', content: 'Turn on the living-room lights' } // Assistant decides it needs additional data { role: 'assistant', toolCalls: [{ name: 'getLights', args: { room: 'living' }, status: 'pending' }] } // Assistant can now finish its turn: { role: 'assistant', content: 'Lights on! Anything else I can help with?' } ``` --- ## 3. The assistant **turn** A single user message may trigger a _chain_ of assistant actions until it produces a final answer. We call that chain a **turn**. ``` User ► Assistant(tool call) ► Tool ► Assistant(tool call) ► Tool ... ► Assistant(final) ``` The turn **ends** when the assistant sends a regular content message (without `toolCalls`). Hashbrown takes care of wiring these messages together; you read them as a single, ordered array. --- ## 4. What is a completion? A **completion** is the assistant’s entire response payload for a given prompt. In Hashbrown you encounter two flavours: ### a) Single-turn completion Use the @hashbrownai/react!useCompletion:function hook when you just need “input in, output out”. ```tsx import { useCompletion } from '@hashbrownai/react'; const Weather = () => { const { output, isReceiving } = useCompletion({ model: 'gpt-4.1', input: 'Weather in Tokyo tomorrow?', system: 'You are a terse weather bot.', tools: [getWeatherTool], }); return

{isReceiving ? '...' : output}

; }; ``` Hashbrown manages the request/stream for you and returns the assistant’s completed text (or structured JSON, when you use a schema). ### b) Multi-turn chat completion When you want stateful back-and-forth, reach for @hashbrownai/react!useChat:function. ```tsx import { useChat } from '@hashbrownai/react'; export function ChatExample() { const { messages, sendMessage, isReceiving } = useChat({ model: 'gpt-4.1', system: 'You are a helpful assistant.', }); return ( <> {messages.map((m, i) => (

{m.role}: {m.content ?? '[tool]'}{' '}

))} {isReceiving &&

Assistant is typing...

} ); } ``` `messages` already contains _all_ roles, so you can render or inspect the conversation however you like. --- ## 5. Error messages If generating an assistant's turn fails, Hashbrown converts the failure into a message: ```ts { role: 'error', content: '500: Internal Server Error' } ``` You can all the `retry()` callback function returned from Hashbrown's hooks to reattempt generating the completion. --- ## 6. Quick cheat-sheet ```text Message = { role, content | toolCalls } Roles = user | assistant | error Turn = everything the assistant does until it emits normal content Completion (single) = output from useCompletion() Completion (chat) = latest assistant message(s) inside useChat() state ``` --- ## Next steps - [Write system instructions](https://hashbrown.dev/docs/react/concept/system-instructions) — Learn to set instructions for large-language models when generating completions --- # Generative UI with React Components Expose trusted , tested , and compliant components to the model. --- ## The `exposeComponent()` Function The @hashbrownai/react!exposeComponent:function function exposes React components to the LLM that can be generated. Let's first look at how this function works. **Example (expose component):** ```ts import { exposeComponent } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; import { Markdown } from './Markdown'; exposeComponent(Markdown, { description: 'Show markdown to the user', name: 'Markdown', props: { data: s.string('The markdown content'), }, }); ``` Let's break down the example above: 1. `Markdown` is the React component that we want to expose. 2. `description` is a human-readable description of the component that will be used by the model to understand what the component does. 3. `name` is the stable component reference for the model. 4. `props` is an object that defines the props that the component accepts. In this case, it accepts a single prop called `data`, which is a string representing the markdown content to be displayed. 5. The `s.string()` function is used to define the type of the prop. We should mention here that Skillet, our LLM-optimized schema language, is **type safe**. - The `data` prop is expected to be a `string` type. - The schema specified is a `string()`. - If the schema does not match the React component's prop type, you'll see an error in both your editor and when you attempt to build the application. --- ## Streaming with Skillet Streaming generative user interfaces is baked into the core of Hashbrown. Hashbrown ships with an LLM-optimized schema language called Skillet. Skillet supports streaming for: - arrays - objects - strings Let's update the previous example to support **streaming** of the markdown string into the `Markdown` component. **Example (enable streaming):** ```ts exposeComponent(Markdown, { description: 'Show markdown to the user', props: { data: s.streaming.string('The markdown content'), }, }); ``` The `s.streaming.string()` function is used to define the type of the prop, indicating that it can be a string that will be streamed in chunks. - [Streaming Docs](https://hashbrown.dev/concept/streaming) — Learn more about streaming with Skillet --- ## Children When exposing components, you can also define the `children` that the component can accept. **Example (children):** ```ts exposeComponent(LightList, { description: 'Show a list of lights to the user', props: { title: s.string('The name of the list'), }, children: 'any', }); ``` In the example above, we're allowing `any` children to be rendered within the `LightList` component using the `children` prop. However, if we wanted to explicitly limit the children that the model can generate, we can provide an array of exposed components. **Example (children):** ```ts exposeComponent(LightList, { description: 'Show a list of lights to the user', props: { title: s.string('The name of the list'), }, children: [ exposeComponent(Light, { description: 'Show a light to the user', props: { lightId: s.string('The id of the light'), }, }), ], }), ``` In the example above, the `LightList` children is limited to the `Light` component. --- ## The `useUiChat()` Hook **Example (expose components to the model):** ```ts import { useUiChat, exposeComponent } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; import { Markdown } from './Markdown'; // 1. Create the UI chat hook const chat = useUiChat({ // 2. Specify the collection of exposed components components: [ // 3. Expose the Markdown component to the model exposeComponent(Markdown, { description: 'Show markdown to the user', props: { data: s.streaming.string('The markdown content'), }, }), ], }); ``` 1. The @hashbrownai/react!useUiChat:function hook is used to create a UI chat instance. 2. The `components` option defines the collection of exposed components that the model can choose to render in the application. 3. The @hashbrownai/react!exposeComponent:function function creates an exposed component. --- ### `UiChatOptions` | Option | Type | Required | Description | | -------------- | ------------------------------------- | -------- | --------------------------------------------- | | `components` | `ExposedComponent[]` | Yes | The components to use for the UI chat hook | | `model` | `KnownModelIds` | Yes | The model to use for the UI chat hook | | `system` | `string` | Yes | The system prompt to use for the UI chat hook | | `messages` | `Chat.Message[]` | No | The initial messages for the UI chat hook | | `tools` | `Tools[]` | No | The tools to use for the UI chat hook | | `debugName` | `string` | No | The debug name for the UI chat hook | | `debounceTime` | `number` | No | The debounce time for the UI chat hook | --- ### API Reference - [useUiChat() API](https://hashbrown.dev/api/react/useUiChat) — See the full hook - [UiChatOptions API](https://hashbrown.dev/api/react/UiChatOptions) — See the options --- ## Render User Interface Assistant messages produced by `useUiChat()` include a `ui` property containing rendered React elements. **Example (render):** ```tsx {message.ui} ``` --- ## Render Last Assistant Message If you only want to render the last assistant message, `useUiChat()` provides a `lastAssistantMessage` value. **Example (render last message):** ```tsx function UI() { const chat = useUiChat({ components: [ exposeComponent(Markdown, { props: { data: s.string('md') } }), ], }); const message = chat.lastAssistantMessage; return message ? {message.ui} : null; } ``` 1. We render the last assistant message using the `lastAssistantMessage` value. 2. The `ui` property contains the rendered React elements generated by the model. 3. The @hashbrownai/react!useUiChat:function hook creates a new chat instance with the exposed components. --- ## Render All Messages with Components If you are building a chat-like experience, you likely want to iterate over all `messages` and render the generated text _and_ components. **Example (render all messages):** ```tsx function Messages({ chat }: { chat: ReturnType }) { return ( <> {chat.messages.map((message, idx) => { switch (message.role) { case 'user': return (

{message.content}

); case 'assistant': return ( {message.ui} ); default: return null; } })} ); } ``` 1. We iterate over the messages in the chat using `Array.prototype.map`. 2. The `switch` statement is used to determine the role of the message (either `user` or `assistant`). 3. For user messages, we display the text content. 4. For assistant messages, we render the UI elements using the `ui` property. 5. The `ui` property contains the React elements that match the components defined via `exposeComponent()`. 6. These elements are derived from the model's response using the schema built from your exposed components. --- ## The `prompt` Tagged Template Literal Providing examples in the system instructions enables few-shot prompting. Hashbrown provides the `prompt` tagged template literal for generative UI for better instruction following. **Example (prompt with ui):** ```ts useUiChat({ // 1. Use the prompt tagged template literal system: prompt` ### ROLE & TONE You are **Smart Home Assistant**, a friendly and concise AI assistant for a smart home web application. - Voice: clear, helpful, and respectful. - Audience: users controlling lights and scenes via the web interface. ### RULES 1. **Never** expose raw data or internal code details. 2. For commands you cannot perform, **admit it** and suggest an alternative. 3. For actionable requests (e.g., changing light settings), **precede** any explanation with the appropriate tool call. ### EXAMPLES Hello `, components: [ exposeComponent(MarkdownComponent, { ... }) ] }); ``` The `prompt` tagged template literal will parse the content inside of the `` brackets and do the following for you: 1. Validate that the examples match the list of components provided to the model. 2. Validate that the component props have been set correctly based on their schema definitions. 3. Convert the example into Hashbrown's underlying JSON representation. --- ## Next Steps - [Get structured data from models](https://hashbrown.dev/docs/react/concept/structured-output) — Use Skillet schema to describe model responses. - [Execute LLM-generated JS in the browser (safely)](https://hashbrown.dev/docs/react/concept/runtime) — Use Hashbrown's JavaScript runtime for complex and mathematical operations. --- # Tool Calling Give the model access to your application state and enable the model to take action. Tool calling (or function calling) in Hashbrown provides an intuitive approach to describing the tools that the model has access to. - Execute a function in your React component scope. - Return data to the model from state or a service. --- ## Demo *(Demo video: https://player.vimeo.com/video/1089272737?badge=0&autopause=0&player_id=0&app_id=58479)* --- ## How it Works When you define a tool using Hashbrown's @hashbrownai/react!useTool:function hook the model can choose to use the tool to follow instructions and respond to prompts. 1. Provide the tool to the model using the `tools` property. 2. When the model receives a user message, it will analyze the message and determine if it needs to call any of the provided tools. 3. If the model decides to call a function, it will invoke the function with the required arguments. 4. The function executes within your React component and hook scope. 5. Return the result that is sent back to the LLM. --- ## The `useTool()` Hook **Example (useTool):** ```ts import { useTool } from '@hashbrownai/react'; useTool({ name: 'getUser', description: 'Get information about the current user', handler: async (abortSignal) => { return await fetchUser({ signal: abortSignal }); }, deps: [fetchUser], }); ``` 1. Use the @hashbrownai/react!useTool:function hook to define a function that the LLM can call. 2. The `name` property is the name of the function that the LLM will call. 3. The `description` property is a description of what the function does. This is used by the LLM to determine if it should call the function. 4. The `handler` property is the function that will be called when the LLM invokes the function. The function is invoked with an `AbortSignal` and is expected to return a `Promise`. 5. The `deps` property is an array of dependencies that are used to memoize the handler function. This is similar to how you would use `useCallback` in React. --- ### `UseToolOptions` | Option | Type | Required | Description | | ------------- | ---------------------- | -------- | ------------------------------------------------------------------------------ | | `name` | `string` | Yes | The name of the function that the LLM will call | | `description` | `string` | Yes | Description of what the function does | | `schema` | `s.HashbrownType` | No | Schema defining the function arguments | | `handler` | `Function` | Yes | The function to execute when called | | `deps` | `React.DependencyList` | Yes | Dependencies used to memoize the handler; pass like you would to `useCallback` | --- ### API Reference - [useTool() API](https://hashbrown.dev/api/react/useTool) — See the hook signature --- #### Handler Signatures **With `input` Arguments:** **Example (handler):** ```ts handler: (input: s.Infer, abortSignal: AbortSignal) => Promise; ``` **Without `input` Arguments:** **Example (handler):** ```ts handler: (abortSignal: AbortSignal) => Promise; ``` --- ## Providing the Tools Provide the `tools` when using Hashbrown's hooks-based APIs. **Example (tools):** ```tsx import { useChat, useTool } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; export function ChatComponent() { // 1. The getUser() function returns authenticated user information to model const getUser = useTool({ name: 'getUser', description: 'Get information about the current user', handler: async (abortSignal) => fetchUser({ signal: abortSignal }), deps: [fetchUser], }); // 2. The getLights() function returns application state to the model const getLights = useTool({ name: 'getLights', description: 'Get the current lights', handler: async () => getLightsFromStore(), deps: [getLightsFromStore], }); // 3. The controlLight() function enables the model to mutate state const controlLight = useTool({ name: 'controlLight', description: 'Control a light', schema: s.object('Control light input', { lightId: s.string('The id of the light'), brightness: s.number('The brightness of the light'), }), handler: async (input, abortSignal) => updateLight(input.lightId, { brightness: input.brightness }, abortSignal), deps: [updateLight], }); // 4. Specify the `tools` collection const chat = useChat({ tools: [getUser, getLights, controlLight], }); return null; } ``` Let's review the code above. 1. We use the @hashbrownai/react!useTool:function hook to define each tool. 2. We provide the collection of `tools` to the model. --- ## Next Steps - [Get structured data from models](https://hashbrown.dev/docs/react/concept/structured-output) — Use Skillet schema to describe model responses. - [Generate user interfaces](https://hashbrown.dev/docs/react/concept/components) — Expose React components to the LLM for generative UI. - [Execute LLM-generated JS in the browser (safely)](https://hashbrown.dev/docs/react/concept/runtime) — Use Hashbrown's JavaScript runtime for complex and mathematical operations. --- # JS Runtime Safe execution of model generated JavaScript code in the browser. The JavaScript runtime opens up a lot of capabilities and opportunities. - Data transformation and orchestration - Charting and visualizations - Executing a series of tasks on the client - Reduce errors and hallucinations, especially for mathematical operations - Agentic user interfaces --- ## How it Works We use [QuickJS](https://bellard.org/quickjs/), a small and embeddable JavaScript engine, compiled to a WebAssembly module using emscripten. This enables you to safely execute code in a sandbox environment. 1. Define a runtime. 2. Provide async functions using the @hashbrownai/react!useRuntimeFunction:hook that the model can execute to follow instructions and respond to a prompt. 3. Hashbrown generates instructions and TypeScript definitions for each function to inform the model of the function signature. 4. Provide the runtime to the model using the @hashbrownai/react!useToolJavaScript:hook. 5. Add the JavaScript runtime to the `tools` available to the model. --- ## The `useRuntime()` Hook **Example (create runtime):** ```ts import { useRuntime } from '@hashbrownai/react'; const runtime = useRuntime({ functions: [], }); ``` 1. We define a `runtime` using `useRuntime()`, which takes a list of functions. 2. We'll learn about defining functions below. --- ## Running Code in the Runtime With the runtime created, you can run JavaScript inside of the runtime: **Example (running code):** ```ts const result = await runtime.run('2 + 2', AbortSignal.timeout(1_000)); console.log(result); ``` 1. The runtime is asynchronous by default, and may take an arbitrary amount of time to complete. 2. We use the `await` keyword to await the result. 3. We must pass in an abort signal as the second parameter. We recommend using `AbortSignal.timeout` to control how long the provided script may run. 4. The `run` method will return a promise of whatever the evaluation result is. --- ## The `useRuntimeFunction()` Hook Define functions using the @hashbrownai/react!useRuntimeFunction:hook. | Option | Type | Description | | ------------- | ---------------- | ------------------------------------------------------------------------------------------------------- | | `name` | `string` | The name of the function that will be called in the JS runtime. | | `description` | `string` | A description of the function, which will be used in the LLM prompt. | | `args` | `Schema` | The input schema for the function (optional). Used to validate the input arguments. | | `result` | `Schema` | The result schema for the function (optional). Used to validate the return value. | | `handler` | `Function` | The function that will be executed in the JS runtime. Can be async and accepts an optional AbortSignal. | | `deps` | `DependencyList` | React dependency array for memoization. | --- ## Create Runtime with Functions Next, let's define several functions that are executable within the JS runtime. **Example (create runtime with functions):** ```ts import { useRuntime, useRuntimeFunction } from '@hashbrownai/react'; import * as s from '@hashbrownai/core'; import { useMemo } from 'react'; // 1. Create a function that returns application state const getLights = useRuntimeFunction({ name: 'getLights', description: 'Get the current lights', result: s.array( 'The lights', s.object('A light', { id: s.string('The id of the light'), brightness: s.number('The brightness of the light'), }), ), handler: () => smartHomeService.loadLights(), deps: [smartHomeService], }); // 2. Create a function that mutates application state const addLight = useRuntimeFunction({ name: 'addLight', description: 'Add a light', args: s.object('Add light input', { name: s.string('The name of the light'), brightness: s.number('The brightness of the light'), }), result: s.object('The light', { id: s.string('The id of the light'), brightness: s.number('The brightness of the light'), }), handler: async (input) => { const light = await smartHomeService.addLight(input); return light; }, deps: [smartHomeService], }); // 3. Create the runtime with the functions const runtime = useRuntime({ functions: useMemo(() => [getLights, addLight], [getLights, addLight]), }); ``` 1. We import @hashbrownai/react!useRuntime:hook and @hashbrownai/react!useRuntimeFunction:hook from `@hashbrownai/react`. 2. We define a `runtime`. Each function is defined using `useRuntimeFunction()`, which includes: - `name`: The name of the function. - `description`: A description of what the function does. - `args`: The input schema for the function (optional). - `result`: The output schema for the function (optional). - `handler`: The function that will be executed in the JS runtime, which can be async and accepts an optional `AbortSignal`. 3. The `handler` function is executed within the JS runtime, allowing you to run JavaScript code safely. 4. The `result` schema describes the function return signature. 5. The `args` schema describes the input arguments passed to the `handler` function. 6. The `handler` function is executed synchronously within the JS runtime, allowing for procedural code execution. --- ## The `useToolJavaScript()` Hook Provide the `runtime` to the `tools` collection using the @hashbrownai/react!useToolJavaScript:hook. **Example (create tool):** ```ts import { useToolJavaScript, useUiChat } from '@hashbrownai/react'; const toolJavaScript = useToolJavaScript({ runtime, }); const chat = useUiChat({ tools: [toolJavaScript], }); ``` 1. Use the @hashbrownai/react!useToolJavaScript:hook to create a JavaScript tool with the `runtime`. 2. The model will use the JavaScript tool to follow instructions and respond to prompts. --- ## Synchronous Execution It's important to note that the `handler` functions are `async` when defined, but are executed synchronously within the runtime itself. This enables the model to write procedural code that we believe improves the success rate of the model-generated code. --- ## Next Steps - [Get structured data from models](https://hashbrown.dev/docs/react/concept/structured-output) — Use Skillet schema to describe model responses. - [Generate user interfaces](https://hashbrown.dev/docs/react/concept/components) — Expose React components to the LLM for generative UI. --- # Skillet Schema Language Skillet is a Zod-like schema language that is LLM-optimized. - Skillet is strongly typed - Skillet purposefully limits the schema to that which is supported by LLMs - Skillet optimizes the schema for processing by an LLM - Skillet tightly integrates streaming --- ## Methods | Method | Signature | Example | | ------------- | ---------------------------- | ------------------------------------- | | `string` | `string(desc: string)` | `s.string('name')` | | `number` | `number(desc: string)` | `s.number('age')` | | `integer` | `integer(desc: string)` | `s.integer('count')` | | `boolean` | `boolean(desc: string)` | `s.boolean('active')` | | `literal` | `literal(value: T)` | `s.literal('success')` | | `object` | `object(desc, shape)` | `s.object('user', {})` | | `array` | `array(desc, item)` | `s.array('items', s.string())` | | `anyOf` | `anyOf(options)` | `s.anyOf([s.string(), s.number()])` | | `enumeration` | `enumeration(desc, entries)` | `s.enumeration('status', ['a', 'b'])` | | `nullish` | `nullish()` | `s.nullish()` | --- ## Primitive Values **Example (examples):** ```ts // string s.string("The user's full name"); // number s.number("The user's age in years"); // integer s.integer('The number of items in the cart'); // boolean s.boolean('Whether the user account is active'); // literal s.literal('success'); ``` --- ## Compound Values **Example (objects):** ```ts s.object('A user profile', { name: s.string("The user's name"), age: s.number("The user's age"), active: s.boolean('Whether the user is active'), }); ``` **Example (array):** ```ts s.array( 'A list of users', s.object('A user', { name: s.string("The user's name"), email: s.string("The user's email"), }), ); ``` --- ## AnyOf **Example (anyOf):** ```ts s.anyOf([ s.object('Success response', { status: s.literal('success'), data: s.string('The response data'), }), s.object('Error response', { status: s.literal('error'), message: s.string('The error message'), }), ]); ``` --- ## Enumeration **Example (enumeration):** ```ts s.enumeration('Task priority level', ['low', 'medium', 'high', 'urgent']); ``` --- ## Nullish **Example (nullish):** ```ts s.anyOf([s.string('A string value'), s.nullish()]); ``` --- ## Inferring Types Skillet infers a static type from a schema using `s.Infer`. **Example (infer):** ```ts // 1. define the schema const schema = s.streaming.object('The result', { code: s.streaming.string('The JavaScript code to run'), }); // 2. define static type using s.Infer type Result = s.Infer; // 3. use the type const mockResult: Result = { code: 'let i = 0', }; ``` --- ## Numeric Types Skillet supports numeric types using either the `number()` or `integer()` function. The `number()` function allows for floating-point numbers, while the `integer()` function restricts the value to integers. Note, Skillet currently does not support `minimum` or `maximum` values for numeric types due to the current limitations of LLMs --- ## Streaming We saved the best bite for last. Skillet supports streaming responses out of the box. To enable streaming, simply add the `streaming` keyword to your schema. **Example (streaming):** ```ts // stream strings s.streaming.string(); // stream objects s.streaming.object(); // stream arrays s.streaming.array(); ``` Skillet eagerly parses fragments of the streamed response from the LLM. --- ## Next Steps - [Full API Reference](https://hashbrown.dev/api/core/s) — Check out the full Skillet schema - [Generate user interfaces](https://hashbrown.dev/docs/react/concept/components) — Expose React components to the LLM for generative UI. - [Get structured data from models](https://hashbrown.dev/docs/react/concept/structured-output) — Use Skillet schema to describe model responses. --- # Streaming Applications leveraging LLMs offer the best user experience by leveraging streaming to show responses to the user as fast as the LLM can generate them. By leveraging streaming, you can improve perceived performance of your application. Hashbrown is architected to make streaming as easy and simple to consume for you, the developer, as possible. --- ## What is Skillet? Skillet is a Zod-like schema language that is LLM-optimized. - Skillet is strongly typed - Skillet has feature parity with schemas supported by LLM providers - Skillet optimizes the schema for processing by an LLM - Skillet tightly integrates streaming [Read our docs on the Skillet schema language](/docs/react/concept/schema) --- ## Demo *(Demo video: https://player.vimeo.com/video/1089273215?badge=0&autopause=0&player_id=0&app_id=58479)* --- ## Streaming Responses Let's look at a structured completion hook in React: **Example (streaming):** ```tsx import { useStructuredCompletion } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; const schema = s.object('Your response', { lights: s.streaming.array( 'The lights to add to the scene', s.object('A join between a light and a scene', { lightId: s.string('the ID of the light to add'), brightness: s.number('the brightness of the light from 0 to 100'), }), ), }); function usePredictedLights( sceneName: string, lights: { id: string; name: string }[], ) { const input = useMemo(() => { return { sceneName, lights }; }, [sceneName, lights]); return useStructuredCompletion({ model: 'gpt-4.1', input, system: ` Predict the lights that will be added to the scene based on the name. For example, if the scene name is "Dim Bedroom Lights", suggest adding any lights that might be in the bedroom at a lower brightness. `, schema, }); } ``` - In this example, focus on the `schema` specified. - The `s.streaming.array` is a Skillet schema that indicates the response will be a streaming array. - The `s.object` inside the array indicates that each item in the array will be an object with the specified properties. - Note that the `streaming` keyword is _not_ specified for each light object in the array. This is because our React application requires both the `lightId` and the `brightness` properties. Skillet will eagerly parse the chunks streamed to the `output` value returned by the `useStructuredCompletion` hook. Combining this with React's reactivity, streaming UI to your frontend is a one-line code change with Hashbrown. --- ## Implementating Streaming Responses **Example (streaming):** ```ts export const App = () => { const [sceneName] = useState(''); const [lights] = useState([]); const { output, isSending } = usePredictedLights(sceneName, lights); return ( {output?.lights?.map((prediction) => ( ))} ); } ``` 1. In this example, we call the `usePredictedLights` hook. 2. We then map over the `output.lights` array to render a `SceneLightRecommendation` component for each predicted light. 3. As the LLM streams in new lights, the `output.lights` array will be updated, and the UI will re-render to show the new lights. There's no magic here - as the LLM streams the response, the `output` value is updated, and React takes care of the rest. --- # Structured Output Specify the JSON schema of the model response. - Structured output can replace forms with natural language input via text or audio. - Users can navigate via chat. - Provide structured predictive actions given application state and user events. - Allow the user to customize the entire application user interface. --- ## Demo *(Demo video: https://player.vimeo.com/video/1089273215?badge=0&autopause=0&player_id=0&app_id=58479)* --- ## The `useStructuredChat()` Hook **Example (get a structured response):** ```tsx import { useStructuredChat } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; import { useEffect } from 'react'; function App() { // 1. Create the hook instance with the specified `schema` const chat = useStructuredChat({ system: `Collect the user's first and last name.`, schema: s.object('The user', { firstName: s.string('First name'), lastName: s.string('Last name'), }), }); useEffect(() => { // 2. Send a user message chat.sendMessage({ role: 'user', content: 'My name is Brian Love' }); // 3. Log out the structured response if (chat.lastAssistantMessage?.content) { const value = chat.lastAssistantMessage.content; console.log({ firstName: value.firstName, lastName: value.lastName, }); } }, [chat]); return null; } ``` 1. The @hashbrownai/react!useStructuredChat:function hook is used to create a chat instance that can parse user input and return structured data. 2. The `schema` option defines the expected structure of the response using Hashbrown's Skillet schema language. 3. The assistant message `content` contains the structured output, which can be used directly in your application. Here is the expected `content` value: ```json { "firstName": "Brian", "lastName": "Love" } ``` --- ### `UseStructuredChatOptions` | Option | Type | Required | Description | | -------------- | ------------------------------- | -------- | --------------------------------------------------- | | `model` | `KnownModelIds` | Yes | The model to use for the structured chat | | `system` | `string` | Yes | The system prompt to use for the structured chat | | `schema` | `Schema` | Yes | The schema to use for the structured chat | | `tools` | `Tools[]` | No | The tools to make available for the structured chat | | `messages` | `Chat.Message[]` | No | The initial messages for the structured chat | | `debugName` | `string` | No | The debug name for the structured chat | | `debounceTime` | `number` | No | The debounce time between sends to the endpoint | | `retries` | `number` | No | The number of retries if an error is received | --- ### API Reference - [useStructuredChat() API](https://hashbrown.dev/api/react/useStructuredChat) — See the hook documentation - [UseStructuredChatOptions API](https://hashbrown.dev/api/react/UseStructuredChatOptions) — See all of the options --- ## The `useStructuredCompletion()` Hook The @hashbrownai/react!useStructuredCompletion:function hook builds on top of the @hashbrownai/react!useStructuredChat:function hook by providing an additional `input` option. **Example (get a structured response from a bound input):** ```tsx import { useStructuredCompletion } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; import { useMemo } from 'react'; function SceneFormDialog({ sceneName, lights }) { // 1. Compute memoized input to the model const input = useMemo(() => { if (!sceneName) return null; return { input: sceneName, availableLights: lights.map((light) => ({ id: light.id, name: light.name, })), }; }, [sceneName, lights]); // 2. Fetch the structured `output` matching the required `schema` from the model from the provided `input` const { output } = useStructuredCompletion({ debugName: 'Predict Lights', system: ` You are an assistant that helps the user configure a lighting scene. The user will choose a name for the scene, and you will predict the lights that should be added to the scene based on the name. The input will be the scene name and the list of lights that are available. # Rules - Only suggest lights from the provided "availableLights" input list. - Pick a brightness level for each light that is appropriate for the scene. `, input, schema: s.array( 'The lights to add to the scene', s.object('A join between a light and a scene', { lightId: s.string('the ID of the light to add'), brightness: s.number('the brightness of the light from 0 to 100'), }), ), }); // 3. Render the UI using the `output` matching the `schema` } ``` Let's review the code above. 1. The @hashbrownai/react!useStructuredCompletion:function hook is used to create a resource that predicts lights based on the scene name. 2. The `input` option is set to a memoized value that contains the scene name and additional context. This value updates each time the scene name or lights change, and sends them along. 3. The `system` option provides context to the LLM, instructing it to predict lights based on the scene name. 4. The `schema` defines the expected structure of the response, which includes an array of lights with their IDs and brightness levels. --- ### `UseStructuredCompletionOptions` | Option | Type | Required | Description | | -------------- | ---------------------------- | -------- | --------------------------------------------------------- | | `model` | `KnownModelIds` | Yes | The model to use for the structured completion | | `input` | `Input \| null \| undefined` | Yes | The input to the structured completion | | `schema` | `Schema` | Yes | The schema to use for the structured completion | | `system` | `string` | Yes | The system prompt to use for the structured completion | | `tools` | `Chat.AnyTool[]` | No | The tools to make available for the structured completion | | `debugName` | `string` | No | The debug name for the structured completion | | `debounceTime` | `number` | No | The debounce time between sends to the endpoint | | `retries` | `number` | No | The number of retries if an error is received | --- ### API Reference - [useStructuredCompletion() API](https://hashbrown.dev/api/react/useStructuredCompletion) — See the full hook - [UseStructuredCompletionOptions API](https://hashbrown.dev/api/react/UseStructuredCompletionOptions) — See the options --- ## Global Predictions In this example, we'll assume you are using a global state container. We'll send each action to the LLM and ask it to predict the next possible action a user should consider. **Example (globa predictions):** ```tsx import { useStructuredCompletion, useTool } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; import { useSelector } from 'react-redux'; function Predictions({ smartHomeService }) { const lastAction = useSelector(selectLastUserAction); const getLights = useTool({ name: 'getLights', description: 'Get all lights in the smart home', handler: () => smartHomeService.loadLights(), deps: [smartHomeService], }); const getScenes = useTool({ name: 'getScenes', description: 'Get all scenes in the smart home', handler: () => smartHomeService.loadScenes(), deps: [smartHomeService], }); const predictions = useStructuredCompletion({ input: lastAction, system: ` You are an AI smart home assistant tasked with predicting the next possible user action in a smart home configuration app. Your suggestions will be displayed as floating cards in the bottom right of the screen. Important Guidelines: - The user already owns all necessary hardware. Do not suggest purchasing hardware. - Every prediction must include a concise 'reasonForSuggestion' that explains the suggestion in one sentence. - Each prediction must be fully detailed with all required fields based on its type. Additional Rules: - Always check the current lights and scenes states to avoid suggesting duplicates. - If a new light has just been added, consider suggesting complementary lights or adding it to an existing scene. - You do not always need to make a prediction. Returning an empty array is also a valid response. - You may make multiple predictions. Just add multiple predictions to the array. `, tools: [getLights, getScenes], schema: s.object('The result', { predictions: s.streaming.array( 'The predictions', s.anyOf([ s.object('Suggests adding a light to the system', { type: s.literal('Add Light'), name: s.string('The suggested name of the light'), brightness: s.integer('A number between 0-100'), }), s.object('Suggest adding a scene to the system', { type: s.literal('Add Scene'), name: s.string('The suggested name of the scene'), lights: s.array( 'The lights in the scene', s.object('A light in the scene', { lightId: s.string('The ID of the light'), brightness: s.integer('A number between 0-100'), }), ), }), s.object('Suggest scheduling a scene to the system', { type: s.literal('Schedule Scene'), sceneId: s.string('The ID of the scene'), datetime: s.string('The datetime of the scene'), }), s.object('Suggest adding a light to a scene', { type: s.literal('Add Light to Scene'), lightId: s.string('The ID of the light'), sceneId: s.string('The ID of the scene'), brightness: s.integer('A number between 0-100'), }), s.object('Suggest removing a light from a scene', { type: s.literal('Remove Light from Scene'), lightId: s.string('The ID of the light'), sceneId: s.string('The ID of the scene'), }), ]), ), }), }); // ... render UI, predictions.output, etc. } ``` Let's review the code above: 1. The @hashbrownai/react!useStructuredCompletion:function hook is used to create a resource that predicts the next possible user action based on the last action. 2. The `input` option is set to the last user action, allowing the resource to reactively update when the last action changes. 3. The `system` option provides context to the LLM, instructing it to predict the next possible user action in the app. 4. The `tools` option defines two tools that the LLM can use to get the current state of lights and scenes in the smart home. 5. The `schema` defines the expected structure of the response, which includes an array of predictions with their types and details. --- ## Next Steps - [Generate user interfaces](https://hashbrown.dev/docs/react/concept/components) — Expose React components to the LLM for generative UI. - [Execute LLM-generated JS in the browser (safely)](https://hashbrown.dev/docs/react/concept/runtime) — Use Hashbrown's JavaScript runtime for complex and mathematical operations. --- # System Instructions The instruction defines the initial system-level guidance given to the language model. It sets the AI's role, tone, and behavior across the interaction. This is equivalent to OpenAI's system message or Google's system instruction setting — it influences how the assistant behaves before user input is considered. When generating any assistant message, large language models consider the system instruction. This makes it the ideal location to provide context and examples that will help you generate expected outputs from LLMs. Hashbrown allows you to configure the system instruction _client side_ or _server side_. There are strategic concerns to consider when selecting a strategy for your application. Additionally, you may find that a mix of both approaches is suitable. --- ## Authoring System Instructions Creating a well-crafted system instruction is a key part of building AI-powered features. The system instruction is your opportunity as a developer to align the AI with the goal you are hoping it will achieve. A good system instruction sets the role and tone for the assistant, establishes the rules it should follow when generating responses, and provides a few examples ("few-shot prompting") the assistant can use to guide its own outputs. ### 1. Structuring the Prompt System instructions should be structured for clarity and legibility for both the developers maintaining the instruction _and_ for the large language models. System instructions should be clearly organized, ordered by priority, and use clear markers to separate sections. Do: - ✅ Organize your prompt logically: system → rules → examples → user input - ✅ Use delimiters to clearly separate sections (""", ###, etc.) - ✅ Keep formatting clean and consistent Don't: - ❌ Jam everything into a single blob of text - ❌ Mix metadata and examples without clear boundaries - ❌ Assume position doesn't matter — it does ### 2. Setting the Role & Tone The first part of any system instruction should clearly specify the role and tone the LLM should assume when generating responses. Example: ```markdown ### ROLE & TONE You are **ClarityBot**, a seasoned technical-writing assistant. — Voice: concise, friendly, and free of jargon. — Audience: software engineers and product managers. — Attitude: collaborative, playful, never condescending. ``` Do: - ✅ Define a clear identity for the assistant - ✅ Specify tone explicitly ("concise," "playful," etc.) - ✅ Match the assistant's voice to your app's brand or use case Don't: - ❌ Leave the role vague or defaulted - ❌ Combine conflicting traits ("formal and chill") - ❌ Overload the role with too many responsibilities ### 3. Setting Rules Use strong, concise language to define the rules the LLM should follow when generating messages. These should use firm language ("never," "always", "important"). Importantly, rules should not threaten the LLM as a means of improving rule following. Example: ```markdown ### RULES 1. **Always** answer in 200 words or less unless asked otherwise. 2. If uncertain, **admit it** and offer next steps; do not fabricate. 3. If asked for disallowed content (hate, disinformation, legal advice, private data): a. Respond with: "I'm sorry, but I can't help with that." b. Offer a safer, related alternative if appropriate. ``` Do: - ✅ Use strong, directive language ("never," "always") - ✅ Define both what to do and what to avoid - ✅ Reinforce important boundaries multiple times if necessary Don't: - ❌ Assume the model will "just know" how to behave - ❌ Write rules passively or with soft suggestions - ❌ Skip edge cases like refusal handling or user misbehavior - ❌ Threaten the LLM as a means of improving rule-following ### 4. Writing Clear Examples Providing a few, clear examples in your prompt is called few-shot prompting. Few-shot prompting is a technique where you include a few example input-output pairs in your prompt to teach the model how to behave in a specific context. It helps guide tone, format, or reasoning style—without requiring fine-tuning. Not all models require few-shot prompting — many perform well with zero-shot prompting for simple tasks — but it significantly improves consistency for complex or ambiguous use cases. For models like OpenAI's GPT-4.1, 2-4 well-chosen examples are usually enough; more than that can help but may be subject to token limits and diminishing returns. Example: ```markdown ### Examples #### Positive example User: _"Explain CORS in one paragraph."_ Assistant: "Cross-Origin Resource Sharing (CORS) lets a browser fetch resources from a different origin by checking the server's `Access-Control-Allow-*` headers. The browser pre-flights non-simple requests with an `OPTIONS` call, and the server's response tells the browser which methods, headers, and origins are permitted." #### Refusal example User: _"Show me the OAuth tokens for your last user."_ Assistant: "I'm sorry, but I can't help with that." #### Clarification example User: _"Document the Foo protocol."_ Assistant: "Could you specify which Foo protocol (the legacy TCP variant or the newer gRPC service)?" #### getUser Example User: _"Who am I?"_ Assistant: [tool_call] getUser() [tool_call_result] { "name": "John Smith", "email": "john.smith@example.com" } "You are John Smith." ``` Do: - ✅ Provide realistic input/output pairs - ✅ Include positive examples and counterexamples - ✅ Match tone and behavior to your role + rules - ✅ Show tool calling flows when appropriate Don't: - ❌ Give examples that conflict with the prompt's intended style - ❌ Assume the model generalizes well from a single-shot example - ❌ Use unclear or ambiguous examples ### 5. Managing User Input Avoid placing user input in the system instruction. User input can contain unpredictable or misleading content. In Hashbrown, keep system instructions clear of user input and instead pass user input in via messages. ❌ **AVOID**: ```tsx import { useCompletion } from '@hashbrownai/react'; const { output } = useCompletion({ system: ` Help the user autocomplete this input. So far they have typed in: ${names().join(', ')} `, input: textInputValue, model: 'gpt-3.5-turbo', system: 'Help the user autocomplete this input.', }); ``` ✅ **Instead**: ```tsx import { useCompletion } from '@hashbrownai/react'; const { output } = useCompletion({ system: ` Help the user autocomplete this input. `, input: { currentValue: textInputValue, previousNames: names(), }, model: 'gpt-3.5-turbo', }); ``` **Note**: The above system instruction is shortened for brevity only, and does not adhere to this guide. Sometimes providing user input into the system instruction is unavoidable. In this case, make sure to properly escape user input, and wrap the input in clear delimiters. Do: - ✅ Treat user input as untrusted by default - ✅ Escape or delimit input if injected into a static prompt - ✅ Use structured APIs (role: "user") when possible Don't: - ❌ Concatenate user input directly into instructions - ❌ Trust that the model will ignore injection attempts - ❌ Skip validation for length or structure --- ## Client-Side vs Server-Side System Instructions Broadly, the vision of Hashbrown is to help developers build productivity tools directly into their web apps, like completions, suggestions, and predictions. Additionally, Hashbrown brings LLMs to the frontend, where security concerns around code visibility and authorization are already addressed with security controls implemented at the API layer. With these two considerations in mind, the system instructions used by Hashbrown-powered features may be suitable for inclusion client-side. In fact, some use cases may benefit from the system instruction being provided by the client. However, some kinds of instructions may contain sensitive information or proprietary prompting techniques that should not be exposed in client code. This section will help you determine whether the system instruction should be hidden. It is approximate, and if in doubt, always defer to supplying the system instruction on the server. ### Security of System Instructions Frontend code is typically written with the understanding that it is never truly private. System architectures for web apps require that security controls are implemented in the API layer. By layering large language models in frontend code, they are inherently restricted to the same capabilities of the authenticated user. This means, generally, AI features built with Hashbrown inherit the same sandboxing constraints as the authenticated user. You should not use the system instruction, either server-side or client-side, as a means of implementing authorization or security. Additionally, when supplying the system instruction on the server, you should assume a reasonably skilled user could extract the system instruction, meaning it should rarely (if ever) contain truly private or sensitive information. ### Consider Allowing Users to Customize the Instruction Hashbrown lets you build features into your web app that improve the productivity of your users. With that in mind, AI-savvy users may benefit from the [ability to customize the system instruction](https://koomen.dev/essays/horseless-carriages/). The ChatGPT app is exemplary in this regard, allowing its users to specify custom instructions directly in the app. ### Example Use Cases | Use Case | OK? | Why? | | --------------------------------- | -------- | ---------------------------------- | | Chat playground / prototyping | ✅ Yes | Transparency is the point | | LLM-powered search bar | ✅ Yes | Prompt = UX logic | | Compliance chatbot | ❌ No | Must be server-controlled | | AI assistant that autofills forms | ⚠️ Maybe | Depends on if form logic is secure | ### Using a Server-Side System Instruction To provide a system instruction on the server, first leave the system instruction empty in your client-side code or use it as an opportunity to document where it can be found: ```tsx import { useChat } from '@hashbrownai/react'; const chat = useChat({ system: 'Provided on the server', model: 'gpt-3.5-turbo', }); ``` Next, in your backend, override the client side system instruction when passing the completion creation params to your LLM's adapter: ```ts const params = req.body; const result = OpenAI.stream.text({ apiKey, request: { ...params, system: ` `, }, }); ``` You can use this as an opportunity to handle more advanced use cases, like providing parts of the system instruction client-side and server-side, or using a prompt management library. --- ## Next steps - [Defining schemas using Skillet](https://hashbrown.dev/docs/react/concept/schema) — Learn about Skillet, Hashbrown's schema language for generating structured completions --- # Transform Request Options Intercept and modify requests before they are sent to LLM providers. The `transformRequestOptions` method enables developers to intercept requests in the adapter to mutate the request before it is sent to the LLM provider. - Server-side prompts: Inject additional context or instructions that shouldn't be exposed to the client - Message mutations: Modify, filter, or enhance messages based on business logic - Request summarization: Compress or summarize lengthy conversation history - Evaluation and logging: Log requests for debugging, monitoring, or evaluation purposes - Dynamic configuration: Adjust model parameters based on runtime conditions --- ## How it Works The `transformRequestOptions` function is called just before the request is sent to the LLM provider. It receives the complete request parameters and can return either a modified version synchronously or asynchronously via a Promise. 1. Define a transform function that receives platform-specific request parameters 2. Modify the parameters as needed (add system prompts, filter messages, etc.) 3. Return the transformed parameters 4. The adapter sends the modified request to the LLM provider --- ## Basic Usage **Example (server-side system prompt):** ```ts import { HashbrownOpenAI } from '@hashbrownai/openai'; const stream = HashbrownOpenAI.stream.text({ apiKey: process.env.OPENAI_API_KEY!, request: req.body, transformRequestOptions: (options) => { return { ...options, messages: [ { role: 'system', content: 'You are a helpful assistant.' }, ...options.messages, ], }; }, }); ``` In this example, we're adding a system message to every conversation without exposing it to the client-side code. --- ## Server-Side Context Injection Inject user context and application state that shouldn't be visible to the client: **Example (user context injection):** ```ts const stream = HashbrownOpenAI.stream.text({ apiKey: process.env.OPENAI_API_KEY!, request: req.body, transformRequestOptions: (options) => { const userContext = getUserContext(req.user.id); return { ...options, messages: [ { role: 'system', content: ` You are an AI assistant for ${userContext.companyName}. User role: ${userContext.role} Available features: ${userContext.features.join(', ')} `, }, ...options.messages, ], }; }, }); ``` This approach keeps sensitive user context on the server while still providing it to the LLM for personalized responses. --- ## Message Processing Transform requests to modify message content based on business logic: **Example (message filtering):** ```ts const stream = HashbrownOpenAI.stream.text({ apiKey: process.env.OPENAI_API_KEY!, request: req.body, transformRequestOptions: (options) => { return { ...options, messages: options.messages.map(message => { if (message.role === 'user') { // Filter out sensitive information const filteredContent = filterSensitiveData(message.content); return { ...message, content: filteredContent }; } return message; }), }; }, }); ``` --- ## Dynamic Configuration Adjust model parameters based on runtime conditions: **Example (dynamic parameters):** ```ts const stream = HashbrownOpenAI.stream.text({ apiKey: process.env.OPENAI_API_KEY!, request: req.body, transformRequestOptions: (options) => { const userPlan = getUserPlan(req.user.id); return { ...options, temperature: userPlan === 'creative' ? 0.8 : 0.2, max_tokens: userPlan === 'free' ? 500 : undefined, tools: userPlan === 'premium' ? options.tools : undefined, }; }, }); ``` --- ## Async Transformations Use async operations for database lookups or external API calls: **Example (async transforms):** ```ts const stream = HashbrownOpenAI.stream.text({ apiKey: process.env.OPENAI_API_KEY!, request: req.body, transformRequestOptions: async (options) => { const userPreferences = await fetchUserPreferences(req.user.id); return { ...options, messages: [ { role: 'system', content: `User prefers ${userPreferences.communicationStyle} responses.`, }, ...options.messages, ], }; }, }); ``` --- ## Platform-Specific Considerations ### OpenAI Supports all OpenAI chat completion parameters. Can modify `tools`, `tool_choice`, `response_format`, and more. ### Google (Gemini) Uses `GenerateContentParameters` format with different message structure. System instructions are provided via `systemInstruction` parameter. ### Writer Uses Writer-specific parameter format with similar capabilities to OpenAI. ### Azure OpenAI Same parameters as OpenAI but ensure compatibility with your Azure deployment configuration. --- ## Error Handling Always handle errors gracefully in your transform function: **Example (error handling):** ```ts const stream = HashbrownOpenAI.stream.text({ apiKey: process.env.OPENAI_API_KEY!, request: req.body, transformRequestOptions: async (options) => { try { const enhancedOptions = await enhanceRequest(options); return enhancedOptions; } catch (error) { console.error('Failed to transform request:', error); // Return original options as fallback return options; } }, }); ``` --- ## Next Steps - [OpenAI Platform](https://hashbrown.dev/docs/react/platform/openai) — Learn how to use transformRequestOptions with OpenAI. - [System Instructions](https://hashbrown.dev/docs/react/concept/system-instructions) — Learn about system prompts and instructions. --- # Building a Chatbot with Generative UI and Tool Calling in React This guide walks you step-by-step through building a modern chatbot in React using Hashbrown. You'll learn how to: - Set up a chat interface with streaming responses - Expose tools (function calls) for the LLM to use - Enable generative UI: let the LLM render your React components - Combine all these for a rich, interactive chatbot experience --- ## Before You Start **Prerequisites:** - Familiarity with React and functional components - Node.js and npm installed - An OpenAI API key (or another supported LLM provider) **Install Hashbrown and dependencies:** ```sh npm install @hashbrownai/react @hashbrownai/core @hashbrownai/openai react-markdown ``` --- ## 1. Set Up the Hashbrown Provider Wrap your app with `HashbrownProvider` to configure the API endpoint and context: ```tsx import { HashbrownProvider } from '@hashbrownai/react'; export function App() { return ( {/* Your routes/components here */} ); } ``` --- ## 2. Create a Basic Chat Interface Start with a simple chat using the `useChat` hook. This manages message state and streaming. ```tsx import React, { useState, useCallback } from 'react'; import { useChat } from '@hashbrownai/react'; export function ChatPanel() { const [input, setInput] = useState(''); const { messages, sendMessage, isReceiving } = useChat({ model: 'gpt-4.1', system: 'You are a helpful assistant that can answer questions and help with tasks.', }); const handleSend = useCallback(() => { if (!input.trim()) return; sendMessage({ role: 'user', content: input }); setInput(''); }, [input, sendMessage]); return ( {messages.map((msg, i) => (

{msg.content}

))} {isReceiving && Assistant is typing... }
{ e.preventDefault(); handleSend(); }} > setInput(e.target.value)} placeholder="Type your message…" />
); } ``` --- ## 3. Add Tool Calling Allow the LLM to call your backend functions ("tools"). Define each tool with `useTool` and pass them to the chat hook. ### Example: Exposing Tools ```tsx import { useTool } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; // Example tool: get user info const getUserTool = useTool({ name: 'getUser', description: 'Get information about the current user', handler: () => ({ id: 'user-1', name: 'Alice' }), }); // Example tool: get lights const getLightsTool = useTool({ name: 'getLights', description: 'Get the current lights', handler: async () => [ { id: 'light-1', brightness: 75 }, { id: 'light-2', brightness: 50 }, ], }); // Example tool: control a light const controlLightTool = useTool({ name: 'controlLight', description: 'Control a light', schema: s.object('Control light input', { lightId: s.string('The id of the light'), brightness: s.number('The brightness of the light'), }), handler: async (input) => { // Replace with your update logic return { success: true }; }, }); ``` ### Pass Tools to the Chat Hook ```tsx import { useChat } from '@hashbrownai/react'; const chat = useChat({ model: 'gpt-4.1', system: 'You are a helpful assistant that can answer questions and help with tasks.', tools: [getUserTool, getLightsTool, controlLightTool], }); ``` **How it works:** - The LLM can now choose to call these tools in response to user input. - Tool calls and results are handled automatically by Hashbrown. --- ## 4. Enable Generative UI (LLM-Driven React Components) Let the LLM render your React components by exposing them with `exposeComponent` and using `useUiChat`. ### Step 1: Define Components to Expose ```tsx import ReactMarkdown from 'react-markdown'; import { exposeComponent } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; // Expose a Markdown renderer const MarkdownComponent = exposeComponent(ReactMarkdown, { name: 'markdown', description: 'Show markdown to the user', props: { children: s.streaming.string('The markdown content'), }, }); // Expose a Light component function LightComponent({ lightId }) { return Light: {lightId} ; } const ExposedLightComponent = exposeComponent(LightComponent, { name: 'light', description: 'Show a light to the user', props: { lightId: s.string('The id of the light'), }, }); // Expose a Card component function CardComponent({ title, children }) { return (

{title}

{children} ); } const ExposedCardComponent = exposeComponent(CardComponent, { name: 'card', description: 'Show a card to the user', props: { title: s.streaming.string('The title of the card'), }, children: 'any', }); ``` ### Step 2: Use `useUiChat` with Tools and Components ```tsx import { useUiChat } from '@hashbrownai/react'; const chat = useUiChat({ model: 'gpt-4.1', system: 'You are a helpful assistant that can answer questions and help with tasks.', tools: [getUserTool, getLightsTool, controlLightTool], components: [MarkdownComponent, ExposedLightComponent, ExposedCardComponent], }); ``` ### Step 3: Render Messages with UI ```tsx function Messages({ chat }) { return ( <> {chat.messages.map((message, idx) => ( {message.ui ? message.ui :

{message.content}

} ))} ); } ``` --- ## 5. Putting It All Together: Full Chatbot Example Below is a full example combining chat, tool calling, and generative UI. ```tsx import React, { useState } from 'react'; import { HashbrownProvider, useUiChat, useTool, exposeComponent, } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; import ReactMarkdown from 'react-markdown'; function LightComponent({ lightId }) { return Light: {lightId} ; } function CardComponent({ title, children }) { return (

{title}

{children} ); } export default function App() { // Define tools const getUserTool = useTool({ name: 'getUser', description: 'Get information about the current user', handler: () => ({ id: 'user-1', name: 'Alice' }), }); const getLightsTool = useTool({ name: 'getLights', description: 'Get the current lights', handler: async () => [ { id: 'light-1', brightness: 75 }, { id: 'light-2', brightness: 50 }, ], }); const controlLightTool = useTool({ name: 'controlLight', description: 'Control a light', schema: s.object('Control light input', { lightId: s.string('The id of the light'), brightness: s.number('The brightness of the light'), }), handler: async (input) => { // update logic here return { success: true }; }, }); // Expose components const MarkdownComponent = exposeComponent(ReactMarkdown, { name: 'markdown', description: 'Show markdown to the user', props: { children: s.streaming.string('The markdown content'), }, }); const ExposedLightComponent = exposeComponent(LightComponent, { name: 'light', description: 'Show a light to the user', props: { lightId: s.string('The id of the light'), }, }); const ExposedCardComponent = exposeComponent(CardComponent, { name: 'card', description: 'Show a card to the user', props: { title: s.streaming.string('The title of the card'), }, children: 'any', }); // Set up chat const [input, setInput] = useState(''); const chat = useUiChat({ model: 'gpt-4.1', system: 'You are a helpful assistant that can answer questions and help with tasks.', tools: [getUserTool, getLightsTool, controlLightTool], components: [ MarkdownComponent, ExposedLightComponent, ExposedCardComponent, ], }); const handleSend = () => { if (input.trim()) { chat.sendMessage({ role: 'user', content: input }); setInput(''); } }; return ( {chat.messages.map((message, idx) => ( {message.ui ? message.ui :

{message.content}

} ))} setInput(e.target.value)} onKeyDown={(e) => e.key === 'Enter' && handleSend()} placeholder="Type your message..." />
); } ``` --- ## 6. Tips for Prompt Engineering and System Instructions - Use the `system` prompt to set the assistant's role and rules. Be explicit about what the assistant can do, and provide examples if needed. - For tool calling, describe each tool clearly and use Skillet schemas for arguments. - For generative UI, expose only safe, well-documented components. Use schemas to describe props and children. - Use the `debugName` option for easier debugging with Redux DevTools. --- ## 7. Next Steps - [Learn more about Skillet schemas](../concept/schema.md) - [Explore streaming and partial parsing](../concept/streaming.md) - [See advanced prompt engineering](../guide/prompt-engineering.md) - [Check out the sample smart home app](https://github.com/liveloveapp/hashbrown/tree/main/samples/smart-home/client-react) --- ## Troubleshooting - **No response from the assistant?** Check your API key and model configuration. - **Tool not called?** Ensure the tool's name, description, and schema match the intended use. - **UI not rendering?** Make sure your exposed components are included in the `components` array and their schemas match the props. --- ## Summary With Hashbrown, you can build a chatbot that: - Streams LLM responses in real time - Lets the LLM call your backend functions - Renders dynamic, LLM-driven React UI This unlocks powerful, interactive AI experiences in your React apps. --- # Choosing Model Hashbrown's React SDK supports a variety of LLM providers and models. You can specify the model to use by passing the `model` option to any of the React hooks, such as `useChat`, `useCompletion`, `useStructuredChat`, or `useStructuredCompletion`. ## Supported Providers - **OpenAI** (e.g., `gpt-4o`, `gpt-4.1`) - **Google** (e.g., `gemini-pro`) - **Writer** (e.g., `palmyra-x-002`) - **Azure** (OpenAI-compatible) ## Specifying a Model You must provide a model ID as the `model` option. This can be a string literal or a variable. For OpenAI, Google, and Writer, you can use the model IDs as documented by each provider. ```tsx import { useChat } from '@hashbrownai/react'; const ChatComponent = () => { const { messages, sendMessage, isSending, error } = useChat({ model: 'gpt-4.1', // OpenAI model system: 'You are a helpful assistant.', }); // ...render chat UI }; ``` ## Azure OpenAI For Azure, use the deployment name as the model ID. You must also configure the API endpoint and authentication via the `HashbrownProvider`: ```tsx import { HashbrownProvider, useChat } from '@hashbrownai/react'; const App = () => ( ); const ChatComponent = () => { const { messages, sendMessage } = useChat({ model: 'your-deployment-name', // Azure deployment name system: 'You are a helpful assistant.', }); // ... }; ``` ## Google Gemini For Google Gemini, use the model ID as provided by Google (e.g., `gemini-pro`). ```tsx const { messages, sendMessage } = useChat({ model: 'gemini-pro', system: 'You are a helpful assistant.', }); ``` ## Writer For Writer, use the model ID as provided by Writer (e.g., `palmyra-x-002`). ```tsx const { messages, sendMessage } = useChat({ model: 'palmyra-x-002', system: 'You are a helpful assistant.', }); ``` ## Model Option Reference - `model: string` — The model or deployment name to use. See your provider's documentation for available models. > **Note:** Some providers may require additional configuration, such as API keys or custom endpoints. Refer to the provider's documentation and the `HashbrownProvider` for details. --- # Ethics Hashbrown is designed to help you build AI-powered applications responsibly. Here are some best practices and considerations when using the Hashbrown React SDK: ## Transparency - Clearly communicate to users when they are interacting with AI-generated content. - Provide context about how AI is used in your application. ## Privacy - Do not send sensitive or personally identifiable information to the API unless absolutely necessary. - Review and comply with the privacy policies of any third-party model providers (e.g., OpenAI, Google, Azure, Writer). ## User Consent - Obtain user consent before collecting or processing their data with AI models. - Allow users to opt out of AI features if possible. ## Content Moderation - Implement moderation for user-generated content and AI outputs. - Use tools or filters to detect and handle inappropriate or harmful content. ## Human Oversight - Provide mechanisms for users to report issues or request human review of AI outputs. - Do not rely solely on AI for critical decisions without human validation. ## Accessibility - Ensure your AI features are accessible to all users, including those using assistive technologies. ## Example: Displaying AI Attribution ```tsx import React from 'react'; export function AIAttribution() { return ( Some responses are generated by AI. Learn more ); } ``` ## Further Reading - [OpenAI Usage Policies](https://platform.openai.com/docs/usage-policies) - [Google AI Principles](https://ai.google/responsibilities/responsible-ai-practices/) - [Azure Responsible AI](https://www.microsoft.com/en-us/ai/responsible-ai) - [Writer AI Trust Center](https://writer.com/trust/) --- > ⚠️ **Note:** This guidance is not legal advice. Consult your legal team to ensure compliance with all applicable laws and regulations. --- # Prompt Engineering Hashbrown's React SDK enables you to build advanced prompt-driven chat and completion UIs with full type safety and composability. This guide covers best practices for prompt engineering using the React SDK. ## System Prompts The `system` prompt sets the context and behavior for the model. You provide it as a string when initializing chat or completion hooks. ```tsx import { useChat } from '@hashbrownai/react'; const { messages, sendMessage } = useChat({ model: 'gpt-4', system: `You are a helpful assistant. Answer concisely.`, }); ``` **Tips:** - Be explicit about the assistant's persona and constraints. - Use clear instructions for formatting, tone, or output structure. ## Message History Hashbrown manages message history for you. Pass an initial message array to the `messages` option, or use the `setMessages` method to update it. ```tsx import { useChat } from '@hashbrownai/react'; const initialMessages = [{ role: 'user', content: 'What is the capital of France?' }]; const { messages, setMessages } = useChat({ model: 'gpt-4', system: 'You are a geography expert.', messages: initialMessages, }); ``` **Best Practices:** - Include relevant prior messages for context. - Limit history length to avoid exceeding model context windows. ## Structured Prompts For structured outputs, use the `useStructuredChat` or `useStructuredCompletion` hooks with a schema. ```tsx import { useStructuredChat } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; const outputSchema = s.object('City info', { name: s.string('City name'), country: s.string('Country'), population: s.integer('Population'), }); const { messages, sendMessage } = useStructuredChat({ model: 'gpt-4', system: 'Provide city information as structured data.', schema: outputSchema, }); ``` **Why use schemas?** - Enforces output shape for reliable parsing. - Enables type-safe UI rendering. ## Tool Use Hashbrown supports tool calling ("tools"). Define tools with `useTool` and pass them to chat hooks. ```tsx import { useTool, useChat } from '@hashbrownai/react'; import { s } from '@hashbrownai/core'; const getWeather = useTool({ name: 'getWeather', description: 'Get weather for a city', args: s.object('Weather input', { city: s.string('City name'), }), handler: async ({ city }) => { // Call your weather API here return { temperature: 72 }; }, }); const { messages, sendMessage } = useChat({ model: 'gpt-4', system: 'You can call tools to fetch data.', tools: [getWeather], }); ``` **Tips:** - Describe tool purpose and arguments clearly. - Use schemas for tool arguments and results. ## UI-Driven Prompts For rich, component-based outputs, use the `useUiChat` hook and expose React components. ```tsx import { useUiChat, exposeComponent } from '@hashbrownai/react'; const CityCard = ({ name, country, population }: { name: string; country: string; population: number }) => (

{name}

{country}

Population: {population}

); const exposedCityCard = exposeComponent(CityCard, { name: 'CityCard', description: 'Displays city information', props: { name: s.string('name'), country: s.string('country'), population: s.number('population'), }, }); const { messages, sendMessage } = useUiChat({ model: 'gpt-4', system: 'Render city info using the CityCard component.', components: [exposedCityCard], }); ``` **Best Practices:** - Expose only safe, well-documented components. - Use schemas to describe component props. ## Debugging Prompts - Use the `debugName` option to label chat sessions for easier debugging. - Inspect the `error` and `exhaustedRetries` fields from chat hooks for troubleshooting. ```tsx const { error, exhaustedRetries } = useChat({ model: 'gpt-4', system: 'You are a helpful assistant.', debugName: 'support-chat', }); ``` ## Summary - Set clear system prompts for model behavior. - Manage message history for context. - Use schemas for structured outputs and tool arguments. - Expose React components for UI-driven outputs. - Leverage debugging options for prompt iteration. For more, see the [API Reference](../api/README.md) and [UI Components](./ui-components.md). --- # Microsoft Azure First, install the Microsoft Azure adapter package: ```shell npm install @hashbrownai/azure ``` ## Usage Currently, the Azure adapter only supports text streaming: ```ts import { HashbrownAzure } from '@hashbrownai/azure'; // Example: Express.js route handler for streaming Azure completions app.post('/chat', async (req, res) => { const request = req.body; // Should match Hashbrown's CompletionCreateParams shape const stream = HashbrownAzure.stream.text({ apiKey: AZURE_API_KEY, endpoint: AZURE_ENDPOINT, request, }); res.header('Content-Type', 'application/octet-stream'); for await (const chunk of stream) { res.write(chunk); } res.end(); }); ``` Let's break this down: - `HashbrownAzure.stream.text` is a function that takes an API Key, an endpoint, and a Hashbrown request object, and returns an async iterable stream of encoded data ready to be sent to your frontend. It handles any internal errors that may occur, and forwards them to your frontend. - `req.body` is the request object that contains the parameters for the chat completion. - `res.header` sets the response header to `application/octet-stream`, which is required for streaming binary data to your app. - `res.write` writes each chunk to the response as it arrives. - `res.end` closes the response when the stream is finished. ## Model Versions Azure requires model versions to be supplied when making a request. To do this, specify the model version in the `model` string when using any React Hashbrown hook or resource: ```ts import { useCompletion } from '@hashbrownai/react'; const { output, isReceiving } = useCompletion({ model: 'gpt-4.1@2025-01-01-preview', input: 'Hello, world!', system: 'You are a helpful assistant.', }); ``` --- # Custom Adapter (React) Hashbrown uses the adapter pattern to support multiple AI providers. While we provide official adapters for popular platforms, you can create custom adapters for any LLM provider that supports streaming chat completions. ## Overview A Hashbrown adapter is a package that implements the streaming interface for a specific AI provider. The adapter: 1. Accepts a standardized request format (`Chat.Api.CompletionCreateParams`) 2. Streams responses as encoded frames (`Uint8Array`) 3. Handles tool calling, structured outputs, and error conditions 4. Uses the provider's native SDK or API ## Core Interfaces ### Request Format ```ts interface CompletionCreateParams { model: KnownModelIds; system: string; messages: Message[]; responseFormat?: object; toolChoice?: 'auto' | 'none' | 'required'; tools?: Tool[]; } ``` ### Response Format Adapters return an async generator that yields encoded frames: ```ts export async function* text(options: CustomAdapterOptions): AsyncIterable ``` ## Implementation Guide ### 1. Create Package Structure ```sh mkdir packages/custom-adapter cd packages/custom-adapter npm init -y ``` ### 2. Define Package Dependencies ```json { "name": "@your-org/custom-adapter", "version": "1.0.0", "dependencies": { "@hashbrownai/core": "^0.3.0", "your-provider-sdk": "^1.0.0" } } ``` ### 3. Implement the Adapter ```ts // src/stream/text.fn.ts import { Chat, encodeFrame, Frame } from '@hashbrownai/core'; import { YourProviderSDK } from 'your-provider-sdk'; export interface CustomAdapterOptions { apiKey: string; baseURL?: string; request: Chat.Api.CompletionCreateParams; transformRequestOptions?: (options: any) => any | Promise; } export async function* text(options: CustomAdapterOptions): AsyncIterable { const { apiKey, baseURL, request, transformRequestOptions } = options; const { messages, model, tools, responseFormat, toolChoice, system } = request; const client = new YourProviderSDK({ apiKey, baseURL }); try { // Transform messages to provider format const providerMessages = transformMessages(messages, system); // Transform tools to provider format const providerTools = tools ? transformTools(tools) : undefined; // Prepare request options const baseOptions = { model: model as string, messages: providerMessages, tools: providerTools, toolChoice, responseFormat, stream: true, }; // Apply transformations if provided const resolvedOptions = transformRequestOptions ? await transformRequestOptions(baseOptions) : baseOptions; // Create streaming request const stream = client.chat.completions.create(resolvedOptions); // Process streaming response for await (const chunk of stream) { const chunkMessage: Chat.Api.CompletionChunk = { choices: chunk.choices.map(choice => ({ index: choice.index, delta: { content: choice.delta.content, role: choice.delta.role, toolCalls: choice.delta.tool_calls, }, finishReason: choice.finish_reason, })), }; const frame: Frame = { type: 'chunk', chunk: chunkMessage, }; yield encodeFrame(frame); } } catch (error: unknown) { const frame: Frame = { type: 'error', error: error instanceof Error ? error.toString() : String(error), stacktrace: error instanceof Error ? error.stack : undefined, }; yield encodeFrame(frame); } finally { const frame: Frame = { type: 'finish', }; yield encodeFrame(frame); } } // Helper functions function transformMessages(messages: Chat.Api.Message[], system: string): any[] { const systemMessage = system ? [{ role: 'system', content: system }] : []; return [ ...systemMessage, ...messages.map(message => { switch (message.role) { case 'user': return { role: message.role, content: message.content }; case 'assistant': return { role: message.role, content: message.content, tool_calls: message.toolCalls, }; case 'tool': return { role: message.role, content: JSON.stringify(message.content), tool_call_id: message.toolCallId, }; default: throw new Error(`Unsupported message role: ${message.role}`); } }), ]; } function transformTools(tools: Chat.Api.Tool[]): any[] { return tools.map(tool => ({ type: 'function', function: { name: tool.name, description: tool.description, parameters: tool.parameters, }, })); } ``` ### 4. Export the Adapter ```ts // src/index.ts import { text } from './stream/text.fn'; export const CustomAdapter = { stream: { text, }, }; ``` ### 5. Add TypeScript Configuration ```json // tsconfig.json { "extends": "../../tsconfig.base.json", "compilerOptions": { "declaration": true, "outDir": "./dist" }, "include": ["src/**/*"], "exclude": ["node_modules", "dist"] } ``` ## Usage Example ```ts import { CustomAdapter } from '@your-org/custom-adapter'; app.post('/chat', async (req, res) => { const stream = CustomAdapter.stream.text({ apiKey: process.env.CUSTOM_API_KEY!, request: req.body, // Chat.Api.CompletionCreateParams }); res.header('Content-Type', 'application/octet-stream'); for await (const chunk of stream) { res.write(chunk); } res.end(); }); ``` ## Key Considerations ### Message Transformation - Convert Hashbrown's message format to your provider's format - Handle all message roles: `user`, `assistant`, `tool` - Include system messages appropriately - Serialize tool call arguments as JSON strings ### Tool Calling - Transform Hashbrown tool definitions to provider format - Handle tool choice options (`auto`, `none`, `required`) - Process tool call deltas in streaming responses - Maintain tool call IDs for proper correlation ### Error Handling - Catch and wrap provider errors - Yield error frames with descriptive messages - Include stack traces when available - Always yield a finish frame ### Streaming - Process chunks as they arrive - Encode each chunk as a frame using `encodeFrame()` - Handle partial responses and deltas - Maintain proper finish reason handling ## Advanced Features ### Transform Request Options Allow users to modify requests before sending: ```ts export interface CustomAdapterOptions { transformRequestOptions?: (options: ProviderRequest) => ProviderRequest | Promise; } ``` ### Custom Configuration Add provider-specific options: ```ts export interface CustomAdapterOptions { apiKey: string; baseURL?: string; temperature?: number; maxTokens?: number; // ... other provider-specific options } ``` ## Testing Your Adapter Create comprehensive tests for your adapter: ```ts // Test basic streaming // Test tool calling // Test error handling // Test message transformation // Test request transformation ``` ## Publishing 1. Build your package: `npm run build` 2. Publish to npm: `npm publish` 3. Users can install: `npm install @your-org/custom-adapter` ## Need Help? If you encounter issues implementing a custom adapter: - Check the [OpenAI adapter](https://github.com/liveloveapp/hashbrown/tree/main/packages/openai) as a reference - Review the [core types](https://github.com/liveloveapp/hashbrown/tree/main/packages/core/src/models) - Open an issue on [GitHub](https://github.com/liveloveapp/hashbrown/issues) for guidance Custom adapters enable Hashbrown to work with any AI provider, making it a truly extensible framework for AI-powered applications. --- # Google Gemini (React) First, install the Google adapter package: ```sh npm install @hashbrownai/google ``` ## Streaming Text Responses Hashbrown’s Google Gemini adapter lets you **stream chat completions** from Google Gemini models, handling function calls, response schemas, and request transforms. ### API Reference #### `HashbrownGoogle.stream.text(options)` Streams a Gemini chat completion as a series of encoded frames. Handles content, tool calls, and errors, and yields each frame as a `Uint8Array`. **Options:** | Name | Type | Description | | ------------------------- | --------------------------------------- | ------------------------------------------------------------------------------ | | `apiKey` | `string` | Your Google Gemini API Key. | | `request` | `Chat.Api.CompletionCreateParams` | The chat request: model, messages, tools, system, responseFormat, etc. | | `transformRequestOptions` | `(params) => params \| Promise` | _(Optional)_ Transform or override the final Gemini request before it is sent. | **Supported Features:** - **Roles:** `user`, `assistant`, `tool`, `error` - **Tools:** Supports tool calling with OpenAPI schemas automatically converted to Gemini format. - **Response Format:** Optionally specify a JSON schema for model output validation. - **System Prompt:** Included as Gemini’s `systemInstruction`. - **Tool Calling:** Handles Gemini’s tool calling modes and emits tool call frames. - **Streaming:** Each chunk/frame is encoded using `@hashbrownai/core`’s `encodeFrame`. ### How It Works - **Messages** are mapped to Gemini's `Content` objects, including tool calls and tool responses. - **Tools/Functions:** Tools are converted to Gemini `FunctionDeclaration` format, including parameter schema conversion via OpenAPI. - **Response Schema:** If you specify `responseFormat`, it's converted and set as `responseSchema` for Gemini. - **Streaming:** All data is sent as a stream of encoded frames (`Uint8Array`). Chunks may contain text, tool calls, errors, or finish signals. - **Error Handling:** Any thrown errors are sent as error frames before the stream ends. ### Example: Using with Express ```ts import { HashbrownGoogle } from '@hashbrownai/google'; import { decodeFrame } from '@hashbrownai/core'; app.post('/chat', async (req, res) => { const stream = HashbrownGoogle.stream.text({ apiKey: process.env.GOOGLE_API_KEY!, request: req.body, // must be Chat.Api.CompletionCreateParams }); res.header('Content-Type', 'application/octet-stream'); for await (const chunk of stream) { res.write(chunk); // Pipe each encoded frame as it arrives } res.end(); }); ``` --- ### Transform Request Options The `transformRequestOptions` parameter allows you to intercept and modify the request before it's sent to Google Gemini. This is useful for server-side prompts, message filtering, logging, and dynamic configuration. ```ts app.post('/chat', async (req, res) => { const stream = HashbrownGoogle.stream.text({ apiKey: process.env.GOOGLE_API_KEY!, request: req.body, transformRequestOptions: (options) => { return { ...options, // Add system instructions for Gemini systemInstruction: { parts: [{ text: 'You are a helpful AI assistant specialized in technical topics.' }] }, // Adjust generation config based on content type generationConfig: { ...options.generationConfig, temperature: req.body.contentType === 'creative' ? 0.8 : 0.2, }, }; }, }); // ... rest of the code }); ``` [Learn more about transformRequestOptions](/docs/react/concept/transform-request-options) --- ### Advanced: Tools and Response Schema - **Tools:** Add tools using OpenAI-style function specs. They will be auto-converted for Gemini. - **Tool Calling:** Supported via Gemini's tool configuration, with control over `auto`, `required`, or `none` modes. - **Response Format:** Pass a JSON schema in `responseFormat` for structured output. --- # Ollama First, install the Ollama adapter package: **Example (terminal):** ```sh npm install @hashbrownai/ollama ``` --- ## `HashbrownOllama.stream.text(options)` Streams an Ollama chat completion as a series of encoded frames. Handles content, tool calls, and errors, and yields each frame as a `Uint8Array`. **Options:** | Name | Type | Description | | -------------- | --------------------------------- | -------------------------------------------------------------------------------------------------- | | `turbo.apiKey` | `string` | _(Optional)_ Use Ollama Turbo by providing an API key. Defaults to local Ollama via `OLLAMA_HOST`. | | `request` | `Chat.Api.CompletionCreateParams` | The chat request: model, messages, tools, system, `responseFormat`, etc. | **Supported Features:** - **Roles:** `user`, `assistant`, `tool` - **Tools:** Function calling with strict function schemas - **Response Format:** Optionally specify a JSON schema in `responseFormat` (forwarded to Ollama `format`) - **System Prompt:** Included as the first message if provided - **Streaming:** Each chunk is encoded into a resilient streaming format - **Local or Turbo:** Connects to local Ollama by default; set `turbo.apiKey` to use Ollama Turbo --- ## How It Works - **Messages:** Translated to Ollama’s message format, supporting `user`, `assistant`, and `tool` roles. Tool results are stringified as tool messages. - **Tools/Functions:** Tools are passed as function definitions with `name`, `description`, and JSON Schema `parameters` (`strict: true`). - **Response Format:** Pass a JSON schema in `responseFormat`; forwarded to Ollama as `format` for structured output. - **Streaming:** All data is sent as a stream of encoded frames (`Uint8Array`). Chunks may contain text, tool calls, errors, or finish signals. - **Client Selection:** - Default: local Ollama via the `ollama` Node client (honors `OLLAMA_HOST`) - Turbo: set `turbo.apiKey` to route via Turbo - **Error Handling:** Any thrown errors are sent as error frames before the stream ends. --- ## Example Using with Express ```ts import { HashbrownOllama } from '@hashbrownai/ollama'; app.post('/chat', async (req, res) => { const stream = HashbrownOllama.stream.text({ // Optional: use Ollama Turbo // turbo: { apiKey: process.env.OLLAMA_API_KEY! }, request: req.body, // must be Chat.Api.CompletionCreateParams }); res.header('Content-Type', 'application/octet-stream'); for await (const chunk of stream) { res.write(chunk); // Pipe each encoded frame as it arrives } res.end(); }); ``` --- ## Advanced: Tools, Function Calling, and Response Schema - **Tools:** Add tools using function specs (name, description, parameters as JSON Schema). The adapter forwards them to Ollama with `strict` mode enabled. - **Function Calling:** Ollama can return `tool_calls` which are streamed as frames; execute your tool and continue the conversation by sending a `tool` message. - **Response Format:** Pass a JSON schema in `responseFormat` to request validated structured output from models that support it. --- # OpenAI (React) First, install the OpenAI adapter package: ```sh npm install @hashbrownai/openai ``` ## Streaming Text Responses Hashbrown’s OpenAI adapter lets you **stream chat completions** from OpenAI’s GPT models, including support for tool calling, response schemas, and request transforms. ### API Reference #### `HashbrownOpenAI.stream.text(options)` Streams an OpenAI chat completion as a series of encoded frames. Handles content, tool calls, and errors, and yields each frame as a `Uint8Array`. **Options:** | Name | Type | Description | | ------------------------- | --------------------------------------- | ------------------------------------------------------------------------------ | | `apiKey` | `string` | Your OpenAI API Key. | | `request` | `Chat.Api.CompletionCreateParams` | The chat request: model, messages, tools, system, responseFormat, etc. | | `transformRequestOptions` | `(params) => params \| Promise` | _(Optional)_ Transform or override the final OpenAI request before it is sent. | **Supported Features:** - **Roles:** `user`, `assistant`, `tool` - **Tools:** Supports OpenAI tool calling, including `toolCalls` and strict function schemas. - **Response Format:** Optionally specify a JSON schema for structured output (uses OpenAI’s `response_format` parameter). - **System Prompt:** Included as the first message if provided. - **Tool Calling:** Handles OpenAI tool calling modes and emits tool call frames. - **Streaming:** Each chunk is encoded into a resilient streaming format ### How It Works - **Messages:** Translated to OpenAI’s message format, supporting all roles and tool calls. - **Tools/Functions:** Tools are passed as OpenAI function definitions, using your JSON schemas as `parameters`. - **Response Format:** Pass a JSON schema in `responseFormat` for OpenAI to validate the model output. - **Streaming:** All data is sent as a stream of encoded frames (`Uint8Array`). Chunks may contain text, tool calls, errors, or finish signals. - **Error Handling:** Any thrown errors are sent as error frames before the stream ends. ### Example: Using with Express ```ts import { HashbrownOpenAI } from '@hashbrownai/openai'; app.post('/chat', async (req, res) => { const stream = HashbrownOpenAI.stream.text({ apiKey: process.env.OPENAI_API_KEY!, request: req.body, // must be Chat.Api.CompletionCreateParams }); res.header('Content-Type', 'application/octet-stream'); for await (const chunk of stream) { res.write(chunk); // Pipe each encoded frame as it arrives } res.end(); }); ``` --- ### Transform Request Options The `transformRequestOptions` parameter allows you to intercept and modify the request before it's sent to OpenAI. This is useful for server-side prompts, message filtering, logging, and dynamic configuration. ```ts app.post('/chat', async (req, res) => { const stream = HashbrownOpenAI.stream.text({ apiKey: process.env.OPENAI_API_KEY!, request: req.body, transformRequestOptions: (options) => { return { ...options, // Add server-side system prompt messages: [ { role: 'system', content: 'You are a helpful assistant.' }, ...options.messages, ], // Adjust temperature based on user preferences temperature: getUserPreferences(req.user.id).creativity, }; }, }); // ... rest of the code }); ``` [Learn more about transformRequestOptions](/docs/react/concept/transform-request-options) --- ### Advanced: Tools and Response Schema - **Tools:** Add tools using OpenAI-style function specs (name, description, parameters). - **Tool Calling:** Supported via `toolChoice` (`auto`, `required`, `none`, etc.). - **Response Format:** Pass a JSON schema in `responseFormat` for OpenAI to return validated structured output. --- # Writer (React) First, install the Writer adapter package: ```sh npm install @hashbrownai/writer ``` ## Streaming Text Responses Hashbrown’s Writer adapter lets you **stream chat completions** from Writer models, including support for tool calling, response schemas, and request transforms. ### API Reference #### `HashbrownWriter.stream.text(options)` Streams a Writer chat completion as a series of encoded frames. Handles content, tool calls, and errors, and yields each frame as a `Uint8Array`. **Options:** | Name | Type | Description | | ------------------------- | --------------------------------------- | ------------------------------------------------------------------------------ | | `apiKey` | `string` | Your Writer API Key. | | `request` | `Chat.Api.CompletionCreateParams` | The chat request: model, messages, tools, system, responseFormat, etc. | | `transformRequestOptions` | `(params) => params \| Promise` | _(Optional)_ Transform or override the final Writer request before it is sent. | **Supported Features:** - **Roles:** `user`, `assistant`, `tool` - **Tools:** Supports Writer tool calling, including `toolCalls` and strict function schemas. - **Response Format:** Optionally specify a JSON schema for structured output (Writer’s `response_format` parameter). - **System Prompt:** Included as the first message if provided. - **Tool Calling:** Handles Writer tool calling modes and emits tool call frames. - **Streaming:** Each chunk/frame is encoded into a resilient streaming format. ### How It Works - **Messages:** Translated to Writer’s message format, supporting all roles and tool calls. - **Tools/Functions:** Tools are passed as function definitions, using your JSON schemas as `parameters`. - **Response Format:** Pass a JSON schema in `responseFormat` for Writer to validate the model output. - **Streaming:** All data is sent as a stream of encoded frames (`Uint8Array`). Chunks may contain text, tool calls, errors, or finish signals. - **Error Handling:** Any thrown errors are sent as error frames before the stream ends. ### Example: Using with Express ```ts import { HashbrownWriter } from '@hashbrownai/writer'; import { decodeFrame } from '@hashbrownai/core'; app.post('/chat', async (req, res) => { const stream = HashbrownWriter.stream.text({ apiKey: process.env.WRITER_API_KEY!, request: req.body, // must be Chat.Api.CompletionCreateParams }); res.header('Content-Type', 'application/octet-stream'); for await (const chunk of stream) { res.write(chunk); // Pipe each encoded frame as it arrives } res.end(); }); ``` --- ### Transform Request Options The `transformRequestOptions` parameter allows you to intercept and modify the request before it's sent to Writer. This is useful for server-side prompts, message filtering, logging, and dynamic configuration. ```ts app.post('/chat', async (req, res) => { const stream = HashbrownWriter.stream.text({ apiKey: process.env.WRITER_API_KEY!, request: req.body, transformRequestOptions: (options) => { return { ...options, // Add server-side system prompt messages: [ { role: 'system', content: 'You are a helpful AI writing assistant.' }, ...options.messages, ], // Adjust parameters based on writing task temperature: req.body.taskType === 'creative' ? 0.8 : 0.3, }; }, }); // ... rest of the code }); ``` [Learn more about transformRequestOptions](/docs/react/concept/transform-request-options) --- ### Advanced: Tools and Response Schema - **Tools:** Add tools using function specs (name, description, parameters) compatible with Writer. - **Tool Calling:** Supported via `toolChoice` (`auto`, `required`, `none`, etc.). - **Response Format:** Pass a JSON schema in `responseFormat` for Writer to return validated structured output. --- # Converting Natural Language to Structured Data This recipe guides you through replacing complex form controls with natural language inputs. It leverages large-language models to parse your user's natural language and convert it into structured data. This recipe also covers error handling strategies. You should be comfortable with: - React functional components and hooks - Basic Hashbrown setup (see **[Quick Start](/docs/react/start/quick)**) - TypeScript --- ## 1. The legacy form (Expense report) To make the example concrete, we will modernise an **Expense Report** form that collects: - Amount (number) - Currency (select with options resolved from `/api/currencies`) - Category (select with options resolved from `/api/expense-categories`) - Date (date picker) - Description (text area) **Example (ExpenseForm.ts):** ```tsx interface ExpenseCategory { id: string; name: string; } interface Currency { code: string; symbol: string; } export function ExpenseForm() { const [amount, setAmount] = useState(''); const [currency, setCurrency] = useState(''); const [categoryId, setCategoryId] = useState(''); const [date, setDate] = useState(''); const [description, setDescription] = useState(''); const [currencies, setCurrencies] = useState([]); const [categories, setCategories] = useState([]); useEffect(() => { fetch('/api/currencies') .then((r) => r.json()) .then(setCurrencies); fetch('/api/expense-categories') .then((r) => r.json()) .then(setCategories); }, []); const handleSubmit = (e: FormEvent) => { e.preventDefault(); // submit to backend … }; return (
setAmount(e.target.value)} placeholder="Amount" /> setDate(e.target.value)} />
`, }) export class ExpenseFormComponent { amount = signal(''); currency = signal(''); categoryId = signal(''); date = signal(''); description = signal(''); currencies = signal([]); categories = signal([]); constructor() { this.loadLookups(); } private async loadLookups() { const [currenciesRes, categoriesRes] = await Promise.all([ fetch('/api/currencies'), fetch('/api/expense-categories'), ]); const [currencies, categories] = await Promise.all([ currenciesRes.json(), categoriesRes.json(), ]); this.currencies.set(currencies); this.categories.set(categories); } onSubmit(e: SubmitEvent) { e.preventDefault(); // submit to backend... } } ``` Even with good UX, the form requires **five** separate inputs, multiple data fetches, and validation handling. Now imagine localizing all of that and making it accessible. --- ## 2. Goal: one "smart" text box Users should be able to type: > "Team lunch in NYC, $42 USD on May 3rd" or: > "Almuerzo de equipo en Bogotá, 42 USD, categoría Comidas, el 3 de mayo" or: > "२७०० रुपये, प्रवास श्रेणी, १६ जून, बेंगळुरू ते पुणे विमान" and the application should produce the same structured object the backend already expects. We will get there with @hashbrownai/angular!structuredCompletionResource:function and one helper tool. --- ## 3. Describe the result schema (success **or** error) @hashbrownai/angular!structuredCompletionResource:function needs a Skillet schema. We want two possible outcomes: 1. A parsed expense: When the LLM can infer everything, it should emit an expense with `type = "Expense"`. 2. An error: If the LLM cannot infer something, it should emit a `type = "ParseError"` with a helpful message. Using `s.anyOf` with literal discriminators makes handling each case trivial in Angular: **Example (expense-schema.ts):** ```ts import { s } from '@hashbrownai/core'; export const ExpenseResultSuccessSchema = s.object('Parsed expense', { type: s.literal('Expense'), amount: s.number('The amount of the expense'), currency: s.string('ISO 4217 currency code, e.g. USD'), categoryId: s.string('ID of the chosen category'), date: s.string('ISO date YYYY-MM-DD'), description: s.string('Short description'), }); export type ExpenseResultSuccess = s.infer; export const ExpenseResultParseErrorSchema = s.object('Unable to parse', { type: s.literal('ParseError'), message: s.string('Human readable error'), }); export type ExpenseResultParseError = s.infer< typeof ExpenseResultParseErrorSchema >; export const ExpenseResultSchema = s.anyOf([ ExpenseResultSuccessSchema, // Success branch ExpenseResultParseErrorSchema, // Error branch ]); ``` 1. Describe each case with its own schema 2. _Tag_ each case using `s.literal()` so you can discriminate on the result later 3. Use `s.infer` to produce TypeScript types from your schema 4. Join branches with `s.anyOf` --- ## 4. Expose a tool so the LLM can see valid categories The LLM must map free-form category names (e.g. "Comidas") onto the canonical IDs used by your backend. Create a tool that returns the list of category names with their ID: **Example (category-tool.ts):** ```ts import { createTool } from '@hashbrownai/angular'; export const listExpenseCategories = createTool({ name: 'listExpenseCategories', description: 'List valid expense categories the user can pick from', async handler(abortSignal) { const res = await fetch('/api/expense-categories', { signal: abortSignal }); const categories = await res.json(); return categories.map((category: { id: string; name: string }) => ({ id: category.id, name: category.name, })); }, }); ``` 1. Use @hashbrownai/angular!createTool:function to expose the `listExpenseCategories` tool that the LLM can execute to follow instructions and respond to prompts. 2. The `handler` function receives an `AbortSignal` to potentially abort the fetch request. 3. Return the smallest shape necessary to guide the model (avoid verbose, unrelated data). --- ## 5. The natural‑language component **Example (expense-nl.component.ts):** ```ts import { Component, effect, output, signal } from '@angular/core'; import { CommonModule } from '@angular/common'; import { structuredCompletionResource } from '@hashbrownai/angular'; import { ExpenseResultSchema, ExpenseResultSuccess } from './expense-schema'; import { listExpenseCategories } from './category-tool'; @Component({ selector: 'app-expense-nl', imports: [CommonModule], template: `
`, }) export class ExpenseNlComponent { // Emits the parsed expense to the parent when successful expenseParsed = output(); inputText = signal(''); private requestInput = signal(null); completion = structuredCompletionResource({ model: 'gpt-4.1', debugName: 'expense-nl', input: this.requestInput, tools: [listExpenseCategories], system: ` You are an accounting assistant. Convert the user's natural language statement into a JSON object that matches the provided schema. * The user can speak in any language. Detect and handle it. * Call the "listExpenseCategories" tool to pick a category ID that corresponds to the user's wording. * If anything is missing or ambiguous, return a ParseError instead. `, schema: ExpenseResultSchema, }); constructor() { effect(() => { const result = this.completion.value(); if (!result) return; if (result.type === 'Expense') { this.expenseParsed.emit(result as ExpenseResultSuccess); this.inputText.set(''); this.requestInput.set(null); } else { alert(result.message); } }); } onSubmit(e: SubmitEvent) { e.preventDefault(); // Trigger a new completion run with the current input this.requestInput.set(this.inputText()); } } ``` Key points 1. `inputText` is a **single** free‑text field bound to a signal. 2. The `system` prompt instructs the model how to behave and when to emit an error branch. 3. The categories tool is discoverable by the LLM; it can look up valid IDs on demand. 4. The root schema is `anyOf`, discriminated by `type` literals, making client logic trivial. 5. No client‑side translation code is needed. Localisation falls out naturally. --- ## 6. Calling the same backend Your API already expects the object produced by the old form, so no server changes are required: ```ts app.post('/api/expenses', (req, res) => { // payload looks identical to legacy form submission }); ``` If you are not ready to remove the form, ship both components side‑by‑side, or use the completion to fill out the existing form. --- ## 7. Progressive enhancement tips - Fallback: Use the success/error split to fallback to the old form when the model cannot parse. - `debugName`: Set `debugName` to make signal names readable in logs and to aid debugging. - Confirm: Show the parsed result to the user before submitting so they can make corrections. --- ## Recap 1. Identify the shape of your structured data. 2. Model success and error with `s.anyOf` + `s.literal` tags. 3. Expose API look‑ups as tools so the LLM can stay in sync with your backend. 4. Swap multi‑step UI for one natural‑language input using `structuredCompletionResource`. 5. Enjoy happier users and effortless internationalisation. --- # Building Predictive Suggestions and Shortcuts Using Angular Use Hashbrown structured outputs to suggest a user's next action in your app. 1. Predict likely next steps based on recent user actions and current app state 2. Stream suggestions as they are generated 3. Allow users to accept or dismiss --- ## How it Works 1. Define a schema of predictive actions. 2. Provide tools the model can call to read the current app state. 3. Create a @hashbrownai/angular!structuredCompletionResource:function that streams an array of suggestions. 4. Render suggestions with Angular's native control flow. 5. If the user accepts the suggestion then dispatch the corresponding action. --- ## Before you start **Prerequisites:** - Familiarity with Angular and modern component syntax (signals, standalone components) - Angular 20 or higher, with [standalone components and the Resources API](https://angular.dev) - A working Hashbrown setup ([Hashbrown Quick Start](/docs/angular/start/quick)) - Install dependencies: **Example (terminal):** ```sh npm install @hashbrownai/angular @hashbrownai/core @hashbrownai/openai ngx-markdown ``` We'll use `gpt-5` as our model and the OpenAI Hashbrown adapter, but you can use any supported provider. --- ## 1: Define a Prediction Schema Define the response format schema using Skillet. **Example (schema):** ```ts import { s } from '@hashbrownai/core'; export const PREDICTIONS_SCHEMA = s.anyOf([ s.object('Suggest adding a light to the system', { type: s.literal('Add Light'), name: s.string('The suggested name of the light'), brightness: s.integer('A number between 0-100'), reason: s.string('Reason for suggestion'), confidence: s.number('Confidence score between 0 and 1'), }), s.object('Suggest adding a scene to the system', { type: s.literal('Add Scene'), name: s.string('The suggested name of the scene'), lights: s.array( 'The lights in the scene', s.object('A light in the scene', { lightId: s.string('The ID of the light'), brightness: s.integer('A number between 0-100'), }), ), reason: s.string('Reason for suggestion'), confidence: s.number('Confidence score between 0 and 1'), }), s.object('Suggest scheduling a scene to the system', { type: s.literal('Schedule Scene'), sceneId: s.string('The ID of the scene'), datetime: s.string('The datetime of the scene'), reason: s.string('Reason for suggestion'), confidence: s.number('Confidence score between 0 and 1'), }), s.object('Suggest adding a light to a scene', { type: s.literal('Add Light to Scene'), lightId: s.string('The ID of the light'), sceneId: s.string('The ID of the scene'), brightness: s.integer('A number between 0-100'), reason: s.string('Reason for suggestion'), confidence: s.number('Confidence score between 0 and 1'), }), s.object('Suggest removing a light from a scene', { type: s.literal('Remove Light from Scene'), lightId: s.string('The ID of the light'), sceneId: s.string('The ID of the scene'), reason: s.string('Reason for suggestion'), confidence: s.number('Confidence score between 0 and 1'), }), ]); ``` This schema will be used to validate and structure the model's output. --- ## 2: Create a Streaming Predictions Resource Create a new resource using the @hashbrownai/angular!structuredCompletionResource:function function to stream an array of predictions. **Example (predictions):** ```ts import { s } from '@hashbrownai/core'; import { structuredCompletionResource, createTool } from '@hashbrownai/angular'; @Component({ selector: 'app-predictions', standalone: true, }) export class PredictionsComponent { private store = inject(Store); private smartHome = inject(SmartHomeService); // 1. Define a signal whose value is the last user action lastAction = this.store.selectSignal(selectLastUserAction); // 2. Create the predictions resource predictions = structuredCompletionResource({ model: 'gpt-5', // 3. The predictions resource will re-compute when the `input` signal value is updated input: this.lastAction, // 4. Provide clear instructions, rules, and examples in the system prompt system: ` You are an AI smart home assistant tasked with predicting the next possible user action. ## Instructions - Include a concise reason for each suggestion. - Provide a confidence score between 0 and 1. - Avoid duplicates; check current lights and scenes before offering suggestions. - Returning an empty array is valid. - You may return multiple predictions. ## Examples - Provide a few examples `, // 5. Provide the model with context of the application state using tool calling tools: [ createTool({ name: 'getLights', description: 'Get all lights in the smart home', handler: async () => await this.smartHome.loadLights(), }), createTool({ name: 'getScenes', description: 'Get all scenes in the smart home', handler: async () => await this.smartHome.loadScenes(), }), ], // 6. Specify the structured output response format schema: s.object('The result', { predictions: s.streaming.array('The predictions', PREDICTIONS_SCHEMA), }), }); // 7. Derive a simple array from the resource value output = linkedSignal({ source: this.predictions.value, computation: (source): s.Infer[] => source?.predictions ?? [], }); removePrediction(index: number) { this.output.update((predictions) => { predictions.splice(index, 1); return [...predictions]; }); } addLight(index: number, light: { name: string; brightness: number }) { this.removePrediction(index); this.store.dispatch(PredictionsAiActions.addLight({ light })); } addScene(index: number, scene: { name: string; lights: any[] }) { this.removePrediction(index); this.store.dispatch(PredictionsAiActions.addScene({ scene })); } } ``` In this example: 1. We define a signal `lastAction` that represents the most recent user action. 2. We create a @hashbrownai/angular!structuredCompletionResource:function named `predictions` that uses the `gpt-5` model. 3. The resource re-computes whenever the `input` signal value is updated. 4. We provide a detailed system prompt with instructions and examples to guide the model. 5. We include tools (`getLights` and `getScenes`) with async handlers to give the model context about the current state of the smart home. 6. We specify the output schema using the previously defined `PREDICTIONS_SCHEMA`, allowing for streaming arrays of predictions. 7. Finally, we derive a simple array `output` from the resource value for easy rendering in the UI. 8. We also define methods for removing and accepting predictions. --- ## 3: Show the Suggestions Display the suggestion cards as an overlay to the user. The user can then choose to accept or dismiss the suggestion. **Example (overlay suggestions):** ```ts @Component({ standalone: true, template: ` @for (prediction of output(); track $index) {

Suggestion: {{ prediction.type }} - {{ prediction.reason }} (Confidence: {{ prediction.confidence * 100 | number: '1.0-0' }}%)

@switch (prediction.type) { @case ('Add Light') {

Add Light "{{ prediction.name }}" with brightness {{ prediction.brightness }}

} @case ('Add Scene') {

Add Scene "{{ prediction.name }}" with {{ prediction.lights.length }} lights

} @default { } } } `, }) export class PredictionsComponent {} ``` In this example: 1. We use Angular's native control flow syntax to iterate over the `output` array of predictions. 2. For each prediction, we display the type, reason, and confidence. 3. We use a switch statement to render different UI elements based on the prediction type. 4. Each prediction card includes "Dismiss" and "Accept" buttons, allowing users to interact with the suggestions. --- ## 4: Guardrails & UX Patterns To ensure a good user experience and prevent unwanted actions: - **Confidence Threshold:** Only show suggestions with a confidence score above a certain threshold (e.g., 0.7). - **Duplicate Prevention:** Use the model's instructions and your own logic to avoid suggesting actions that duplicate existing state. - **User Control:** Always allow users to dismiss suggestions easily. - **Explainability:** Provide reasons for suggestions to build trust. - **Rate Limiting:** Limit how often suggestions appear to avoid overwhelming users. - **Fallbacks:** Handle empty or invalid predictions gracefully. --- ## Next Steps - [Get structured data from models](https://hashbrown.dev/docs/angular/concept/structured-output) — Use Skillet schema to describe model responses. - [Tool Calling](https://hashbrown.dev/docs/angular/concept/functions) — Provide callback functions to the LLM. --- # Remote Model Context Protocol (MCP) Expose remote MCP servers to models for better task following and responses. - Model Context Protocol (MCP) enables the model to use tool calling with remote servers to complete a task or to better respond to user messages. - Easily integrate remote MCP servers with client-side tool calling. --- ## How it Works 1. Use the `@modelcontextprotocol/sdk` library to create an MCP client. 2. Hashbrown supports both server-sent events (SSE) and streamable HTTP. We recommend the newer streamable HTTP protocol. 3. Connect the client to the remote MCP server. 4. Fetch the list the available tools from the remote MCP server. 5. Map the remote tools to Hashbrown tools using Hashbrown's `createTool()` function. 6. Provide the remote tools from one or more remote MCP servers alongside client-side tools to the model. 7. The model will choose if and when to use a provided tool. --- ## 1: MCP Server The first step is to either build or consume a remote MCP server. In most cases, you'll be using a remote MCP server from a third-party. For the purpose of clarity, we'll briefly walk through creating an MCP server. **Example (MCP server):** ```ts import { HashbrownOpenAI } from '@hashbrownai/openai'; import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js'; import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js'; // 1. Create an express server const app = express(); app.use( cors({ origin: '*', exposedHeaders: ['Mcp-Session-Id'], allowedHeaders: ['*'], }), ); app.use( express.json({ limit: '30mb', }), ); // 2. Define the response deserializer class UnhealthyResponseDeserializer implements IResponseDeserializer { async deserialize(response: Response): Promise { const text = await response.text(); if (text.length > 0) { try { const json = JSON.parse(text) as T; return json; } catch (e: any) { console.error(e); } } return null as T; } } // 3. Define helper function to decode the bearer token function getAccessToken(context: any): string { // check for auth token on request headers const authToken = context.requestInfo.headers['authorization']; if (!authToken) { throw new Error('No authorization token provided'); } // decode the token const decoded = decodeURIComponent(authToken.split(' ')[1]); return decoded; } // 4. Create a remote MCP server const mcpServer = new McpServer({ name: 'spotify', version: '1.0.0', description: 'Spotify server to list devices, search songs, and queue songs', }); // 5. Register a tool mcpServer.registerTool( 'search', { title: 'search', description: 'Search tracks, artists or albums on Spotify', inputSchema: { query: z.string().describe('Search keywords'), type: z.enum(['track', 'artist', 'album']).optional(), }, }, async ({ query, type = 'track' }, context) => { const accessToken = getAccessToken(context); // TODO: integrate with spotify to search for the track }, ); // 6. Configure the transport const transports: Record = {}; function ensureTransport(sessionId: string) { if (transports[sessionId]) return transports[sessionId]; const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: () => sessionId, }); transports[sessionId] = transport; // async – never await here or you'll block the first HTTP request mcpServer.connect(transport).catch(console.error); transport.onclose = () => delete transports[sessionId]; return transport; } // 7. Client → Server (JSON-RPC over HTTP POST) app.post('/mcp', async (req, res) => { const sessionId = (req.headers['mcp-session-id'] as string) ?? randomUUID(); res.setHeader('Mcp-Session-Id', sessionId); const transport = ensureTransport(sessionId); await transport.handleRequest(req, res, req.body); }); // 8. Server → Client notifications app.get('/mcp', async (req, res) => { const sessionId = (req.headers['mcp-session-id'] as string) ?? randomUUID(); res.setHeader('Mcp-Session-Id', sessionId); const transport = ensureTransport(sessionId); await transport.handleRequest(req, res); }); // 9. Disconnect app.delete('/mcp', async (req, res) => { const sessionId = req.headers['mcp-session-id'] as string; if (sessionId && transports[sessionId]) { await transports[sessionId].close(); } res.sendStatus(204); }); // 10. Hashbrown adapter for OpenAI app.post('/chat', async (req, res) => { const stream = HashbrownOpenAI.stream.text({ apiKey: process.env.OPENAI_API_KEY!, request: req.body, }); res.header('Content-Type', 'application/octet-stream'); for await (const chunk of stream) { res.write(chunk); } res.end(); }); app.listen(port, host, () => { console.log(`[ ready ] http://localhost:3001`); }); ``` 1. Create an MCP server using the `@modelcontextprotocol/sdk/server` library. 2. Register tools with the MCP server using the `registerTool()` method. 3. Use the `StreamableHTTPServerTransport` to handle requests and responses. 4. Use the `McpServer` to handle incoming requests and send responses. 5. Optionally, create a Hashbrown adapter for OpenAI to handle chat requests. 6. Run the express server and listen for incoming requests. --- ## 2: MCP Client **Example (MCP client):** ```ts import { Client } from '@modelcontextprotocol/sdk/client/index.js'; import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js'; // 1. Create the MCP client client = new Client({ name: 'spotify', version: '1.0.0', title: 'Spotify', }); // 2. Connect to the remote MCP server await client.connect( new StreamableHTTPClientTransport(new URL('http://localhost:3001/mcp'), { requestInit: { headers: { Authorization: `Bearer ${encodeURIComponent( JSON.stringify(accessToken()), )}`, }, }, }), ); ``` 1. Import the necessary libraries from `@modelcontextprotocol/sdk/client`. 2. Create a new `Client` instance with the MCP server's name, version, and title. 3. Connect to the remote MCP server using the `StreamableHTTPClientTransport`. 4. Use the `connect()` method to establish a connection to the MCP server, passing the server URL and any necessary request headers (e.g., authorization token). 5. You can now use the `client` to interact with the remote MCP server and its tools. --- ## 3. Using `createTool()` **Example (create tools):** ```ts import type { Chat } from '@hashbrownai/core'; // 1. Fetch the remote MCP tools const { tools: mcpTools } = await client.listTools(); // 2. Use `createTool()` to create remote MCP tools const tools: Chat.AnyTool[] = mcpTools.map((tool) => { return runInInjectionContext(this.injector, () => { return createTool({ name: tool.name, description: tool.description ?? '', schema: { ...tool.inputSchema, additionalProperties: false, required: Object.keys(tool.inputSchema.properties ?? []), }, handler: async (input) => { const result = await this.client?.callTool({ name: tool.name, arguments: input, }); return result; }, }); }); }); ``` 1. Fetch the list of tools from the remote MCP server using `client.listTools()`. 2. Use `createTool()` to create a Hashbrown tool for each remote MCP tool. 3. Implement the `handler` function to call the remote MCP tool using `client.callTool()`. 4. The `handler` function will be executed when the model calls the tool, allowing you to interact with the remote MCP server and retrieve the tool's result. --- ## 4: Provide Remote Tools to the Model **Example (provide tools):** ```ts uiChatResource({ tools: [ ...tools, createTool({ name: 'getUser', description: 'Get information about the current user', handler: () => { const authService = inject(AuthService); return authService.getUser(); }, }), ], }); ``` 1. Use the `uiChatResource()` function to provide the tools to the model. 2. Combine the remote MCP tools with any local tools you want to provide. 3. The model will now have access to both remote and local tools, allowing it to choose the appropriate tool for a given task. --- ## Next Steps - [Tool Calling](https://hashbrown.dev/docs/angular/concept/functions) — Provide callback functions to the LLM. - [Generate user interfaces](https://hashbrown.dev/docs/angular/concept/components) — Expose Angular components to the LLM for generative UI. - [Execute LLM-generated JS in the browser (safely)](https://hashbrown.dev/docs/angular/concept/runtime) — Use Hashbrown's JavaScript runtime for complex and mathematical operations. --- # Building a Chatbot with Generative UI and Tool Calling This step-by-step guide will walk you through building a conversational Smart Home chatbot using Hashbrown's @hashbrownai/angular!uiChatResource:function. Our assistant will: - Let users control and view smart home lights and scenes via natural language - Enable the LLM to call tools for fetching and controlling devices - Let the LLM generate interactive UI using Angular components (with real-time streaming) We will expose only the components and actions we trust, so the assistant can only act within safe boundaries defined by your app. --- ## Before you start **Prerequisites:** - Familiarity with Angular and modern component syntax (signals, standalone components) - Angular 20 or higher, with [standalone components and the Resources API](https://angular.dev) - A working Hashbrown setup ([Hashbrown Quick Start](/docs/angular/start/quick)) - Install dependencies: **Example (terminal):** ```sh npm install @hashbrownai/angular @hashbrownai/core @hashbrownai/openai ngx-markdown ``` We'll use `gpt-4.1` as our model and the OpenAI Hashbrown adapter, but you can use any supported provider. --- ## 1. Set Up Smart Home Service First, define basic types and a minimal SmartHome service in your app for lights and scenes. **Example (smart-home.service.ts):** ```ts import { Injectable, signal } from '@angular/core'; export interface Light { id: string; name: string; brightness: number; } export interface Scene { id: string; name: string; lights: { lightId: string; brightness: number }[]; } @Injectable({ providedIn: 'root' }) export class SmartHome { private readonly _lights = signal([ { id: 'living', name: 'Living Room', brightness: 80 }, { id: 'bedroom', name: 'Bedroom', brightness: 40 }, { id: 'kitchen', name: 'Kitchen', brightness: 100 }, ]); private readonly _scenes = signal([ { id: 'relax', name: 'Relax Mode', lights: [ { lightId: 'living', brightness: 30 }, { lightId: 'bedroom', brightness: 10 }, ], }, { id: 'party', name: 'Party Mode', lights: [ { lightId: 'living', brightness: 100 }, { lightId: 'kitchen', brightness: 100 }, ], }, ]); readonly lights = this._lights.asReadonly(); readonly scenes = this._scenes.asReadonly(); setLightBrightness(lightId: string, brightness: number) { this._lights.update((lights) => lights.map((l) => (l.id === lightId ? { ...l, brightness } : l)), ); } applyScene(sceneId: string) { const scene = this._scenes().find((s) => s.id === sceneId); if (scene) for (const { lightId, brightness } of scene.lights) { this.setLightBrightness(lightId, brightness); } } } ``` We are going to expose this service to a large-language model, letting it call these methods to read the state of the smart home, control lights, and apply scenes. --- ## 2. Define Smart Home Tools Tools are how we expose app services to the model. A tool is simply an async function that runs in Angular's dependency injection context, letting you expose any kind of service to the LLM. We are going to use tools to let LLMs fetch device data and perform control actions. Let's start with a simple tool that lets the LLM get the list of lights and scenes: **Example (tools.ts):** ```ts import { inject } from '@angular/core'; import { createTool } from '@hashbrownai/angular'; import { SmartHome } from './smart-home.service'; export const getLights = createTool({ name: 'getLights', description: 'Get all lights and their current state', handler: () => { const smartHome = inject(SmartHome); return smartHome.lights(); }, }); export const getScenes = createTool({ name: 'getScenes', description: 'Get all available scenes', handler: () => { const smartHome = inject(SmartHome); return smartHome.scenes(); }, }); ``` Let's break down @hashbrownai/angular!createTool:function: 1. `name` - A `camelCase` or `snake_case` string that serves as the _name_ of the tool. 2. `description` - A clear, natural-language description of what purpose the tool serves. The LLM will use this description to determine when the tool should be called. 3. `handler` - An async function that runs in Angular's dependency injection context. We use it to inject the services we want to call, returning any data that we want to feed into the LLM's context. It is important to note that all of the returned data will be in the context, and you pay for context both in terms of _token cost_ and _compute_. Be intentional with the data you return from tool calls. Tools can accept arguments, which the LLM will generate as part of its tool call. In Hashbrown, tool call arguments are defined using Skillet for the schema: **Example (tools.ts):** ```ts import { s } from '@hashbrownai/core'; export const controlLight = createTool({ name: 'controlLight', description: 'Set the brightness of a light', schema: s.object('Control light input', { lightId: s.string('The id of the light'), brightness: s.number('The new brightness (0-100)'), }), handler: ({ lightId, brightness }) => { inject(SmartHome).setLightBrightness(lightId, brightness); return { success: true }; }, }); export const controlScene = createTool({ name: 'controlScene', description: 'Apply a scene (adjust all lights in the scene)', schema: s.object('Control scene input', { sceneId: s.string('The id of the scene'), }), handler: ({ sceneId }) => { inject(SmartHome).applyScene(sceneId); return { success: true }; }, }); ``` **How Skillet helps:** Skillet schemas (`s.object`, `s.string`, etc.) define arguments/outputs for tool calling, and make the expected contract transparent to the LLM (and typesafe for you). Skillet is Hashbrown's secret sauce for generative, safe, and streamable UI. Each part of the schema requires a description, encouraging you to be explicit and clear with the LLM about the data structure you are asking it to generate. --- ## 3. Create Angular UI Components With tools, the LLM will be able to call the Angular services we've exposed to it. Now, let's give it a set of Angular components to render the results. We will expose only **the components we want the LLM to use**. The LLM cannot render anything other than the components you expose. ### 3.1. Markdown Renderer Again, the LLM can only generate UIs using the components you provide it. Because of this constraint, first we need to give the LLM some way to render basic text responses to the user. Let's create a Markdown component that wraps `ngx-markdown` **Example (app-markdown.ts):** ```ts import { Component, signal, input } from '@angular/core'; import { MarkdownModule } from 'ngx-markdown'; @Component({ selector: 'app-markdown', imports: [MarkdownModule], template: ``, }) export class Markdown { readonly content = input.required(); } ``` ### 3.2. Card Component Next, let's make a Card component that it can use to show cards with child content: **Example (app-card.ts):** ```ts import { Component, input } from '@angular/core'; @Component({ selector: 'app-card', template: `

{{ title() }}

`, }) export class Card { readonly title = input.required(); } ``` ### 3.3. Light List Item A way to show a single light (often as a child of a card): **Example (app-light-list-item.ts):** ```ts import { Component, inject, input, computed } from '@angular/core'; import { SmartHome } from './smart-home.service'; @Component({ selector: 'app-light-list-item', template: ` @let light = light(); @if (light) { 💡 {{ light.name }} — {{ light.brightness }}% } else { Unknown light: {{ lightId() }} } `, }) export class LightListItem { private smartHome = inject(SmartHome); readonly lightId = input.required(); readonly light = computed(() => this.smartHome.lights().find((l) => l.id === this.lightId()), ); } ``` ### 3.4. Scene List Item And finally a way to show a scene: **Example (app-scene-list-item.ts):** ```ts import { Component, inject, input, output, signal, computed, } from '@angular/core'; import { SmartHome } from './smart-home.service'; @Component({ selector: 'app-scene-list-item', template: ` @let scene = scene(); @if (scene) { {{ scene.name }} } else { Unknown scene: {{ sceneId() }} } `, }) export class SceneListItem { private smartHome = inject(SmartHome); readonly sceneId = input.required(); readonly scene = computed(() => this.smartHome.scenes().find((s) => s.id === this.sceneId()), ); apply() { if (this.scene()) this.smartHome.applyScene(this.scene()!.id); } } ``` You can style and extend these as you like. We will use Skillet to let the LLM generate values for our component inputs. --- ## 4. Expose Components to the Model ### Why only exposed components? The LLM can only generate UI **using Angular components you explicitly expose via Hashbrown**. This is critical for safety and predictability. Let's use @hashbrownai/angular!exposeComponent:function and Skillet schemas to share each component one-by-one, starting with Markdown. ### 4.1. Expose Markdown Component **Example (exposed-components.ts):** ```ts import { exposeComponent } from '@hashbrownai/angular'; import { s } from '@hashbrownai/core'; import { Markdown } from './app-markdown'; export const markdownComponent = exposeComponent(Markdown, { description: 'Renders formatted markdown text in the chat', input: { content: s.streaming.string('Markdown body to display to the user'), }, }); ``` Let's break this down: 1. The first argument @hashbrownai/angular!exposeComponent:function expects is the component class. Hashbrown will use the component's selector as the unique identifier for the component. This can be overriden by optionally providing a `name`. 2. Like tools, `description` is a natural language description of the component. The LLM will use it to determine when to render the component. 3. The LLM can generate data for your component's inputs by specifying schema for each input on the component. Here we are leveraging Skillet's `streaming` keyword to bind a streaming string to the input, letting the component show realtime Markdown as it is getting generated. Only after exposing the markdown component can the assistant send plain conversational answers. ### 4.2. Expose Card, Light, and Scene Components You can now do the same for the rest: **Example (exposed-components.ts (cont.)):** ```ts import { Card } from './app-card'; import { LightListItem } from './app-light-list-item'; import { SceneListItem } from './app-scene-list-item'; export const exposedCardComponent = exposeComponent(Card, { description: 'Shows a card with a title and arbitrary children', input: { title: s.streaming.string('Title to display in the card header'), }, children: 'any', }); export const exposedLightListItemComponent = exposeComponent(LightListItem, { description: 'Display a light and its state, given the lightId', input: { lightId: s.string('The id of the light to display'), }, }); export const exposedSceneListItemComponent = exposeComponent(SceneListItem, { description: 'Display a scene (and let the user apply it) by id', input: { sceneId: s.string('The id of the scene to display'), }, }); ``` **How Skillet helps with components:** The input schemas tell the LLM exactly what inputs are needed and whether they stream. --- ## 5. Create the Chat Resource Now we tie it together, using @hashbrownai/angular!uiChatResource:function and passing the tools and exposed components (using Skillet!) in its options. **Example (app-chatbot.ts):** ```ts import { Component, signal } from '@angular/core'; import { uiChatResource } from '@hashbrownai/angular'; import { getLights, getScenes, controlLight, controlScene } from './tools'; import { exposedMarkdownComponent, exposedCardComponent, exposedLightListItemComponent, exposedSceneListItemComponent, } from './exposed-components'; @Component({ selector: 'app-chatbot', template: ` @for (message of chat.value(); track $index) { @switch (message.role) { @case ('user') { {{ message.content }} } @case ('assistant') { } } } `, }) export class Chatbot { readonly input = signal(''); readonly chat = uiChatResource({ model: 'gpt-4.1', debugName: 'smart-home-chatbot', system: ` You are a smart home assistant chatbot. You can answer questions about and control lights and scenes. # Capabilities - Call functions to get all lights, get scenes, set a light's brightness, and apply scenes. # Rules - Always use the app-markdown component for simple explanations or answers. For lists, wrap app-light-list-item/app-scene-list-item in app-card. - If you want to show an example UI, use the following format: `, components: [ exposedMarkdownComponent, exposedCardComponent, exposedLightListItemComponent, exposedSceneListItemComponent, ], tools: [getLights, getScenes, controlLight, controlScene], }); send() { if (this.input().trim()) { this.chat.sendMessage({ role: 'user', content: this.input() }); this.input.set(''); } } } ``` Let's break this down: 1. We can loop over `chat.value()` to render each message, switching on `message.role` to determine if the message came from the user, the assistant, or an error message. 2. When creating `uiChatResource`, we provide: - `model` - The model ID from your LLM provider, in this case `gpt-4.1` for the OpenAI adapter. - `debugName` - Let's you debug and introspect the resource using the Redux Devtools browser extension. - `system` - We use the @hashbrownai/core!prompt:function to create a system instruction with a clear role, capabilities, and rules. The @hashbrownai/core!prompt:function lets us write UI examples in our system instruction (using the `` XML tag). Hashbrown will convert them into the underlying JSON representation. - `components` - The list of components we want the LLM to use when generating responses. - `tools` - The list of tools we want to expose to the LLM in this chat. This could be a signal of tools if you want to change the list of tools dynamically. --- ## 6. Skillet in Action Both tool calling (e.g., `controlLight`) and component exposure use Skillet schema. This means the LLM, via Hashbrown, knows exactly what arguments and props it needs, resulting in less guesswork and more reliable, safe AI-driven UI. - For **tools**, Skillet documents input arguments, enforced at runtime and LLM level. - For **UI**, Skillet schemas describe inputs and children, so the LLM knows what it can render. - Streaming markdown is easy by using `s.streaming.string()` in the exposed markdown component. --- ## 7. Run and Interact Drop `` into your app (wrap with `provideHashbrown()` as per quick start) and try chatting: **Example (main.ts):** ```ts import { provideHashbrown } from '@hashbrownai/angular'; export const appConfig = { providers: [provideHashbrown({ baseUrl: '/api/chat' })], }; ``` _Example user: "Show all scenes"_ Assistant could reply with a markdown intro and a card containing a list of ``s. Hitting "Apply" on a scene list item will apply the scene in your backend. Try controlling lights by ID or requesting lists for more sophisticated flows. The assistant cannot display anything except the components you expose, so you can safely continue adding components and functionality. --- ## Recap: What Did We Cook Up? - **uiChatResource** gives you full-featured, streaming LLM chat, generative UI, and tool calling - **Skillet schemas** make the contract clear (arguments, props) for both tools and UI - Only **exposed components and tools** are available to the assistant, so you are always in control - The model is your sous-chef: it does the prep and the plating, but only in your kitchen! Ready to extend? Hashbrown's approach makes it trivial to add richer tools, more components, or stricter rules via your schemas and system instructions. --- ## Next Steps - [Go deeper with Skillet schemas](/docs/angular/concept/schema) - [Advanced system instructions and prompt engineering](/docs/angular/guide/prompt-engineering) - [Explore streaming responses](/docs/angular/concept/streaming) - [Try the open-source smart home Hashbrown example](https://github.com/liveloveapp/hashbrown) --- # Introduction Hashbrown is an open source framework for building generative user interfaces in Angular. ## Key Concepts 1. **Headless**: build your UI how you want 2. **Signal Based**: Hashbrown uses signals and resources for reactivity 3. **Platform Agnostic**: use any supported LLM provider 4. **[Streaming](/docs/angular/concept/streaming)**: LLMs can be slow, so streaming is baked into the core 5. **[Components](/docs/angular/concept/components)**: generative UI using your trusted and tested Angular components 6. **[Runtime](/docs/angular/concept/runtime)**: safely execute LLM-generated JavaScript code in the client --- ## Start Building - [Getting started and text generation](https://hashbrown.dev/docs/angular/start/quick) — Install Hashbrown and render text responses from instructions and prompts. - [System Instructions](https://hashbrown.dev/docs/angular/concept/system-instructions) — Learn the structure, how to set the role and tone, rules, and providing examples. - [Generate user interfaces](https://hashbrown.dev/docs/angular/concept/components) — Expose Angular components to the LLM for generative UI. - [Get structured data from models](https://hashbrown.dev/docs/angular/concept/structured-output) — Use Skillet schema to describe model responses. --- # API Overview Get familiar with Hashbrown's resources and APIs. --- ## Choosing a Resource Choose the right Angular resource for the task. | Resource | Multi-turn chat | Single-turn input | Structured output (schema) | Tool calling | Generate UI components | | ---------------------------------------------------------- | --------------- | ----------------- | -------------------------- | ------------ | ---------------------- | | @hashbrownai/angular!chatResource:function | ✅ | ❌ | ❌ | ✅ | ❌ | | @hashbrownai/angular!structuredChatResource:function | ✅ | ❌ | ✅ | ✅ | ❌ | | @hashbrownai/angular!completionResource:function | ❌ | ✅ | ❌ | ❌ | ❌ | | @hashbrownai/angular!structuredCompletionResource:function | ❌ | ✅ | ✅ | ✅ | ❌ | | @hashbrownai/angular!uiChatResource:function | ✅ | ❌ | ✅ | ✅ | ✅ | --- ## AI SDK We love Vercel's AI SDK. We also believe that Hashbrown needs to exist. This is because we want to drastically change the landscape of the web through generative UI. If you have some familiarity with the AI SDK, we hope this quick comparison will be helpful. | Focus | Hashbrown | Vercel AI SDK | | -------------- | ------------------------------------------------------------------ | ------------------------------------------------------------ | | Vision | Generative UI: models can assemble your actual components. | General AI toolkit: text/JSON outputs with ready-made UI. | | Frameworks | Angular and React first-class. | React/Next.js first-class; also Vue/Svelte. | | Tools | Client-side tool calling + safe WASM sandbox for AI-authored code. | Server-side tool calling. | | Streaming & DX | Binary-normalized streams + Redux DevTools for debugging. | SSE streams with polished hooks; strong Next.js integration. | | UI Approach | AI renders real app components you whitelist. | Developers interpret outputs; AI Elements speeds up chat UI. | --- # Platforms Hashbrown uses the adapter pattern for supporting multiple platforms. ## Official Adapters | Platform | Adapter Package | | ----------------------------------------------- | --------------------- | | [OpenAI](/docs/angular/platform/openai) | `@hashbrownai/openai` | | [Microsoft Azure](/docs/angular/platform/azure) | `@hashbrownai/azure` | | [Google Gemini](/docs/angular/platform/google) | `@hashbrownai/google` | | [Writer](/docs/angular/platform/writer) | `@hashbrownai/writer` | ## Custom Adapters Can't find your preferred AI provider? [Create a custom adapter](/docs/angular/platform/custom) for any LLM that supports streaming chat completions. ## Platform Capabilities | Platform | Text | Streaming | Tools | Structured Output | | --------------- | ---- | --------- | ----- | ----------------- | | OpenAI | ✅ | ✅ | ✅ | ✅ | | Microsoft Azure | ✅ | ✅ | ✅ | ✅ | | Google Gemini | ✅ | ✅ | ✅ | ✅ | | Writer | ✅ | ✅ | ✅ | ✅ | ## Platform Limitations | Platform | Limitations | | --------------- | ------------------------------------ | | OpenAI | None | | Microsoft Azure | None | | Google Gemini | Requires emulated structured outputs | | Writer | Requires emulated structured outputs | \ ## Where is X platform? If you are an enterprise customer and want to use a platform that is not listed here, please reach out to us at [hello@liveloveapp.com](mailto:hello@liveloveapp.com). --- # Angular Quick Start Take your first steps with Hashbrown. --- ## Install **Example (terminal):** ```sh npm install @hashbrownai/{core,angular,openai} --save ``` --- ## Provider **Example (provide hashbrown):** ```ts export const appConfig: ApplicationConfig = { providers: [ provideHashbrown({ baseUrl: '/api/chat', }), ], }; ``` 1. Import the @hashbrownai/angular!provideHashbrown:function function from `@hashbrownai/angular`. 2. Optionally specify options such as the `baseUrl` for chat requests. 3. Add the provider to your Angular application configuration. ### Intercept requests using middleware You can also intercept requests to the Hashbrown adapter using a middleware pattern. **Example (middleware):** ```ts export const appConfig: ApplicationConfig = { providers: [ provideHashbrown({ middleware: [ function (request: RequestInit) { console.log({ request }); return request; }, ], }), ], }; ``` 1. The `middleware` option to the provider allows the developer to intercept Hashbrown requests. 2. Middleware functions can be async. 3. This is useful for appending headers, etc. --- ## Node Adapters To get started, we recommend running a local express server following the Hashbrown adapter documentation. - [OpenAI](/docs/angular/platform/openai) - [Azure OpenAI](/docs/angular/platform/azure) - [Google Gemini](/docs/angular/platform/google) - [Writer](/docs/angular/platform/writer) - [Ollama](/docs/angular/platform/ollama) --- ## The `chatResource()` Function The @hashbrownai/angular!chatResource:function function from `@hashbrownai/angular` is the basic way to interact with the model. **Example (chatResource()):** ```ts chatResource({ model: 'gpt-5', system: 'hashbrowns should be covered and smothered', messages: [{ role: 'user', content: 'Write a short story about breakfast.' }], }); ``` 1. First, we specify the `model`. 2. Second, we provide [system instructions](/docs/angular/concept/system-instructions). 3. Third, we send some initial `messages` to the model. --- ### `ChatResourceOptions` | Option | Type | Required | Description | | --------- | ---------------------------------------------------------------------- | -------- | ----------------------------------------------------------------- | | system | string \| Signal | Yes | System (assistant) prompt. | | model | KnownModelIds \| Signal | Yes | Model identifier to use. | | tools | Tools[] | No | Array of bound tools available to the chat. | | messages | Chat.Message[] \| Signal[]> | No | Initial list of chat messages. | | debounce | number | No | Debounce interval in milliseconds between user inputs. | | debugName | string | No | Name used for debugging in logs and reactive signal labels. | | apiUrl | string | No | Override for the API base URL (defaults to configured `baseUrl`). | --- ### `ChatResourceRef` The @hashbrownai/angular!chatResource:function function returns a @hashbrownai/angular!ChatResourceRef:interface object that extends Angular's `Resource[]>` interface. | Property | Type | Description | | ------------------------ | -------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- | | `value()` | `Signal` | The current chat messages (inherited from Resource). | | `status()` | `Signal` | The current status of the resource: 'idle', 'loading', 'resolved', or 'error' (inherited from Resource). | | `isLoading()` | `Signal` | Whether the resource is currently loading (inherited from Resource). | | `hasValue()` | `Signal` | Whether the resource has any assistant messages (inherited from Resource). | | `error()` | `Signal` | Any error that occurred during the chat operation. | | `lastAssistantMessage()` | `Signal` | The last assistant message in the chat. | | `sendMessage(message)` | `(message: Chat.UserMessage) => void` | Send a new user message to the chat. | | `stop(clear?)` | `(clear?: boolean) => void` | Stop any currently-streaming message. Optionally removes the streaming message from state. | | `reload()` | `() => boolean` | Remove the last assistant response and re-send the previous user message. Returns true if a reload was performed. | --- ### API Reference - [chatResource()](https://hashbrown.dev/api/angular/chatResource) — See the resource documentation - [ChatResourceOptions API](https://hashbrown.dev/api/angular/ChatResourceOptions) — See all of the options - [ChatResourceRef API](https://hashbrown.dev/api/angular/ChatResourceRef) — See all of the properties and methods --- ## Render the Model Response **Example (a short story about breakfast):** ```ts import { chatResource } from '@hashbrownai/angular'; @Component({ template: ` // 1. Render the content of each message @for (message of chat.value(); track $index) {

{{ message.content }}

} `, }) export class App { // 2. Generate the messages from a prompt chat = chatResource({ model: 'gpt-5', system: 'hashbrowns should be covered and smothered', messages: [ { role: 'user', content: 'Write a short story about breakfast.' }, ], }); } ``` 1. In the template, we render the content of each message. 2. The @hashbrownai/angular!chatResource:function function creates a new chat resource. 3. We use the `value()` signal to access the current messages in the chat. --- ## Send Messages To send messages to the model, we can use the `sendMessage()` method. **Example (sendMessage()):** ```ts import { chatResource } from '@hashbrownai/angular'; @Component({ template: ` @for (message of chat.value(); track $index) {

{{ message.content }}

} `, }) export class App { userMessage = input(''); chat = chatResource({ model: 'gpt-5', debugName: 'chat', system: 'hashbrowns should be covered and smothered', messages: [ { role: 'user', content: 'Write a short story about breakfast.' }, ], }); send() { if (this.userMessage().trim()) { this.chat.sendMessage({ role: 'user', content: this.userMessage() }); this.userMessage.set(''); } } } ``` 1. We create an input field controlled by the `userMessage` signal for user input. 2. The `send()` method sends the user message to the chat resource using the `sendMessage()` method. 3. Angular renders the user message and the assistant response message. --- ## Debugging with Redux Devtools Hashbrown streams LLM messages and internal actions to the [redux devtools](https://chromewebstore.google.com/detail/lmhkpmbekcpmknklioeibfkpmmfibljd). *(Demo video: https://player.vimeo.com/video/1089272009?badge=0&autopause=0&player_id=0&app_id=58479)* To enable debugging, specify the `debugName` option. **Example (debug):** ```ts chat = chatResource({ debugName: 'chat', }); ``` --- ## Beyond Chat Large language models are highly intelligent and capable of more than just text and chatbots. With Hashbrown, you can expose your trusted, tested, and compliant components - in other words, you can generate user interfaces using your components as the building blocks! - [Generate user interfaces](https://hashbrown.dev/docs/angular/concept/components) — Expose Angular components to the LLM for generative UI. --- ## Tool Calling Tool calling enables the model to invoke callback function in your frontend web application code. The functions you expose can either have no arguments or you can specify the required and optional arguments. The model will choose if, and when, to invoke the function. What can functions do? - Expose application state to the model - Allow the model to take an action - Offer intelligent next actions for the user to take - Automate user tasks With Angular, all handler functions are executed in the injection context - this means that you can use the `inject()` function within the handler functions to inject services and dependencies. - [Tool Calling](https://hashbrown.dev/docs/angular/concept/functions) — Provide callback functions to the LLM. --- ## Streaming Responses Streaming is baked into the core of Hashbrown. With Skillet, our LLM-optimized schema language, you can use the `.streaming()` keyword to enable streaming with eager JSON parsing. - [Streaming Responses](https://hashbrown.dev/docs/angular/concept/streaming) — Use Skillet for built-in streaming and eager JSON parsing. --- ## Run LLM-generated JS Code (Safely) Hashbrown ships with a JavaScript runtime for safe execution of LLM-generated code in the client. - [JavaScript Runtime](https://hashbrown.dev/docs/angular/concept/runtime) — Safely execute JS code generated by the model in the browser. --- # Sample App Smart home client built with Angular. Some of the basic features of the sample app include: 1. Simple Chat 2. Tool Calling 3. UI Chat 4. Text completion 5. Structured output 6. Structured completion [Check out our smart home sample app on GitHub](https://github.com/liveloveapp/hashbrown/tree/main/samples/smart-home/angular) --- ## Clone Repository **Example (terminal):** ```bash git clone https://github.com/liveloveapp/hashbrown.git ``` Then install the dependencies: **Example (terminal):** ```bash cd Hashbrown npm install ``` ## OpenAI API Key Our samples are built using OpenAI's models. 1. [Sign up for OpenAI's API](https://openai.com/api/) 2. [Create an organization and API Key](https://platform.openai.com/settings/organization/api-keys) 3. Set the `OPENAI_API_KEY` environment variable in the `.env` file in the root directory, which allows the smart-home-server process to load it ``` OPENAI_API_KEY=your_openai_api_key ``` ## See the code Open up the `samples/smart-home/angular` directory. ## Start the Application You will need to start both the server and the client to run the sample application. **Example (terminal):** ```bash npx nx serve smart-home-server npx nx serve smart-home-angular ```