# React Quick Start

Take your first steps with Hashbrown.
## Install

```sh
npm install @hashbrownai/{core,react,openai} --save
```
## Provider

```tsx
import { type ReactNode } from 'react';
import { HashbrownProvider } from '@hashbrownai/react';

const url = 'https://example.com/api/chat'; // your Hashbrown endpoint

export function Providers({ children }: { children: ReactNode }) {
  return (
    <HashbrownProvider url={url}>
      {children}
    </HashbrownProvider>
  );
}
```
- Import the `HashbrownProvider` component from `@hashbrownai/react`.
- Optionally specify options, such as the `url` for chat requests.
- Add the provider to your React application.
You can also intercept requests to the Hashbrown adapter using a middleware pattern.
```tsx
import { type ReactNode } from 'react';
import { HashbrownProvider } from '@hashbrownai/react';

const url = 'https://example.com/api/chat'; // your Hashbrown endpoint

export function Providers({ children }: { children: ReactNode }) {
  const middleware = [
    function (request: RequestInit) {
      console.log({ request });
      return request;
    },
  ];

  return (
    <HashbrownProvider url={url} middleware={middleware}>
      {children}
    </HashbrownProvider>
  );
}
```
- The `middleware` option on the provider allows the developer to intercept Hashbrown requests.
- Middleware functions can be async.
- This is useful for appending headers, etc.
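For example, here is a minimal sketch of an async middleware that appends an `Authorization` header before each request; `fetchToken()` is a hypothetical helper standing in for your own auth logic.

```tsx
// Hypothetical helper standing in for your own auth logic.
declare function fetchToken(): Promise<string>;

const middleware = [
  async function (request: RequestInit): Promise<RequestInit> {
    const token = await fetchToken();
    // Copy existing headers and attach the auth token.
    const headers = new Headers(request.headers);
    headers.set('Authorization', `Bearer ${token}`);
    return { ...request, headers };
  },
];
```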
## Node Adapters

To get started, we recommend running a local Express server by following the Hashbrown adapter documentation.
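As a rough sketch of what that server can look like (the adapter docs define the exact API, so treat the `HashbrownOpenAI.stream.text` helper and the route path here as assumptions):

```ts
import express from 'express';
import { HashbrownOpenAI } from '@hashbrownai/openai';

const app = express();
app.use(express.json());

// Illustrative endpoint: forwards chat requests from the React app
// to OpenAI through the Hashbrown adapter and streams the reply back.
app.post('/chat', async (req, res) => {
  const stream = HashbrownOpenAI.stream.text({
    apiKey: process.env.OPENAI_API_KEY!,
    request: req.body,
  });

  res.header('Content-Type', 'application/octet-stream');
  for await (const chunk of stream) {
    res.write(chunk);
  }
  res.end();
});

app.listen(3000);
```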
## The `useChat()` Hook

The `useChat()` hook from `@hashbrownai/react` is the basic way to interact with the model. It provides a set of methods for sending and receiving messages, as well as managing the chat state.
```tsx
useChat({
  model: 'gpt-5',
  system: 'hashbrowns should be covered and smothered',
  messages: [{ role: 'user', content: 'Write a short story about breakfast.' }],
});
```
- First, we specify the `model`.
- Second, we provide system instructions.
- Third, we send some initial messages to the model.
### UseChatOptions

| Property | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `model` | `KnownModelIds` | Yes | - | The LLM model to use for the chat. |
| `system` | `string` | Yes | - | The system message to use for the chat. |
| `messages` | `Chat.Message[]` | No | `[]` | The initial messages for the chat. |
| `tools` | `Tools[]` | No | `[]` | The tools to make available for the chat. |
| `debounceTime` | `number` | No | `150` | The debounce time between sends to the endpoint (in milliseconds). |
| `retries` | `number` | No | `0` | Number of retries if an error is received. |
| `debugName` | `string` | No | - | The name of the hook, useful for debugging in Redux DevTools. |
### UseChatResult

The `useChat()` hook returns an object with the following properties and methods:

| Property | Type | Description |
| --- | --- | --- |
| `messages` | `Chat.Message[]` | The current chat messages. |
| `sendMessage(message)` | `(message: Chat.UserMessage) => void` | Send a new user message to the chat. |
| `setMessages(messages)` | `(messages: Chat.Message[]) => void` | Update the chat messages. |
| `stop(clear?)` | `(clear?: boolean) => void` | Stop any currently-streaming message. Optionally removes the streaming message from state. |
| `reload()` | `() => boolean` | Remove the last assistant response and re-send the previous user message. Returns `true` if a reload was performed. |
| `error` | `Error \| undefined` | Any error that occurred during the chat operation. |
| `isReceiving` | `boolean` | Whether the chat is receiving a response. |
| `isSending` | `boolean` | Whether the chat is sending a message. |
| `isRunningToolCalls` | `boolean` | Whether the chat is running tool calls. |
| `exhaustedRetries` | `boolean` | Whether the current request has exhausted its retries. |
| `lastAssistantMessage` | `Chat.AssistantMessage \| undefined` | The last assistant message. |
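To make the result shape concrete, here is a small sketch that drives basic UI off the status flags; the chat options are reused from the earlier examples.

```tsx
import { useChat } from '@hashbrownai/react';

export function ChatStatus() {
  const { error, isReceiving, stop, reload, exhaustedRetries } = useChat({
    model: 'gpt-5',
    system: 'hashbrowns should be covered and smothered',
  });

  // Surface failures and offer a retry via reload().
  if (exhaustedRetries) return <p>Request failed after all retries.</p>;
  if (error) return <button onClick={() => reload()}>Retry</button>;

  // Let the user cancel a streaming response.
  if (isReceiving) return <button onClick={() => stop()}>Stop</button>;

  return null;
}
```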
### API Reference

- **useChat() API**: see the hook documentation.
- **UseChatOptions API**: see all of the options.
- **UseChatResult API**: see all of the properties and methods.
## Render the Model Response

```tsx
import { useChat } from '@hashbrownai/react';

export function App() {
  // 1. Generate the messages from a prompt
  const { messages } = useChat({
    model: 'gpt-5',
    system: 'hashbrowns should be covered and smothered',
    messages: [
      { role: 'user', content: 'Write a short story about breakfast.' },
    ],
  });

  // 2. Render the content of each message
  return (
    <>
      {messages.map((message, i) => (
        <p key={i}>{message.content}</p>
      ))}
    </>
  );
}
```
- The `useChat()` hook creates a new chat instance.
- We destructure the response object and set the `messages` constant.
- We return the JSX to render the content of each message.
This creates a basic chat interface where messages are displayed automatically.
## Send Messages

To send messages to the model, use the `sendMessage()` method.
```tsx
import { useState } from 'react';
import { useChat } from '@hashbrownai/react';

export function App() {
  const [userMessage, setUserMessage] = useState('');
  const { messages, sendMessage } = useChat({
    model: 'gpt-5',
    debugName: 'chat',
    system: 'hashbrowns should be covered and smothered',
  });

  const handleSend = () => {
    if (userMessage.trim()) {
      sendMessage({ role: 'user', content: userMessage });
      setUserMessage('');
    }
  };

  return (
    <div>
      <div>
        <input
          type="text"
          value={userMessage}
          onChange={(e) => setUserMessage(e.target.value)}
          placeholder="Prompt..."
          onKeyDown={(e) => e.key === 'Enter' && handleSend()}
        />
        <button onClick={handleSend}>Send</button>
      </div>
      <div>
        {messages.map((message, i) => (
          <p key={i}>{message.content}</p>
        ))}
      </div>
    </div>
  );
}
```
- We create an input field controlled by the `userMessage` state.
- The `handleSend()` function sends the user message to the chat using `sendMessage()`.
- React renders the user message and the assistant response message.
## Debugging with Redux DevTools

Hashbrown streams LLM messages and internal actions to the Redux DevTools. To enable debugging, specify the `debugName` option.

```tsx
import { useChat } from '@hashbrownai/react';

const chat = useChat({
  model: 'gpt-5',
  system: 'hashbrowns should be covered and smothered',
  debugName: 'chat',
});
```
## Beyond Chat

Large language models are highly intelligent and capable of more than just text and chatbots. With Hashbrown, you can expose your trusted, tested, and compliant components; in other words, you can generate user interfaces using your components as the building blocks!

**Generate user interfaces**: Expose React components to the LLM for generative UI.
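As a taste of what that looks like (a sketch only; the generative UI docs define the exact API, so treat `useUiChat`, `exposeComponent`, and `message.ui` here as assumptions about its shape, and the `Markdown` component as a hypothetical trusted component):

```tsx
import { s } from '@hashbrownai/core';
import { exposeComponent, useUiChat } from '@hashbrownai/react';
import { Markdown } from './Markdown'; // hypothetical trusted component

export function App() {
  // The model assembles its responses from the components you expose.
  const { messages } = useUiChat({
    model: 'gpt-5',
    system: 'hashbrowns should be covered and smothered',
    components: [
      exposeComponent(Markdown, {
        name: 'Markdown',
        description: 'Render markdown text for the user',
        props: {
          data: s.string('The markdown content'),
        },
      }),
    ],
  });

  return (
    <>
      {messages.map((message, i) => (
        <div key={i}>{message.ui}</div>
      ))}
    </>
  );
}
```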
## Tool Calling

Tool calling enables the model to invoke callback functions in your frontend web application code. The functions you expose can take no arguments, or you can specify required and optional arguments. The model chooses if, and when, to invoke a function.

### What can functions do?

- Expose application state to the model
- Allow the model to take an action
- Offer intelligent next actions for the user to take
- Automate user tasks
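Here is a minimal sketch of exposing a function to the model; the exact hook signature lives in the tool-calling docs, so treat `useTool` and its fields as assumptions, and the user data as hypothetical.

```tsx
import { useChat, useTool } from '@hashbrownai/react';

export function useChatWithTools() {
  // Expose application state to the model via a callback.
  const getUser = useTool({
    name: 'getUser',
    description: 'Get information about the current user',
    handler: async () => ({ id: '1', name: 'Brian' }), // hypothetical data
    deps: [], // assumed: dependency list, as with React hooks
  });

  return useChat({
    model: 'gpt-5',
    system: 'hashbrowns should be covered and smothered',
    tools: [getUser],
  });
}
```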
**Tool Calling**: Provide callback functions to the LLM.
## Streaming Responses

Streaming is baked into the core of Hashbrown. With Skillet, our LLM-optimized schema language, you can use the `.streaming()` keyword to enable streaming with eager JSON parsing.
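For example, a schema with a streamed field might look roughly like this (a sketch; it assumes Skillet is exported as `s` from `@hashbrownai/core`, and the exact streaming spelling is covered in the streaming docs):

```tsx
import { s } from '@hashbrownai/core';

// A response schema where the story text streams into the UI as the
// model generates it, thanks to eager JSON parsing.
const schema = s.object('A short story response', {
  title: s.string('The title of the story'),
  story: s.streaming.string('The story text, streamed as it is generated'),
});
```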
**Streaming Responses**: Use Skillet for built-in streaming and eager JSON parsing.
## Run LLM-Generated JS Code (Safely)

Hashbrown ships with a JavaScript runtime for safe execution of LLM-generated code in the client.

**JavaScript Runtime**: Safely execute JS code generated by the model in the browser.