# Angular Quick Start
Take your first steps with Hashbrown.
## Install

```sh
npm install @hashbrownai/{core,angular,openai} --save
```
## Provider

```ts
import { ApplicationConfig } from '@angular/core';
import { provideHashbrown } from '@hashbrownai/angular';

export const appConfig: ApplicationConfig = {
  providers: [
    provideHashbrown({
      baseUrl: '/api/chat',
    }),
  ],
};
```
- Import the `provideHashbrown` function from `@hashbrownai/angular`.
- Optionally specify options such as the `baseUrl` for chat requests.
- Add the provider to your Angular application configuration.
You can also intercept requests to the Hashbrown adapter using a middleware pattern.
```ts
export const appConfig: ApplicationConfig = {
  providers: [
    provideHashbrown({
      middleware: [
        function (request: RequestInit) {
          console.log({ request });
          return request;
        },
      ],
    }),
  ],
};
```
- The `middleware` option lets you intercept Hashbrown requests before they are sent.
- Middleware functions can be async.
- This is useful for appending headers and similar cross-cutting concerns.
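For example, an async middleware can attach an auth header to every chat request. This is a minimal sketch: `getToken()` and `withAuthHeader` are hypothetical names, and you would swap the token stub for your application's real auth service.

```typescript
// Hypothetical token source; replace with your app's auth service.
async function getToken(): Promise<string> {
  return 'demo-token';
}

// Middleware receives the outgoing RequestInit and may return it asynchronously.
async function withAuthHeader(request: RequestInit): Promise<RequestInit> {
  const token = await getToken();
  return {
    ...request,
    headers: {
      ...(request.headers as Record<string, string>),
      Authorization: `Bearer ${token}`,
    },
  };
}
```

Passing `withAuthHeader` in the `middleware` array of `provideHashbrown` applies it to every request.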
## Node Adapters

To get started, we recommend running a local Express server following the Hashbrown adapter documentation.
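A minimal sketch of such a server is shown below. It assumes the OpenAI adapter exposes a streaming helper named `HashbrownOpenAI.stream.text` as in the adapter documentation; verify the exact API there, and make sure the route path matches your configured `baseUrl`.

```typescript
import express from 'express';
import { HashbrownOpenAI } from '@hashbrownai/openai';

const app = express();
app.use(express.json());

app.post('/api/chat', async (req, res) => {
  // Stream the model response back to the Hashbrown client.
  const stream = HashbrownOpenAI.stream.text({
    apiKey: process.env.OPENAI_API_KEY!,
    request: req.body,
  });

  res.header('Content-Type', 'application/octet-stream');
  for await (const chunk of stream) {
    res.write(chunk);
  }
  res.end();
});

app.listen(3000);
```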
## The chatResource() Function

The `chatResource()` function from `@hashbrownai/angular` is the basic way to interact with the model.
```ts
chatResource({
  model: 'gpt-5',
  system: 'hashbrowns should be covered and smothered',
  messages: [{ role: 'user', content: 'Write a short story about breakfast.' }],
});
```
- First, we specify the `model`.
- Second, we provide system instructions.
- Third, we send some initial `messages` to the model.
### ChatResourceOptions

| Option | Type | Required | Description |
| --- | --- | --- | --- |
| `system` | `string \| Signal<string>` | Yes | System (assistant) prompt. |
| `model` | `KnownModelIds \| Signal<KnownModelIds>` | Yes | Model identifier to use. |
| `tools` | `Tools[]` | No | Array of bound tools available to the chat. |
| `messages` | `Chat.Message<string, Tools>[] \| Signal<Chat.Message<string, Tools>[]>` | No | Initial list of chat messages. |
| `debounce` | `number` | No | Debounce interval in milliseconds between user inputs. |
| `debugName` | `string` | No | Name used for debugging in logs and reactive signal labels. |
| `apiUrl` | `string` | No | Override for the API base URL (defaults to the configured `baseUrl`). |
### ChatResourceRef

The `ChatResourceRef` interface extends Angular's `Resource` interface.
| Property | Type | Description |
| --- | --- | --- |
| `value()` | `Signal<Chat.Message<string, Tools>[]>` | The current chat messages (inherited from `Resource`). |
| `status()` | `Signal<ResourceStatus>` | The current status of the resource: `'idle'`, `'loading'`, `'resolved'`, or `'error'` (inherited from `Resource`). |
| `isLoading()` | `Signal<boolean>` | Whether the resource is currently loading (inherited from `Resource`). |
| `hasValue()` | `Signal<boolean>` | Whether the resource has any assistant messages (inherited from `Resource`). |
| `error()` | `Signal<Error \| undefined>` | Any error that occurred during the chat operation. |
| `lastAssistantMessage()` | `Signal<Chat.AssistantMessage \| undefined>` | The last assistant message in the chat. |
| `sendMessage(message)` | `(message: Chat.UserMessage) => void` | Send a new user message to the chat. |
| `stop(clear?)` | `(clear?: boolean) => void` | Stop any currently-streaming message. Optionally removes the streaming message from state. |
| `reload()` | `() => boolean` | Remove the last assistant response and re-send the previous user message. Returns `true` if a reload was performed. |
## API Reference

- `chatResource()`: see the resource documentation.
- `ChatResourceOptions` API: see all of the options.
- `ChatResourceRef` API: see all of the properties and methods.
## Render the Model Response

```ts
import { Component } from '@angular/core';
import { chatResource } from '@hashbrownai/angular';

@Component({
  template: `
    <!-- 1. Render the content of each message -->
    @for (message of chat.value(); track $index) {
      <p>{{ message.content }}</p>
    }
  `,
})
export class App {
  // 2. Generate the messages from a prompt
  chat = chatResource({
    model: 'gpt-5',
    system: 'hashbrowns should be covered and smothered',
    messages: [
      { role: 'user', content: 'Write a short story about breakfast.' },
    ],
  });
}
```
- In the template, we render the content of each message.
- The `chatResource()` function creates a new chat resource.
- We use the `value()` signal to access the current messages in the chat.
## Send Messages

To send messages to the model, we can use the `sendMessage()` method.
```ts
import { Component, signal } from '@angular/core';
import { chatResource } from '@hashbrownai/angular';

@Component({
  template: `
    <div>
      <input
        type="text"
        [value]="userMessage()"
        (input)="userMessage.set($any($event.target).value)"
        (keydown.enter)="send()"
        placeholder="Prompt..."
      />
      <button (click)="send()">Send</button>
    </div>
    <div>
      @for (message of chat.value(); track $index) {
        <p>{{ message.content }}</p>
      }
    </div>
  `,
})
export class App {
  // A writable signal (note: input() would create a read-only InputSignal)
  userMessage = signal<string>('');

  chat = chatResource({
    model: 'gpt-5',
    debugName: 'chat',
    system: 'hashbrowns should be covered and smothered',
    messages: [
      { role: 'user', content: 'Write a short story about breakfast.' },
    ],
  });

  send() {
    if (this.userMessage().trim()) {
      this.chat.sendMessage({ role: 'user', content: this.userMessage() });
      this.userMessage.set('');
    }
  }
}
```
- We create an input field controlled by the `userMessage` signal.
- The `send()` method sends the user message to the chat resource using the `sendMessage()` method.
- Angular renders the user message and the assistant's response message.
## Debugging with Redux DevTools

Hashbrown streams LLM messages and internal actions to the Redux DevTools.

To enable debugging, specify the `debugName` option:

```ts
chat = chatResource({
  debugName: 'chat',
  // ...other required options, such as model and system
});
```
## Beyond Chat

Large language models are highly intelligent and capable of more than just text and chatbots. With Hashbrown, you can expose your trusted, tested, and compliant components. In other words, you can generate user interfaces using your components as the building blocks!
Generate user interfaces
Expose Angular components to the LLM for generative UI.
## Tool Calling

Tool calling enables the model to invoke callback functions in your frontend web application code. The functions you expose can take no arguments, or you can specify required and optional arguments. The model chooses if, and when, to invoke a function.
### What can functions do?
- Expose application state to the model
- Allow the model to take an action
- Offer intelligent next actions for the user to take
- Automate user tasks
With Angular, all handler functions are executed in the injection context. This means you can use the `inject()` function within your handlers to inject services and dependencies.
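As a sketch of what this looks like: the example below assumes a `createTool()` helper from `@hashbrownai/angular` (see the Tool Calling guide for the exact API) and a hypothetical `SmartHomeService`. Because the handler runs in the injection context, `inject()` works inside it.

```typescript
import { inject } from '@angular/core';
import { createTool } from '@hashbrownai/angular';

// Hypothetical application service exposed to the model.
import { SmartHomeService } from './smart-home.service';

export const getLights = createTool({
  name: 'getLights',
  description: 'Get all of the lights in the smart home',
  // Runs in the injection context, so inject() is available here.
  handler: () => inject(SmartHomeService).getLights(),
});
```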
Tool Calling
Provide callback functions to the LLM.
## Streaming Responses

Streaming is baked into the core of Hashbrown. With Skillet, our LLM-optimized schema language, you can use the `.streaming()` keyword to enable streaming with eager JSON parsing.
Streaming Responses
Use Skillet for built-in streaming and eager JSON parsing.
## Run LLM-generated JS Code (Safely)

Hashbrown ships with a JavaScript runtime for safe execution of LLM-generated code in the client.
JavaScript Runtime
Safely execute JS code generated by the model in the browser.