Structured Output
Specify the JSON schema of the model response.
- Structured output can replace forms with natural language input via text or audio.
- Users can navigate via chat.
- Provide structured predictive actions given application state and user events.
- Allow the user to customize the entire application user interface.
The structuredChatResource() Function
```ts
import { Component, effect } from '@angular/core';
import { structuredChatResource } from '@hashbrownai/angular';
import { s } from '@hashbrownai/core';

@Component({})
export class App {
  // 1. Create the resource with the specified `schema`
  chat = structuredChatResource({
    system: `Collect the user's first and last name.`,
    schema: s.object('The user', {
      firstName: s.string('First name'),
      lastName: s.string('Last name'),
    }),
  });

  constructor() {
    // 2. Send a user message
    this.chat.sendMessage({ role: 'user', content: 'My name is Brian Love' });

    // 3. Log the structured response
    effect(() => {
      const value = this.chat.value();

      // The value is empty until the model responds
      if (!value) {
        return;
      }

      console.log({
        firstName: value.content.firstName,
        lastName: value.content.lastName,
      });
    });
  }
}
```
- The `structuredChatResource()` function is used to create a chat resource that can parse user input and return structured data.
- The `schema` option defines the expected structure of the response using Hashbrown's Skillet schema language.
- The resource `value()` contains the structured output, which can be used directly in your application.
Here is the expected content value:

```json
{
  "firstName": "Brian",
  "lastName": "Love"
}
```
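Because the structured output is plain data, it can be bound in a template like any other signal. Here is a minimal sketch (the component name, template, and import paths are illustrative, and it assumes `value()` is empty until the model responds):

```ts
import { Component } from '@angular/core';
import { structuredChatResource } from '@hashbrownai/angular';
import { s } from '@hashbrownai/core';

@Component({
  selector: 'app-user-name',
  template: `
    @if (chat.value(); as message) {
      <p>{{ message.content.firstName }} {{ message.content.lastName }}</p>
    } @else {
      <p>Waiting for a response...</p>
    }
  `,
})
export class UserName {
  // Same resource as above: collects the user's first and last name
  chat = structuredChatResource({
    system: `Collect the user's first and last name.`,
    schema: s.object('The user', {
      firstName: s.string('First name'),
      lastName: s.string('Last name'),
    }),
  });
}
```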
StructuredChatResourceOptions
| Option | Type | Required | Description |
| --- | --- | --- | --- |
| `model` | `KnownModelIds \| Signal` | Yes | The model to use for the structured chat resource |
| `system` | `string \| Signal` | Yes | The system prompt to use for the structured chat resource |
| `schema` | `Schema` | Yes | The schema to use for the structured chat resource |
| `tools` | `Tools[]` | No | The tools to use for the structured chat resource |
| `messages` | `Chat.Message[]` | No | The initial messages for the structured chat resource |
| `debugName` | `string` | No | The debug name for the structured chat resource |
| `debounce` | `number` | No | The debounce time for the structured chat resource |
| `retries` | `number` | No | The number of retries for the structured chat resource |
| `apiUrl` | `string` | No | The API URL to use for the structured chat resource |
API Reference
- `structuredChatResource()` API: see the resource documentation
- `StructuredChatResourceOptions` API: see all of the options
The structuredCompletionResource() Function

The `structuredCompletionResource()` function creates a resource that derives structured output from a reactive `input` option. The resource re-runs whenever the input signal changes.
```ts
predictedLights = structuredCompletionResource({
debugName: 'Predict Lights',
system: `
You are an assistant that helps the user configure a lighting scene.
The user will choose a name for the scene, and you will predict the
lights that should be added to the scene based on the name. The input
will be the scene name and the list of lights that are available.
# Rules
- Only suggest lights from the provided "availableLights" input list.
- Pick a brightness level for each light that is appropriate for the scene.
`,
input: computed(() => {
if (!this.sceneNameSignal()) return null;
return {
input: this.sceneNameSignal(),
availableLights: untracked(() => {
return this.lights().map((light) => ({
id: light.id,
name: light.name,
}));
}),
};
}),
schema: s.array(
'The lights to add to the scene',
s.object('A join between a light and a scene', {
lightId: s.string('the ID of the light to add'),
brightness: s.number('the brightness of the light from 0 to 100'),
}),
),
});
```
Let's review the code above.
- The `structuredCompletionResource()` function is used to create a resource that predicts lights based on the scene name.
- The `input` option is set to a signal that contains the scene name and additional untracked context. This signal updates each time the scene name signal changes, reads the list of available light names, and sends them along.
- The `schema` defines the expected structure of the response, which includes an array of lights with their IDs and brightness levels.
When the user types a scene name, the LLM will predict which lights should be added to the scene and return a structured JSON object that can be used directly in your application.
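The predicted lights can then be consumed like any other signal. A minimal sketch (assuming `value()` resolves to the array described by the schema, and is empty before the first prediction):

```ts
// Join each predicted light back to the full light object for display
predictedSceneLights = computed(() => {
  const predictions = this.predictedLights.value() ?? [];

  return predictions
    .map((prediction) => ({
      light: this.lights().find((light) => light.id === prediction.lightId),
      brightness: prediction.brightness,
    }))
    .filter((entry) => entry.light !== undefined);
});
```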
StructuredCompletionResourceOptions
| Option | Type | Required | Description |
| --- | --- | --- | --- |
| `model` | `KnownModelIds` | Yes | The model to use for the structured completion resource |
| `input` | `Signal` | Yes | The input to the structured completion resource |
| `schema` | `Schema` | Yes | The schema to use for the structured completion resource |
| `system` | `SignalLike` | Yes | The system prompt to use for the structured completion resource |
| `tools` | `Chat.AnyTool[]` | No | The tools to use for the structured completion resource |
| `debugName` | `string` | No | The debug name for the structured completion resource |
| `apiUrl` | `string` | No | The API URL to use for the structured completion resource |
API Reference
- `structuredCompletionResource()` API: see the full resource
- `StructuredCompletionResourceOptions` API: see the options
Global Predictions
In this example, we'll assume you are using a global state container (like NgRx). We'll send each action to the LLM and ask it to predict the next possible action a user should consider.
```ts
lastAction = this.store.selectSignal(selectLastUserAction);
predictions = structuredCompletionResource({
// 1. The resource is re-computed with the last user action
input: this.lastAction,
// 2. The system instructions provide the guidelines and rules
system: `
You are an AI smart home assistant tasked with predicting the next possible user action in a
smart home configuration app. Your suggestions will be displayed as floating cards in the
bottom right of the screen.
Important Guidelines:
- The user already owns all necessary hardware. Do not suggest purchasing hardware.
- Every prediction must include a concise 'reasonForSuggestion' that explains the suggestion
in one sentence.
- Each prediction must be fully detailed with all required fields based on its type.
Additional Rules:
- Always check the current lights and scenes states to avoid suggesting duplicates.
- If a new light has just been added, consider suggesting complementary lights or adding it
to an existing scene.
- You do not always need to make a prediction. Returning an empty array is also a valid
response.
- You may make multiple predictions. Just add multiple predictions to the array.
`,
// 3. Provide tools to retrieve the current app state
tools: [
createTool({
name: 'getLights',
description: 'Get all lights in the smart home',
handler: () => this.smartHomeService.loadLights(),
}),
createTool({
name: 'getScenes',
description: 'Get all scenes in the smart home',
handler: () => this.smartHomeService.loadScenes(),
}),
],
// 4. Specify the structured output schema
schema: s.object('The result', {
predictions: s.streaming.array(
'The predictions',
s.anyOf([
s.object('Suggests adding a light to the system', {
type: s.literal('Add Light'),
name: s.string('The suggested name of the light'),
brightness: s.integer('A number between 0-100'),
}),
s.object('Suggest adding a scene to the system', {
type: s.literal('Add Scene'),
name: s.string('The suggested name of the scene'),
lights: s.array(
'The lights in the scene',
s.object('A light in the scene', {
lightId: s.string('The ID of the light'),
brightness: s.integer('A number between 0-100'),
}),
),
}),
s.object('Suggest scheduling a scene to the system', {
type: s.literal('Schedule Scene'),
sceneId: s.string('The ID of the scene'),
datetime: s.string('The datetime of the scene'),
}),
s.object('Suggest adding a light to a scene', {
type: s.literal('Add Light to Scene'),
lightId: s.string('The ID of the light'),
sceneId: s.string('The ID of the scene'),
brightness: s.integer('A number between 0-100'),
}),
s.object('Suggest removing a light from a scene', {
type: s.literal('Remove Light from Scene'),
lightId: s.string('The ID of the light'),
sceneId: s.string('The ID of the scene'),
}),
]),
),
}),
});
```
Let's review the code above:
- The `structuredCompletionResource()` function is used to create a resource that predicts the next possible user action based on the last action.
- The `input` option is set to a signal that contains the last user action, allowing the resource to reactively update when the last action changes.
- The `system` option provides context to the LLM, instructing it to predict the next possible user action in the app.
- The `tools` option defines two tools that the LLM can use to get the current state of lights and scenes in the smart home.
- The `schema` defines the expected structure of the response, which includes an array of predictions with their types and details.
When the user performs an action, the LLM will predict the next possible actions and return a structured JSON object. From there, you can wire up a toast notification to be displayed when the LLM provides a prediction. When the user accepts the predictive action, dispatch the action and update the state of the app accordingly.
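One way to wire that up is an effect that forwards each new prediction to your notification layer. A sketch, where `notifications.show()` and `acceptPrediction` stand in for your own toast service and store action:

```ts
constructor() {
  effect(() => {
    const result = this.predictions.value();
    if (!result) return;

    for (const prediction of result.predictions) {
      // Show a floating card; dispatch the suggested action only if the user accepts it
      this.notifications.show({
        title: prediction.type,
        onAccept: () => this.store.dispatch(acceptPrediction({ prediction })),
      });
    }
  });
}
```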
Next Steps
- Generate user interfaces: expose Angular components to the LLM for generative UI.
- Execute LLM-generated JS in the browser (safely): use Hashbrown's JavaScript runtime for complex and mathematical operations.