Transform Request Options
Intercept and modify requests before they are sent to LLM providers.
The transformRequestOptions method lets you intercept and mutate a request in the adapter before it is sent to the LLM provider. Common use cases include:
- Server-side prompts: Inject additional context or instructions that shouldn't be exposed to the client
- Message mutations: Modify, filter, or enhance messages based on business logic
- Request summarization: Compress or summarize lengthy conversation history
- Evaluation and logging: Log requests for debugging, monitoring, or evaluation purposes
- Dynamic configuration: Adjust model parameters based on runtime conditions
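As a taste of the mechanism, the evaluation-and-logging use case can be a minimal pass-through transform. This sketch assumes a simplified Options type standing in for the provider's real request parameters:

```typescript
// Simplified stand-in for the provider's request parameters.
type Options = { messages: { role: string; content: string }[] };

// Log the outgoing request for debugging, then pass it through unchanged.
const transformRequestOptions = (options: Options): Options => {
  console.log('LLM request:', JSON.stringify(options.messages));
  return options;
};
```

Because the transform returns the options untouched, the request reaches the provider exactly as the client sent it.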
How it Works
The transformRequestOptions function is called just before the request is sent to the LLM provider. It receives the complete request parameters and can return a modified version either synchronously or asynchronously via a Promise.
- Define a transform function that receives platform-specific request parameters
- Modify the parameters as needed (add system prompts, filter messages, etc.)
- Return the transformed parameters
- The adapter sends the modified request to the LLM provider
Basic Usage
import { HashbrownOpenAI } from '@hashbrownai/openai';

const stream = HashbrownOpenAI.stream.text({
  apiKey: process.env.OPENAI_API_KEY!,
  request: req.body,
  transformRequestOptions: (options) => {
    return {
      ...options,
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        ...options.messages,
      ],
    };
  },
});
In this example, we're adding a system message to every conversation without exposing it to the client-side code.
Server-Side Context Injection
Inject user context and application state that shouldn't be visible to the client:
const stream = HashbrownOpenAI.stream.text({
  apiKey: process.env.OPENAI_API_KEY!,
  request: req.body,
  transformRequestOptions: (options) => {
    const userContext = getUserContext(req.user.id);
    return {
      ...options,
      messages: [
        {
          role: 'system',
          content: `
You are an AI assistant for ${userContext.companyName}.
User role: ${userContext.role}
Available features: ${userContext.features.join(', ')}
`,
        },
        ...options.messages,
      ],
    };
  },
});
This approach keeps sensitive user context on the server while still providing it to the LLM for personalized responses.
Message Processing
Transform requests to modify message content based on business logic:
const stream = HashbrownOpenAI.stream.text({
  apiKey: process.env.OPENAI_API_KEY!,
  request: req.body,
  transformRequestOptions: (options) => {
    return {
      ...options,
      messages: options.messages.map((message) => {
        if (message.role === 'user') {
          // Filter out sensitive information
          const filteredContent = filterSensitiveData(message.content);
          return { ...message, content: filteredContent };
        }
        return message;
      }),
    };
  },
});
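The same pattern covers the request-summarization use case from the list above. A minimal sketch, assuming a simple keep-the-last-N truncation strategy (the Msg type and MAX_HISTORY constant are illustrative, not part of the Hashbrown API):

```typescript
type Msg = { role: string; content: string };

const MAX_HISTORY = 10; // illustrative cutoff

// Keep all system messages, plus only the most recent conversation turns.
function truncateHistory(messages: Msg[]): Msg[] {
  const system = messages.filter((m) => m.role === 'system');
  const rest = messages.filter((m) => m.role !== 'system');
  return [...system, ...rest.slice(-MAX_HISTORY)];
}
```

Inside transformRequestOptions you would return { ...options, messages: truncateHistory(options.messages) }. A real summarizer might instead replace the dropped turns with a single LLM-generated summary message.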
Dynamic Configuration
Adjust model parameters based on runtime conditions:
const stream = HashbrownOpenAI.stream.text({
  apiKey: process.env.OPENAI_API_KEY!,
  request: req.body,
  transformRequestOptions: (options) => {
    const userPlan = getUserPlan(req.user.id);
    return {
      ...options,
      temperature: userPlan === 'creative' ? 0.8 : 0.2,
      max_tokens: userPlan === 'free' ? 500 : undefined,
      tools: userPlan === 'premium' ? options.tools : undefined,
    };
  },
});
Async Transformations
Use async operations for database lookups or external API calls:
const stream = HashbrownOpenAI.stream.text({
  apiKey: process.env.OPENAI_API_KEY!,
  request: req.body,
  transformRequestOptions: async (options) => {
    const userPreferences = await fetchUserPreferences(req.user.id);
    return {
      ...options,
      messages: [
        {
          role: 'system',
          content: `User prefers ${userPreferences.communicationStyle} responses.`,
        },
        ...options.messages,
      ],
    };
  },
});
Platform-Specific Considerations
OpenAI
Supports all OpenAI chat completion parameters. You can modify tools, tool_choice, response_format, and more.
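For example, a transform could force structured JSON output and suppress tool calls for a given request. The OpenAIOptions type below is a simplified stand-in for the SDK's chat completion parameters:

```typescript
// Simplified stand-in for OpenAI chat completion parameters.
type OpenAIOptions = {
  messages: { role: string; content: string }[];
  response_format?: { type: 'text' | 'json_object' };
  tool_choice?: 'auto' | 'none';
};

// Force JSON output and disable tool calls for this request.
const transformRequestOptions = (options: OpenAIOptions): OpenAIOptions => ({
  ...options,
  response_format: { type: 'json_object' },
  tool_choice: 'none',
});
```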
Google (Gemini)
Uses the GenerateContentParameters format, which has a different message structure. System instructions are provided via the systemInstruction parameter.
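A Gemini-flavored sketch of the same system-prompt injection shown earlier. GeminiOptions is a simplified stand-in for GenerateContentParameters; consult the Google SDK for the exact shape:

```typescript
// Simplified stand-in for Gemini's GenerateContentParameters.
type GeminiOptions = {
  contents: { role: string; parts: { text: string }[] }[];
  systemInstruction?: { parts: { text: string }[] };
};

// Attach a server-side system instruction to every Gemini request.
const transformRequestOptions = (options: GeminiOptions): GeminiOptions => ({
  ...options,
  systemInstruction: { parts: [{ text: 'You are a helpful assistant.' }] },
});
```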
Writer
Uses Writer-specific parameter format with similar capabilities to OpenAI.
Azure OpenAI
Same parameters as OpenAI, but ensure compatibility with your Azure deployment configuration.
Error Handling
Always handle errors gracefully in your transform function:
const stream = HashbrownOpenAI.stream.text({
  apiKey: process.env.OPENAI_API_KEY!,
  request: req.body,
  transformRequestOptions: async (options) => {
    try {
      const enhancedOptions = await enhanceRequest(options);
      return enhancedOptions;
    } catch (error) {
      console.error('Failed to transform request:', error);
      // Return original options as fallback
      return options;
    }
  },
});
Next Steps
OpenAI Platform
Learn how to use transformRequestOptions with OpenAI.
System Instructions
Learn about system prompts and instructions.