Streaming Mode enables real-time, character-by-character display of AI responses using Server-Sent Events (SSE), creating a more dynamic and engaging chat experience similar to modern AI interfaces like ChatGPT.
Overview
When streaming mode is enabled, messages appear progressively as the AI model generates them instead of arriving only after the complete response is ready. This provides immediate feedback and a more natural conversation flow.
⚡ Real-time Display Characters appear as they’re generated, not all at once
🌊 Server-Sent Events Uses SSE for efficient, persistent connections
📱 Better UX Immediate feedback and reduced perceived latency
🔧 Easy Setup Simple prop to enable streaming functionality
Enabling Streaming Mode
Enable streaming mode with a single prop:
import { ChatInterface } from '@termix-it/react-tool';

function StreamingChat() {
  return (
    <ChatInterface
      projectId="your-project-id"
      aiConfigId="your-ai-config-id"
      authorization="your-api-key"
      enableStreamingMode={true} // Enable streaming
      placeholder="Ask me anything..."
    />
  );
}
<script setup lang="ts">
import { ChatInterface } from '@termix-it/vue-tool';
import '@termix-it/vue-tool/style.css';
</script>

<template>
  <ChatInterface
    project-id="your-project-id"
    ai-config-id="your-ai-config-id"
    authorization="your-api-key"
    :enable-streaming-mode="true"
    placeholder="Ask me anything..."
  />
</template>
Streaming vs Regular Mode
Understanding the differences between streaming and regular chat modes:
| Feature | Regular Mode | Streaming Mode |
| --- | --- | --- |
| Response Display | Complete message appears at once | Character-by-character display |
| API Endpoint | POST /chat | POST /chat/stream |
| Connection Type | Single HTTP request | Server-Sent Events |
| Visual Feedback | Loading spinner | Real-time typing effect |
| Usage Tracking | Full cost information | Limited cost tracking |
| Perceived Speed | Slower (wait for complete response) | Faster (immediate feedback) |
| Network Usage | Lower (single request) | Higher (persistent connection) |
Technical Requirements
Server-Side Implementation
Your streaming endpoint must support Server-Sent Events and return data in the expected format:
Endpoint Requirements
POST /api/v1/ai/projects/{projectId}/chat/stream
Request: application/json, Response: text/event-stream
The endpoint should return SSE events in this specific format:
data: {"type":"content","data":"Hello"}
data: {"type":"content","data":" there!"}
data: {"type":"content","data":" How"}
data: {"type":"content","data":" can"}
data: {"type":"content","data":" I"}
data: {"type":"content","data":" help"}
data: {"type":"content","data":" you"}
data: {"type":"content","data":"?"}
data: {"type":"done","sessionId":"session-123-456"}
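For illustration, a minimal server-side sketch that emits events in this format is shown below. It assumes Express, and the hard-coded tokens stand in for a real AI provider's streaming output; it is not part of the SDK.

import express from 'express';

// Sketch only: Express and the hard-coded tokens are assumptions; wire in
// your AI provider's streaming output where the loop is.
const app = express();
app.use(express.json());

app.post('/api/v1/ai/projects/:projectId/chat/stream', async (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');
  res.flushHeaders(); // send headers immediately so the client sees the stream open

  for (const token of ['Hello', ' there!']) {
    res.write(`data: ${JSON.stringify({ type: 'content', data: token })}\n\n`);
  }
  res.write(`data: ${JSON.stringify({ type: 'done', sessionId: 'session-123-456' })}\n\n`);
  res.end();
});

app.listen(3000);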
Stream Event Types
content — a partial message content chunk: {"type":"content","data":"partial text"}
done — indicates stream completion, with an optional session ID: {"type":"done","sessionId":"session-id-here"}
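Because the stream arrives over a POST request, the browser's native EventSource API (which only supports GET) cannot consume it directly; clients typically read the response body with fetch instead. The SDK components do this internally; the sketch below only illustrates how the two event types could be parsed. The function name, header names, and error handling are assumptions.

async function consumeStream(url: string, apiKey: string, body: unknown): Promise<string> {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', 'X-API-Key': apiKey },
    body: JSON.stringify(body),
  });
  if (!res.ok || !res.body) throw new Error(`Stream failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  let text = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const events = buffer.split('\n\n'); // SSE events are separated by a blank line
    buffer = events.pop() ?? ''; // keep any incomplete trailing event
    for (const raw of events) {
      const line = raw.trim();
      if (!line.startsWith('data:')) continue;
      const event = JSON.parse(line.slice(5));
      if (event.type === 'content') text += event.data; // append partial chunk
      if (event.type === 'done') console.log('Session:', event.sessionId);
    }
  }
  return text;
}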
Advanced Streaming Features
Combine streaming with the floating widget for optimal user experience:
import { ChatWidget, ChatInterface } from '@termix-it/react-tool';

function StreamingWidget() {
  return (
    <div className="fixed bottom-6 right-6">
      <ChatWidget title="AI Assistant">
        <ChatInterface
          projectId="your-project-id"
          aiConfigId="your-ai-config-id"
          authorization="your-api-key"
          enableStreamingMode={true}
          className="h-full"
        />
      </ChatWidget>
    </div>
  );
}
<script setup lang="ts">
import { ChatWidget, ChatInterface } from '@termix-it/vue-tool';
import '@termix-it/vue-tool/style.css';
</script>

<template>
  <div class="fixed bottom-6 right-6">
    <ChatWidget title="AI Assistant">
      <ChatInterface
        project-id="your-project-id"
        ai-config-id="your-ai-config-id"
        authorization="your-api-key"
        :enable-streaming-mode="true"
        class="h-full"
      />
    </ChatWidget>
  </div>
</template>
Streaming with Function Calls
Function calls work seamlessly with streaming mode:
import { ChatInterface } from '@termix-it/react-tool';
import type { FunctionCall, ExecutionResult } from '@termix-it/react-tool';

function StreamingWithFunctions() {
  const handleFunctionExecuted = (call: FunctionCall, result: ExecutionResult) => {
    console.log('Function executed during streaming:', call.name);
    if (result.success) {
      console.log('Function result:', result.data);
    }
  };

  return (
    <ChatInterface
      projectId="your-project-id"
      aiConfigId="your-ai-config-id"
      authorization="your-api-key"
      enableStreamingMode={true}
      onFunctionExecuted={handleFunctionExecuted}
      placeholder="Ask me to use functions..."
    />
  );
}
<script setup lang="ts">
import { ChatInterface } from '@termix-it/vue-tool';
import type { FunctionCall, ExecutionResult } from '@termix-it/vue-tool';
import '@termix-it/vue-tool/style.css';

const handleFunctionExecuted = (call: FunctionCall, result: ExecutionResult) => {
  console.log('Function executed during streaming:', call.name);
  if (result.success) {
    console.log('Function result:', result.data);
  }
};
</script>

<template>
  <ChatInterface
    project-id="your-project-id"
    ai-config-id="your-ai-config-id"
    authorization="your-api-key"
    :enable-streaming-mode="true"
    @function-executed="handleFunctionExecuted"
    placeholder="Ask me to use functions..."
  />
</template>
Connection Status Monitoring
Monitor streaming connection status and handle errors:
import { useState, useCallback } from 'react';
import { ChatInterface } from '@termix-it/react-tool';

function MonitoredStreamingChat() {
  const [connectionStatus, setConnectionStatus] = useState<'connected' | 'disconnected' | 'error'>('disconnected');
  const [lastError, setLastError] = useState<string | null>(null);

  const handleError = useCallback((error: any) => {
    console.error('Streaming error:', error);
    setConnectionStatus('error');
    setLastError(error.message || 'Unknown streaming error');
  }, []);

  const handleMessageSent = useCallback(() => {
    setConnectionStatus('connected');
    setLastError(null);
  }, []);

  const handleResponseReceived = useCallback(() => {
    setConnectionStatus('connected');
  }, []);

  return (
    <div>
      {/* Connection status indicator */}
      <div className={`mb-4 p-2 rounded text-sm ${
        connectionStatus === 'connected' ? 'bg-green-100 text-green-800' :
        connectionStatus === 'error' ? 'bg-red-100 text-red-800' :
        'bg-gray-100 text-gray-600'
      }`}>
        Status: {connectionStatus}
        {lastError && <span className="block">Error: {lastError}</span>}
      </div>

      <ChatInterface
        projectId="your-project-id"
        aiConfigId="your-ai-config-id"
        authorization="your-api-key"
        enableStreamingMode={true}
        onError={handleError}
        onMessageSent={handleMessageSent}
        onResponseReceived={handleResponseReceived}
      />
    </div>
  );
}
<script setup lang="ts">
import { ref, computed } from 'vue';
import { ChatInterface } from '@termix-it/vue-tool';
import '@termix-it/vue-tool/style.css';

const connectionStatus = ref<'connected' | 'disconnected' | 'error'>('disconnected');
const lastError = ref<string | null>(null);

const statusClass = computed(() => {
  if (connectionStatus.value === 'connected') return 'bg-green-100 text-green-800';
  if (connectionStatus.value === 'error') return 'bg-red-100 text-red-800';
  return 'bg-gray-100 text-gray-600';
});

const handleError = (error: any) => {
  console.error('Streaming error:', error);
  connectionStatus.value = 'error';
  lastError.value = error.message || 'Unknown streaming error';
};

const handleMessageSent = () => {
  connectionStatus.value = 'connected';
  lastError.value = null;
};

const handleResponseReceived = () => {
  connectionStatus.value = 'connected';
};
</script>

<template>
  <div>
    <!-- Connection status indicator -->
    <div :class="['mb-4 p-2 rounded text-sm', statusClass]">
      Status: {{ connectionStatus }}
      <span v-if="lastError" class="block">Error: {{ lastError }}</span>
    </div>

    <ChatInterface
      project-id="your-project-id"
      ai-config-id="your-ai-config-id"
      authorization="your-api-key"
      :enable-streaming-mode="true"
      @error="handleError"
      @message-sent="handleMessageSent"
      @response-received="handleResponseReceived"
    />
  </div>
</template>
Network Optimization
Connection Management: Streaming uses persistent connections. Ensure your server can handle multiple concurrent SSE connections.
Compression: Enable gzip compression on your server for SSE responses to reduce bandwidth usage.
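If you serve SSE through Express's compression middleware, note that it buffers output by default; calling its flush() after each event keeps delivery real-time. A hedged sketch (the npm compression package is an assumption; the handler mirrors the earlier endpoint sketch):

import express from 'express';
import compression from 'compression';

const app = express();
app.use(compression()); // gzip for all responses, including the SSE stream

app.post('/chat/stream', (req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.write('data: {"type":"content","data":"Hello"}\n\n');
  // compression() adds a flush() method; call it so each event is emitted
  // immediately instead of sitting in the gzip buffer.
  (res as unknown as { flush: () => void }).flush();
  res.end();
});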
Troubleshooting
Common Issues
Symptoms: Messages appear all at once instead of streaming. Solutions:
Verify your server endpoint returns text/event-stream content type
Check that your proxy correctly forwards the Accept: text/event-stream header
Ensure your server doesn’t buffer the SSE response (see the sketch below)
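A sketch of the buffering point: these headers, set inside your streaming handler, discourage buffering along the path (X-Accel-Buffering is honored by nginx; the surrounding Express-style handler is assumed):

// Inside the streaming handler from the earlier sketch:
res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache, no-transform'); // no-transform also blocks some intermediaries' rewriting
res.setHeader('X-Accel-Buffering', 'no'); // tells nginx not to buffer this response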
Symptoms: Streaming stops mid-response. Solutions:
Implement connection retry logic on both client and server
Check for network timeouts and adjust server keep-alive settings
Monitor for proxy/load balancer timeouts in production (a heartbeat sketch follows)
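One common mitigation, sketched below under the same Express-style assumptions, is a periodic SSE comment line as a heartbeat so idle-timeout windows never elapse (the 15-second interval is arbitrary):

// Inside the streaming handler: comment lines (starting with ':') are
// ignored by SSE parsers but keep the connection active.
const heartbeat = setInterval(() => res.write(': keep-alive\n\n'), 15_000);
req.on('close', () => clearInterval(heartbeat)); // stop when the client disconnects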
Symptoms: Browser memory increases during long streaming sessions. Solutions:
Implement message history limits
Close unused SSE connections properly
Clear old messages from memory periodically (see the sketch below)
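A minimal sketch of a history cap (the limit of 200 messages is an arbitrary assumption):

// Keep only the most recent messages in memory.
const MAX_MESSAGES = 200;
function trimHistory<T>(messages: T[]): T[] {
  return messages.length > MAX_MESSAGES ? messages.slice(-MAX_MESSAGES) : messages;
}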
Symptoms: Cross-origin errors in browser console. Solutions:
Add proper CORS headers to your streaming endpoint
Include Access-Control-Allow-Origin and other necessary headers (sketched below)
Test with same-origin requests first to isolate CORS issues
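A hedged sketch of the CORS headers (the allowed origin is a placeholder; with Express you might instead use the cors middleware):

// Inside the streaming handler (or a shared middleware):
res.setHeader('Access-Control-Allow-Origin', 'https://your-app.example.com');
res.setHeader('Access-Control-Allow-Headers', 'Content-Type, X-API-Key');
res.setHeader('Access-Control-Allow-Methods', 'POST, OPTIONS');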
Debugging Tips
Enable detailed logging for streaming debugging:
import { ChatInterface } from '@termix-it/react-tool';

function DebugStreamingChat() {
  const handleError = (error: any) => {
    console.group('Streaming Debug Info');
    console.error('Message:', error.message);
    console.error('Stack:', error.stack);
    // HTTP-specific (Axios errors)
    if (error.response) {
      console.error('HTTP Status:', error.response.status);
      console.error('Response data:', error.response.data);
    }
    console.error('Full error:', error);
    console.groupEnd();
  };

  return (
    <ChatInterface
      projectId="your-project-id"
      aiConfigId="your-ai-config-id"
      authorization="your-api-key"
      enableStreamingMode={true}
      onError={handleError}
      onMessageSent={(msg) => console.log('Message sent:', msg)}
      onResponseReceived={(msg) => console.log('Response received:', msg)}
    />
  );
}
<script setup lang="ts">
import { ChatInterface } from '@termix-it/vue-tool';
import type { Message } from '@termix-it/vue-tool';
import '@termix-it/vue-tool/style.css';

const handleError = (error: any) => {
  console.group('Streaming Debug Info');
  console.error('Message:', error.message);
  console.error('Stack:', error.stack);
  // HTTP-specific (Axios errors)
  if (error.response) {
    console.error('HTTP Status:', error.response.status);
    console.error('Response data:', error.response.data);
  }
  console.error('Full error:', error);
  console.groupEnd();
};

const onMessageSent = (msg: Message) => console.log('Message sent:', msg);
const onResponseReceived = (msg: Message) => console.log('Response received:', msg);
</script>

<template>
  <ChatInterface
    project-id="your-project-id"
    ai-config-id="your-ai-config-id"
    authorization="your-api-key"
    :enable-streaming-mode="true"
    @error="handleError"
    @message-sent="onMessageSent"
    @response-received="onResponseReceived"
  />
</template>
Browser Compatibility
Streaming mode uses modern web APIs with broad browser support:
Server-Sent Events: Supported in all modern browsers; Internet Explorer has no native support, so the fallback below applies there
EventSource API: Native support across all modern target browsers
Graceful Degradation: Automatically falls back to regular mode if streaming fails
Best Practices
Error Handling: Always implement proper error handling for streaming connections, as they can be interrupted by network issues.
User Feedback: Consider adding visual indicators when streaming is active vs. when it’s complete.
Resource Management: Ensure SSE connections are properly closed when components unmount to prevent memory leaks.
Streaming mode uses more network resources than regular mode. Consider offering users the choice between modes for bandwidth-conscious applications.
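For anyone building a custom client on top of the streaming endpoint (the SDK components close their own connections), a sketch of unmount cleanup with AbortController follows; the hook name and request shape are assumptions:

import { useEffect } from 'react';

// Sketch: abort a hand-rolled fetch-based stream when the component unmounts.
function useStreamCleanup(url: string, body: unknown) {
  useEffect(() => {
    const controller = new AbortController();
    fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(body),
      signal: controller.signal, // aborting the signal closes the SSE connection
    }).catch(() => { /* AbortError on unmount is expected; ignore in this sketch */ });
    return () => controller.abort();
  }, [url, body]);
}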
Testing Streaming Implementation
Test your streaming implementation thoroughly:
# Test streaming endpoint directly with curl
curl -N \
-H 'Content-Type: application/json' \
-H 'X-API-Key: YOUR-API-KEY' \
--data-raw '{"messages":[{"role":"user","content":"hello"}]}' \
'https://dashboard.termix.ai/api/v1/sdk-api/projects/YOUR-PROJECT-ID/chat/stream'
Streaming mode transforms the chat experience from static request-response interactions to dynamic, real-time conversations that feel more natural and engaging.