Streaming Mode

Streaming Mode enables real-time, character-by-character display of AI responses using Server-Sent Events (SSE), creating a more dynamic and engaging chat experience similar to modern AI interfaces like ChatGPT.

Overview

When streaming mode is enabled, messages appear progressively as the AI model generates them, rather than all at once after the full response is ready. This provides immediate feedback and a more natural conversation flow.

  • ⚡ Real-time Display: characters appear as they’re generated, not all at once
  • 🌊 Server-Sent Events: uses SSE for efficient, persistent connections
  • 📱 Better UX: immediate feedback and reduced perceived latency
  • 🔧 Easy Setup: a single prop enables streaming functionality

Enabling Streaming Mode

Enable streaming mode with a single prop:
import { ChatInterface } from '@termix-it/react-tool';

function StreamingChat() {
  return (
    <ChatInterface
      projectId="your-project-id"
      aiConfigId="your-ai-config-id"
      authorization="Bearer your-token"
      enableStreamingMode={true} // Enable streaming
      placeholder="Ask me anything..."
    />
  );
}

Streaming vs Regular Mode

Understanding the differences between streaming and regular chat modes:
| Feature          | Regular Mode                        | Streaming Mode                 |
| ---------------- | ----------------------------------- | ------------------------------ |
| Response Display | Complete message appears at once    | Character-by-character display |
| API Endpoint     | POST /chat                          | POST /chat/stream              |
| Connection Type  | Single HTTP request                 | Server-Sent Events             |
| Visual Feedback  | Loading spinner                     | Real-time typing effect        |
| Usage Tracking   | Full cost information               | Limited cost tracking          |
| Perceived Speed  | Slower (wait for complete response) | Faster (immediate feedback)    |
| Network Usage    | Lower (single request)              | Higher (persistent connection) |
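
Because enableStreamingMode is an ordinary boolean prop, you can also let users choose a mode at runtime. A minimal sketch (the checkbox UI is illustrative):
import { useState } from 'react';
import { ChatInterface } from '@termix-it/react-tool';

function ModeToggleChat() {
  const [streaming, setStreaming] = useState(true);

  return (
    <div>
      {/* Let the user switch between streaming and regular mode */}
      <label className="mb-2 block text-sm">
        <input
          type="checkbox"
          checked={streaming}
          onChange={(e) => setStreaming(e.target.checked)}
        />{' '}
        Streaming mode
      </label>

      <ChatInterface
        projectId="your-project-id"
        aiConfigId="your-ai-config-id"
        authorization="Bearer your-token"
        enableStreamingMode={streaming}
      />
    </div>
  );
}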

Technical Requirements

Server-Side Implementation

Your streaming endpoint must support Server-Sent Events and return data in the expected format:

Endpoint Requirements

  • Endpoint: POST /api/v1/ai/projects/{projectId}/chat/stream
  • Content-Type: application/json (request); text/event-stream (response)
  • Accept: text/event-stream

Response Format

The endpoint should return SSE events in this specific format:
data: {"type":"content","data":"Hello"}
data: {"type":"content","data":" there!"}
data: {"type":"content","data":" How"}
data: {"type":"content","data":" can"}
data: {"type":"content","data":" I"}
data: {"type":"content","data":" help"}
data: {"type":"content","data":" you"}
data: {"type":"content","data":"?"}
data: {"type":"done","sessionId":"session-123-456"}

Stream Event Types

  • content: a partial message content chunk, e.g. {"type":"content","data":"partial text"}
  • done: signals stream completion and may include a session ID, e.g. {"type":"done","sessionId":"session-id-here"}

Advanced Streaming Features

Streaming with ChatWidget

Combine streaming with the floating widget for a real-time assistant on any page:
import { ChatWidget, ChatInterface } from '@termix-it/react-tool';

function StreamingWidget() {
  return (
    <div className="fixed bottom-6 right-6">
      <ChatWidget
        title="AI Assistant"
      >
        <ChatInterface
          projectId="your-project-id"
          aiConfigId="your-ai-config-id"
          authorization="Bearer your-token"
          enableStreamingMode={true}
          className="h-full"
        />
      </ChatWidget>
    </div>
  );
}

Streaming with Function Calls

Function calls work seamlessly with streaming mode:
import { ChatInterface } from '@termix-it/react-tool';
// FunctionCall and ExecutionResult are assumed to be exported by the same package:
import type { FunctionCall, ExecutionResult } from '@termix-it/react-tool';

function StreamingWithFunctions() {
  const handleFunctionExecuted = (call: FunctionCall, result: ExecutionResult) => {
    console.log('Function executed during streaming:', call.name);
    
    if (result.success) {
      console.log('Function result:', result.data);
    }
  };

  return (
    <ChatInterface
      projectId="your-project-id"
      aiConfigId="your-ai-config-id"
      authorization="Bearer your-token"
      enableStreamingMode={true}
      onFunctionExecuted={handleFunctionExecuted}
      placeholder="Ask me to use functions..."
    />
  );
}

Connection Status Monitoring

Monitor streaming connection status and handle errors:
import { useState, useCallback } from 'react';
import { ChatInterface } from '@termix-it/react-tool';

function MonitoredStreamingChat() {
  const [connectionStatus, setConnectionStatus] = useState<'connected' | 'disconnected' | 'error'>('disconnected');
  const [lastError, setLastError] = useState<string | null>(null);

  const handleError = useCallback((error: any) => {
    console.error('Streaming error:', error);
    setConnectionStatus('error');
    setLastError(error.message || 'Unknown streaming error');
  }, []);

  const handleMessageSent = useCallback(() => {
    setConnectionStatus('connected');
    setLastError(null);
  }, []);

  const handleResponseReceived = useCallback(() => {
    setConnectionStatus('connected');
  }, []);

  return (
    <div>
      {/* Connection status indicator */}
      <div className={`mb-4 p-2 rounded text-sm ${
        connectionStatus === 'connected' ? 'bg-green-100 text-green-800' :
        connectionStatus === 'error' ? 'bg-red-100 text-red-800' :
        'bg-gray-100 text-gray-600'
      }`}>
        Status: {connectionStatus}
        {lastError && <span className="block">Error: {lastError}</span>}
      </div>

      <ChatInterface
        projectId="your-project-id"
        aiConfigId="your-ai-config-id"
        authorization="Bearer your-token"
        enableStreamingMode={true}
        onError={handleError}
        onMessageSent={handleMessageSent}
        onResponseReceived={handleResponseReceived}
      />
    </div>
  );
}

Performance Considerations

Network Optimization

Connection Management: Streaming uses persistent connections. Ensure your server can handle multiple concurrent SSE connections.
Compression: Enable gzip compression for SSE responses to reduce bandwidth usage. Note that compression middleware buffers output, so flush after each event or chunks will not reach the client until the stream ends (see the sketch below).
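
A sketch of explicit flushing, assuming Express with the compression package (which adds res.flush() to the response):
import express from 'express';
import compression from 'compression';

const app = express();
// compression() buffers output, so each SSE event must be flushed.
app.use(compression());

app.post('/chat/stream', (_req, res) => {
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');

  res.write('data: {"type":"content","data":"Hello"}\n\n');
  // Without this flush, the compressed chunk stays buffered server-side.
  (res as unknown as { flush: () => void }).flush();

  res.end();
});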

Client-Side Optimization

import { memo, useCallback, useMemo } from 'react';
import { ChatInterface } from '@termix-it/react-tool';

const OptimizedStreamingChat = memo(function StreamingChat({ 
  projectId, 
  aiConfigId, 
  userToken 
}: {
  projectId: string;
  aiConfigId: string;
  userToken: string;
}) {
  // Memoize authorization header
  const authorization = useMemo(() => `Bearer ${userToken}`, [userToken]);

  // Memoize callback functions to prevent unnecessary re-renders
  const handleError = useCallback((error: any) => {
    console.error('Optimized streaming error:', error);
  }, []);

  return (
    <ChatInterface
      projectId={projectId}
      aiConfigId={aiConfigId}
      authorization={authorization}
      enableStreamingMode={true}
      onError={handleError}
    />
  );
});

Troubleshooting

Debugging Tips

Enable detailed logging to debug streaming issues:
import { ChatInterface } from '@termix-it/react-tool';

function DebugStreamingChat() {
  const handleError = (error: any) => {
    console.group('Streaming Debug Info');
    console.error('Error type:', typeof error);
    console.error('Error message:', error.message);
    console.error('Error stack:', error.stack);
    console.error('Full error object:', error);
    console.groupEnd();
  };

  return (
    <ChatInterface
      projectId="your-project-id"
      aiConfigId="your-ai-config-id"
      authorization="Bearer your-token"
      enableStreamingMode={true}
      onError={handleError}
      onMessageSent={(msg) => console.log('Message sent:', msg)}
      onResponseReceived={(msg) => console.log('Response received:', msg)}
    />
  );
}

Browser Compatibility

Streaming mode uses modern web APIs with broad browser support:
  • Server-Sent Events: supported in all modern browsers (Internet Explorer requires a polyfill)
  • EventSource API: native support in all evergreen browsers
  • Graceful Degradation: the component automatically falls back to regular mode if streaming fails

Best Practices

Error Handling: Always implement proper error handling for streaming connections, as they can be interrupted by network issues.
User Feedback: Add visual indicators that distinguish an actively streaming response from a completed one.
Resource Management: Ensure SSE connections are properly closed when components unmount to prevent memory leaks; see the sketch below.
Bandwidth: Streaming mode uses more network resources than regular mode. Consider offering users the choice between modes in bandwidth-conscious applications.
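
ChatInterface manages its own connection lifecycle, but if you call the streaming endpoint directly, tie the request to component unmount. A minimal sketch using AbortController (the hook name and request body are illustrative):
import { useEffect } from 'react';

function useChatStream(url: string, token: string) {
  useEffect(() => {
    const controller = new AbortController();

    fetch(url, {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${token}`,
        'Content-Type': 'application/json',
        Accept: 'text/event-stream',
      },
      body: JSON.stringify({ message: 'Hello' }),
      signal: controller.signal,
    }).catch(() => {
      // Aborted on unmount or a network failure; surface real errors as needed.
    });

    // Close the streaming connection when the component unmounts.
    return () => controller.abort();
  }, [url, token]);
}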

Testing Streaming Implementation

Test your streaming implementation thoroughly:
# Test streaming endpoint directly with curl
curl -N -H "Accept: text/event-stream" \
     -H "Authorization: Bearer your-token" \
     -H "Content-Type: application/json" \
     -d '{"message":"Hello"}' \
     https://dashboard.termix.ai/api/v1/ai/projects/your-project/chat/stream
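
You can also exercise the endpoint from TypeScript with the Fetch API, reading raw SSE chunks as they arrive (error handling omitted for brevity):
async function testStream(): Promise<void> {
  const res = await fetch(
    'https://dashboard.termix.ai/api/v1/ai/projects/your-project/chat/stream',
    {
      method: 'POST',
      headers: {
        Authorization: 'Bearer your-token',
        'Content-Type': 'application/json',
        Accept: 'text/event-stream',
      },
      body: JSON.stringify({ message: 'Hello' }),
    },
  );

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();

  // Log each raw SSE chunk as the server sends it.
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    console.log(decoder.decode(value, { stream: true }));
  }
}

Streaming mode transforms the chat experience from static request-response interactions to dynamic, real-time conversations that feel more natural and engaging.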
Streaming mode transforms the chat experience from static request-response interactions to dynamic, real-time conversations that feel more natural and engaging.