Streamable HTTP transport

In the Streamable HTTP transport, the server operates as an independent process that can handle multiple client connections. The transport uses HTTP POST and GET requests, and the server can optionally use Server-Sent Events (SSE) to stream multiple server messages. This accommodates basic MCP servers as well as more feature-rich servers that support streaming and server-to-client notifications and requests.

Ref: https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http
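
On the wire, every client-to-server message is an HTTP POST of a JSON-RPC payload to a single MCP endpoint, and the Accept header tells the server it may answer either with a plain JSON body or with an SSE stream. The sketch below shows the shape of such a request using plain fetch. It is a minimal illustration only: depending on the server you may first need to initialize a session and pass back the returned Mcp-Session-Id header, and the SDK client used later in this guide handles all of that for you.

// A single Streamable HTTP request: one JSON-RPC message POSTed to the
// MCP endpoint. The server may reply with application/json (a single
// response) or with text/event-stream (an SSE stream of messages).
const res = await fetch("https://dev.euler.ai/mcp/stream", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // The client must advertise that it accepts both response types.
    Accept: "application/json, text/event-stream",
  },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "ping" }),
});
console.log(res.status, res.headers.get("content-type"));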

QuickStart

How do you use the Streamable HTTP transport with the Euler MCP Server? It's easy: point any MCP client that supports Streamable HTTP at the endpoint https://dev.euler.ai/mcp/stream.
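
With the official TypeScript SDK, for example, connecting and listing the server's tools takes only a few lines (a minimal sketch built from the same SDK calls used by the full client below):

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Connect over Streamable HTTP and list the tools the server exposes.
const client = new Client({ name: "quickstart-client", version: "1.0.0" });
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://dev.euler.ai/mcp/stream"))
);
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));
await client.close();

Running the full example client (the listing further below) against the endpoint looks like this: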

tsx src/stream_client.ts "show me the latest block of the solana mainnet"
Connected to MCP server
Response from OpenAI:  {
  "id": "gen-1748680535-W4yrCtFN6JZoNjjNTa4n",
  "provider": "OpenAI",
  "model": "openai/gpt-3.5-turbo",
  "object": "chat.completion",
  "created": 1748680535,
  "choices": [
    {
      "logprobs": null,
      "finish_reason": "tool_calls",
      "native_finish_reason": "tool_calls",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "",
        "refusal": null,
        "reasoning": null,
        "tool_calls": [
          {
            "index": 0,
            "id": "call_57vz7Rg0FueeUEUyniH6VW60",
            "type": "function",
            "function": {
              "name": "get_latest_block",
              "arguments": "{\"network\":\"mainnet\"}"
            }
          }
        ]
      }
    }
  ],
  "system_fingerprint": null,
  "usage": {
    "prompt_tokens": 2273,
    "completion_tokens": 16,
    "total_tokens": 2289,
    "prompt_tokens_details": {
      "cached_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0
    }
  }
}
Calling tool get_latest_block with args "{\"network\":\"mainnet\"}"
Final result:  [Calling tool get_latest_block with args "{\"network\":\"mainnet\"}"]
The latest block of the Solana Mainnet is block number 22601553. 

Here are some details of the block:
- Hash: 0xd734ef8b92851303674195fdea3544c593b33d65bd0c425f16ea2bf900675b7a
- Miner Address: 0x4838b106fce9647bdf1e7877bf73ce8b0bad5f97
- Timestamp: 1748680523
- Gas Limit: 35964845
- Gas Used: 16953014
- Number of Transactions: 100 (List of transaction hashes provided in the response)

This information provides an overview of the latest block on the Solana Mainnet.

Full example: the Euler MCP + Code client, adapted to use StreamableHTTPClientTransport:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { Transport } from "@modelcontextprotocol/sdk/shared/transport.js";
import dotenv from "dotenv";
import OpenAI from "openai";
dotenv.config();

const OPENAI_API_KEY = process.env.OPENAI_API_KEY;
if (!OPENAI_API_KEY) {
  throw new Error("OPENAI_API_KEY is not set");
}

const OPENAI_BASE_URL = process.env.OPENAI_BASE_URL || "https://api.openai.com/v1";
const MCP_SERVER_URL = process.env.MCP_STREAM_SERVER_URL || "https://dev.euler.ai/mcp/stream";

function openAiToolAdapter(tool: {
  name: string;
  description?: string;
  input_schema: any;
}) {
  return {
    type: "function",
    function: {
      name: tool.name,
      description: tool.description,
      parameters: {
        type: "object",
        properties: tool.input_schema.properties,
        required: tool.input_schema.required,
      },
    },
  };
}
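
// For illustration, an MCP tool such as
//   { name: "get_latest_block",
//     inputSchema: { type: "object",
//                    properties: { network: { type: "string" } },
//                    required: ["network"] } }
// is adapted into the OpenAI function-calling shape:
//   { type: "function",
//     function: { name: "get_latest_block",
//                 description: "...",
//                 parameters: { type: "object",
//                               properties: { network: { type: "string" } },
//                               required: ["network"] } } }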

class MCPClient {
  private mcp: Client;
  private openai: OpenAI;
  private tools: Array<any> = [];
  private transport: Transport | null = null;

  constructor() {
    this.openai = new OpenAI({
      apiKey: OPENAI_API_KEY,
      baseURL: OPENAI_BASE_URL,
    });

    this.mcp = new Client({ name: "mcp-client", version: "1.0.0" });
  }

  async connectToServer(serverUrl: string) {
    try {
      this.transport = new StreamableHTTPClientTransport(new URL(serverUrl));
      await this.mcp.connect(this.transport);

      const toolsResult = await this.mcp.listTools();
      this.tools = toolsResult.tools.map((tool) => {
        return openAiToolAdapter({
          name: tool.name,
          description: tool.description,
          input_schema: tool.inputSchema,
        });
      });
    } catch (e) {
      console.log("Failed to connect to MCP server: ", e);
      throw e;
    }
  }

  async processQuery(query: string) {
    const messages: any[] = [
      {
        role: "user",
        content: query,
      },
    ];

    // console.log("Tools: ", JSON.stringify(this.tools, null, 2));

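    // First pass: give the model the MCP tools so it can request calls.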
    let response = await this.openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      max_tokens: 1000,
      messages,
      tools: this.tools,
    });

    const finalText: string[] = [];
    const toolResults: any[] = [];

    console.log(
      "Response from OpenAI: ",
      JSON.stringify(response, null, 2)
    );

    for (const choice of response.choices) {
      const message = choice.message;
      if (message.tool_calls) {
        // callTools appends each tool's result to toolResults itself,
        // so we just await it here.
        await this.callTools(message.tool_calls, toolResults, finalText);
      } else {
        finalText.push(message.content || "");
      }
    }

    messages.push({
      role: "user",
      content: `Tool Response ${JSON.stringify(toolResults)}`,
    });
    messages.push({
      role: "user",
      content: `Use the Tool Response to answer the user's question: ${query}`,
    });
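    // For simplicity, tool output is fed back as plain user messages here;
    // a stricter client would echo the assistant's tool_calls message and
    // reply with role "tool" messages keyed by tool_call_id.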

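    // Second pass: ask the model to answer using the tool output above.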
    response = await this.openai.chat.completions.create({
      model: "gpt-3.5-turbo",
      max_tokens: 1000,
      messages,
    });

    if (response.choices && response.choices.length > 0) {
      finalText.push(response.choices[0].message.content || "");
    }

    return finalText.join("\n");
  }

  async callTools(
    tool_calls: OpenAI.Chat.Completions.ChatCompletionMessageToolCall[],
    toolResults: any[],
    finalText: string[]
  ) {
    for (const tool_call of tool_calls) {
      const toolName = tool_call.function.name;
      const args = tool_call.function.arguments;

      console.log(`Calling tool ${toolName} with args ${JSON.stringify(args)}`);

      const toolResult = await this.mcp.callTool({
        name: toolName,
        arguments: JSON.parse(args),
      });
      toolResults.push({
        name: toolName,
        result: toolResult,
      });
      finalText.push(
        `[Calling tool ${toolName} with args ${JSON.stringify(args)}]`
      );
    }
  }

  async cleanup() {
    await this.mcp.close();
  }
}

const client = new MCPClient();
await client.connectToServer(MCP_SERVER_URL);
console.log("Connected to MCP server");

const query = process.argv[2] || "What is the sum of 2 and 3?";
const result = await client.processQuery(query);
console.log("Final result: ", result);

await client.cleanup();
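
To run the example, set the environment variables the script reads (the names come from the code above; the values shown are placeholders, and OPENAI_BASE_URL only needs overriding if you route requests through a proxy):

# .env
OPENAI_API_KEY=sk-...
OPENAI_BASE_URL=https://api.openai.com/v1
MCP_STREAM_SERVER_URL=https://dev.euler.ai/mcp/stream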

The Streamable HTTP transport offers several key advantages:

  • Seamless Integration: As a purely HTTP-based solution, it integrates effortlessly with CDNs, API gateways, and other existing infrastructure. MCP can be added to ordinary HTTP services without special SSE support, so it adapts well to a wide range of deployment environments.

  • Simplified Client Side: Unlike the older HTTP + SSE model with its dual channels, clients handle a single unified endpoint. This substantially streamlines client-side code and makes it easier to maintain and understand.

  • Backward-Compatible Upgrade: It is a progressive improvement over HTTP + SSE that fixes issues such as connection recovery and the strain of long-lived server connections while keeping SSE's streaming-response benefits, so existing systems can upgrade smoothly without major disruption.

  • Better Performance: Streaming-based communication lets the server push data to clients promptly, reducing latency and improving responsiveness, especially for real-time applications. Partial responses and chunked data streaming also make better use of network bandwidth.
