How to Build an MCP-Style Routed AI Agent System with Dynamic Tool Exposure, Planning, Execution, and Context Injection

In this tutorial, we build a fully functional MCP-style routed agent system from scratch, combining tool discovery, intelligent routing, structured planning, and execution into a single cohesive workflow. We start by setting up a modular tool server that exposes capabilities such as web search, local retrieval, dataset loading, and Python execution, all defined through structured schemas. We then implement a hybrid router that uses both heuristics and LLM reasoning to dynamically decide which tools to expose for a given task, ensuring minimal yet effective capability exposure. As we progress, we design an agent that plans tool usage, executes calls safely, and synthesizes final answers by injecting context from tool outputs. By the end, we demonstrate multiple real-world tasks and show how MCP principles such as context injection, routing policies, and restricted tool access come together to create a scalable, interpretable, and efficient agent system.

import sys
import subprocess
import importlib.util

def ensure_packages():
    required = [
        ("openai", "openai>=1.40.0"),
        ("pandas", "pandas"),
        ("numpy", "numpy"),
        ("sklearn", "scikit-learn"),
        ("pydantic", "pydantic"),
        ("duckduckgo_search", "duckduckgo-search"),
        ("rich", "rich"),
    ]
    missing = []
    for import_name, pip_name in required:
        # importlib.util.find_spec replaces pkgutil.find_loader,
        # which is deprecated since Python 3.12
        if importlib.util.find_spec(import_name) is None:
            missing.append(pip_name)
    if missing:
        subprocess.check_call([sys.executable, "-m", "pip", "install", "-q"] + missing)

ensure_packages()

import os
import io
import re
import json
import math
import time
import textwrap
import traceback
import contextlib
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Callable, Tuple

import numpy as np
import pandas as pd

from openai import OpenAI
from pydantic import BaseModel, Field
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from duckduckgo_search import DDGS
from rich.console import Console
from rich.panel import Panel
from rich.table import Table
from rich.json import JSON as RichJSON

console = Console()

try:
    from google.colab import userdata
    OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
except Exception:
    OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")

if not OPENAI_API_KEY:
    import getpass
    OPENAI_API_KEY = getpass.getpass("Enter OPENAI_API_KEY: ").strip()

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
client = OpenAI(api_key=OPENAI_API_KEY)

MODEL = os.environ.get("OPENAI_MODEL", "gpt-4.1-mini")
MAX_TOOL_CALLS = 3
MAX_WEB_RESULTS = 5
TOP_K_RETRIEVAL = 3

We begin by checking and installing all required Python packages so the tutorial runs smoothly in a single environment. We then import the core libraries for data handling, retrieval, structured schemas, web search, and rich console display. We securely load the OpenAI API key, initialize the client, and define global settings for the model, tool calls, web results, and retrieval depth.

class ToolSpec(BaseModel):
    name: str
    description: str
    input_schema: Dict[str, Any]
    tags: List[str] = Field(default_factory=list)

class ToolCall(BaseModel):
    tool_name: str
    arguments: Dict[str, Any]

class RouteDecision(BaseModel):
    selected_tools: List[str]
    rationale: str
    policy_notes: List[str] = Field(default_factory=list)

class PlanOutput(BaseModel):
    requires_tools: bool
    tool_calls: List[ToolCall] = Field(default_factory=list)
    direct_answer_allowed: bool = False
    planner_note: str = ""

class ToolResult(BaseModel):
    tool_name: str
    ok: bool
    output: Any
    error: Optional[str] = None

LOCAL_DOCS = [
    {
        "id": "doc_001",
        "title": "Model Context Protocol Basics",
        "text": "Model Context Protocol standardizes how models connect to tools, resources, and prompts. A client can discover available tools from a server and invoke them using structured arguments."
    },
    {
        "id": "doc_002",
        "title": "Dynamic Capability Exposure",
        "text": "Dynamic capability exposure means an agent does not always see every tool. A router can expose only the most relevant tools for a task, improving safety, reducing distraction, and lowering tool selection entropy."
    },
    {
        "id": "doc_003",
        "title": "Context Injection for Agents",
        "text": "Context injection is the process of enriching the model prompt with selected tool descriptions, tool outputs, retrieved documents, prior summaries, and policy hints before the model generates a response."
    },
    {
        "id": "doc_004",
        "title": "Tool Discovery and MCP",
        "text": "In MCP style systems, tool discovery usually begins with a tools listing step. Each tool includes a name, description, and input schema so the client knows how and when to call it."
    },
    {
        "id": "doc_005",
        "title": "Router Policies for Agents",
        "text": "Routing policies can combine heuristics, learned scorers, confidence estimates, and LLM reasoning. A router may use task keywords, domain tags, or explicit constraints to decide which tools to expose."
    },
    {
        "id": "doc_006",
        "title": "Why Restrict Tool Access",
        "text": "Restricting tool access helps minimize accidental misuse, improves reasoning focus, reduces latency, and creates a more interpretable planning process. This is especially helpful in multi-tool agent systems."
    },
    {
        "id": "doc_007",
        "title": "Dataset Loading for Rapid Analysis",
        "text": "A dataset loader tool can let an agent inspect tabular data quickly. It is useful for classification tasks, summary statistics, schema exploration, and downstream code execution."
    },
    {
        "id": "doc_008",
        "title": "Python Sandboxes in Agent Systems",
        "text": "Many advanced agents rely on code execution sandboxes for calculations, simulation, plotting, and dataframe inspection. Safe code execution typically uses restricted globals and output capture."
    },
]

class LocalRetriever:
    def __init__(self, docs: List[Dict[str, str]]):
        self.docs = docs
        self.vectorizer = TfidfVectorizer(stop_words="english")
        self.doc_matrix = self.vectorizer.fit_transform([d["text"] for d in docs])

    def search(self, query: str, top_k: int = 3) -> List[Dict[str, Any]]:
        q_vec = self.vectorizer.transform([query])
        sims = cosine_similarity(q_vec, self.doc_matrix)[0]
        idxs = np.argsort(-sims)[:top_k]
        results = []
        for i in idxs:
            results.append({
                "id": self.docs[i]["id"],
                "title": self.docs[i]["title"],
                "text": self.docs[i]["text"],
                "score": float(sims[i]),
            })
        return results

retriever = LocalRetriever(LOCAL_DOCS)

We define structured Pydantic models to represent tool specifications, tool calls, routing decisions, planning outputs, and tool results in a clean MCP-style format. We then create a small local knowledge base that explains concepts like MCP, dynamic capability exposure, context injection, router policies, and sandboxed execution. Finally, we build a TF-IDF-based local retriever that searches these documents and returns the most relevant snippets, along with their similarity scores.
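To see the TF-IDF ranking idea in isolation, here is a minimal standalone sketch of the same pattern `LocalRetriever` uses. The two toy documents and the query are illustrative, not part of the tutorial corpus:

```python
# Minimal sketch of TF-IDF retrieval: vectorize documents once, then rank
# them against a query by cosine similarity. Toy data for illustration only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Context injection enriches the model prompt with tool outputs.",
    "Routing policies decide which tools an agent can see.",
]
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(docs)

q_vec = vectorizer.transform(["what is context injection"])
sims = cosine_similarity(q_vec, doc_matrix)[0]
best = int(np.argsort(-sims)[0])
print(best, round(float(sims[best]), 3))
```

Because "context" and "injection" occur only in the first document, it should rank first; the second document shares no non-stop-word terms with the query, so its score is zero.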

def tool_web_search(query: str, max_results: int = 5) -> Dict[str, Any]:
    results = []
    with DDGS() as ddgs:
        for r in ddgs.text(query, max_results=max_results):
            results.append({
                "title": r.get("title", ""),
                "href": r.get("href", ""),
                "body": r.get("body", ""),
            })
    return {"query": query, "results": results}

def tool_python_exec(code: str) -> Dict[str, Any]:
    allowed_builtins = {
        "abs": abs,
        "all": all,
        "any": any,
        "bool": bool,
        "dict": dict,
        "enumerate": enumerate,
        "float": float,
        "int": int,
        "len": len,
        "list": list,
        "max": max,
        "min": min,
        "print": print,
        "range": range,
        "round": round,
        "set": set,
        "sorted": sorted,
        "str": str,
        "sum": sum,
        "tuple": tuple,
        "zip": zip,
    }

    local_ns = {}
    global_ns = {
        "__builtins__": allowed_builtins,
        "np": np,
        "pd": pd,
        "math": math,
    }

    stdout_buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(stdout_buffer):
            exec(code, global_ns, local_ns)
        return {
            "stdout": stdout_buffer.getvalue(),
            "locals": {k: repr(v)[:500] for k, v in local_ns.items() if not k.startswith("__")}
        }
    except Exception as e:
        return {
            "stdout": stdout_buffer.getvalue(),
            "error_type": type(e).__name__,
            "error_message": str(e),
            "traceback": traceback.format_exc(limit=2),
        }

def load_builtin_dataset(name: str = "iris", n_rows: int = 10) -> Dict[str, Any]:
    from sklearn import datasets as sk_datasets
    registry = {
        "iris": sk_datasets.load_iris,
        "wine": sk_datasets.load_wine,
        "breast_cancer": sk_datasets.load_breast_cancer,
        "diabetes": sk_datasets.load_diabetes,
    }
    if name not in registry:
        raise ValueError(f"Unsupported dataset '{name}'. Choose from {list(registry.keys())}")
    ds = registry[name]()
    feature_names = list(ds.feature_names)
    df = pd.DataFrame(ds.data, columns=feature_names)
    if hasattr(ds, "target"):
        df["target"] = ds.target
    return {
        "dataset_name": name,
        "shape": list(df.shape),
        "columns": list(df.columns),
        "preview": df.head(n_rows).to_dict(orient="records"),
        "describe": df.describe(include="all").fillna("").to_dict(),
    }

def tool_vector_retrieve(query: str, top_k: int = 3) -> Dict[str, Any]:
    results = retriever.search(query, top_k=top_k)
    return {"query": query, "results": results}

We define the main tools our MCP-style agent can use, including web search, safe Python execution, dataset loading, and local vector retrieval. We keep Python execution controlled by limiting built-in functions and capturing printed output, local variables, and errors. We also ensure that the dataset and retrieval tools return structured outputs so the agent can inspect data or retrieve relevant knowledge before generating a final answer.
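The restricted-exec pattern behind `tool_python_exec` can be demonstrated on its own. The sketch below, with an illustrative `run_sandboxed` helper and a smaller builtin whitelist, shows both the success path and how a non-whitelisted name like `open` fails with a `NameError`:

```python
# Minimal sketch of restricted code execution: whitelisted builtins,
# captured stdout, and repr() snapshots of resulting locals.
import io
import contextlib

allowed_builtins = {"print": print, "sum": sum, "len": len, "range": range}

def run_sandboxed(code: str) -> dict:
    local_ns = {}
    global_ns = {"__builtins__": allowed_builtins}
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, global_ns, local_ns)
        return {"stdout": buf.getvalue(), "locals": {k: repr(v) for k, v in local_ns.items()}}
    except Exception as e:
        return {"stdout": buf.getvalue(), "error": f"{type(e).__name__}: {e}"}

ok = run_sandboxed("total = sum(range(5))\nprint(total)")
blocked = run_sandboxed("open('/etc/passwd')")  # open is not whitelisted -> NameError
print(ok["stdout"].strip())   # -> 10
print(blocked["error"])
```

Note that this approach blocks casual misuse but is not a hardened sandbox; determined code can still escape a restricted `__builtins__` dict, so production systems should use process-level isolation.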

@dataclass
class MCPTool:
    spec: ToolSpec
    fn: Callable[..., Any]

class MCPToolServer:
    def __init__(self):
        self.tools: Dict[str, MCPTool] = {}

    def register_tool(self, spec: ToolSpec, fn: Callable[..., Any]):
        self.tools[spec.name] = MCPTool(spec=spec, fn=fn)

    def tools_list(self) -> List[Dict[str, Any]]:
        return [
            {
                "name": tool.spec.name,
                "description": tool.spec.description,
                "input_schema": tool.spec.input_schema,
                "tags": tool.spec.tags,
            }
            for tool in self.tools.values()
        ]

    def tools_call(self, tool_name: str, arguments: Dict[str, Any]) -> ToolResult:
        if tool_name not in self.tools:
            return ToolResult(tool_name=tool_name, ok=False, output=None, error="Tool not found")
        try:
            output = self.tools[tool_name].fn(**arguments)
            return ToolResult(tool_name=tool_name, ok=True, output=output)
        except Exception as e:
            return ToolResult(tool_name=tool_name, ok=False, output=None, error=f"{type(e).__name__}: {str(e)}")

server = MCPToolServer()

server.register_tool(
    ToolSpec(
        name="web_search",
        description="Search the public web for recent or general information and return concise results.",
        input_schema={
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "max_results": {"type": "integer", "default": 5}
            },
            "required": ["query"]
        },
        tags=["web", "search", "recent", "news", "research"]
    ),
    tool_web_search
)

server.register_tool(
    ToolSpec(
        name="python_exec",
        description="Execute Python code for calculations, dataframe inspection, simulations, or transformations.",
        input_schema={
            "type": "object",
            "properties": {
                "code": {"type": "string"}
            },
            "required": ["code"]
        },
        tags=["python", "compute", "analysis", "code", "math"]
    ),
    tool_python_exec
)

server.register_tool(
    ToolSpec(
        name="vector_retrieve",
        description="Retrieve relevant local knowledge snippets from a vectorized tutorial corpus.",
        input_schema={
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "top_k": {"type": "integer", "default": 3}
            },
            "required": ["query"]
        },
        tags=["retrieval", "memory", "knowledge", "vector", "rag"]
    ),
    tool_vector_retrieve
)

server.register_tool(
    ToolSpec(
        name="dataset_loader",
        description="Load a built-in tabular dataset and return schema, preview, and summary statistics.",
        input_schema={
            "type": "object",
            "properties": {
                "name": {"type": "string", "enum": ["iris", "wine", "breast_cancer", "diabetes"]},
                "n_rows": {"type": "integer", "default": 10}
            },
            "required": ["name"]
        },
        tags=["dataset", "tabular", "data", "analysis", "ml"]
    ),
    load_builtin_dataset
)

We create an MCP-style tool server that stores each tool with its schema, description, tags, and callable function. We add methods for listing available tools and calling a selected tool with structured arguments while safely returning success or error outputs. We then register web search, Python execution, vector retrieval, and dataset loading as discoverable tools that the routed agent can use later.
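The list/call contract is the heart of the server. The dependency-free toy below (its `ToyToolServer` class and `add` tool are illustrative, not from the tutorial) isolates that contract: discovery returns metadata only, never the callable, and invocation wraps success and failure in a uniform result shape:

```python
# Toy registry mirroring MCPToolServer's discovery-then-invoke flow,
# without Pydantic. Illustrative names throughout.
from typing import Any, Callable, Dict

class ToyToolServer:
    def __init__(self):
        self.tools: Dict[str, Dict[str, Any]] = {}

    def register(self, name: str, description: str, fn: Callable[..., Any]):
        self.tools[name] = {"name": name, "description": description, "fn": fn}

    def tools_list(self):
        # Expose metadata only, never the callable itself
        return [{"name": t["name"], "description": t["description"]} for t in self.tools.values()]

    def tools_call(self, name: str, arguments: Dict[str, Any]):
        if name not in self.tools:
            return {"ok": False, "error": "Tool not found"}
        try:
            return {"ok": True, "output": self.tools[name]["fn"](**arguments)}
        except Exception as e:
            return {"ok": False, "error": f"{type(e).__name__}: {e}"}

srv = ToyToolServer()
srv.register("add", "Add two integers.", lambda a, b: a + b)
print(srv.tools_call("add", {"a": 2, "b": 3}))  # success path
print(srv.tools_call("missing", {}))            # error path, no exception raised
```

Keeping failures as structured return values rather than raised exceptions is what lets the planner and answering agent reason about partial results later.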

def extract_json_object(text: str) -> Dict[str, Any]:
    text = text.strip()
    try:
        return json.loads(text)
    except Exception:
        match = re.search(r"{.*}", text, flags=re.DOTALL)
        if not match:
            raise ValueError("No JSON object found in model output")
        return json.loads(match.group(0))

def llm_json(instructions: str, user_prompt: str) -> Dict[str, Any]:
    resp = client.responses.create(
        model=MODEL,
        input=user_prompt,
        instructions=instructions,
        temperature=0
    )
    return extract_json_object(resp.output_text)

def pretty_tools_table(tools: List[Dict[str, Any]], title: str):
    table = Table(title=title)
    table.add_column("Tool")
    table.add_column("Tags")
    table.add_column("Description")
    for t in tools:
        table.add_row(t["name"], ", ".join(t.get("tags", [])), t["description"])
    console.print(table)

class HybridMCPRouter:
    def __init__(self, server: MCPToolServer, model: str):
        self.server = server
        self.model = model

    def heuristic_scores(self, task: str) -> Dict[str, float]:
        task_l = task.lower()
        scores = {name: 0.0 for name in self.server.tools.keys()}

        keyword_map = {
            "web_search": ["latest", "recent", "search", "find", "news", "paper", "web", "look up", "internet"],
            "python_exec": ["calculate", "compute", "plot", "simulate", "code", "python", "average", "math"],
            "vector_retrieve": ["mcp", "memory", "retrieve", "context", "router", "knowledge", "protocol"],
            "dataset_loader": ["dataset", "data", "iris", "wine", "breast cancer", "diabetes", "rows", "columns"],
        }

        for tool_name, kws in keyword_map.items():
            for kw in kws:
                if kw in task_l:
                    scores[tool_name] += 1.0

        if "compare" in task_l or "analyze" in task_l or "summary" in task_l:
            scores["python_exec"] += 0.5
            scores["dataset_loader"] += 0.5

        if "tutorial" in task_l or "mcp" in task_l or "routing" in task_l:
            scores["vector_retrieve"] += 1.0

        return scores

    def shortlist(self, task: str, top_n: int = 3) -> List[Dict[str, Any]]:
        tools = self.server.tools_list()
        scores = self.heuristic_scores(task)
        ranked = sorted(tools, key=lambda x: scores.get(x["name"], 0.0), reverse=True)
        top = [t for t in ranked if scores.get(t["name"], 0.0) > 0][:top_n]
        if not top:
            top = ranked[:top_n]
        return top

    def route(self, task: str) -> RouteDecision:
        all_tools = self.server.tools_list()
        shortlisted = self.shortlist(task, top_n=3)

        instructions = """
You are a routing controller for an MCP-like agent system.
Your job is to decide which tools should be exposed to the downstream agent for this task.
Expose only tools that are relevant.
Return strict JSON only with keys:
selected_tools: array of tool names
rationale: string
policy_notes: array of strings

Rules:
- Prefer minimal exposure.
- Do not expose more than 3 tools.
- Use tool descriptions and tags.
- If recent information is required, include web_search.
- If the task involves local conceptual retrieval, include vector_retrieve.
- If the task requires computation or tabular analysis, include python_exec or dataset_loader as needed.
"""

        prompt = f"""
TASK:
{task}

ALL TOOLS:
{json.dumps(all_tools, indent=2)}

HEURISTIC SHORTLIST:
{json.dumps(shortlisted, indent=2)}

Return JSON only.
"""
        obj = llm_json(instructions, prompt)
        selected_tools = obj.get("selected_tools", [])
        selected_tools = [t for t in selected_tools if t in self.server.tools]
        if not selected_tools:
            selected_tools = [t["name"] for t in shortlisted]

        return RouteDecision(
            selected_tools=selected_tools[:3],
            rationale=obj.get("rationale", ""),
            policy_notes=obj.get("policy_notes", []),
        )

We add helper functions to extract clean JSON from model outputs, call the LLM in a structured way, and display exposed tools in a readable table. We then build a hybrid MCP router that first scores tools using keyword-based heuristics and creates a short list of likely relevant tools. Finally, we ask the LLM to make the final routing decision so only the most useful tools are exposed to the downstream agent.
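To make the heuristic stage concrete, here is a standalone sketch of the keyword scoring on a sample task. The keyword map is a trimmed, illustrative copy of the one in `HybridMCPRouter`, so the exact scores differ from the full router:

```python
# Standalone sketch of the router's keyword-scoring heuristic with a
# trimmed keyword map (illustrative, not the full tutorial map).
keyword_map = {
    "web_search": ["latest", "recent", "news"],
    "python_exec": ["calculate", "compute", "average"],
    "vector_retrieve": ["mcp", "context", "router"],
    "dataset_loader": ["dataset", "iris", "columns"],
}

def heuristic_scores(task: str) -> dict:
    task_l = task.lower()
    scores = {name: 0.0 for name in keyword_map}
    for tool_name, kws in keyword_map.items():
        for kw in kws:
            if kw in task_l:
                scores[tool_name] += 1.0
    return scores

scores = heuristic_scores("Load the iris dataset and compute the average of a column")
shortlist = sorted(scores, key=scores.get, reverse=True)
print(scores)
print(shortlist[:2])
```

Both `python_exec` ("compute", "average") and `dataset_loader` ("iris", "dataset") score 2.0 here, so the shortlist hands exactly those two tools to the LLM for the final routing decision.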

class RoutedAgent:
    def __init__(self, server: MCPToolServer, router: HybridMCPRouter, model: str):
        self.server = server
        self.router = router
        self.model = model

    def discover_exposed_tools(self, exposed_tool_names: List[str]) -> List[Dict[str, Any]]:
        return [t for t in self.server.tools_list() if t["name"] in exposed_tool_names]

    def plan(self, task: str, exposed_tools: List[Dict[str, Any]]) -> PlanOutput:
        instructions = """
You are a planning agent in an MCP-like architecture.
You can only use the exposed tools.
Decide whether tools are needed.
Return strict JSON only with keys:
requires_tools: boolean
tool_calls: array of objects with tool_name and arguments
direct_answer_allowed: boolean
planner_note: string

Rules:
- Use at most 3 tool calls.
- Only call tools from the exposed list.
- Arguments must match each tool's input schema conceptually.
- Prefer calling vector_retrieve for conceptual local knowledge.
- Prefer calling web_search for recent or external information.
- Prefer dataset_loader if the user asks about a named built-in dataset.
- Prefer python_exec only when computation or code execution is genuinely useful.
- Do not fabricate unavailable tools.
"""

        prompt = f"""
USER TASK:
{task}

EXPOSED TOOLS:
{json.dumps(exposed_tools, indent=2)}

Return JSON only.
"""
        obj = llm_json(instructions, prompt)

        raw_tool_calls = obj.get("tool_calls", [])
        parsed_calls = []
        allowed = {t["name"] for t in exposed_tools}

        for call in raw_tool_calls[:MAX_TOOL_CALLS]:
            name = call.get("tool_name", "")
            args = call.get("arguments", {})
            if name in allowed and isinstance(args, dict):
                parsed_calls.append(ToolCall(tool_name=name, arguments=args))

        return PlanOutput(
            requires_tools=bool(obj.get("requires_tools", False) or parsed_calls),
            tool_calls=parsed_calls,
            direct_answer_allowed=bool(obj.get("direct_answer_allowed", False)),
            planner_note=obj.get("planner_note", ""),
        )

    def run_tools(self, tool_calls: List[ToolCall]) -> List[ToolResult]:
        results = []
        for tc in tool_calls:
            result = self.server.tools_call(tc.tool_name, tc.arguments)
            results.append(result)
        return results

    def answer(self, task: str, route: RouteDecision, exposed_tools: List[Dict[str, Any]], plan: PlanOutput, results: List[ToolResult]) -> str:
        instructions = """
You are the final answering agent in an MCP-style routed tool system.
Use the routed tools and returned tool outputs to answer the user.
Be concrete, concise, and technically correct.
If tool outputs are partial, say so.
Do not mention hidden tools that were not exposed.
"""

        tool_result_payload = [r.model_dump() for r in results]

        prompt = f"""
USER TASK:
{task}

ROUTE DECISION:
{route.model_dump_json(indent=2)}

EXPOSED TOOLS:
{json.dumps(exposed_tools, indent=2)}

PLAN:
{plan.model_dump_json(indent=2)}

TOOL RESULTS:
{json.dumps(tool_result_payload, indent=2)}

Now answer the user clearly.
"""
        resp = client.responses.create(
            model=self.model,
            input=prompt,
            instructions=instructions,
            temperature=0.2
        )
        return resp.output_text

    def run(self, task: str, verbose: bool = True) -> Dict[str, Any]:
        route = self.router.route(task)
        exposed_tools = self.discover_exposed_tools(route.selected_tools)
        plan = self.plan(task, exposed_tools)
        results = self.run_tools(plan.tool_calls) if plan.requires_tools else []
        final_answer = self.answer(task, route, exposed_tools, plan, results)

        payload = {
            "task": task,
            "route_decision": route.model_dump(),
            "exposed_tools": exposed_tools,
            "plan": plan.model_dump(),
            "tool_results": [r.model_dump() for r in results],
            "final_answer": final_answer,
        }

        if verbose:
            console.print(Panel.fit(f"USER TASK\n{task}", title="Input"))
            pretty_tools_table(exposed_tools, "Tools Exposed By MCP Router")
            console.print(Panel(route.rationale or "No rationale provided", title="Router Rationale"))
            if route.policy_notes:
                console.print(Panel("\n".join(f"- {x}" for x in route.policy_notes), title="Policy Notes"))
            console.print(Panel(plan.planner_note or "No planner note provided", title="Planner Note"))

            if results:
                for r in results:
                    console.print(Panel.fit(RichJSON.from_data(r.model_dump()), title=f"Tool Result: {r.tool_name}"))
            console.print(Panel(final_answer, title="Final Answer"))

        return payload

def mcp_jsonrpc_tools_list(server: MCPToolServer) -> Dict[str, Any]:
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "tools": server.tools_list()
        }
    }

def mcp_jsonrpc_tools_call(server: MCPToolServer, tool_name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
    result = server.tools_call(tool_name, arguments)
    return {
        "jsonrpc": "2.0",
        "id": 2,
        "result": result.model_dump()
    }

router = HybridMCPRouter(server=server, model=MODEL)
agent = RoutedAgent(server=server, router=router, model=MODEL)

console.print(Panel.fit("MCP-STYLE TOOL DISCOVERY", title="Step 1"))
console.print(RichJSON.from_data(mcp_jsonrpc_tools_list(server)))

demo_tasks = [
    "Explain how an MCP tool router should expose tools for an agent task about dynamic capability exposure.",
    "Search the web for recent examples of MCP-related developments and summarize them.",
    "Load the iris dataset, inspect its columns and basic stats, and tell me what kind of ML problem it is.",
    "Retrieve local knowledge about context injection and router policies, then explain why restricting tool access helps agent performance.",
    "Use Python to compute the average of [3, 5, 9, 10, 13] and then explain whether python execution was truly necessary.",
]

all_runs = []
for idx, task in enumerate(demo_tasks, start=1):
    console.print(Panel.fit(f"DEMO RUN {idx}", title="=" * 10))
    out = agent.run(task, verbose=True)
    all_runs.append(out)

custom_task = "Design a routed MCP workflow for an AI research assistant that should use retrieval for local protocol knowledge and web search only when the task explicitly asks for recent information."
custom_run = agent.run(custom_task, verbose=True)

print("\nPROGRAMMATIC EXAMPLE: tools/list")
print(json.dumps(mcp_jsonrpc_tools_list(server), indent=2))

print("\nPROGRAMMATIC EXAMPLE: tools/call for vector_retrieve")
print(json.dumps(mcp_jsonrpc_tools_call(server, "vector_retrieve", {"query": "dynamic capability exposure in MCP routers", "top_k": 2}), indent=2))

print("\nPROGRAMMATIC EXAMPLE: tools/call for dataset_loader")
print(json.dumps(mcp_jsonrpc_tools_call(server, "dataset_loader", {"name": "iris", "n_rows": 5}), indent=2))

print("\nPROGRAMMATIC EXAMPLE: custom final answer")
print(custom_run["final_answer"])

We build the routed agent that discovers only the exposed tools, asks the planner whether tool calls are needed, runs those tools, and then generates the final answer from the route, plan, and tool outputs. We also add JSON-RPC-style tools/list and tools/call examples to mirror how MCP clients interact with a tool server. Also, we run several demo tasks to show how the agent handles retrieval, web search, dataset loading, Python execution, and a custom MCP workflow end-to-end.
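The tutorial shows the server-side response envelopes; for completeness, here is a small sketch of both sides of a JSON-RPC 2.0 exchange, with an illustrative request builder (the `jsonrpc_request` and `jsonrpc_result` helpers and their payloads are assumptions, not part of the tutorial code):

```python
# Minimal sketch of the JSON-RPC 2.0 envelopes exchanged for tools/call.
# Requests carry method + params; responses echo the request id.
import json

def jsonrpc_request(method: str, params: dict, req_id: int) -> dict:
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

def jsonrpc_result(req_id: int, result: dict) -> dict:
    return {"jsonrpc": "2.0", "id": req_id, "result": result}

req = jsonrpc_request(
    "tools/call",
    {"name": "vector_retrieve", "arguments": {"query": "context injection", "top_k": 2}},
    req_id=2,
)
resp = jsonrpc_result(2, {"ok": True, "output": {"results": []}})

print(json.dumps(req, indent=2))
# Responses are matched to requests by id, which is what lets a client
# issue several tool calls over one connection and pair up the answers.
assert resp["id"] == req["id"]
```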

In conclusion, we implemented an end-to-end MCP-style architecture where tool discovery, routing, planning, and execution work together seamlessly to solve diverse tasks. We observed that dynamic capability exposure improves both efficiency and safety by limiting the agent's access to only relevant tools, while structured planning ensures controlled, interpretable reasoning. Through multiple demonstrations, we saw how the system adapts to different problem types, whether retrieval, computation, or real-time search, by intelligently selecting and using tools. Finally, this framework can be extended with more advanced routing policies, additional memory layers, or specialized tools, providing a strong foundation for building production-grade AI assistants.



The post How to Build an MCP Style Routed AI Agent System with Dynamic Tool Exposure Planning, Execution, and Context Injection appeared first on MarkTechPost.
