From some postings by members of the TBP team, I gather that the team has no plans to work on robotics issues, leaving that to folks who have robotics as their specialty. This makes perfect sense to me, but it means that the project will need ways to leverage existing work in the field. The best way to do this, IMHO, is to adopt popular and/or emerging standards for exchanging control commands and sensor information.
So, I asked ChatGPT to tell me about possibilities, starting with standards such as Model Context Protocol (MCP) and JavaScript Object Notation (JSON). It said:
1. What MCP Actually Is (and Isn’t)
Model Context Protocol (MCP) is not a robotics control bus or a real-time protocol.
Instead, MCP is:
- A tool- and data-exchange protocol
- Designed to let models, tools, and environments share structured context
- Strongly oriented around:
  - JSON (canonical form)
  - Tool invocation
  - Schema-defined inputs/outputs
  - Language neutrality
Think of MCP as operating at the cognitive / coordination layer, not the servo loop.
MCP is closer to “how do intelligent components talk about the world and ask for actions” than “how do motors move”.
That makes it very relevant to modern robotics architectures, just not at the lowest levels.
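To make that concrete: MCP messages are carried as JSON-RPC 2.0, so a cognitive-layer request is just a structured envelope naming a tool and its arguments. Here is a minimal Python sketch of such a call; the tool name and arguments are illustrative, not part of any real robot's API.

import json

# A minimal MCP-style tool call. MCP rides on JSON-RPC 2.0, so the envelope
# is the usual method-plus-params shape; the tool name and its arguments
# are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "move_to_pose",
        "arguments": {"frame": "map", "position": [1.2, 0.4, 0.0]},
    },
}

print(json.dumps(request, indent=2))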
2. Where MCP Fits in a Robotics Stack
A useful mental model is a four-layer robotics stack:
┌──────────────────────────────────┐
│ Cognitive / Planning / Reasoning │ ← MCP lives here
├──────────────────────────────────┤
│ Behavioral / Task Coordination   │
├──────────────────────────────────┤
│ Middleware / Messaging           │ (ROS 2, DDS, MQTT)
├──────────────────────────────────┤
│ Real-Time Control / Fieldbus     │ (CAN, EtherCAT)
└──────────────────────────────────┘
MCP’s natural role:
- Context exchange
- Tool discovery
- Semantic commands
- Cross-language orchestration
- Human ↔ Robot ↔ Model interfaces
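To see what that separation means in practice, here is a toy sketch of the same "move the base" intent expressed at three of the four layers. All names and values are hypothetical; the point is the change in vocabulary, not the specific fields.

# The same "move the base" intent at three layers of the stack.
# Every name and value below is hypothetical.

# Cognitive / planning layer (MCP): semantic, schema-described, human-readable.
mcp_call = {
    "tool": "move_to_pose",
    "arguments": {"frame": "map", "position": [1.2, 0.4, 0.0]},
}

# Middleware layer (ROS 2-style): a typed goal on a named action.
ros_goal = {
    "action": "/navigate_to_pose",
    "goal": {"x": 1.2, "y": 0.4, "theta": 0.0},
}

# Fieldbus layer (CAN-style): a raw frame a motor drive understands.
can_frame = {"arbitration_id": 0x201, "data": bytes([0x23, 0x7A, 0x60, 0x00])}

print(mcp_call, ros_goal, can_frame, sep="\n")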
3. MCP + JSON in Robotics: Why This Is Appealing
Strengths MCP brings
1. Semantic clarity
Instead of:
{"cmd":42,"x":1.2}
You get:
{
  "tool": "move_to_pose",
  "arguments": {
    "frame": "map",
    "position": [1.2, 0.4, 0.0],
    "tolerance_mm": 5
  }
}
That’s gold for:
- Multi-agent robotics
- Debuggability
- Human-in-the-loop systems
- Model-based reasoning (Monty-esque systems)
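Because the arguments are named and typed, the cognitive layer can sanity-check a command before anything moves. A minimal stdlib-only sketch (a real system would validate against a JSON Schema); the frame names and safety bound are invented for illustration:

# A minimal pre-flight check for the move_to_pose command shown above.
# This stdlib-only sketch just shows why named, typed fields are easier to
# reason about than {"cmd": 42, "x": 1.2}.

KNOWN_FRAMES = {"map", "base_link"}          # illustrative frame names
MAX_TOLERANCE_MM = 50                        # illustrative safety bound

def validate_move_to_pose(arguments: dict) -> list[str]:
    """Return a list of human-readable problems; empty means OK."""
    problems = []
    if arguments.get("frame") not in KNOWN_FRAMES:
        problems.append(f"unknown frame: {arguments.get('frame')!r}")
    position = arguments.get("position")
    if not (isinstance(position, list) and len(position) == 3
            and all(isinstance(v, (int, float)) for v in position)):
        problems.append("position must be [x, y, z] numbers")
    if arguments.get("tolerance_mm", 0) > MAX_TOLERANCE_MM:
        problems.append("tolerance_mm exceeds safety bound")
    return problems

print(validate_move_to_pose(
    {"frame": "map", "position": [1.2, 0.4, 0.0], "tolerance_mm": 5}))  # -> []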
2. Language-neutral, actor-friendly
MCP meshes very cleanly with:
- Python (AI, perception)
- Elixir (orchestration, supervision)
- Rust / Go (performance nodes)
- Web UIs
This mirrors your interest in heterogeneous actor systems.
3. Tool abstraction over implementation
A robot exposes capabilities, not drivers:
{
  "name": "scan_surface",
  "input_schema": {
    "resolution_mm": "number",
    "area": "polygon"
  }
}
Whether that’s backed by:
- ROS 2 nodes
- A CNC controller
- A Maslow sled
- A simulator
…is hidden behind MCP.
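A sketch of what that abstraction looks like in Python: one scan_surface capability, several interchangeable backends. The backend functions are placeholders, not real drivers.

# One capability, many possible backends. The MCP side sees only the tool
# name and its input schema; which implementation runs is a server-side
# detail. The backend functions here are stubs, not real drivers.

def scan_with_ros2(resolution_mm, area):
    return {"backend": "ros2", "points": []}      # stub

def scan_with_maslow(resolution_mm, area):
    return {"backend": "maslow", "points": []}    # stub

TOOL_REGISTRY = {
    "scan_surface": {
        "input_schema": {"resolution_mm": "number", "area": "polygon"},
        "backend": scan_with_maslow,              # swap without changing the contract
    },
}

def call_tool(name: str, arguments: dict):
    tool = TOOL_REGISTRY[name]
    return tool["backend"](**arguments)

print(call_tool("scan_surface",
                {"resolution_mm": 1.0, "area": [(0, 0), (100, 0), (100, 100)]}))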
4. MCP Is Not a Replacement for ROS / DDS
This is the key architectural insight:
MCP complements robotics middleware — it does not replace it.
What MCP should NOT do:
- Real-time control loops
- High-rate sensor streaming
- Deterministic motion control
- Safety-critical actuation
JSON serialization alone makes MCP unsuitable for:
- 1 kHz servo loops
- Tight latency budgets
- Fieldbus replacement
5. MCP as a “Cognitive Bus” for Robotics
Where MCP shines is above ROS, not instead of it.
Pattern: MCP ↔ ROS Bridge
[MCP Client / Model]
         │
         │  MCP (JSON)
         ▼
[MCP↔ROS Adapter Node]
         │
         │  ROS 2 / DDS
         ▼
[Navigation, Motion, Sensors]
The adapter:
- Translates MCP tools into ROS actions/services
- Translates ROS state into MCP-readable context
- Enforces safety & constraints
This mirrors how OPC UA gateways wrap PLCs — but for cognition.
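Here is a skeleton of such an adapter, with the ROS 2 side stubbed out so the sketch stays dependency-free; in a real node, dispatch_to_ros would wrap rclpy action or service clients. The tool whitelist and result shape are illustrative.

# Skeleton of an MCP <-> ROS adapter. The ROS side is stubbed so this runs
# without ROS installed; in a real node, dispatch_to_ros would wrap rclpy
# action/service clients (e.g. a navigation action client).

ALLOWED_TOOLS = {"move_to_pose", "scan_surface"}   # illustrative whitelist

def dispatch_to_ros(tool: str, arguments: dict) -> dict:
    """Stub for the ROS 2 / DDS side of the bridge."""
    return {"status": "accepted", "tool": tool}

def handle_mcp_tool_call(name: str, arguments: dict) -> dict:
    # 1. Enforce safety and policy *before* anything reaches middleware.
    if name not in ALLOWED_TOOLS:
        return {"error": f"tool {name!r} is not exposed by this robot"}
    # 2. Translate the semantic command into a middleware request.
    result = dispatch_to_ros(name, arguments)
    # 3. Translate middleware state back into MCP-readable context.
    return {"content": [{"type": "text", "text": str(result)}]}

print(handle_mcp_tool_call("move_to_pose",
                           {"frame": "map", "position": [1.2, 0.4, 0.0]}))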
6. Comparison with Existing Robotics Standards
| Aspect | MCP | ROS 2 | DDS |
|---|---|---|---|
| Level | Cognitive / Semantic | Middleware | Transport |
| Serialization | JSON | Binary msgs | Binary |
| Real-time | No | Partial (soft) | Yes |
| Human-readable | Yes | No | No |
| Tool discovery | Yes | Partial | No |
| LLM-native | Yes | No | No |
MCP is LLM-native robotics glue.
7. MCP vs Other “High-Level” Protocols
MCP vs gRPC
- gRPC: tight coupling, codegen, binary
- MCP: loose coupling, schema-first, introspectable
For robotics:
- gRPC is good for services
- MCP is better for reasoning systems and dynamic tools
MCP vs JSON-RPC
MCP adds:
- Tool metadata
- Context accumulation
- Strong conventions around intent
It’s effectively JSON-RPC + semantic discipline.
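That discipline mostly means every tool is published with a description and a JSON Schema for its inputs, so clients can discover and reason about it. A sketch of what a tools/list-style response carries; the field names follow my reading of the MCP spec, and the tool itself is invented:

# What "JSON-RPC + semantic discipline" buys you: tools are discoverable,
# each with a description and a JSON Schema for its inputs. Field names
# follow my reading of the MCP tools/list response; the tool is illustrative.
tools_list_response = {
    "tools": [
        {
            "name": "move_to_pose",
            "description": "Move the robot base to a target pose.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "frame": {"type": "string"},
                    "position": {"type": "array", "items": {"type": "number"}},
                    "tolerance_mm": {"type": "number"},
                },
                "required": ["frame", "position"],
            },
        }
    ]
}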
8. Likely Near-Future Robotics Architectures (2025–)
What’s emerging now:
Cognitive Plane (NEW)
- MCP
- JSON / JSON Schema
- Context graphs
- Model-driven planning
Coordination Plane
- Actor systems (Elixir/Erlang)
- Task graphs
- Event sourcing
Execution Plane
- ROS 2 + DDS
- Behavior trees
- Motion planners
Control Plane
- EtherCAT
- CAN
- Vendor motor drives
MCP lives at the top two planes.
9. Why MCP Is Especially Relevant to Your Interests
Given your background:
- Monty
- Large numbers of actors
- Language-neutral messaging
- JSON + UDP curiosity
- Avoiding invasive rewrites
MCP offers:
- A semantic contract between actors
- A way to layer cognition on top of existing robotics stacks
- A future-proof interface for AI-enhanced robotics
You can treat robots as tool-bearing agents, not just devices.
10. A Concrete Example
Imagine a Maslow CNC + laser + vision system:
{
  "tool": "generate_tactile_surface",
  "arguments": {
    "source_image": "map.png",
    "depth_mm": 2.5,
    "material": "foam"
  }
}
MCP:
- Doesn’t care how it happens
- Allows reasoning, planning, and validation
- Delegates execution to robotics middleware
That’s the architectural sweet spot.
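A hypothetical handler for that call makes the division of labor explicit: validate and plan at the MCP layer, delegate execution downward. Every helper name below is made up for illustration.

# Hypothetical handler for the generate_tactile_surface call above.
# The MCP layer validates and plans; execution is delegated to whatever
# middleware actually drives the CNC, laser, and camera.

def plan_toolpaths(source_image: str, depth_mm: float, material: str) -> list:
    return [f"raster pass over {source_image} at {depth_mm} mm in {material}"]

def delegate_to_middleware(toolpaths: list) -> dict:
    return {"status": "queued", "passes": len(toolpaths)}   # stub

def generate_tactile_surface(arguments: dict) -> dict:
    if arguments["depth_mm"] <= 0:
        return {"error": "depth_mm must be positive"}
    toolpaths = plan_toolpaths(**arguments)
    return delegate_to_middleware(toolpaths)

print(generate_tactile_surface(
    {"source_image": "map.png", "depth_mm": 2.5, "material": "foam"}))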
Bottom Line
- MCP + JSON will not replace ROS, DDS, or EtherCAT
- MCP will become a dominant standard for cognitive-level robotics
- Think of MCP as:
The semantic nervous system above the motor cortex