Proposal: An MVP for Monty, based on MCP and/or CMP


There might be a way to jumpstart some sort of Minimum Viable Product (MVP), based on the Model Context Protocol (MCP) and/or the Cortical Messaging Protocol (CMP). In short, an MVP for Monty, based on MCP and/or CMP. <g>

Definitions

A minimum viable product (MVP) is a version of a product with just enough features to be usable by early customers who can then provide feedback for future product development.

The Model Context Protocol (MCP) is an open standard, open-source framework introduced by Anthropic in November 2024 to standardize the way artificial intelligence (AI) systems like large language models (LLMs) integrate and share data with external tools, systems, and data sources. MCP provides a universal interface for reading files, executing functions, and handling contextual prompts. Following its announcement, the protocol was adopted by major AI providers, including OpenAI and Google DeepMind.
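Concretely, MCP is layered on JSON-RPC 2.0: a client asks a server what tools it offers, then invokes one by name. Here is a minimal sketch of the two message shapes involved; the tool name and arguments are hypothetical, stand-ins for whatever a Monty-backed server might expose.

```python
import json

# MCP messages are JSON-RPC 2.0. A client first lists a server's tools,
# then calls one. The tool name and arguments below are hypothetical.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "recognize_object",  # hypothetical Monty-backed tool
        "arguments": {"point_cloud_uri": "file:///tmp/scan.ply"},
    },
}

print(json.dumps(call_request, indent=2))
```

The point is only that the wire format is small and uniform; any client that can emit these messages can, in principle, drive any conforming server.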

We use a common messaging protocol [CMP] that all components (learning modules [LMs], sensor modules [SMs], and motor systems) adhere to. This makes it possible for all components to communicate with each other and to combine them arbitrarily. The CMP defines what information the outputs of SMs and LMs need to contain.
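As I understand it, a CMP message pairs a pose (location and orientation) with observed features and a confidence. Here is a minimal sketch of such a message as a Python dataclass; the field names are my guesses for illustration, not Monty's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class CMPState:
    """Sketch of a CMP-style message from an SM or LM.

    Field names are illustrative guesses, not Monty's actual API.
    """
    location: tuple                               # (x, y, z) in a shared reference frame
    orientation: tuple                            # e.g., a unit quaternion (w, x, y, z)
    features: dict = field(default_factory=dict)  # observed or recalled features
    confidence: float = 1.0                       # sender's confidence in this state

# A sensor module might emit something like:
msg = CMPState(
    location=(0.1, 0.2, 0.3),
    orientation=(1.0, 0.0, 0.0, 0.0),
    features={"color": "red", "curvature": 0.05},
    confidence=0.8,
)
```

Because every component speaks this one shape, SMs, LMs, and motor systems can be wired together in arbitrary topologies without per-pair adapters.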

Discussion

The MCP standard appears to be gaining a lot of traction in the AI community. Aside from the aforementioned vendor buy-in, there are various archives and indexes of MCP servers. So, in theory, any MCP-based client (e.g., an LLM-based application) should be able to locate and use any known tools, systems, and data sources.

Although we can expect some friction, depending on the LLM’s needs and the server’s capabilities, I suspect that useful work will be done using this protocol. So, I wonder what services a Monty instance could offer to the AI community. In addition, I wonder how these might relate to (existing or nascent) LLM-based services.

Anyway, I asked ChatGPT about this:

As I understand it, there are various archives and indexes being built for MCP-based servers. I’m wondering (a) what sorts of services an early instance of Monty might be able to provide and (b) whether any of them could mesh with LLMs, GOFAI, etc.

It responded:

Here’s a structured way to think about early Monty services in the MCP (Model Context Protocol) world, and whether they could plug into LLMs, GOFAI systems, or other AI components. …

FWIW, I think that ChatGPT may be a bit confused about the things that Monty might be able to do in the near term, but even so, there should be some interesting services it can provide.

As usual, helpful comments and corrections are solicited.

I went back to the well, asking ChatGPT to explore more related ideas. Here’s a deepish dive into the questions involved:

11. One-paragraph summary

Treat each Monty LM as a long-lived Python actor with its own reference frame and predictive state; let Elixir supervise, route, and buffer messages; expose only lifecycle and summaries through MCP; use GraphQL and DNS for humans and federation — never for perception. This preserves Monty’s theoretical commitments while making the system buildable today.
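The actor-per-LM idea in that summary can be sketched in plain Python: each LM is a long-lived thread draining a mailbox and updating its own private state. This is a minimal stand-in for what Elixir/OTP supervision would do far more robustly; all names here are hypothetical.

```python
import queue
import threading

class LMActor:
    """One long-lived learning-module actor: a thread draining a mailbox.

    A toy stand-in for Elixir/OTP-style actors; names are hypothetical.
    """

    def __init__(self, name):
        self.name = name
        self.mailbox = queue.Queue()
        self.state = {}  # the LM's private, reference-frame-local state
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, msg):
        """Asynchronously deliver a message; never blocks the sender."""
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:  # shutdown signal
                break
            # A real Monty LM would update its predictive model here;
            # this sketch just records the message.
            self.state[msg["key"]] = msg["value"]
            self.mailbox.task_done()

    def stop(self):
        self.mailbox.put(None)
        self.thread.join()

lm = LMActor("lm-0")
lm.send({"key": "pose", "value": (0.0, 0.0, 0.0)})
lm.mailbox.join()  # wait until the mailbox has been drained
lm.stop()
print(lm.state["pose"])
```

The design point carries over directly: senders only ever enqueue messages, so LMs stay decoupled, and a supervisor (Elixir's role in the summary) can restart a dead actor without the rest of the system noticing.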