Ancillary (non-CMP) protocols, LLMs, MCP, observability, etc

Over in RFC CMP v1 Feedback, @tslominski and I have been chatting about metadata:

Since the following veers sharply from CMP-related topics, here’s a new thread…

I like the idea of letting Monty use multiple message protocols, types, etc. This would allow the CMP to concentrate on messages among simulated cortical columns, sensor families, etc. Meanwhile, other messaging standards can be considered, discussed, experimented with, encouraged, and perhaps selected as Best Practices.

However, this raises the question of which message protocols Monty should embrace. (For clarity, I’m talking about CMP’s somewhat abstract sort of protocol, which defines data structures and usage but mostly ignores issues such as addressing, encoding, distribution, etc.)

To kick off the discussion, let’s consider the Model Context Protocol:

The Model Context Protocol (MCP) is an open standard, open-source framework introduced by Anthropic in November 2024 to standardize the way artificial intelligence (AI) systems like large language models (LLMs) integrate and share data with external tools, systems, and data sources. MCP provides a universal interface for reading files, executing functions, and handling contextual prompts. Following its announcement, the protocol was adopted by major AI providers, including OpenAI and Google DeepMind.

Conveniently, some Elixir hotshots (e.g., Chris McCord, José Valim) are developing LLM-based development and operation consoles such as Phoenix.new and Tidewave. The MCP will play a major role in these systems. (See this Elixir Forum thread for details.)

These consoles could take advantage of various types of context windows, including:

  • app, dynamic (e.g., events, reports, traces)
  • app, static (e.g., code, configuration, libraries)
  • approach and tooling (e.g., BEAM, Elixir et al)
  • generic (e.g., broad-spectrum, cloud-based)

By adopting MCP as an interoperability protocol, Monty instances could be integrated into a (potentially huge) network of “external tools, systems, and data sources”. For example:

  • As a data source, Monty could provide state information from active and/or specified models, system observability data, etc.

  • As an external tool, Monty could accept configuration hints (e.g., attention and connections) and metadata annotations (e.g., module IDs and addresses, semantic tags).
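To make the data-source idea a bit more concrete, here’s a minimal sketch in Python (Monty’s own language). MCP messages are JSON-RPC 2.0 under the hood; everything Monty-side here (`get_model_state`, the `monty://` URI scheme) is a placeholder of my own invention, not a real API.

```python
import json

# Hypothetical Monty-side accessor; the name and return shape are placeholders.
def get_model_state(model_id):
    return {"model_id": model_id, "evidence": 0.87, "pose_hypotheses": 3}

# MCP is JSON-RPC 2.0; this dispatcher handles just enough of it to show
# Monty acting as a "resource" (i.e., data source) for an LLM console.
def handle_mcp_request(raw):
    req = json.loads(raw)
    if req["method"] == "resources/read":
        uri = req["params"]["uri"]            # e.g. "monty://models/lm_0"
        model_id = uri.rsplit("/", 1)[-1]
        result = {"contents": [{"uri": uri,
                                "text": json.dumps(get_model_state(model_id))}]}
    else:
        result = {"error": "unsupported method"}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

reply = handle_mcp_request(json.dumps({
    "jsonrpc": "2.0", "id": 1,
    "method": "resources/read",
    "params": {"uri": "monty://models/lm_0"},
}))
```

The point is just that the MCP side is a thin, uniform façade; whatever Monty exposes internally stays untouched behind it.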

Although one of the Elixir consoles could be very handy, neither it (nor, indeed, Elixir itself) is required for this approach to be useful.

In the previous post, I talked about a Monty instance acting as an MCP data source and/or external tool for an LLM. However, I don’t think it makes sense to incorporate MCP into Monty’s mainline modules (e.g., LMs, SMs), because this could:

  • complicate the design of mainline modules
  • discourage the use of specialized protocols
  • entangle MCP with Monty considerations
  • needlessly replicate MCP functionality

So, I’d suggest creating some support Actors. For example:

  • attention (e.g., watch for a given behavior)
  • configuration (e.g., add connections or learning modules)
  • observability (e.g., construct MCP reports on activity)
  • synchronization (e.g., generate timing messages to modules)
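Here’s a rough sketch of what one of these support actors might look like, again in Python (a BEAM process would be the natural fit in Elixir, but a mailbox-plus-thread gets the idea across). The message shapes and report format are assumptions of mine, not anything Monty defines.

```python
import queue
import threading

# A minimal mailbox-style "support actor": an observability actor that
# collects events from modules and builds reports on request.
class ObservabilityActor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.events = []
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:                 # poison pill shuts the actor down
                break
            if msg["kind"] == "event":      # e.g., an LM reporting activity
                self.events.append(msg["payload"])
            elif msg["kind"] == "report":   # build a report for the MCP side
                msg["reply_to"].put({"event_count": len(self.events),
                                     "events": list(self.events)})

    def send(self, msg):
        self.mailbox.put(msg)

actor = ObservabilityActor()
actor.send({"kind": "event", "payload": {"module": "lm_0", "state": "matched"}})
reply_q = queue.Queue()
actor.send({"kind": "report", "reply_to": reply_q})
report = reply_q.get(timeout=1)
actor.send(None)
```

Because each actor owns its own mailbox and state, replicating them for performance or reliability is mostly a matter of spinning up more instances.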

Let’s assume that we’re using an LLM-based console to control and monitor a Monty instance. The console could request certain types of actions or information from the appropriate support actors, which could use specialized protocols (e.g., GraphQL) to collaborate with module code.

Note that:

  • These actors could be replicated for performance, reliability, etc.
  • A distributed Monty might need these actors in each processor.

In case it isn’t obvious, I’m not an expert on AI, LLMs, or MCP. So, if you’re looking for digestible introductions to the terminology, I’m not your guy. Fortunately, I’ve stumbled on a YouTube channel (IBM Technology) and a presenter (Martin Keen) that I’ve found to be very helpful.

That said, here are some Keen videos…

7 AI Terms You Need to Know: Agents, RAG, ASI & More

This is a fast, friendly intro to some LLMish concepts and jargon.

MCP vs API: Simplifying AI Agent Integration with External Data

This seems to support my notions about using actors as MCP servers. These could accept MCP messages from (say) an LLM-based console, then send the necessary messages to Monty modules, using whatever APIs seem expedient.
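To sketch that “actor as MCP server” notion: the actor accepts an MCP `tools/call` request and translates it into whatever internal call the target module exposes. Both the tool name (`add_connection`) and the Monty-side function below are hypothetical illustrations.

```python
# Placeholder for a real Monty configuration call; name is made up.
def set_module_connection(source, target):
    return f"connected {source} -> {target}"

# Registry mapping MCP tool names to internal module calls.
TOOLS = {"add_connection":
         lambda args: set_module_connection(args["source"], args["target"])}

# Handle an MCP (JSON-RPC 2.0) tools/call request from, say, an LLM console.
def handle_tools_call(request):
    params = request["params"]
    outcome = TOOLS[params["name"]](params["arguments"])
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": {"content": [{"type": "text", "text": outcome}]}}

resp = handle_tools_call({
    "jsonrpc": "2.0", "id": 7,
    "method": "tools/call",
    "params": {"name": "add_connection",
               "arguments": {"source": "sm_0", "target": "lm_1"}},
})
```

The registry keeps the MCP-facing surface small while leaving the actor free to talk to modules over whatever API (GraphQL or otherwise) seems expedient.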