Model Context Protocol (MCP)

The Model Context Protocol (MCP) is a crucial part of how DeepBlock communicates with AI models. When an LLM is used to answer questions or provide analysis, MCP defines how we feed data and context to that model. Think of MCP as a formatting standard or blueprint for constructing the prompt that the AI sees.

Structured Prompting

Rather than dumping raw data into the prompt, MCP organizes information into a structured format. For example, the prompt given to the LLM might have sections like:

  • “Relevant Data” (followed by a list of facts or graph snippets)

  • “Contextual Notes” (maybe explanations of what those facts mean or any assumptions)

  • “User Query” (the actual question to be answered)

By structuring the prompt, we make it easier for the model to identify what’s factual input versus what it needs to address or explain.
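
To make this concrete, here is a minimal Python sketch of assembling such a sectioned prompt. The section labels mirror the examples above, and the function and parameter names are placeholders, not DeepBlock's actual implementation.

```python
# Minimal sketch: assemble a prompt from labeled sections.
# Labels and names are illustrative placeholders.

def build_prompt(relevant_data: list[str], notes: list[str], user_query: str) -> str:
    sections = [
        ("Relevant Data", "\n".join(f"- {fact}" for fact in relevant_data)),
        ("Contextual Notes", "\n".join(f"- {note}" for note in notes)),
        ("User Query", user_query),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

prompt = build_prompt(
    relevant_data=["Address A deployed Contract B at block 18201774"],
    notes=["Deployment inferred from the contract-creation transaction"],
    user_query="Who deployed Contract B?",
)
print(prompt)
```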

Graph Context Encoding

When dealing with knowledge graph data, MCP ensures that the relationships are preserved in context. For instance, the relevant data might involve a path through the graph:

Address A -> Contract B -> Address C

The protocol might represent this path in a clear textual form, such as a bullet list of hops, so the model can see the chain of connections.

This way, the LLM understands not just isolated facts but how those facts are linked. It’s akin to giving the AI a mini-map of the relationships involved in the query.
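
As an illustration, the path above could be rendered as an ordered list of hops. The node and relation labels in this sketch are examples, not DeepBlock's actual graph schema.

```python
# Sketch: render (source, relation, target) triples as an ordered hop list
# so the model can follow the chain of connections.

def encode_path(hops: list[tuple[str, str, str]]) -> str:
    lines = ["Graph path (in order):"]
    for i, (src, rel, dst) in enumerate(hops, start=1):
        lines.append(f"{i}. {src} --{rel}--> {dst}")
    return "\n".join(lines)

print(encode_path([
    ("Address A", "deployed", "Contract B"),
    ("Contract B", "sent funds to", "Address C"),
]))
# Graph path (in order):
# 1. Address A --deployed--> Contract B
# 2. Contract B --sent funds to--> Address C
```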

Consistency and Parsing

MCP is standardized. This means every time an AI agent uses DeepBlock, it receives information in a consistent format. Consistency is key for AI because it allows the model to learn the format and become better at extracting what it needs.

If data arrived as a JSON blob one time and as a free-form paragraph the next, the model could get confused. MCP avoids that by always structuring context the same way.

In some cases, the protocol might use delimiters or special tokens (that the model is trained to recognize) to separate sections of the prompt.
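
A delimiter scheme might look like the following sketch. The sentinel tokens here are invented for illustration; a real deployment would use whatever boundary markers its model is trained to recognize.

```python
# Sketch: wrap each context section in sentinel delimiters so sections
# can be separated unambiguously. The token strings are made up.

SECTION_OPEN = "<<SECTION:{name}>>"
SECTION_CLOSE = "<<END>>"

def wrap_section(name: str, body: str) -> str:
    return f"{SECTION_OPEN.format(name=name)}\n{body}\n{SECTION_CLOSE}"

context = "\n".join([
    wrap_section("RELEVANT_DATA", "Address A deployed Contract B"),
    wrap_section("USER_QUERY", "Who deployed Contract B?"),
])
print(context)
```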

Context Limits and Relevance

Large language models have context length limits (they can only consider so much text at once). MCP works in tandem with the retrieval step to ensure that only the most relevant, high-value information is included in the prompt. It might prioritize a summary of the knowledge graph relationships, or the top N results, rather than every single data point.

The protocol can also enforce rules like “if data exceeds X tokens, truncate or summarize it before feeding to the model.”

This helps prevent overloading the model and focuses its attention.
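
One way such a rule could work is sketched below. The token budget and the characters-per-token estimate are assumptions standing in for a real tokenizer and DeepBlock's actual limits.

```python
# Sketch: include relevance-ranked facts until a token budget is hit,
# then replace the remainder with a one-line summary.
# The budget and the 4-chars-per-token estimate are rough assumptions.

MAX_CONTEXT_TOKENS = 2000

def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # crude approximation, not a real tokenizer

def fit_to_budget(ranked_facts: list[str], budget: int = MAX_CONTEXT_TOKENS) -> list[str]:
    included, used = [], 0
    for fact in ranked_facts:  # assumed pre-sorted, most relevant first
        cost = estimate_tokens(fact)
        if used + cost > budget:
            omitted = len(ranked_facts) - len(included)
            included.append(f"[{omitted} lower-ranked facts omitted for length]")
            break
        included.append(fact)
        used += cost
    return included
```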

Integrity and Attribution

Another aspect of MCP is maintaining attribution of facts within the context. For example, it might label each piece of retrieved data with its source (like a transaction hash or a block number) so that when the LLM forms an answer, it can, if needed, reference where the information came from (or at least treat different pieces distinctly).

This reduces confusion and the chances of blending data incorrectly.
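
A sketch of what source labeling could look like, with illustrative field names and a placeholder transaction hash:

```python
# Sketch: attach provenance to each retrieved fact so the model can
# reference (or at least distinguish) its sources. Data is illustrative.

from dataclasses import dataclass

@dataclass
class SourcedFact:
    text: str
    source: str  # e.g. a transaction hash or block number

def render_with_attribution(facts: list[SourcedFact]) -> str:
    return "\n".join(f"- {f.text} [source: {f.source}]" for f in facts)

print(render_with_attribution([
    SourcedFact("Contract B received funds from Address A",
                "tx 0xabc... (block 18201774)"),
]))
# - Contract B received funds from Address A [source: tx 0xabc... (block 18201774)]
```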

In short, MCP is how DeepBlock speaks the model’s language. By bridging the structured world of a knowledge graph with the conversational world of an LLM, MCP ensures that nothing is lost in translation.

It’s a behind-the-scenes feature, but its effect is visible in the improved quality of AI-generated outputs: more factual accuracy, clearer references to data, and coherent multi-hop reasoning (since the model can trace relationships as given by the prompt structure).

For users, this means they can trust the AI answers more, and even complex questions involving many data points can be answered in a clear, traceable way.
