AI MODEL INTEGRATION 13 MIN READ 2026.03.03

> ECM Protocol: LLM Context Injection Specification

Technical specification for injecting ECM-managed context into large language model prompts and conversations.

Context Injection Overview

The ECM Protocol defines standard mechanisms for injecting managed context into LLM interactions. This specification covers context retrieval, formatting, and injection patterns.

Injection Points

System Prompt Injection

Static or slowly-changing context belongs in the system prompt:

<system>
You are an assistant for Acme Corp.
<context type="company-profile">
{{ecm.get("entity-context", "company-acme").data}}
</context>
<context type="user-preferences">
{{ecm.get("user-context", user_id).data.preferences}}
</context>
</system>
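A minimal sketch of rendering this template in Python. The `EcmClient` class and its `get(store, key)` method are illustrative stand-ins for a real ECM client, not part of the specification:

```python
# Sketch: render a system prompt from ECM-managed context.
# EcmClient is a toy in-memory stand-in for a real ECM context store.

class EcmClient:
    def __init__(self, stores):
        self._stores = stores

    def get(self, store, key):
        return self._stores[store][key]

def render_system_prompt(ecm, user_id):
    # Resolve the two template expressions from the example above.
    company = ecm.get("entity-context", "company-acme")["data"]
    prefs = ecm.get("user-context", user_id)["data"]["preferences"]
    return (
        "<system>\n"
        "You are an assistant for Acme Corp.\n"
        f'<context type="company-profile">\n{company}\n</context>\n'
        f'<context type="user-preferences">\n{prefs}\n</context>\n'
        "</system>"
    )

ecm = EcmClient({
    "entity-context": {"company-acme": {"data": "Acme Corp: industrial supplies."}},
    "user-context": {"user-1": {"data": {"preferences": "Reply concisely."}}},
})
prompt = render_system_prompt(ecm, "user-1")
```

Because this context changes slowly, the rendered prompt can be cached per user rather than rebuilt on every turn.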

Message Augmentation

Dynamic context is injected alongside each user message. Retrieval is RAG-style, driven by the query: semantic similarity against stored contexts identifies the most relevant entries to include.
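A sketch of message augmentation, using simple token overlap as a stand-in for real semantic (embedding) similarity; the helper names are ours, not defined by the spec:

```python
# Sketch: augment a user message with the most relevant stored contexts.
# Jaccard token overlap stands in for embedding-based semantic similarity.

def similarity(query, text):
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q | t) if q | t else 0.0

def augment_message(message, contexts, top_k=2):
    # Rank stored contexts against the query, keep the top_k, and
    # prepend them as delimited blocks before the user message.
    ranked = sorted(contexts, key=lambda c: similarity(message, c), reverse=True)
    blocks = "\n".join(f"<context>{c}</context>" for c in ranked[:top_k])
    return f"{blocks}\n{message}"

contexts = [
    "Customer cust-123 reported a billing issue in January.",
    "Shipping policy: orders ship within two business days.",
    "Billing disputes are escalated to the finance team.",
]
augmented = augment_message("Why was my billing wrong?", contexts)
```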

Tool Responses

Context is provided as the result of a tool or function call: the LLM requests context explicitly, and the tool returns it in structured form.
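A sketch of exposing ECM retrieval as a callable tool. The tool schema shape and the `handle_tool_call` handler are illustrative, not normative:

```python
# Sketch: expose ECM context retrieval as a tool the LLM can invoke.
import json

TOOL_SPEC = {
    "name": "ecm_retrieve_context",
    "description": "Retrieve ECM-managed context by type and key.",
    "parameters": {
        "type": "object",
        "properties": {
            "context_type": {"type": "string"},
            "key": {"type": "string"},
        },
        "required": ["context_type", "key"],
    },
}

# Toy in-memory store standing in for a real ECM backend.
_STORE = {
    ("entity-context", "company-acme"): {"name": "Acme Corp", "tier": "enterprise"},
}

def handle_tool_call(arguments_json):
    """Resolve a tool call into a structured context payload for the LLM."""
    args = json.loads(arguments_json)
    data = _STORE.get((args["context_type"], args["key"]))
    return json.dumps({"found": data is not None, "data": data})

result = handle_tool_call('{"context_type": "entity-context", "key": "company-acme"}')
```

Returning structured JSON (rather than free text) keeps the context machine-parseable if the model needs to quote specific fields.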

Context Retrieval Protocol

Query Specification

{
  "operation": "context.retrieve",
  "query": {
    "semantic": "customer support history with billing issues",
    "filters": {
      "context_type": "interaction-context",
      "customer_id": "cust-123",
      "date_range": {"gte": "2024-01-01"}
    },
    "max_tokens": 4000,
    "ranking": "relevance"
  }
}

Response Format

{
  "contexts": [
    {
      "context_id": "ctx-abc",
      "relevance_score": 0.92,
      "token_count": 150,
      "data": {...}
    }
  ],
  "total_tokens": 3500,
  "truncated": false
}
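The request/response pair above can be exercised end to end with a small sketch; the in-memory `execute` function stands in for a real ECM endpoint, and the stored records are sample data:

```python
# Sketch: issue a context.retrieve query and assemble results under
# the query's max_tokens cap, reporting truncation as in the spec.

def execute(query):
    stored = [
        {"context_id": "ctx-abc", "relevance_score": 0.92, "token_count": 150,
         "data": {"summary": "Billing dispute resolved 2024-02."}},
        {"context_id": "ctx-def", "relevance_score": 0.55, "token_count": 4000,
         "data": {"summary": "Full interaction transcript."}},
    ]
    budget = query["query"]["max_tokens"]
    picked, used = [], 0
    # Rank by relevance, then greedily pack contexts into the budget.
    for ctx in sorted(stored, key=lambda c: c["relevance_score"], reverse=True):
        if used + ctx["token_count"] <= budget:
            picked.append(ctx)
            used += ctx["token_count"]
    return {"contexts": picked, "total_tokens": used,
            "truncated": len(picked) < len(stored)}

response = execute({
    "operation": "context.retrieve",
    "query": {
        "semantic": "customer support history with billing issues",
        "filters": {"context_type": "interaction-context",
                    "customer_id": "cust-123",
                    "date_range": {"gte": "2024-01-01"}},
        "max_tokens": 4000,
        "ranking": "relevance",
    },
})
```

Note that `truncated` lets the caller distinguish "all relevant context fit" from "some context was dropped to respect the budget".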

Token Management

Budget Allocation

Allocate the context window as a budget: give system context a fixed allocation, size retrieved context dynamically based on the query, and reserve space for the model's response.

Truncation Strategies

When context exceeds the budget, apply one or more of: relevance-based pruning, recency-based pruning, summarization of older context, and chunking across turns.
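The budget split and relevance-based pruning can be sketched together. The 25/50/25 split below is an illustrative choice, not mandated by the spec:

```python
# Sketch: divide a context window into fixed and dynamic budgets, then
# prune retrieved contexts by relevance to fit the dynamic share.

def allocate_budget(window_tokens):
    return {
        "system": window_tokens // 4,      # fixed system context
        "retrieved": window_tokens // 2,   # dynamic, query-dependent
        "response": window_tokens // 4,    # reserved for the model's reply
    }

def prune_by_relevance(contexts, budget):
    """Keep the highest-relevance contexts that fit within `budget` tokens."""
    kept, used = [], 0
    for ctx in sorted(contexts, key=lambda c: c["relevance_score"], reverse=True):
        if used + ctx["token_count"] <= budget:
            kept.append(ctx)
            used += ctx["token_count"]
    return kept, used

budget = allocate_budget(8000)
contexts = [
    {"context_id": "a", "relevance_score": 0.9, "token_count": 2500},
    {"context_id": "b", "relevance_score": 0.7, "token_count": 2000},
    {"context_id": "c", "relevance_score": 0.4, "token_count": 1000},
]
kept, used = prune_by_relevance(contexts, budget["retrieved"])
```

Greedy packing by relevance can skip a mid-ranked context yet still include a lower-ranked one that fits, as the example shows; recency-based pruning or summarization would replace the sort key or the drop action respectively.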

Formatting Specifications

XML Context Blocks

Use structured XML blocks, which LLMs parse reliably: `type` attributes convey semantic meaning, and opening/closing tags provide clear boundaries between contexts.

Markdown Formatting

Use human-readable markdown when transparency matters: headers delimit context sections and lists present structured data.
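Both formats can be rendered from the same context record; the helper names below are ours, and the field names are sample data:

```python
# Sketch: render one retrieved context as an XML block or as markdown,
# following the two formatting conventions described above.

def to_xml(ctx):
    fields = "\n".join(f"  {k}: {v}" for k, v in ctx["data"].items())
    return f'<context type="{ctx["type"]}">\n{fields}\n</context>'

def to_markdown(ctx):
    lines = [f"## Context: {ctx['type']}"]
    lines += [f"- **{k}**: {v}" for k, v in ctx["data"].items()]
    return "\n".join(lines)

ctx = {"type": "user-preferences",
       "data": {"tone": "concise", "language": "en"}}

xml_block = to_xml(ctx)
md_block = to_markdown(ctx)
```

An `x-ecm-format` hint (see Protocol Extensions) could select between renderers per consumer.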

Protocol Extensions

LLM-specific extensions:

  • x-ecm-token-count: Pre-computed token counts
  • x-ecm-relevance: Relevance scoring metadata
  • x-ecm-priority: Injection priority ranking
  • x-ecm-format: Rendering format hints
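A sketch of attaching these extension fields to a context payload as transport metadata; the field names come from the list above, while the values and wrapper shape are illustrative:

```python
# Sketch: wrap a context with the x-ecm-* extension fields so downstream
# injectors can budget, rank, and render it without re-deriving metadata.

def with_extensions(context, token_count, relevance, priority, fmt):
    return {
        "context": context,
        "extensions": {
            "x-ecm-token-count": token_count,  # pre-computed token count
            "x-ecm-relevance": relevance,      # relevance scoring metadata
            "x-ecm-priority": priority,        # injection priority ranking
            "x-ecm-format": fmt,               # rendering format hint
        },
    }

payload = with_extensions({"context_id": "ctx-abc"}, 150, 0.92, 1, "xml")
```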

Conclusion

The ECM Protocol LLM Context Injection specification enables standardized context delivery to language models. Implementations should support multiple injection points, respect token budgets, and provide formatted context appropriate for LLM consumption.

//TAGS

LLM INJECTION PROTOCOL SPECIFICATION