Overview

Providers are the boundary between Coevolved and an LLM API (or local model). To add a new provider, you implement Coevolved’s provider protocol(s) and plug the provider into llm_step(...).

Implementing LLMProvider

Implement a complete(...) method that:
  • Accepts an LLMRequest
  • Returns an LLMResponse (text + tool calls + usage metadata if available)
Minimal example:
from typing import Any

from coevolved.core.types import LLMProvider, LLMRequest, LLMResponse


class MyProvider:
    """Implements Coevolved's LLMProvider protocol."""

    def __init__(self, client: Any) -> None:
        self.client = client

    def complete(self, request: LLMRequest) -> LLMResponse:
        # 1) Map request.prompt and request.context onto your API's payload.
        # 2) Call your client.
        # 3) Map the result back to an LLMResponse.
        return LLMResponse(text="...")
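
Once implemented, the provider plugs into llm_step(...) as noted in the overview. A hypothetical wiring sketch (the llm_step import path and its provider keyword are assumptions; check the actual signature in your version of Coevolved):

# Hypothetical wiring; the import path and the provider= keyword are
# assumptions, not the confirmed llm_step API.
from coevolved.steps import llm_step

provider = MyProvider(client=my_sdk_client)  # my_sdk_client: your SDK's client
step = llm_step(provider=provider)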
If your backend supports tool calling, translate between the following (see the sketch after this list):
  • ToolSpec (JSON schema) and your provider’s tool schema format
  • Tool call outputs and LLMResponse.tool_calls
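
A rough mapping sketch, assuming an OpenAI-style tool schema on the wire and that ToolSpec exposes name, description, and parameters fields (the ToolCall type and these field names are assumptions; adjust to the real types):

import json
from typing import Any

from coevolved.core.types import ToolCall, ToolSpec  # ToolCall is an assumption


def tool_spec_to_wire(spec: ToolSpec) -> dict[str, Any]:
    # Map Coevolved's ToolSpec onto an OpenAI-style tool schema.
    # The field names on ToolSpec are assumptions; adjust to the real type.
    return {
        "type": "function",
        "function": {
            "name": spec.name,
            "description": spec.description,
            "parameters": spec.parameters,  # already a JSON schema dict
        },
    }


def parse_tool_calls(raw_calls: list[dict[str, Any]]) -> list[ToolCall]:
    # Decode the backend's tool call payloads into Coevolved tool calls,
    # ready to attach to LLMResponse.tool_calls.
    return [
        ToolCall(name=call["name"], arguments=json.loads(call["arguments"]))
        for call in raw_calls
    ]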

Streaming providers

Streaming is optional: Coevolved’s llm_step uses complete(...) by default. If you want streaming, implement stream(request) -> Iterator[LLMStreamChunk]; you can build streaming steps for UI-specific paths.
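
A minimal streaming sketch, assuming LLMStreamChunk wraps a text delta and that your SDK exposes an iterator of text fragments (both are assumptions; substitute your SDK's streaming call and the real chunk shape):

from typing import Any, Iterator

from coevolved.core.types import LLMRequest, LLMStreamChunk


class MyStreamingProvider:
    def __init__(self, client: Any) -> None:
        self.client = client

    def stream(self, request: LLMRequest) -> Iterator[LLMStreamChunk]:
        # client.stream_complete and LLMStreamChunk(text=...) are assumptions;
        # adapt them to your SDK and the actual chunk type.
        for delta in self.client.stream_complete(request.prompt):
            yield LLMStreamChunk(text=delta)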

Testing a provider

Test providers like any other adapter:
  • Verify prompt/message mapping
  • Verify tool schema mapping (if supported)
  • Verify tool call parsing
  • Verify usage extraction (tokens/cost) if you track it
Keep a golden test corpus: prompt payload + expected wire request + expected parsed response. This catches regressions as you upgrade SDKs.
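
One way to structure such a golden test, as a sketch: it assumes your provider calls client.create(...) and that LLMRequest takes a prompt keyword; the fake client, file layout, and import path are all illustrative.

import json
from pathlib import Path

from coevolved.core.types import LLMRequest

from my_package.providers import MyProvider  # wherever your provider lives


class RecordingFakeClient:
    """Fake SDK client: returns a canned wire response, records the request."""

    def __init__(self, canned_response: dict) -> None:
        self.canned_response = canned_response
        self.last_request: dict | None = None

    def create(self, **kwargs: object) -> dict:
        # Assumes your provider calls client.create(**payload); adapt this
        # to however your provider actually invokes the SDK.
        self.last_request = kwargs
        return self.canned_response


def test_golden_roundtrip() -> None:
    golden = json.loads(Path("tests/golden/basic.json").read_text())
    client = RecordingFakeClient(golden["wire_response"])
    provider = MyProvider(client)

    response = provider.complete(LLMRequest(prompt=golden["prompt"]))

    # Both directions are pinned: the wire request we sent and the
    # response we parsed must match the golden file.
    assert client.last_request == golden["wire_request"]
    assert response.text == golden["expected_text"]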

Next steps