All public result objects & helper parameters rely on a small set of immutable Pydantic models defined in ai_sdk.types.

Messages

CoreMessage subclasses

| Class | Role | Typical usage |
| --- | --- | --- |
| CoreSystemMessage | system | Set behaviour & identity of the assistant. |
| CoreUserMessage | user | Explicit user instructions (text / images / files). |
| CoreAssistantMessage | assistant | Model responses (text, reasoning, tool calls). |
| CoreToolMessage | tool | Container for tool results fed back to the model. |
Each subclass exposes a .to_dict() helper that returns the OpenAI-compatible representation, so you can seamlessly drop them into the low-level LanguageModel interface if needed.
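For instance (a minimal sketch — only .to_dict() itself is documented here; passing a plain string as content and the exact dict layout are assumptions):

```python
from ai_sdk.types import CoreSystemMessage, CoreUserMessage

messages = [
    CoreSystemMessage(content="You are a terse assistant."),
    CoreUserMessage(content="Summarise the attached report."),
]

# Each .to_dict() returns an OpenAI-style dict such as
# {"role": "system", "content": "You are a terse assistant."}
payload = [message.to_dict() for message in messages]
```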

Content parts

parts_breakdown.py
```python
from ai_sdk.types import TextPart, ImagePart, ToolCallPart, ToolResultPart
```
Parts make multimodal messages explicit. For example, a user message containing both text and an image:

```python
from ai_sdk.types import CoreUserMessage, ImagePart, TextPart

message = CoreUserMessage(
    content=[
        TextPart(text="Describe this painting"),
        ImagePart(image="https://example.com/monet.jpg", mime_type="image/jpeg"),
    ]
)
```
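ToolCallPart and ToolResultPart fill the same role for assistant and tool messages. A hedged sketch follows — the field names used here (tool_call_id, tool_name, args, result) are illustrative assumptions, not confirmed API:

```python
from ai_sdk.types import (
    CoreAssistantMessage,
    CoreToolMessage,
    ToolCallPart,
    ToolResultPart,
)

# NOTE: field names below are assumptions for illustration only.
assistant_turn = CoreAssistantMessage(
    content=[
        ToolCallPart(tool_call_id="call_1", tool_name="get_weather",
                     args={"city": "Paris"}),
    ]
)
tool_turn = CoreToolMessage(
    content=[
        ToolResultPart(tool_call_id="call_1", tool_name="get_weather",
                       result={"temp_c": 18}),
    ]
)
```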

TokenUsage

A simple container holding prompt_tokens, completion_tokens, and total_tokens; it is populated on every helper result when the provider returns usage data.
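Constructing one directly is straightforward (standard Pydantic keyword construction assumed):

```python
from ai_sdk.types import TokenUsage

usage = TokenUsage(prompt_tokens=12, completion_tokens=30, total_tokens=42)
assert usage.total_tokens == usage.prompt_tokens + usage.completion_tokens
```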

ToolCall / ToolResult

Used internally by the tool-calling loop and exposed on GenerateTextResult, so you can inspect or persist the full chain of tool calls and results.
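For example, persisting a result's tool trace for later auditing. A sketch: tool_calls and tool_results are documented above, while model_dump() simply follows from these being Pydantic v2 models:

```python
import json

def persist_tool_trace(result) -> str:
    """Serialise a GenerateTextResult's tool interactions as JSON."""
    trace = [
        {"call": call.model_dump(), "result": res.model_dump()}
        for call, res in zip(result.tool_calls, result.tool_results)
    ]
    return json.dumps(trace, indent=2)
```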

OnStepFinishResult

Object passed to the optional on_step callback after every model ↔ tool interaction. Fields include:
  • step_type – one of initial, continue, tool-result.
  • finish_reason, usage, text, tool_calls, tool_results, …
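A minimal callback sketch (on_step and the fields above come from this page; the generate_text helper and its signature are assumptions):

```python
from ai_sdk.types import OnStepFinishResult

def log_step(step: OnStepFinishResult) -> None:
    # Called after every model <-> tool round trip.
    print(step.step_type, step.finish_reason)
    if step.usage:
        print("tokens so far:", step.usage.total_tokens)
    for call in step.tool_calls:
        print("tool call:", call)

# Passed to a helper, e.g. (helper name assumed):
# generate_text(model=..., prompt=..., on_step=log_step)
```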

All types are frozen (pydantic.ConfigDict(frozen=True)), meaning you can safely log & share them without risk of accidental mutation.
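Attempting to assign to a field raises a Pydantic ValidationError:

```python
import pydantic
from ai_sdk.types import TokenUsage

usage = TokenUsage(prompt_tokens=1, completion_tokens=2, total_tokens=3)
try:
    usage.total_tokens = 99  # rejected: the model is frozen
except pydantic.ValidationError as exc:
    print(exc.errors()[0]["type"])  # -> "frozen_instance"
```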