Generate text synchronously with a single call; returns a `GenerateTextResult` with rich metadata.
`generate_text` is a high-level helper for synchronous text generation. It returns a `GenerateTextResult` with rich metadata, including usage statistics, finish reasons, and tool calls.
| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `model` | `LanguageModel` | ✓ | Provider instance created via e.g. `openai()` or `anthropic()`. |
| `prompt` | `str` | one of `prompt`/`messages` | User prompt (plain string). |
| `system` | `str` | – | System instruction prepended to the conversation. |
| `messages` | `List[AnyMessage]` | one of `prompt`/`messages` | Fine-grained message array providing full control over roles and multimodal parts. Overrides `prompt`. |
| `tools` | `List[Tool]` | – | Enable iterative tool calling (see further below). |
| `max_steps` | `int` | – (default: 8) | Safeguard that aborts endless tool loops. |
| `on_step` | `Callable[[OnStepFinishResult], None]` | – | Callback executed after every model ↔ tool round trip. |
| `**kwargs` | provider-specific | – | Forwarded verbatim to the underlying SDK, e.g. `temperature=0.2`. |
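For orientation, here is a minimal sketch of a call. The import paths, the model identifier, and the `result.text` attribute are assumptions for illustration; adjust them to the actual package layout.

```python
# Minimal sketch. The import paths (`ai_sdk`, `ai_sdk.providers`) and the
# `result.text` attribute are assumptions, not confirmed API.
from ai_sdk import generate_text
from ai_sdk.providers import openai


def log_step(step) -> None:
    # Receives an OnStepFinishResult after every model <-> tool round trip.
    print("step finished")


result = generate_text(
    model=openai("gpt-4o-mini"),   # provider instance (placeholder model id)
    system="You are a terse assistant.",
    prompt="Summarise the plot of Hamlet in two sentences.",
    max_steps=8,                   # safeguard against runaway tool loops
    on_step=log_step,              # callback after each model <-> tool round trip
    temperature=0.2,               # forwarded verbatim to the provider SDK
)

print(result.text)                 # assumed attribute holding the generated text
```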
`GenerateTextResult` exposes rich metadata, including token usage, the finish reason, and any tool calls made during the run.
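As an illustration, a result might be inspected along these lines; the attribute names (`usage`, `finish_reason`, `tool_calls`) are assumptions inferred from the description above, not confirmed API.

```python
# Hypothetical attribute names inferred from the description above.
print(result.finish_reason)        # e.g. "stop" or a tool-call finish reason
print(result.usage)                # token usage statistics
for call in result.tool_calls or []:
    print(call)                    # tool calls issued during the run
```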
`generate_text` is provider-agnostic. Swap `openai()` for `anthropic()` or any other future implementation; no code changes are required.
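For example, switching providers only changes the `model` argument; the import path and model name below are placeholders.

```python
# Same call, different provider; only the `model` argument changes.
from ai_sdk.providers import anthropic   # import path is an assumption

result = generate_text(
    model=anthropic("claude-3-5-sonnet"),  # placeholder model id
    prompt="Summarise the plot of Hamlet in two sentences.",
)
```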