# Inference as a Tool
When a work chain needs “inference” (reasoning, summarization, planning), treat it as a tool with a well-specified contract rather than an implicit side-effect.
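One way to make that contract concrete is a minimal skill manifest. This is a hedged sketch: the dict shape, the `summarize` name, and the field spellings are illustrative assumptions that mirror the fields discussed in this section, not a fixed schema.

```python
# Hypothetical manifest for an inference-backed skill, as a plain Python dict.
# Keys mirror the fields named in this section; exact spellings are assumptions.
inference_skill = {
    "name": "summarize",
    "runtime": {"invocation": "inference"},  # an LLM or a human may fulfill it
    "entrypoint": None,                      # no script; the step is conversational
    "inputs": {"messages": "list[message]", "context": "dict"},
    "outputs": {"text": "str"},              # plus any structured fields the chain needs
}
```

Declaring the contract up front is what lets callers validate inputs, log outputs, and swap the fulfiller (LLM or human) without changing the chain.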
- Invocation kinds: prefer `runtime.invocation: inference` (LLM/human) alongside `runtime.invocation: manual` for human-only steps. Use `entrypoint: none` when no script exists and the action is conversational.
- Skill shape: define inputs/outputs so tool calls can be validated and logged. For LLM-backed calls, accept `messages` + `context` and return `text` plus any structured fields the chain needs.
- Routing: callers supply credentials via headers (e.g., `X-OpenAI-Key`) or session context; skills should not embed provider keys.
- Audit: log tool calls and results as skill executions, even for inference-only steps, to keep chains observable.
- Fallbacks: allow manual/human fulfillment by setting `invocation: manual` and recording the resulting text in the same output schema.
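The routing, audit, and fallback rules above can be sketched as a single dispatch function. This is a minimal sketch under stated assumptions: `run_inference`, `InferenceInput`/`InferenceOutput`, and the in-memory `audit_log` are hypothetical names, and the `X-OpenAI-Key` header check stands in for whatever credential routing the chain actually uses.

```python
# Hypothetical dispatcher: route to an LLM if the caller supplied a key,
# otherwise fall back to manual (human) fulfillment with the same output schema.
from dataclasses import dataclass, field
from typing import Callable, Optional
import time

@dataclass
class InferenceInput:
    messages: list            # chat-style messages
    context: dict = field(default_factory=dict)

@dataclass
class InferenceOutput:
    text: str                 # primary result, identical for LLM or human paths
    fields: dict = field(default_factory=dict)  # optional structured extras

audit_log: list = []          # stand-in for the chain's skill-execution log

def run_inference(
    inp: InferenceInput,
    headers: dict,
    llm_call: Optional[Callable[[InferenceInput, str], InferenceOutput]] = None,
    manual_text: Optional[str] = None,
) -> InferenceOutput:
    if llm_call is not None and "X-OpenAI-Key" in headers:
        # Routing: the key comes from the caller's headers, never from the skill.
        out = llm_call(inp, headers["X-OpenAI-Key"])
        invocation = "inference"
    elif manual_text is not None:
        # Fallback: a human supplies the text in the same output schema.
        out = InferenceOutput(text=manual_text)
        invocation = "manual"
    else:
        raise ValueError("no LLM backend and no manual fulfillment provided")
    # Audit: record the call as a skill execution, even for inference-only steps.
    audit_log.append({
        "invocation": invocation,
        "n_messages": len(inp.messages),
        "output_text": out.text,
        "ts": time.time(),
    })
    return out

# Manual fulfillment flows through the same contract as an LLM call.
result = run_inference(
    InferenceInput(messages=[{"role": "user", "content": "Summarize the plan."}]),
    headers={},
    manual_text="Plan: three phases, two weeks each.",
)
```

Because both paths return an `InferenceOutput` and append to the same log, downstream skills never need to know which fulfiller ran.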
This keeps inference interchangeable (LLM or human), observable, and composable with other skills in an agential semioverse.