OpenAI Workflows
OpenAI Component Example
This example demonstrates how to use the OpenAI built-in component with Stepflow.
Prerequisites
- An OpenAI API key
- The Stepflow CLI built from this repository
Setup
Set your OpenAI API key as an environment variable:
export OPENAI_API_KEY=your_api_key_here
Running the Example
From the repository root, run:
cargo run --manifest-path stepflow-rs/Cargo.toml -- run --flow=examples/openai/openai_chat.yaml --input=examples/openai/input.json --config=examples/openai/stepflow-config.yml
Example Flow
The example flow (openai_chat.yaml) demonstrates:
- Calling the OpenAI chat completion API
- Passing system and user messages
- Extracting the response from the API
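A flow covering these steps might look something like the sketch below. This is illustrative only: the exact Stepflow schema (field names such as `steps`, `component`, and `input`) is an assumption here, so consult `openai_chat.yaml` itself for the real structure.

```yaml
# Illustrative sketch only: field names are assumed, not taken from openai_chat.yaml.
steps:
  - id: chat
    component: builtins:/openai/chat
    input:
      messages:
        - role: system
          content: "You are a helpful expert explaining complex topics to beginners."
        - role: user
          content: "Explain quantum computing in simple terms."
```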
Additional Configuration
The OpenAI component supports several configuration options:
- messages: An array of messages to send to the API (required)
- max_tokens: Maximum number of tokens to generate (optional)
- temperature: Controls randomness of responses (optional, between 0 and 2)
- api_key: Overrides the OPENAI_API_KEY environment variable (optional)
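As a sketch, the optional knobs can sit alongside `messages` in a step's input. The option names come from the list above; the enclosing layout is an assumption about the flow format, not taken from the example files.

```yaml
# Option names from the list above; the surrounding structure is assumed.
input:
  messages:
    - role: system
      content: "You are concise."
    - role: user
      content: "Summarize the example in one sentence."
  max_tokens: 256      # optional: cap on generated tokens
  temperature: 0.7     # optional: 0 to 2, higher is more random
  # api_key: "..."     # optional: overrides OPENAI_API_KEY
```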
Available Components
- builtins:/openai/chat: uses the gpt-3.5-turbo model
- builtins:/openai/chat/gpt4: uses the gpt-4 model
Example Input
{
  "prompt": "Explain quantum computing in simple terms.",
  "system_message": "You are a helpful expert explaining complex topics to beginners."
}
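A plausible mapping from this input shape to the `messages` array the chat component expects can be sketched in Python. The mapping itself is an assumption based on the flow description above, not code taken from the example.

```python
# Sketch: turn the example input into chat-style messages.
# The system_message -> system role, prompt -> user role mapping is assumed.
input_data = {
    "prompt": "Explain quantum computing in simple terms.",
    "system_message": "You are a helpful expert explaining complex topics to beginners.",
}

messages = [
    {"role": "system", "content": input_data["system_message"]},
    {"role": "user", "content": input_data["prompt"]},
]

print(messages[0]["role"], "->", messages[1]["role"])  # system -> user
```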
Example Output
{
  "response": "Quantum computing is like...",
  "message": {
    "role": "assistant",
    "content": "Quantum computing is like..."
  }
}
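If you capture the run's output, both fields can be read back with standard JSON tooling. This snippet simply parses the example object shown above; it does not depend on any Stepflow API.

```python
import json

# Parse the example output shown above and extract both fields.
raw = '{"response": "Quantum computing is like...", "message": {"role": "assistant", "content": "Quantum computing is like..."}}'
output = json.loads(raw)

print(output["message"]["role"])  # the assistant turn
print(output["response"] == output["message"]["content"])  # same text in both fields here
```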