Stepflow Langflow Integration

A Python package for seamlessly integrating Langflow workflows with Stepflow, enabling conversion and execution of Langflow components within the Stepflow ecosystem.

✨ Features

  • 🔄 One-Command Conversion & Execution: Convert Langflow JSON to Stepflow YAML and execute in a single command
  • 🚀 Real Langflow Execution: Execute actual Langflow components with full type preservation
  • 🔧 Tweaks System: Modify component configurations (API keys, parameters) at execution time
  • 🛡️ Security First: Safe JSON-only parsing without requiring a full Langflow installation

🚀 Quick Start

Installation

cd integrations/langflow
uv sync --dev

# Or with pip
pip install -e .

Basic Execution

# Execute with real Langflow components (requires OPENAI_API_KEY)
export OPENAI_API_KEY="your-api-key-here"
uv run stepflow-langflow execute tests/fixtures/langflow/basic_prompting.json '{"message": "Write a haiku about coding"}'

# Execute without environment variables using tweaks
uv run stepflow-langflow execute tests/fixtures/langflow/basic_prompting.json \
'{"message": "Write a haiku"}' \
--tweaks '{"LanguageModelComponent-xyz123": {"api_key": "your-api-key-here"}}'

Using Tweaks for Configuration

The tweaks system lets you modify component configurations at execution time. This is useful for:

  • Setting API keys without environment variables
  • Adjusting model parameters
  • Debugging component configurations
# Basic tweaks example - modify API key and temperature
uv run stepflow-langflow execute my-workflow.json '{"message": "Hello"}' \
--tweaks '{"OpenAIComponent-abc123": {"api_key": "sk-...", "temperature": 0.7}}'

# Multiple component tweaks
uv run stepflow-langflow execute my-workflow.json '{"message": "Hello"}' \
--tweaks '{
"LanguageModelComponent-xyz": {"api_key": "sk-..."},
"EmbeddingComponent-def": {"api_key": "sk-...", "model": "text-embedding-3-small"}
}'

Debugging and Inspection

# Dry-run to see converted workflow without execution
uv run stepflow-langflow execute my-workflow.json --dry-run

# Apply tweaks and see the modified workflow
uv run stepflow-langflow execute my-workflow.json --dry-run \
--tweaks '{"LanguageModelComponent-xyz": {"temperature": 0.9}}'

# Keep temporary files for debugging
uv run stepflow-langflow execute my-workflow.json '{"message": "Hello"}' \
--keep-files --output-dir ./debug

# Analyze workflow structure before execution
uv run stepflow-langflow analyze my-workflow.json

# Convert to Stepflow YAML for inspection
uv run stepflow-langflow convert my-workflow.json output.yaml

Finding Component IDs for Tweaks

To apply tweaks, you need the component IDs from your Langflow workflow:

# Method 1: Analyze the workflow to see component structure
uv run stepflow-langflow analyze my-workflow.json

# Method 2: Dry-run to see the converted Stepflow YAML with component IDs
uv run stepflow-langflow execute my-workflow.json --dry-run

# Method 3: Open the Langflow JSON file and look for node IDs
# Component IDs follow the pattern: ComponentType-nodeId (e.g., "LanguageModelComponent-kBOja")
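
For programmatic inspection, node IDs can also be pulled straight from the export with a few lines of Python. This is a minimal sketch assuming the standard Langflow export layout, where nodes live under `data.nodes` and each node carries an `id`; the sample export below is made up:

```python
import json

def list_component_ids(langflow_json: str) -> list[str]:
    """Return node IDs (e.g. "LanguageModelComponent-kBOja") from a Langflow export."""
    data = json.loads(langflow_json)
    # Langflow exports keep the graph under data.nodes; each node has an "id" field.
    return [node["id"] for node in data.get("data", {}).get("nodes", [])]

# Minimal hypothetical export with two nodes:
export = json.dumps({"data": {"nodes": [
    {"id": "LanguageModelComponent-kBOja"},
    {"id": "PromptComponent-x1Y2z"},
]}})
print(list_component_ids(export))
```

Any ID printed this way can be used as a key in the `--tweaks` JSON.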

๐Ÿ› ๏ธ CLI Referenceโ€‹

execute Command

The primary command for converting and executing Langflow workflows:

uv run stepflow-langflow execute [OPTIONS] INPUT_FILE [INPUT_JSON]

Arguments:

  • INPUT_FILE: Path to Langflow JSON workflow file
  • INPUT_JSON: JSON input data for the workflow (default: {})

Options:

  • --config PATH: Use custom stepflow configuration file
  • --stepflow-binary PATH: Path to stepflow binary (auto-detected if not specified)
  • --timeout INTEGER: Execution timeout in seconds (default: 60)
  • --dry-run: Only convert, don't execute (shows converted YAML)
  • --keep-files: Keep temporary files after execution
  • --output-dir PATH: Directory to save temporary files

Examples:

# Basic execution with real Langflow components
uv run stepflow-langflow execute examples/simple_chat.json '{"message": "Hi there!"}'

# Execution with custom timeout
uv run stepflow-langflow execute examples/basic_prompting.json '{"message": "Write a haiku"}' --timeout 120

# Debug with file preservation
uv run stepflow-langflow execute examples/memory_chatbot.json --dry-run --keep-files

submit-batch Command

Execute a Langflow workflow with multiple inputs using batch processing. This command converts the workflow once and then processes multiple inputs efficiently.

uv run stepflow-langflow submit-batch [OPTIONS] INPUT_FILE INPUTS_JSONL

Arguments:

  • INPUT_FILE: Path to Langflow JSON workflow file
  • INPUTS_JSONL: Path to JSONL file with batch inputs (one JSON object per line)

Options:

  • --url TEXT: Stepflow server URL (required, e.g., http://localhost:7837/api/v1)
  • --tweaks TEXT: JSON tweaks to apply to the workflow (same format as execute command)
  • --max-concurrent INTEGER: Maximum number of concurrent executions (optional)
  • --output PATH: Path to output JSONL file for results (optional)

JSONL Input Format:

Each line in the inputs JSONL file should be a complete JSON object representing one execution input:

{"message": "Write a haiku about coding"}
{"message": "Explain quantum computing"}
{"message": "What is machine learning?"}
{"message": "Tell me a joke"}
{"message": "Summarize photosynthesis"}
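
If your inputs already live in Python, the JSONL file is easy to generate with the standard library. This is an illustrative sketch (the `prompts` list and `inputs.jsonl` filename are examples, not part of the package):

```python
import json

# One JSON object per line: the JSONL format submit-batch expects.
prompts = [
    "Write a haiku about coding",
    "Explain quantum computing",
]
jsonl = "\n".join(json.dumps({"message": p}) for p in prompts) + "\n"

with open("inputs.jsonl", "w") as f:
    f.write(jsonl)

# Round-trip check: every non-empty line parses back into a dict.
parsed = [json.loads(line) for line in jsonl.splitlines() if line.strip()]
```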

Output Format:

When --output is specified, results are written as JSONL with the following structure:

{"index": 0, "status": "completed", "result": {"outcome": "success", "result": {...}}}
{"index": 1, "status": "failed", "result": {"outcome": "failure", "error": {...}}}
{"index": 2, "status": "completed", "result": {"outcome": "success", "result": {...}}}

Each result includes:

  • index: Input index (0-based)
  • status: Either "completed" or "failed"
  • result: Workflow execution result or error details
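
A quick way to tally a results file is a few lines of Python against the output shape above. The sample records here are hypothetical:

```python
import json

# Hypothetical records in the submit-batch output format described above.
results_jsonl = "\n".join([
    json.dumps({"index": 0, "status": "completed",
                "result": {"outcome": "success", "result": {"text": "ok"}}}),
    json.dumps({"index": 1, "status": "failed",
                "result": {"outcome": "failure", "error": {"message": "timeout"}}}),
])

def summarize(jsonl: str) -> dict:
    """Count completed vs. failed entries in a results JSONL string."""
    counts = {"completed": 0, "failed": 0}
    for line in jsonl.splitlines():
        if line.strip():
            counts[json.loads(line)["status"]] += 1
    return counts
```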

Examples:

# Basic batch execution with running server
uv run stepflow-langflow submit-batch \
examples/basic_prompting.json \
inputs.jsonl \
--url http://localhost:7837/api/v1

# Batch execution with tweaks (API keys, parameters)
uv run stepflow-langflow submit-batch \
examples/basic_prompting.json \
inputs.jsonl \
--url http://localhost:7837/api/v1 \
--tweaks '{"LanguageModelComponent-kBOja": {"api_key": "sk-...", "temperature": 0.7}}'

# Batch execution with concurrency limit and output file
uv run stepflow-langflow submit-batch \
examples/vector_store_rag.json \
queries.jsonl \
--url http://localhost:7837/api/v1 \
--max-concurrent 5 \
--output results.jsonl

# Complete workflow: start server, run batch, analyze results
# Terminal 1: Start Stepflow server
stepflow-server --port 7837 --config stepflow-config.yml

# Terminal 2: Submit batch job
uv run stepflow-langflow submit-batch \
examples/basic_prompting.json \
batch_inputs.jsonl \
--url http://localhost:7837/api/v1 \
--output batch_results.jsonl

# Analyze results
cat batch_results.jsonl | jq '.result.result'

Exit Codes:

  • Returns 0 if all executions completed successfully
  • Returns non-zero if any executions failed (results still written to output file)

Performance Characteristics:

  • Workflow conversion happens once (not per input)
  • Tweaks are applied once (not per input)
  • Inputs are processed concurrently by the Stepflow server
  • Use --max-concurrent to control resource usage

Other Commands

# Analyze workflow dependencies and structure
uv run stepflow-langflow analyze workflow.json

# Convert with validation and pretty printing
uv run stepflow-langflow convert workflow.json --validate --pretty

# Validate workflow structure
uv run stepflow-langflow validate workflow.json

# Start component server for Stepflow integration
uv run stepflow-langflow serve

๐Ÿ Python APIโ€‹

import json

from stepflow_langflow_integration.converter.translator import LangflowConverter

# Basic conversion
converter = LangflowConverter()
stepflow_yaml = converter.convert_file("workflow.json")

# Analyze workflow structure
with open("workflow.json") as f:
    langflow_data = json.load(f)

analysis = converter.analyze(langflow_data)
print(f"Nodes: {analysis['node_count']}")
print(f"Component types: {analysis['component_types']}")

# Save converted workflow
with open("output.yaml", "w") as f:
    f.write(stepflow_yaml)

🧪 Testing & Development

Running Tests

# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=stepflow_langflow_integration --cov-report=html

# Run integration tests with example flows
uv run pytest tests/integration/

Development Setup

# Development installation with all dependencies
uv sync --dev

# Code formatting and linting
uv run ruff format
uv run ruff check --fix

# Type checking
uv run mypy src tests

📚 Example Flows

The tests/fixtures/langflow/ directory contains example workflows from official Langflow starter projects that work with real Langflow component execution.

Available Example Flows

All workflows accept a "message" field as their primary input:

| Workflow | Description | Status |
| --- | --- | --- |
| simple_chat.json | Direct message passthrough | ✅ Working |
| basic_prompting.json | LLM with prompt template | ✅ Working |
| memory_chatbot.json | Conversational AI with memory | ✅ Working |
| document_qa.json | Document-based Q&A | ✅ Working |
| vector_store_rag.json | Retrieval augmented generation | ✅ Working |
| simple_agent.json | AI agent with tools | ⚠️ Use --timeout 180 |

Running Example Flows

With Environment Variables

# Set API key in environment
export OPENAI_API_KEY="sk-your-key-here"

# Run any example flow
uv run stepflow-langflow execute tests/fixtures/langflow/basic_prompting.json \
'{"message": "Write a haiku about coding"}'

With Tweaks (No Environment Variables)

# First, find the component ID that needs API key configuration
uv run stepflow-langflow analyze tests/fixtures/langflow/basic_prompting.json

# Then execute with tweaks
uv run stepflow-langflow execute tests/fixtures/langflow/basic_prompting.json \
'{"message": "Write a haiku about coding"}' \
--tweaks '{"LanguageModelComponent-kBOja": {"api_key": "sk-your-key-here"}}'

Complex Example: Vector Store RAG

This example demonstrates using tweaks with multiple components:

# Analyze to find component IDs
uv run stepflow-langflow analyze tests/fixtures/langflow/vector_store_rag.json

# Execute with multiple component tweaks
uv run stepflow-langflow execute tests/fixtures/langflow/vector_store_rag.json \
'{"message": "Explain machine learning"}' \
--tweaks '{
"LanguageModelComponent-abc123": {
"api_key": "sk-your-openai-key",
"temperature": 0.7
},
"EmbeddingComponent-def456": {
"api_key": "sk-your-openai-key",
"model": "text-embedding-3-small"
}
}'

Workflow Input Patterns

All workflows expect JSON input with a message field:

| Input Example | Use Case |
| --- | --- |
| {"message": "Hello there!"} | Basic greeting or question |
| {"message": "Write a poem about nature"} | Creative prompts for LLM |
| {"message": "Remember I like coffee"} | Conversational input with memory |
| {"message": "Summarize this document"} | Questions about documents |
| {"message": "Calculate 42 * 17"} | Tasks requiring tools/calculations |
| {"message": "Explain machine learning"} | Knowledge retrieval queries |

Debugging Workflows

# Analyze workflow structure and find component IDs
uv run stepflow-langflow analyze tests/fixtures/langflow/vector_store_rag.json

# View converted workflow without execution
uv run stepflow-langflow execute tests/fixtures/langflow/basic_prompting.json --dry-run

# Keep temporary files for inspection
uv run stepflow-langflow execute tests/fixtures/langflow/memory_chatbot.json \
'{"message": "Test"}' --keep-files --output-dir ./debug

🔧 Advanced Usage

Working with Your Own Flows

# Convert your Langflow export to Stepflow
uv run stepflow-langflow convert your-workflow.json your-workflow.yaml

# Analyze your workflow structure
uv run stepflow-langflow analyze your-workflow.json

# Execute with custom configuration
uv run stepflow-langflow execute your-workflow.json \
'{"your_input": "value"}' \
--tweaks '{"YourComponent-id": {"api_key": "sk-..."}}'

Tweaks System Reference

Tweaks modify component configurations at execution time:

{
  "ComponentType-nodeId": {
    "field_name": "new_value",
    "api_key": "sk-your-key",
    "temperature": 0.8,
    "model": "gpt-4"
  }
}

Common tweak patterns:

  • API Keys: {"api_key": "sk-your-key"}
  • Model Parameters: {"temperature": 0.7, "max_tokens": 1000}
  • Model Selection: {"model": "gpt-4", "provider": "openai"}
  • Custom Settings: Any component field can be modified
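
Semantically, tweaks behave like a per-component overlay: tweaked fields replace the exported values, and untouched fields are preserved. A minimal sketch of that last-write-wins merge (illustrative only, not the package's actual implementation):

```python
import copy

def apply_tweaks(node_params: dict, tweaks: dict) -> dict:
    """Overlay per-component tweak fields onto exported parameters."""
    merged = copy.deepcopy(node_params)
    for component_id, fields in tweaks.items():
        # Tweaked fields win; fields not mentioned in the tweak survive unchanged.
        merged.setdefault(component_id, {}).update(fields)
    return merged

exported = {"LanguageModelComponent-xyz": {"model": "gpt-4", "temperature": 0.2}}
tweaks = {"LanguageModelComponent-xyz": {"temperature": 0.8, "api_key": "sk-..."}}
merged = apply_tweaks(exported, tweaks)
```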

Manual Stepflow Integration

For advanced use cases with custom Stepflow configurations:

# Convert workflow
uv run stepflow-langflow convert workflow.json workflow.yaml

# Create custom stepflow-config.yml
# Run with Stepflow directly
cargo run -- run --flow workflow.yaml --input input.json --config stepflow-config.yml

🆘 Support

  • Examples: Comprehensive examples in tests/fixtures/langflow/
  • Issues: Report bugs via GitHub Issues
  • Testing: Use --dry-run mode for safe experimentation

📄 License

This project follows the same Apache 2.0 license as the main Stepflow project.