AI Capabilities
Every task independently selects its AI provider and model. No global lock-in — mix OpenAI, Anthropic, Gemini, and Ollama in the same workflow.
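For example, a single workflow might send one step to a hosted OpenAI model and the next to a local Ollama model. The task names and prompts below are illustrative; the fields (`type`, `depends_on`, `outputs` templating) follow the examples later on this page:

```yaml
tasks:
  - name: draft                  # hosted model for the heavy lifting
    type: ai.openai.chat
    properties:
      model: gpt-4o-mini
      messages:
        - role: user
          content: "Draft a release announcement."

  - name: redact                 # local model, so the draft never leaves the network
    type: ai.ollama.chat
    depends_on: [draft]
    properties:
      base_url: http://ollama:11434
      model: llama3.2
      messages:
        - role: user
          content: 'Remove internal project names: {{ outputs["draft"]["text"] }}'
```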
Multi-Provider Chat
OpenAI
```yaml
- name: summarize
  type: ai.openai.chat
  properties:
    model: gpt-4o-mini
    messages:
      - role: system
        content: You are a concise summarizer.
      - role: user
        content: 'Summarize: {{ outputs["fetch"]["body"] }}'
    temperature: 0.3
    max_tokens: 500
```
Outputs: `text`, `model`, `prompt_tokens`, `completion_tokens`, `total_tokens`
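Downstream tasks consume these outputs through the same templating shown above. A sketch, assuming a task named `summarize` as in the previous example; note the `body` field on `http.request` is an assumption here, since the `http.request` example on this page shows only `url` and `method`:

```yaml
- name: notify
  type: http.request
  depends_on: [summarize]
  properties:
    url: https://hooks.example.com/notify   # hypothetical endpoint
    method: POST
    body: '{{ outputs["summarize"]["text"] }}'  # body field assumed, not documented above
```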
Anthropic (Claude)
```yaml
- name: analyze
  type: ai.anthropic.chat
  properties:
    model: claude-sonnet-4-20250514
    messages:
      - role: user
        content: "Analyze this dataset and find anomalies."
    max_tokens: 1000
```
Google Gemini
```yaml
- name: translate
  type: ai.gemini.chat
  properties:
    model: gemini-2.0-flash
    messages:
      - role: user
        content: "Translate to Japanese: Hello, how are you?"
```
Ollama (Local Models)
```yaml
- name: local-inference
  type: ai.ollama.chat
  properties:
    base_url: http://ollama:11434
    model: llama3.2
    messages:
      - role: user
        content: "Explain this error log."
```
Run any model locally — no API keys, no data leaving your network.
Embeddings
Generate vector embeddings for RAG pipelines.
```yaml
- name: embed-docs
  type: ai.openai.embedding
  properties:
    model: text-embedding-3-small
    input: '{{ outputs["fetch_docs"]["body"] }}'
```
Outputs: `embedding` (float array), `dimensions`, `model`
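Similarity search over these vectors (as in the RAG pipeline below) ranks documents by cosine similarity. In practice PGVector computes this in the database; the following plain-Python sketch only illustrates the math, with hypothetical document ids:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], docs: dict[str, list[float]], k: int = 5) -> list[str]:
    """Return the ids of the k documents whose embeddings are closest to the query."""
    ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
    return ranked[:k]
```

This is what `search: true` with `top_k: 5` does conceptually: embed the query, then keep the five nearest stored vectors.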
Image Generation
```yaml
- name: generate-thumbnail
  type: ai.openai.image
  properties:
    prompt: "A minimalist logo for a workflow automation tool"
    size: "1024x1024"
    model: dall-e-3
```
MCP Tool Integration
Connect to any MCP-compatible tool server. The AI agent calls tools through the standard Model Context Protocol.
MCP via SSE (Remote Server)
```yaml
- name: mcp-tools
  type: ai.openai.chat
  properties:
    model: gpt-4o
    messages:
      - role: user
        content: "Use the available tools to find recent GitHub issues."
    mcp:
      url: http://mcp-server:3001/sse
```
MCP via Docker
```yaml
- name: github-agent
  type: ai.openai.chat
  properties:
    model: gpt-4o
    messages:
      - role: user
        content: "List open PRs in excalibase/workflow"
    mcp:
      image: mcp/github
      env:
        GITHUB_PERSONAL_ACCESS_TOKEN: "${GITHUB_TOKEN}"
```
RAG Pipeline Example
Ingest documents into PGVector, search by cosine similarity, feed context into AI.
```yaml
name: rag-pipeline
tasks:
  - name: fetch-docs
    type: http.request
    properties:
      url: https://api.example.com/documents
      method: GET

  - name: embed-and-store
    type: ai.openai.embedding
    depends_on: [fetch-docs]
    properties:
      model: text-embedding-3-small
      input: '{{ outputs["fetch-docs"]["body"] }}'
      store: true
      collection: knowledge_base

  - name: search
    type: ai.openai.embedding
    depends_on: [embed-and-store]
    properties:
      model: text-embedding-3-small
      input: "What are the deployment requirements?"
      search: true
      collection: knowledge_base
      top_k: 5

  - name: answer
    type: ai.openai.chat
    depends_on: [search]
    properties:
      model: gpt-4o
      messages:
        - role: system
          content: "Answer based on this context only."
        - role: user
          content: |
            Context: {{ outputs["search"]["results"] }}
            Question: What are the deployment requirements?
```
Per-Task API Keys
Each task can supply its own API key or fall back to the provider's environment variable:
```yaml
# Uses OPENAI_API_KEY from the environment
- name: default-key
  type: ai.openai.chat
  properties:
    model: gpt-4o-mini
    messages: [...]

# Uses a specific key (e.g. from the credentials store)
- name: custom-key
  type: ai.openai.chat
  properties:
    model: gpt-4o
    api_key: "${PROJECT_OPENAI_KEY}"
    messages: [...]
```