# LLM Action

A GitHub Action to interact with OpenAI-compatible LLM services, supporting custom endpoints, self-hosted models (Ollama, LocalAI, vLLM), SSL/CA certificates, Go template prompts, and structured output via function calling.
## Table of Contents

- [Features](#features)
- [Inputs](#inputs)
- [Outputs](#outputs)
- [Usage Examples](#usage-examples)
  - [Basic Example](#basic-example)
  - [Version Pinning](#version-pinning)
  - [With System Prompt](#with-system-prompt)
  - [With Multiline System Prompt](#with-multiline-system-prompt)
  - [System Prompt from File](#system-prompt-from-file)
  - [System Prompt from URL](#system-prompt-from-url)
  - [Input Prompt from File](#input-prompt-from-file)
  - [Input Prompt from URL](#input-prompt-from-url)
  - [Using Go Templates in Prompts](#using-go-templates-in-prompts)
  - [Structured Output with Tool Schema](#structured-output-with-tool-schema)
  - [Self-Hosted / Local LLM](#self-hosted--local-llm)
  - [Using with Azure OpenAI](#using-with-azure-openai)
  - [Using Custom CA Certificate](#using-custom-ca-certificate)
  - [Using with Ollama](#using-with-ollama)
  - [Chain Multiple LLM Calls](#chain-multiple-llm-calls)
  - [Debug Mode](#debug-mode)
- [Supported Services](#supported-services)
- [Security Considerations](#security-considerations)
- [License](#license)
- [Contributing](#contributing)
## Features

- 🔌 Connect to any OpenAI-compatible API endpoint
- 🔐 Support for custom API keys
- 🔧 Configurable base URL for self-hosted services
- 🚫 Optional SSL certificate verification skip
- 🔒 Custom CA certificate support for self-signed certificates
- 🎯 System prompt support for context setting
- 📝 Output response available for subsequent actions
- 🎛️ Configurable temperature and max tokens
- 🐛 Debug mode with secure API key masking
- 🎨 Go template support for dynamic prompts with environment variables
- 🛠️ Structured output via function calling (tool schema support)
## Inputs

| Input | Description | Required | Default |
|---|---|---|---|
| `base_url` | Base URL for the OpenAI-compatible API endpoint | No | `https://api.openai.com/v1` |
| `api_key` | API key for authentication | Yes | - |
| `model` | Model name to use | No | `gpt-4o` |
| `skip_ssl_verify` | Skip SSL certificate verification | No | `false` |
| `ca_cert` | Custom CA certificate. Supports certificate content, a file path, or a URL | No | `''` |
| `system_prompt` | System prompt to set the context. Supports plain text, a file path, or a URL, as well as Go templates with environment variables | No | `''` |
| `input_prompt` | User input prompt for the LLM. Supports plain text, a file path, or a URL, as well as Go templates with environment variables | Yes | - |
| `tool_schema` | JSON schema for structured output via function calling. Supports plain text, a file path, or a URL, as well as Go templates | No | `''` |
| `temperature` | Temperature for response randomness (0.0-2.0) | No | `0.7` |
| `max_tokens` | Maximum number of tokens in the response | No | `1000` |
| `debug` | Enable debug mode to print all parameters (the API key is masked) | No | `false` |
## Outputs

| Output | Description |
|---|---|
| `response` | The raw response from the LLM (always available) |
| `<field>` | When using `tool_schema`, each field from the function arguments JSON becomes a separate output |
**Output Behavior:**

- The `response` output is always available, containing the raw LLM response
- When using `tool_schema`, the function arguments are parsed and each field is added as a separate output in addition to `response`
- **Reserved field:** If your tool schema defines a field named `response`, it will be skipped and a warning will be displayed. This is because `response` is reserved for the raw LLM output

**Example:** If your schema defines `city` and `country` fields, the outputs will be:

- `steps.<id>.outputs.response` - The raw JSON response
- `steps.<id>.outputs.city` - The city field value
- `steps.<id>.outputs.country` - The country field value
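A later step can then consume those outputs directly. A minimal sketch (the step id `extract` is a placeholder for whatever `id` you gave the LLM step):

```yaml
- name: Show Extracted Fields
  run: |
    # Each schema field is exposed alongside the raw response
    echo "Raw: ${{ steps.extract.outputs.response }}"
    echo "City: ${{ steps.extract.outputs.city }}"
    echo "Country: ${{ steps.extract.outputs.country }}"
```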
## Usage Examples

### Basic Example

```yaml
name: LLM Workflow

on: [push]

jobs:
  llm-task:
    runs-on: ubuntu-latest
    steps:
      - name: Call LLM
        id: llm
        uses: appleboy/LLM-action@v1
        with:
          api_key: ${{ secrets.OPENAI_API_KEY }}
          input_prompt: "What is GitHub Actions?"

      - name: Use LLM Response
        run: |
          echo "LLM Response:"
          echo "${{ steps.llm.outputs.response }}"
```

### Version Pinning

You can pin to specific versions of this action:
```yaml
# Use major version (recommended - automatically gets compatible updates)
uses: appleboy/LLM-action@v1

# Use specific version (for maximum stability)
uses: appleboy/LLM-action@v1.0.0

# Use latest development version (not recommended for production)
uses: appleboy/LLM-action@main
```

**Recommendation:** Use the major version tag (e.g., `@v1`) to automatically receive backward-compatible updates and bug fixes.
### With System Prompt

````yaml
- name: Code Review with LLM
  id: review
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    model: "gpt-4"
    system_prompt: "You are a code reviewer. Provide constructive feedback on code quality, best practices, and potential issues."
    input_prompt: |
      Review this code:

      ```python
      def add(a, b):
          return a + b
      ```
    temperature: "0.3"
    max_tokens: "2000"

- name: Post Review Comment
  run: |
    echo "${{ steps.review.outputs.response }}"
````

### With Multiline System Prompt

```yaml
- name: Advanced Code Review
  id: review
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    model: "gpt-4"
    system_prompt: |
      You are an expert code reviewer with deep knowledge of software engineering best practices.

      Your responsibilities:
      - Identify potential bugs and security vulnerabilities
      - Suggest improvements for code quality and maintainability
      - Check for adherence to coding standards
      - Evaluate performance implications

      Provide constructive, actionable feedback in a professional tone.
    input_prompt: |
      Review the following pull request changes:

      ${{ github.event.pull_request.body }}
    temperature: "0.3"
    max_tokens: "2000"
```

### System Prompt from File

Instead of embedding long prompts in YAML, you can load them from a file:
````yaml
- name: Code Review with Prompt File
  id: review
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    model: "gpt-4"
    system_prompt: ".github/prompts/code-review.txt"
    input_prompt: |
      Review this code:

      ```python
      def calculate(x, y):
          return x / y
      ```
````

Or using the `file://` prefix:

```yaml
- name: Code Review with File URI
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    system_prompt: "file://.github/prompts/code-review.txt"
    input_prompt: "Review the main.go file"
```

### System Prompt from URL

Load prompts from a remote URL:
```yaml
- name: Code Review with Remote Prompt
  id: review
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    model: "gpt-4"
    system_prompt: "https://raw.githubusercontent.com/your-org/prompts/main/code-review.txt"
    input_prompt: |
      Review this pull request:

      ${{ github.event.pull_request.body }}
```

### Input Prompt from File

You can also load the input prompt from a file:
```yaml
- name: Analyze Code from File
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    model: "gpt-4"
    system_prompt: "You are a code analyzer"
    input_prompt: "src/main.go" # Load code from file
```

### Input Prompt from URL

Load input content from a remote URL:
```yaml
- name: Analyze Remote Content
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    system_prompt: "You are a content analyzer"
    input_prompt: "https://raw.githubusercontent.com/user/repo/main/content.txt"
```

### Using Go Templates in Prompts

Both `system_prompt` and `input_prompt` support Go templates, allowing you to dynamically insert environment variables into your prompts. This is especially useful in GitHub Actions workflows where you want to include context such as repository names, branch names, or custom variables.
**Key Features:**

- Access any environment variable using `{{.VAR_NAME}}`
- Environment variables with the `INPUT_` prefix are available both with and without the prefix
  - Example: `INPUT_MODEL` can be accessed as `{{.MODEL}}` or `{{.INPUT_MODEL}}`
- All GitHub Actions default environment variables are available (e.g., `GITHUB_REPOSITORY`, `GITHUB_REF_NAME`)
- Supports full Go template syntax, including conditionals and functions
```yaml
- name: Analyze Repository with Context
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    model: "gpt-4o"
    system_prompt: |
      You are an expert code analyzer.
      Focus on the {{.GITHUB_REPOSITORY}} repository.
    input_prompt: |
      Please analyze this repository: {{.GITHUB_REPOSITORY}}
      Current branch: {{.GITHUB_REF_NAME}}
      Using model: {{.MODEL}}

      Provide insights on code quality and potential improvements.
```

You can also define custom variables and reference them in your prompts:

```yaml
- name: Set Custom Variables
  run: |
    echo "INPUT_PROJECT_TYPE=web-application" >> $GITHUB_ENV
    echo "INPUT_LANGUAGE=Go" >> $GITHUB_ENV

- name: Code Review with Custom Context
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    system_prompt: |
      You are reviewing a {{.PROJECT_TYPE}} written in {{.LANGUAGE}}.
      Focus on best practices specific to {{.LANGUAGE}} development.
    input_prompt: |
      Review the code changes in {{.GITHUB_REPOSITORY}}.

      Project type: {{.PROJECT_TYPE}}
      Language: {{.LANGUAGE}}
```

Templates also work in prompt files. Create a template file `.github/prompts/review-template.txt`:
```text
Please review this pull request for {{.GITHUB_REPOSITORY}}.

Repository: {{.GITHUB_REPOSITORY}}
Branch: {{.GITHUB_REF_NAME}}
Actor: {{.GITHUB_ACTOR}}
Model: {{.MODEL}}

Focus on:
- Code quality
- Security issues
- Performance implications
```

Then use it in your workflow:

```yaml
- name: Code Review with Template File
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    model: "gpt-4"
    input_prompt: ".github/prompts/review-template.txt"
```

Go templates also support conditionals:

```yaml
- name: Conditional Prompt
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    input_prompt: |
      Analyze {{.GITHUB_REPOSITORY}}
      {{if .DEBUG}}
      Enable verbose output and detailed explanations.
      {{else}}
      Provide a concise summary.
      {{end}}
```

Common variables you can use in templates:
- `{{.GITHUB_REPOSITORY}}` - Repository name (e.g., `owner/repo`)
- `{{.GITHUB_REF_NAME}}` - Branch or tag name
- `{{.GITHUB_ACTOR}}` - Username of the person who triggered the workflow
- `{{.GITHUB_SHA}}` - Commit SHA
- `{{.GITHUB_EVENT_NAME}}` - Event that triggered the workflow
- `{{.GITHUB_WORKFLOW}}` - Workflow name
- `{{.GITHUB_RUN_ID}}` - Unique workflow run ID
- `{{.GITHUB_RUN_NUMBER}}` - Unique workflow run number
- And any other environment variable available in your workflow
### Structured Output with Tool Schema

Use `tool_schema` to get structured JSON output from the LLM via function calling. This is useful when you need the LLM to return data in a specific format that can be easily parsed and used in subsequent workflow steps.
```yaml
- name: Extract City Information
  id: extract
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    input_prompt: "What is the capital of France?"
    tool_schema: |
      {
        "name": "get_city_info",
        "description": "Get information about a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "The name of the city"
            },
            "country": {
              "type": "string",
              "description": "The country where the city is located"
            }
          },
          "required": ["city", "country"]
        }
      }

- name: Use Extracted Data
  run: |
    echo "City: ${{ steps.extract.outputs.city }}"
    echo "Country: ${{ steps.extract.outputs.country }}"
```

A more involved example that returns a structured code review:

````yaml
- name: Structured Code Review
  id: review
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    model: "gpt-4"
    system_prompt: "You are an expert code reviewer."
    input_prompt: |
      Review this code:

      ```python
      def divide(a, b):
          return a / b
      ```
    tool_schema: |
      {
        "name": "code_review",
        "description": "Structured code review result",
        "parameters": {
          "type": "object",
          "properties": {
            "score": {
              "type": "integer",
              "description": "Code quality score from 1-10"
            },
            "issues": {
              "type": "array",
              "items": { "type": "string" },
              "description": "List of identified issues"
            },
            "suggestions": {
              "type": "array",
              "items": { "type": "string" },
              "description": "List of improvement suggestions"
            }
          },
          "required": ["score", "issues", "suggestions"]
        }
      }

- name: Process Review Results
  env:
    SCORE: ${{ steps.review.outputs.score }}
    ISSUES: ${{ steps.review.outputs.issues }}
    SUGGESTIONS: ${{ steps.review.outputs.suggestions }}
  run: |
    echo "Score: $SCORE"
    echo "Issues: $ISSUES"
    echo "Suggestions: $SUGGESTIONS"
````

**Why use environment variables instead of direct interpolation?**
- Automatic escaping: GitHub Actions automatically handles special characters in environment variables, avoiding shell parsing errors
- More secure: Prevents injection attacks and accidental command execution from LLM outputs
- Cleaner code: The workflow is easier to read and maintain
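For contrast, here is a sketch of the direct-interpolation pattern this advice avoids. Because the expression is substituted into the script text before the shell runs, quotes, backticks, or `$(...)` in the LLM output become shell syntax:

```yaml
# Risky pattern (illustrative only): the LLM output is pasted verbatim
# into the shell script, so special characters in the response can be
# misparsed or even executed by the shell.
- name: Process Review Results (avoid this)
  run: |
    echo "Issues: ${{ steps.review.outputs.issues }}"
```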
Store your schema in a file for reusability:

```yaml
- name: Analyze with Schema File
  id: analyze
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    input_prompt: "Analyze the sentiment of: I love this product!"
    tool_schema: ".github/schemas/sentiment-analysis.json"
```

Use Go templates in your schema for dynamic configuration:
```yaml
- name: Dynamic Schema
  uses: appleboy/LLM-action@v1
  env:
    INPUT_FUNCTION_NAME: analyze_text
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    input_prompt: "Analyze this text"
    tool_schema: |
      {
        "name": "{{.FUNCTION_NAME}}",
        "description": "Analyze text content",
        "parameters": {
          "type": "object",
          "properties": {
            "result": { "type": "string" }
          }
        }
      }
```

### Self-Hosted / Local LLM

```yaml
- name: Call Local LLM
  id: local_llm
  uses: appleboy/LLM-action@v1
  with:
    base_url: "http://localhost:8080/v1"
    api_key: "your-local-api-key"
    model: "llama2"
    skip_ssl_verify: "true"
    input_prompt: "Explain quantum computing in simple terms"
```

### Using with Azure OpenAI

Azure OpenAI Service requires a different URL format: you need to specify your resource name and deployment ID in the base URL.
```yaml
- name: Call Azure OpenAI
  id: azure_llm
  uses: appleboy/LLM-action@v1
  with:
    base_url: "https://{your-resource-name}.openai.azure.com/openai/deployments/{deployment-id}"
    api_key: ${{ secrets.AZURE_OPENAI_API_KEY }}
    model: "gpt-4" # This should match your deployment model
    system_prompt: "You are a helpful assistant"
    input_prompt: "Explain the benefits of cloud computing"
```

**Configuration Notes:**

- Replace `{your-resource-name}` with your Azure OpenAI resource name
- Replace `{deployment-id}` with your model deployment name
- The `model` parameter should match the model you deployed
- The API key can be found in the Azure Portal under your OpenAI resource's "Keys and Endpoint"

Example with all parameters:
```yaml
- name: Azure OpenAI Code Review
  id: azure_review
  uses: appleboy/LLM-action@v1
  with:
    base_url: "https://my-openai-resource.openai.azure.com/openai/deployments/gpt-4-deployment"
    api_key: ${{ secrets.AZURE_OPENAI_API_KEY }}
    model: "gpt-4"
    system_prompt: "You are an expert code reviewer"
    input_prompt: |
      Review this code for best practices:

      ${{ github.event.pull_request.body }}
    temperature: "0.3"
    max_tokens: "2000"
```

### Using Custom CA Certificate

For self-hosted services with self-signed certificates, you can provide a custom CA certificate. The `ca_cert` input supports three formats: certificate content, a file path, or a URL.
**Certificate content:**

```yaml
- name: Call LLM with CA Certificate Content
  uses: appleboy/LLM-action@v1
  with:
    base_url: "https://your-llm-server.local/v1"
    api_key: ${{ secrets.LLM_API_KEY }}
    ca_cert: |
      -----BEGIN CERTIFICATE-----
      MIIDxTCCAq2gAwIBAgIQAqx...
      -----END CERTIFICATE-----
    input_prompt: "Hello, world!"
```

**File path:**

```yaml
- name: Call LLM with CA Certificate File
  uses: appleboy/LLM-action@v1
  with:
    base_url: "https://your-llm-server.local/v1"
    api_key: ${{ secrets.LLM_API_KEY }}
    ca_cert: "/path/to/ca-cert.pem"
    input_prompt: "Hello, world!"
```

Or using the `file://` prefix:

```yaml
- name: Call LLM with CA Certificate File URI
  uses: appleboy/LLM-action@v1
  with:
    base_url: "https://your-llm-server.local/v1"
    api_key: ${{ secrets.LLM_API_KEY }}
    ca_cert: "file:///path/to/ca-cert.pem"
    input_prompt: "Hello, world!"
```

**Remote URL:**

```yaml
- name: Call LLM with CA Certificate from URL
  uses: appleboy/LLM-action@v1
  with:
    base_url: "https://your-llm-server.local/v1"
    api_key: ${{ secrets.LLM_API_KEY }}
    ca_cert: "https://your-server.com/ca-cert.pem"
    input_prompt: "Hello, world!"
```

### Using with Ollama

```yaml
- name: Call Ollama
  id: ollama
  uses: appleboy/LLM-action@v1
  with:
    base_url: "http://localhost:11434/v1"
    api_key: "ollama"
    model: "llama3"
    system_prompt: "You are a helpful assistant"
    input_prompt: "Write a haiku about programming"
```

### Chain Multiple LLM Calls

```yaml
- name: Generate Story
  id: generate
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    input_prompt: "Write a short story about a robot"
    max_tokens: "500"

- name: Translate Story
  id: translate
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    system_prompt: "You are a translator"
    input_prompt: |
      Translate the following text to Spanish:

      ${{ steps.generate.outputs.response }}

- name: Display Results
  run: |
    echo "Original Story:"
    echo "${{ steps.generate.outputs.response }}"
    echo ""
    echo "Translated Story:"
    echo "${{ steps.translate.outputs.response }}"
```

### Debug Mode

Enable debug mode to troubleshoot issues and inspect all parameters:
```yaml
- name: Call LLM with Debug
  id: llm_debug
  uses: appleboy/LLM-action@v1
  with:
    api_key: ${{ secrets.OPENAI_API_KEY }}
    model: "gpt-4"
    system_prompt: "You are a helpful assistant"
    input_prompt: "Explain how GitHub Actions work"
    temperature: "0.8"
    max_tokens: "1500"
    debug: true # Enable debug mode
```

**Debug Output Example:**

```text
=== Debug Mode: All Parameters ===
main.Config{
  BaseURL: "https://api.openai.com/v1",
  APIKey: "sk-ab****xyz9", // Masked for security
  Model: "gpt-4",
  SkipSSLVerify: false,
  SystemPrompt: "You are a helpful assistant",
  InputPrompt: "Explain how GitHub Actions work",
  Temperature: 0.8,
  MaxTokens: 1500,
  Debug: true
}
===================================

=== Debug Mode: Messages ===
[... message details ...]
============================
```

**Security Note:** When debug mode is enabled, the API key is automatically masked (only the first 4 and last 4 characters are shown) to prevent accidental exposure in logs.
## Supported Services

This action works with any OpenAI-compatible API, including:

- **OpenAI** - `https://api.openai.com/v1`
- **Azure OpenAI** - `https://{your-resource}.openai.azure.com/openai/deployments/{deployment-id}`
- **Ollama** - `http://localhost:11434/v1`
- **LocalAI** - `http://localhost:8080/v1`
- **LM Studio** - `http://localhost:1234/v1`
- **Jan** - `http://localhost:1337/v1`
- **vLLM** - Your vLLM server endpoint
- **Text Generation WebUI** - Your WebUI endpoint
- Any other OpenAI-compatible service
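For services without a fixed default endpoint, point `base_url` at wherever the server listens. As one illustration, a minimal sketch for vLLM, assuming the server runs on its default port 8000 (the model name below is a placeholder and must match the model actually being served):

```yaml
- name: Call vLLM
  uses: appleboy/LLM-action@v1
  with:
    base_url: "http://localhost:8000/v1"   # vLLM's default OpenAI-compatible port
    api_key: "unused"                      # vLLM only enforces a key when started with --api-key
    model: "your-served-model"             # placeholder: must match the served model name
    input_prompt: "Summarize the benefits of self-hosted inference"
```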
## Security Considerations

- Always use GitHub Secrets for API keys: `${{ secrets.YOUR_API_KEY }}`
- Only use `skip_ssl_verify: 'true'` for trusted local/internal services
- Be careful with sensitive data in prompts, as they will be sent to the LLM service
## License

MIT License - see the LICENSE file for details.
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.