procmaillm is a lightweight, zero-dependency Go utility designed to integrate Large Language Models (LLMs) into legacy Unix mail pipelines.
Acting as a standard Unix pipe, it reads incoming emails from stdin via Procmail or Maildrop, processes the content through an OpenAI-compatible API, and dispatches a reply via a local SMTP server.
- Zero External Dependencies: Built entirely using Go's standard library (`net/mail`, `net/smtp`). No `go.mod` bloat.
- Universal API Support: Works with any endpoint compatible with the OpenAI Chat Completions API specification.
- Recursive MIME Parsing: Natively handles `multipart/mixed` and `multipart/alternative` email structures to extract plain-text payloads.
- Conversation Threading: Correctly sets the `In-Reply-To` and `References` headers to maintain email threading in client views.
- Smart Loop Detection: Automatically ignores auto-replies and emails sent from the bot's own address to prevent infinite loops.
- MTA Agnostic: Compatible with Procmail, Maildrop, and other MDA piping tools.
- Go 1.18+ installed on the host machine.
- A working Mail Transfer Agent (MTA) such as Postfix, Sendmail, or Exim running locally (default: `127.0.0.1:25`).
- Procmail or Maildrop installed and configured.
```sh
git clone https://github.com/eja/procmaillm
cd procmaillm
go build -ldflags="-s -w" -o procmaillm main.go
```

Move the binary to a location in your user's `PATH`:

```sh
mkdir -p $HOME/bin
mv procmaillm $HOME/bin/
chmod +x $HOME/bin/procmaillm
```

procmaillm is designed to be executed by a Message Delivery Agent (MDA). It accepts configuration via command-line flags.
| Flag | Description | Default |
|---|---|---|
| `-key` | The API key for the LLM provider. | Empty |
| `-url` | The API endpoint URL. | `https://api.openai.com/v1/chat/completions` |
| `-model` | The model identifier to use for inference. | `gpt-4o` |
| `-from` | The email address the bot should reply as. | `bot@yourdomain.com` |
| `-smtp` | The address of the local SMTP server. | `127.0.0.1:25` |
| `-log` | Enable logging. | `false` |
| `-log-file` | Path to log file. If `-log` is true but this is empty, logs to stdout. | Empty |
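The request sent to the endpoint named by `-url` follows the public Chat Completions wire format. The sketch below shows how such a request can be built with stdlib Go; the function name and the system prompt are illustrative assumptions, not procmaillm's actual internals.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatRequest mirrors the minimal Chat Completions payload accepted by
// any OpenAI-compatible endpoint.
type chatRequest struct {
	Model    string    `json:"model"`
	Messages []message `json:"messages"`
}

type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// NewChatRequest builds (but does not send) the HTTP request; the
// system prompt here is a placeholder.
func NewChatRequest(url, key, model, emailBody string) (*http.Request, error) {
	payload, err := json.Marshal(chatRequest{
		Model: model,
		Messages: []message{
			{Role: "system", Content: "You answer incoming emails concisely."},
			{Role: "user", Content: emailBody},
		},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", url, bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("Authorization", "Bearer "+key)
	return req, nil
}

func main() {
	req, _ := NewChatRequest("https://api.openai.com/v1/chat/completions",
		"sk-test", "gpt-4o", "What is procmail?")
	fmt.Println(req.Method, req.URL, req.Header.Get("Content-Type"))
}
```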
1. Basic OpenAI Integration

```procmail
:0
* ^Subject:.*(Question|Help)
| $HOME/bin/procmaillm \
  -key "sk-proj-..." \
  -model "gpt-4o" \
  -from "assistant@example.com"
```

2. Groq Integration with Logging

```procmail
:0
* ^Subject:.*(Urgent|Bot)
| $HOME/bin/procmaillm \
  -url "https://api.groq.com/openai/v1/chat/completions" \
  -key "gsk_..." \
  -model "llama-3.3-70b-versatile" \
  -from "bot@example.com" \
  -log -log-file "/tmp/procmaillm.log"
```
If you use Maildrop (common with Courier, Postfix, and virtual-user setups), use the `to "| command"` syntax inside your filter file.
```
# ~/.mailfilter
if (/^Subject:.*(Help|Support)/)
{
  to "| $HOME/bin/procmaillm \
    -key 'sk-proj-...' \
    -model 'gpt-4o' \
    -from 'assistant@example.com'"
}

if (/^Subject:.*LocalBot/)
{
  to "| $HOME/bin/procmaillm \
    -url 'http://localhost:11434/v1/chat/completions' \
    -key 'ollama' \
    -model 'llama3' \
    -from 'ai@local.lan'"
}
```
procmaillm contains internal logic to prevent infinite email loops. It will automatically exit without replying if:
- The sender matches the configured `-from` address.
- The incoming email contains `Auto-Submitted` or `X-Auto-Response-Suppress` headers.
Optimization Tip: While the binary handles this safely, it is still recommended to filter out the bot's own email address in your Procmail/Maildrop config to save system resources by preventing the process from starting at all.
Procmail:

```procmail
:0
* !^From:.*bot@yourdomain.com
* ^Subject:.*(Help)
| $HOME/bin/procmaillm ...
```
Maildrop:

```
if (/^Subject:.*(Help)/ && !/^From:.*bot@yourdomain.com/)
{
  to "| $HOME/bin/procmaillm ..."
}
```