One API key. All AI models. Save 15–50% vs official pricing.
Access Claude, GPT, Gemini, and free models through a single OpenAI-compatible endpoint.
Drop-in replacement for Anthropic, OpenAI, and Google APIs.
```shell
# That's it. Use any OpenAI-compatible SDK or tool.
curl https://texapi.dev/v1/chat/completions \
  -H "Authorization: Bearer tex-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-sonnet-4-5", "messages": [{"role": "user", "content": "Hello!"}]}'
```

- Sign up → texapi.dev (Google / GitHub / Discord). Get 2 free credits.
- Create API key → Dashboard → API Keys → Create.
- Use anywhere → set the base URL to https://texapi.dev/v1 in any OpenAI-compatible tool.
Claude Code:

```shell
export ANTHROPIC_BASE_URL=https://texapi.dev
export ANTHROPIC_API_KEY=tex-YOUR_KEY
claude
```

Codex CLI:

```shell
export OPENAI_BASE_URL=https://texapi.dev/v1
export OPENAI_API_KEY=tex-YOUR_KEY
codex
```

Gemini CLI:

```shell
export GEMINI_API_BASE=https://texapi.dev/v1
export GEMINI_API_KEY=tex-YOUR_KEY
gemini
```

OpenAI SDK (Python):

```python
from openai import OpenAI

client = OpenAI(
    api_key="tex-YOUR_KEY",
    base_url="https://texapi.dev/v1",
)
```
Also works with Cursor, Continue, Cline, Aider, LiteLLM, and any tool that accepts a custom OpenAI base URL.
| Model | Family | Input ($/1M) | Output ($/1M) | Official | Savings |
|---|---|---|---|---|---|
| claude-opus-4-7 | Claude | $4.25 | $21.25 | $5 / $25 | 15% off |
| claude-opus-4-6 | Claude | $4.25 | $21.25 | $5 / $25 | 15% off |
| claude-sonnet-4-6 | Claude | $2.55 | $12.75 | $3 / $15 | 15% off |
| claude-haiku-4-5 | Claude | $0.85 | $4.25 | $1 / $5 | 15% off |
| gpt-5.5 | GPT | $2.50 | $15.00 | $5 / $30 | 50% off |
| gpt-5.4 | GPT | $1.25 | $7.50 | $2.50 / $15 | 50% off |
| gpt-5.4-mini | GPT | $0.38 | $2.25 | $0.75 / $4.50 | 50% off |
| gpt-5-codex | GPT | $0.63 | $5.00 | $1.25 / $10 | 50% off |
| o4-mini | GPT | $0.55 | $2.20 | $1.10 / $4.40 | 50% off |
| gemini-2.5-pro | Gemini | $0.75 | $6.00 | $1.25 / $10 | 40% off |
| gpt-image-2 | Image | ~$0.012 / image | — | ~$0.02–$0.19 / image | ~40% off |
Prices in USD per 1M tokens. Official prices sourced from OpenAI, Anthropic, Google. Full model list at texapi.dev/api/models
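Per-request cost is just tokens × price ÷ 1M. A minimal helper as a sanity check on the table (prices hardcoded from a few rows above — confirm current values against texapi.dev/api/models before relying on them):

```python
# Per-1M-token (input, output) prices in USD, copied from the table above.
PRICES = {
    "claude-sonnet-4-6": (2.55, 12.75),
    "gpt-5.4": (1.25, 7.50),
    "gemini-2.5-pro": (0.75, 6.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at TexAPI rates."""
    inp, out = PRICES[model]
    return input_tokens / 1_000_000 * inp + output_tokens / 1_000_000 * out

# 10k prompt tokens + 2k completion tokens on claude-sonnet-4-6:
# 0.01 * 2.55 + 0.002 * 12.75 = $0.051
```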
| Model | Description | Daily Limit |
|---|---|---|
| gpt-oss-120b | 120B open-source model | 100 – 10,000 req/day |
| minimax-m2.5-free | MiniMax M2.5 | 100 – 10,000 req/day |
| nemotron-3-super-free | NVIDIA Nemotron 3 Super | 100 – 10,000 req/day |
| big-pickle | Big Pickler | 100 – 10,000 req/day |

Free models require one successful payment. Limit scales with plan: No plan (100) → Starter (300) → Builder (1,000) → Pro (3,000) → Business (10,000).
| Endpoint | Format | Use with |
|---|---|---|
| POST /v1/chat/completions | OpenAI Chat | All models, all tools |
| POST /v1/messages | Anthropic Messages | Claude Code, native Anthropic SDK |
| POST /v1/responses | OpenAI Responses | Codex CLI, Responses API |
| POST /v1/images/generations | OpenAI Images | gpt-image-2 |
| GET /v1/models | OpenAI Models | List available models |
| Amount | Credits | ~USD |
|---|---|---|
| 50,000₫ | 90 | $2 |
| 100,000₫ | 180 | $4 |
| 200,000₫ | 360 | $8 |
| 500,000₫ | 900 | $19 |
| 1,000,000₫ | 1,800 | $38 |
Payment via VietQR (instant bank transfer). International payments coming soon.
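Every tier in the table converts at the same flat rate: 1.8 credits per 1,000₫ (the ~USD column is approximate). A one-line sketch of the conversion:

```python
def credits_for_topup(amount_vnd: int) -> float:
    """Credits granted for a VietQR top-up, at the flat rate implied by
    the table above: 1.8 credits per 1,000 VND (written as 9/5000 so
    the arithmetic stays exact)."""
    return amount_vnd * 9 / 5000

# 50,000 VND -> 90 credits; 1,000,000 VND -> 1,800 credits
```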
| | Starter | Builder | Pro | Business |
|---|---|---|---|---|
| Monthly | 299,000₫ | 749,000₫ | 2,090,000₫ | 5,290,000₫ |
| Credits | 550 | 1,400 | 3,900 | 10,000 |
| RPM | 60 | 240 | 1,000 | 3,000 |
| API Keys | 3 | 10 | 25 | 100 |
| Free model/day | 300 | 1,000 | 3,000 | 10,000 |
```
┌─────────────────────────────────────────────────────────────┐
│                    Your App / IDE / CLI                      │
│   (Claude Code, Codex, Gemini CLI, Cursor, SDK, ...)         │
└─────────────────────┬───────────────────────────────────────┘
                      │ Authorization: Bearer tex-...
                      ▼
┌─────────────────────────────────────────────────────────────┐
│              TexAPI Gateway (https://texapi.dev/v1)          │
│                                                              │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌───────────┐     │
│  │ Auth &   │→ │ Rate     │→ │ Credit   │→ │ Smart     │     │
│  │ Key Check│  │ Limiting │  │ Check    │  │ Routing   │     │
│  └──────────┘  └──────────┘  └──────────┘  └─────┬─────┘     │
└───────────────────────────────────────────────────┼─────────┘
                              │ Priority-based fallback │
                  ┌───────────┼───────────┬─────────────────┘
                  ▼           ▼           ▼
┌────────────┐  ┌────────┐  ┌────────────┐  ┌────────────┐
│  Claude    │  │  GPT   │  │  Gemini    │  │ Free Pool  │
│ (Anthropic)│  │(OpenAI)│  │ (Google)   │  │ (OSS 120B) │
└────────────┘  └────────┘  └────────────┘  └────────────┘
```
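The priority-based fallback in the diagram boils down to: try each upstream in priority order and move on when one fails. A simplified stand-in for the real routing layer (provider names and errors here are illustrative, not TexAPI internals):

```python
class ProviderError(Exception):
    """Raised when an upstream provider rejects or times out."""

def route(request, providers):
    """Try upstream providers in priority order; return the first success."""
    last_err = None
    for call in providers:
        try:
            return call(request)
        except ProviderError as err:
            last_err = err  # provider down or rate-limited: fall through
    raise ProviderError("all upstream providers failed") from last_err

# Usage: the primary is down, so the request lands on the fallback.
def anthropic(req):
    raise ProviderError("503 from Anthropic")

def openai_upstream(req):
    return {"provider": "openai", "echo": req}

print(route("hello", [anthropic, openai_upstream]))
# -> {'provider': 'openai', 'echo': 'hello'}
```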
Key features:
- 🔀 Smart routing — Priority-based with automatic fallback. If one provider is down, your request goes to the next.
- 🔄 Format translation — Send OpenAI format, get Anthropic response (or vice versa). TexAPI handles conversion.
- 📊 Real-time analytics — Usage per model, per key, per day. Costs, latency, error rates in your dashboard.
- 🔒 Security — Keys hashed (HMAC-SHA256), upstream creds encrypted (AES-256-GCM), zero content storage.
- 💰 Spending controls — Daily cap, per-key monthly limits, real-time balance tracking.
- 🤝 Referral program — Earn 5% commission on every top-up from users you invite. Permanent.
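The format-translation feature amounts to remapping request fields between the two schemas. A toy sketch of the OpenAI → Anthropic direction (the real translation also covers tool calls, images, and streaming deltas; this only hoists the system prompt, which Anthropic's Messages API takes as a top-level field, and fills in the required max_tokens):

```python
def openai_to_anthropic(body: dict) -> dict:
    """Convert an OpenAI chat.completions body to Anthropic Messages shape.

    Toy sketch: Anthropic puts the system prompt in a top-level `system`
    field and requires `max_tokens`; OpenAI keeps system in the messages list.
    """
    system = [m["content"] for m in body["messages"] if m["role"] == "system"]
    out = {
        "model": body["model"],
        "max_tokens": body.get("max_tokens", 1024),
        "messages": [m for m in body["messages"] if m["role"] != "system"],
    }
    if system:
        out["system"] = "\n".join(system)
    return out

req = {
    "model": "claude-sonnet-4-5",
    "messages": [
        {"role": "system", "content": "Be terse."},
        {"role": "user", "content": "Hello!"},
    ],
}
```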
Python — Streaming

```python
from openai import OpenAI

client = OpenAI(api_key="tex-YOUR_KEY", base_url="https://texapi.dev/v1")

stream = client.chat.completions.create(
    model="claude-sonnet-4-5",
    messages=[{"role": "user", "content": "Write a Python quicksort"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```

Node.js / TypeScript — Streaming
```typescript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: "tex-YOUR_KEY", baseURL: "https://texapi.dev/v1" });

const stream = await client.chat.completions.create({
  model: "claude-sonnet-4-5",
  messages: [{ role: "user", content: "Explain async/await in TypeScript" }],
  stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```

cURL — Non-streaming
```shell
curl https://texapi.dev/v1/chat/completions \
  -H "Authorization: Bearer tex-YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 100
  }'
```

Anthropic Messages API (native)
```shell
curl https://texapi.dev/v1/messages \
  -H "Authorization: Bearer tex-YOUR_KEY" \
  -H "Content-Type: application/json" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "claude-sonnet-4-5",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Explain quantum computing"}]
  }'
```

Is my data safe?
Yes. TexAPI does not store any request or response content. Only metadata (token counts, latency, model used) is logged for billing and analytics purposes.
Can I use this in production?
Absolutely. The Business plan supports 3,000 RPM, 100 API keys, and 99.9%+ uptime with multi-provider fallback.
What payment methods are accepted?
VietQR bank transfer (instant) — works with all Vietnamese banks. International payment methods coming soon.
Do credits expire?
Top-up credits never expire. Subscription credits expire at the end of each billing period.
What happens if a provider goes down?
TexAPI automatically routes your request to the next available provider. You don't need to change anything.
| Doc | Description |
|---|---|
| API Reference | Full endpoint documentation |
| Setup Claude Code | Use TexAPI with Claude Code |
| Setup Codex CLI | Use TexAPI with OpenAI Codex |
| Setup Gemini CLI | Use TexAPI with Gemini CLI |
| Pricing Comparison | Detailed pricing vs official APIs |
🌐 texapi.dev · 💬 Discord · 📧 support@texapi.dev
If TexAPI saves you money, consider giving this repo a ⭐
