This repository was archived by the owner on May 3, 2026. It is now read-only.


Flollama API

Public AI inference API providing streaming conversational responses (POST) and a plain-text utility response (GET) through a single endpoint.


Overview

Flollama API is a lightweight HTTP service designed to integrate AI capabilities into applications with minimal complexity.

It exposes a single endpoint that supports both:

  • Conversational AI (POST requests)
  • Utility responses (GET requests)

Endpoint

/chat

GET /chat

Returns a programming joke in plain text.

Example

curl https://flollama.in/api/chat

Response

  • Content-Type: text/plain
  • Status: 200 OK

POST /chat

Generates AI responses based on a sequence of messages.

Request Body

{
  "messages": [
    { "role": "user", "content": "Explain recursion simply" }
  ]
}

Response

  • Content-Type: text/plain
  • Streaming (chunked response)
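
A minimal Python sketch of a streaming client for the POST endpoint, using only the standard library. The function names (build_payload, stream_chat) and the chunk size are illustrative choices, not part of the API; only the URL and the request body shape come from this README.

```python
import json
import urllib.request

API_URL = "https://flollama.in/api/chat"  # endpoint documented above


def build_payload(messages):
    """Serialize a message list into the JSON request body shown above."""
    return json.dumps({"messages": messages}).encode("utf-8")


def stream_chat(messages, chunk_size=1024):
    """POST the messages and yield plain-text chunks as they arrive.

    The response is consumed incrementally, which is how a chunked
    streaming body should be handled (see Usage Notes).
    """
    req = urllib.request.Request(
        API_URL,
        data=build_payload(messages),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            yield chunk.decode("utf-8", errors="replace")
```

Usage would look like iterating the generator and printing each chunk as it arrives, e.g. `for chunk in stream_chat([{"role": "user", "content": "Explain recursion simply"}]): print(chunk, end="")`.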

Features

  • Single endpoint API design
  • Streaming AI responses
  • Simple HTTP interface
  • Fast and lightweight integration
  • Works across web, mobile, and backend environments

Usage Notes

  • Intended for development and non-commercial use
  • Responses are generated dynamically and may vary
  • Clients must consume POST responses incrementally as chunks arrive rather than waiting for a single complete body

License

This project is licensed under the Unlicense.

See the LICENSE file for full terms.

About

Flollama API is a public AI inference API providing streaming conversational responses powered by Gemini.
