Llama3 Copilot ChatWindow with LLM-Axe Integration

Overview

On each user message, the app:

  1. Uses LLM-Axe to search the internet for the latest information about your topic.
  2. Passes the retrieved data to Llama 3 (via Ollama) to generate a contextual blog-style summary.
  3. Displays both the fetched data and Llama 3’s insights in the Chainlit chat.
  4. Acts as your personal agent, capturing key trends and recording the topic as a short blog entry over time.

Features

  1. Chainlit Chat Interface: Conversational UX for entering topics and viewing results.
  2. Latest Web Retrieval (LLM-Axe): Fetches up-to-date info for any topic.
  3. Blog Summary per Message: Converts the latest info into a concise blog-style summary after each conversation turn.
  4. Personal Trend Agent: Captures key trends and aspects of topics you care about, so you can revisit and compare over time.
  5. Configurable Llama Model: Choose an Ollama Llama model via LLAMA3_MODEL_NAME.

Requirements

  • Python 3.9
  • Chainlit
  • LLM-Axe
  • Ollama

Configuration

  • LLAMA3_MODEL_NAME (env var): Defaults to llama3. Set it to another available Ollama model tag if desired, e.g. llama3.1.
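
  For reference, a minimal sketch of how src/commonconst.py can resolve the tag (the exact variable name there is an assumption):

    import os

    # Use the LLAMA3_MODEL_NAME env var, falling back to "llama3"
    LLAMA3_MODEL_NAME = os.getenv("LLAMA3_MODEL_NAME", "llama3")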

Installation

  1. Clone the repository:

    git clone https://github.com/ZhaoJackson/News_Operator.git
    cd News_Operator
  2. Create a virtual environment:

    python -m venv venv
    source venv/bin/activate
  3. Install required dependencies:

    pip install -r requirements.txt
  4. Ensure Chainlit, LLM-Axe, and Ollama are installed:

    pip install chainlit llm-axe ollama
  5. Pull a Llama model for Ollama:

    ollama pull llama3
    # optionally
    # export LLAMA3_MODEL_NAME=llama3.1 && ollama pull llama3.1
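  6. Verify the model is available locally:

    ollama list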

Setup and Usage

  1. Run the ChatWindow: Launch the Chainlit-powered chat window with:

    chainlit run app.py
  2. Access the Chat Interface: After running the app, open your browser and go to http://localhost:8000 to interact with the Copilot ChatWindow.

  3. Interact with the Chat:

    • Enter any query (e.g., "latest in AI", "iPhone updates") in the chat window.
    • The app will:
      1. Fetch the latest updates using LLM-Axe.
      2. Pass the information to Llama 3, which generates a concise, blog-style summary.
      3. Display both the raw data and the generated blog summary in the chat.
    • This happens after each message, so you continuously capture the topic’s key trends.
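
  Tip: during development, chainlit run app.py -w auto-reloads the app when source files change.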

Code Explanation

1. LLM-Axe Integration:

The OnlineAgent from LLM-Axe is used to search the internet for updated information related to the user’s query.

from llm_axe import OnlineAgent, OllamaChat  # src/agents.py provides local stubs if llm_axe is missing

llm = OllamaChat(model=LLAMA3_MODEL_NAME)  # model tag read in src/commonconst.py
searcher = OnlineAgent(llm=llm)
latest_info = searcher.search(search_prompt)

2. Llama3 Integration (Ollama):

The output from LLM-Axe is passed to Llama 3 via Ollama to generate a contextual blog summary.

import ollama

blog_response = ollama.chat(model=LLAMA3_MODEL_NAME, messages=[{"role": "user", "content": response_prompt}])
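
The summary text lives in the returned message (the standard ollama-python response shape):

blog_text = blog_response["message"]["content"]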

3. Display to User:

Both the raw data from LLM-Axe and the insights generated by Llama3 are displayed in the Chainlit chat window.
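
In app.py this amounts to a pair of sends inside the async @cl.on_message handler (a sketch reusing the variable names from the snippets above, not the repo’s exact code):

import chainlit as cl

await cl.Message(content=f"Latest info:\n{latest_info}").send()
await cl.Message(content=blog_text).send()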


Components and Configuration

  • app.py: Chainlit entrypoint that handles each message and orchestrates retrieval and summary.
  • src/llm_axe.py: Runs the web retrieval step and fallback function-calling if needed.
  • src/llama3.py: Calls Ollama to generate the blog-style summary; falls back to a simulated response if the model isn’t available.
  • src/commonconst.py: Prompts, constants, and model configuration (reads LLAMA3_MODEL_NAME).
  • src/agents.py: Local stubs for OnlineAgent, OllamaChat, and FunctionCaller used when the llm_axe package isn’t present.
  • .chainlit/config.toml: Chainlit UI and runtime configuration.
  • chainlit.md: Optional welcome message for the UI.
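
Putting the pieces together, the app.py handler plausibly has this shape (a minimal sketch; fetch_latest_info and generate_blog_summary are illustrative names, not necessarily the actual helpers in src/):

import chainlit as cl
from src.llm_axe import fetch_latest_info        # hypothetical helper name
from src.llama3 import generate_blog_summary     # hypothetical helper name

@cl.on_message
async def on_message(message: cl.Message):
    # Retrieve the latest web info, summarize it, then display both results.
    latest_info = fetch_latest_info(message.content)
    blog_text = generate_blog_summary(latest_info)
    await cl.Message(content=f"Latest info:\n{latest_info}").send()
    await cl.Message(content=blog_text).send()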
