- Use LLM-Axe to search for the latest information from the internet about your topic.
- Pass the retrieved data to Llama 3 (via Ollama) to generate a contextual blog-style summary.
- Display both the fetched data and Llama 3’s insights in the Chainlit chat.
- After each user message, the app acts as your personal agent to capture key trends and summarize the topic as a short blog entry.
- Chainlit Chat Interface: Conversational UX for entering topics and viewing results.
- Latest Web Retrieval (LLM-Axe): Fetches up-to-date info for any topic.
- Blog Summary per Message: Converts the latest info into a concise blog-style summary after each conversation turn.
- Personal Trend Agent: Captures key trends and aspects of topics you care about, so you can revisit and compare over time.
- Configurable Llama Model: Choose an Ollama Llama model via `LLAMA3_MODEL_NAME`.
- Python 3.9
- Chainlit
- LLM-Axe
- Ollama
`LLAMA3_MODEL_NAME` (env var): Defaults to `llama3`. Set it to another available Ollama model tag if desired, e.g. `llama3.1`.
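For reference, a minimal sketch of how this variable can be resolved with its default (`src/commonconst.py` reads it in the actual project; the constant name below is illustrative):

```python
import os

# Resolve the Ollama model tag, falling back to "llama3" when the
# environment variable is not set.
LLAMA3_MODEL_NAME = os.getenv("LLAMA3_MODEL_NAME", "llama3")
```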
- Clone the repository:
  `git clone https://github.com/yourusername/branchname.git`
- Create a virtual environment and activate it:
  `python -m venv venv`
  `source venv/bin/activate`
- Install the required dependencies:
  `pip install -r requirements.txt`
- Ensure Chainlit, LLM-Axe, and Ollama are installed:
  `pip install chainlit llm-axe ollama`
- Pull a Llama model for Ollama:
  `ollama pull llama3`
  Optionally: `export LLAMA3_MODEL_NAME=llama3.1 && ollama pull llama3.1`
- Run the ChatWindow: To launch the Chainlit-powered chat window, run:
  `chainlit run app.py`
- Access the Chat Interface: After running the app, open your browser and go to `http://localhost:8000` to interact with the Copilot ChatWindow.
Interact with the Chat:
- Enter any query (e.g., "latest in AI", "iPhone updates") in the chat window.
- The app will:
- Fetch the latest updates using LLM-Axe.
- Pass the information to Llama 3, which generates a concise, blog-style summary.
- Display both the raw data and the generated blog summary in the chat.
- This happens after each message, so you continuously capture the topic’s key trends.
The `OnlineAgent` from LLM-Axe is used to search the internet for updated information related to the user’s query.

```python
searcher = OnlineAgent(llm=llm)
latest_info = searcher.search(search_prompt)
```

The output from LLM-Axe is passed to Llama 3 via Ollama to generate a contextual blog summary.

```python
blog_response = ollama.chat(model="llama3", messages=[{"role": "user", "content": response_prompt}])
```

Both the raw data from LLM-Axe and the insights generated by Llama 3 are displayed in the Chainlit chat window.
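Put together, the per-message flow could look roughly like the sketch below. It assumes llm-axe's `OllamaChat`/`OnlineAgent` classes and the `ollama` Python client as used above; the prompt strings and the `MODEL_NAME` constant are illustrative stand-ins for the repository's actual prompts and configuration.

```python
import chainlit as cl
import ollama
from llm_axe import OllamaChat, OnlineAgent

MODEL_NAME = "llama3"  # or resolved from LLAMA3_MODEL_NAME

@cl.on_message
async def on_message(message: cl.Message):
    # 1. Retrieve up-to-date information for the user's topic via LLM-Axe.
    llm = OllamaChat(model=MODEL_NAME)
    searcher = OnlineAgent(llm=llm)
    latest_info = searcher.search(f"Latest information about: {message.content}")

    # 2. Ask Llama 3 (via Ollama) for a concise blog-style summary of that info.
    response_prompt = f"Write a concise blog-style summary of the following:\n{latest_info}"
    blog_response = ollama.chat(
        model=MODEL_NAME,
        messages=[{"role": "user", "content": response_prompt}],
    )

    # 3. Show both the raw retrieval and the generated summary in the chat.
    await cl.Message(content=f"Latest info:\n{latest_info}").send()
    await cl.Message(content=blog_response["message"]["content"]).send()
```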
- `app.py`: Chainlit entrypoint that handles each message and orchestrates retrieval and summary.
- `src/llm_axe.py`: Runs the web retrieval step and fallback function-calling if needed.
- `src/llama3.py`: Calls Ollama to generate the blog-style summary; falls back to a simulated response if the model isn’t available.
- `src/commonconst.py`: Prompts, constants, and model configuration (reads `LLAMA3_MODEL_NAME`).
- `src/agents.py`: Local stubs for `OnlineAgent`, `OllamaChat`, and `FunctionCaller` used when the `llm_axe` package isn’t present.
- `.chainlit/config.toml`: Chainlit UI and runtime configuration.
- `chainlit.md`: Optional welcome message for the UI.
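The stub fallback mentioned for `src/agents.py` typically follows the optional-import pattern; a minimal sketch, assuming the stubs expose the same class names as llm-axe and that `src` is importable as a package:

```python
# Use the real llm-axe classes when the package is installed;
# otherwise fall back to the local stand-ins in src/agents.py.
try:
    from llm_axe import OnlineAgent, OllamaChat, FunctionCaller
except ImportError:
    from src.agents import OnlineAgent, OllamaChat, FunctionCaller
```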