This document provides instructions for testing the UI and Chat API of the Semem system.
Before testing, make sure you have:
- Environment variables or a config file properly set up
- A SPARQL endpoint running (Apache Fuseki recommended)
- An LLM service available (Claude API access, or Ollama with the required models)
Start the UI server:

```bash
# Start with default configuration
node ui-server.js

# Or with a custom configuration file
node ui-server.js config.json
```

You can test the API endpoints directly using the provided test script:
```bash
# Make sure the script is executable
chmod +x test-chat-api.js

# Run the test script
./test-chat-api.js
```

This script will:
- Check the API health
- Test the standard chat endpoint
- Test chat with search context interjection
- Test the streaming chat endpoint
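The script's checks can also be reproduced by hand. The sketch below builds a request for the standard chat endpoint; the `/api/chat` path and body fields (`message`, `useSearchContext`) are assumptions based on this guide, so check test-chat-api.js for the actual contract:

```javascript
// Build a request for the (assumed) standard chat endpoint.
// Path and field names are hypothetical; verify against test-chat-api.js.
function buildChatRequest(message, useSearchContext = false) {
  return {
    url: 'http://localhost:4100/api/chat', // assumed path on the UI server port
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message, useSearchContext }),
    },
  };
}

const req = buildChatRequest('Hello, Semem!', true);
console.log(req.options.body);

// To actually send it (the UI server must be running):
// fetch(req.url, req.options).then(r => r.json()).then(console.log);
```

Setting `useSearchContext` to `true` mirrors the search context interjection test above.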
To test manually in the browser:
- Open your browser and navigate to http://localhost:4100
- Navigate to the "Chat" tab
- Enter a message and press Send
For more advanced testing in the browser, open the developer console and include our test script:
```javascript
// Load the test script
const script = document.createElement('script');
script.src = 'test-chat.js';
document.head.appendChild(script);

// After it loads, run the test
setTimeout(() => testChat(), 500);
```

This will:
- Add the search context checkbox to the UI
- Submit a test message that will use search context interjection
- Display the results including any sources found
The system will try to use LLM providers in this order:
- Claude via hyperdata-clients (requires CLAUDE_API_KEY in environment)
- Ollama (requires local Ollama server running)
- Claude via direct API (requires CLAUDE_API_KEY in environment)
- OpenAI (not fully implemented)
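The priority order above amounts to a first-available selection. The sketch below illustrates that logic only; the provider names and availability checks are assumptions for illustration, not the server's actual implementation:

```javascript
// Illustrative sketch of first-available provider selection.
// Names and checks are hypothetical; the real logic lives in the server.
const providers = [
  { name: 'claude-hyperdata', available: () => !!process.env.CLAUDE_API_KEY },
  { name: 'ollama', available: () => true }, // real check would ping the local Ollama server
  { name: 'claude-direct', available: () => !!process.env.CLAUDE_API_KEY },
  { name: 'openai', available: () => false }, // not fully implemented
];

function pickProvider() {
  for (const p of providers) {
    if (p.available()) return p.name;
  }
  return null;
}

console.log(pickProvider());
```

With no `CLAUDE_API_KEY` set, this falls through to Ollama, matching the order described above.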
Copy config.sample.json to config.json and modify it to customize the system, including:
- SPARQL endpoints to use (with fallback)
- LLM providers and their priorities
- Port and graph name settings
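A hypothetical sketch of how such a config.json might cover those settings follows; every key name here is an assumption, so consult config.sample.json for the real schema:

```json
{
  "port": 4100,
  "graphName": "http://example.org/semem",
  "sparqlEndpoints": [
    { "query": "http://localhost:3030/semem/query", "update": "http://localhost:3030/semem/update" }
  ],
  "llmProviders": [
    { "type": "claude", "priority": 1 },
    { "type": "ollama", "priority": 2 }
  ]
}
```

Listing more than one SPARQL endpoint would supply the fallback behavior mentioned above.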
If tests fail, check:
- Environment variables (especially API keys)
- SPARQL endpoint availability and connection
- LLM service availability and connection
- Server logs for detailed error messages
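As a quick first check on the environment, a small Node one-liner can report which provider settings are visible to the server process. The variable names below are assumptions based on the providers listed above:

```javascript
// Report which LLM-provider environment variables are set.
// Variable names are assumptions; adjust to match your setup.
const checks = [
  ['CLAUDE_API_KEY', 'Claude (hyperdata-clients or direct API)'],
  ['OLLAMA_HOST', 'Ollama (often defaults to localhost if unset)'],
];

for (const [name, what] of checks) {
  console.log(`${name}: ${process.env[name] ? 'set' : 'missing'} (${what})`);
}
```

Run it with `node` from the same shell you use to start ui-server.js, so it sees the same environment.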