Because, let's be honest, this is what Rabbit should've made
This repository explores the true potential of Large Action Models (LAMs), unlike the recent... well, let's just say the Rabbit R1 wasn't exactly hopping with functionality.
Imagine a LAM that's less like a glorified Alexa in a bunny suit and more like your personal robot butler. We're talking:
- 🍔 Actually getting you food - Real DoorDash integration
- 🚗 Actually getting you a ride - Real Uber integration
- 🏨 Actually booking your travel - Real Expedia integration
- 💰 Actually managing your finances - Real Plaid integration
- 🧠 Actually understanding context - Advanced AI with emotion recognition
- 🔒 Actually protecting your privacy - Homomorphic encryption for tasks
But if I'm being honest, LAM should really be called ALLM or LLM-A, for Actionable LLM, because that's essentially what it is, at least as it's being presented and explored here.
The R1 sparked a lot of discussion, but some might say it was more like a baby bunny learning to walk: a bit wobbly and unsure of itself. Here's why the R1 might not be the LAM champion:
- Limited Action Moves: More like a "Slightly More Animated Paperweight" Model
- Privacy Concerns: Is the R1 phoning home a little too much? Maybe it just misses its cardboard box origins
- Too Ambitious: The tests yielded interesting findings, but for a mass-market product, working around CAPTCHAs shouldn't be an afterthought
Which brings us to why this repository exists:
- To dream about what LAMs can truly do
- To champion responsible AI development, because with great power comes great responsibility, even for bunnies
- To (hopefully) temper some expectations regarding these AI hardware assistants
- To provide a working example of what's possible with current technology
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│    Main Entry    │     │   Agent Layer    │     │     Core LAM     │
│    (main.py)     │────▶│    (agent.py)    │────▶│     (LAM.py)     │
└──────────────────┘     └──────────────────┘     └──────────────────┘
                                  │                        │
                                  ▼                        ▼
                         ┌──────────────────┐     ┌──────────────────┐
                         │    API Layer     │     │    AI Models     │
                         │    (DoorDash,    │     │     (BLOOMZ,     │
                         │    Uber, etc.)   │     │     Whisper,     │
                         └──────────────────┘     │   EmoRoBERTa)    │
                                                  └──────────────────┘
- Language Generation: BLOOMZ model for intelligent responses
- Speech Recognition: Whisper model for audio transcription
- Emotion Recognition: EmoRoBERTa for sentiment analysis
- Text-to-Speech: Natural voice output
- Neural Networks: BindsNET with Izhikevich neurons
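For a feel of how this model stack could be wired together, here is a minimal sketch using Hugging Face pipelines. The checkpoint names are illustrative assumptions, not necessarily the exact models loaded in LAM.py:

```python
# Illustrative sketch only; the actual loading code lives in LAM.py and may differ.
from transformers import pipeline

# Language generation with a BLOOMZ checkpoint (small variant shown for brevity)
text_gen = pipeline("text-generation", model="bigscience/bloomz-560m")

# Speech recognition with Whisper
speech = pipeline("automatic-speech-recognition", model="openai/whisper-base")

# Emotion recognition with an EmoRoBERTa-style classifier
# (the original EmoRoBERTa checkpoint ships TensorFlow weights, so TF may be needed)
emotion = pipeline("text-classification", model="arpanghoshal/EmoRoBERTa")

print(text_gen("What should I do today?", max_new_tokens=40)[0]["generated_text"])
print(emotion("I'm happy!")[0])  # e.g. {'label': 'joy', 'score': ...}
```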
- Priority-based scheduling with intelligent urgency detection
- Dependency tracking for complex workflows
- Encrypted storage using homomorphic encryption
- Smart reminders with time-based notifications
- Context awareness for location and time-based decisions
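As a rough illustration of what priority-based scheduling with due times can look like, here is a standalone sketch built on a heap; it is not the actual implementation in agent.py:

```python
# Standalone sketch of priority scheduling; agent.py has its own implementation.
import heapq
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass(order=True)
class Task:
    priority: int                      # 1 = most urgent
    due_time: datetime                 # ties broken by earliest due time
    description: str = field(compare=False)

queue = []
heapq.heappush(queue, Task(2, datetime.now() + timedelta(days=1), "Buy milk"))
heapq.heappush(queue, Task(1, datetime.now() + timedelta(hours=2), "Meeting at 3 PM"))

next_task = heapq.heappop(queue)       # most urgent, earliest-due task first
print(f"Next up: {next_task.description} (due {next_task.due_time:%H:%M})")
```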
- Food Delivery: DoorDash API integration
- Transportation: Uber API for ride services
- Travel Planning: Expedia API for hotels
- Financial Services: Plaid API for banking
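Each integration is a thin wrapper around the provider's REST API, with credentials read from the environment. The sketch below shows the general shape only; the endpoint URL and payload are placeholders, not the real DoorDash API:

```python
# Shape of an API-layer call; the URL and payload are placeholders, not the
# actual DoorDash endpoints used by the project.
import os
import requests

def order_food(city: str) -> dict:
    api_key = os.environ["DOORDASH_API_KEY"]            # loaded from .env
    response = requests.post(
        "https://api.example.com/v1/orders",             # placeholder endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={"city": city},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```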
- Homomorphic Encryption: Tasks encrypted at rest
- Secure API Handling: Environment-based configuration
- Privacy-First Design: Local processing where possible
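To illustrate what "tasks encrypted at rest" can mean in practice, here is a small sketch using the Paillier cryptosystem from the `phe` package; the project's actual encryption backend may differ:

```python
# Sketch of homomorphic encryption on task data using Paillier (`pip install phe`);
# the project's actual encryption backend may differ.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

priority = 1
encrypted_priority = public_key.encrypt(priority)   # what gets stored at rest

# Arithmetic on ciphertexts without decrypting, e.g. bumping urgency by one
bumped = encrypted_priority + 1
assert private_key.decrypt(bumped) == 2
```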
- Python 3.8+
- GPU with DirectML support (Windows) or CUDA (Linux/Mac)
- Microphone and speakers for audio features
# Clone the repository
git clone https://github.com/yourusername/LAM.git
cd LAM
# Install dependencies
pip install -r requirements.txt
# Set up environment variables
cp env.example .env
# Edit .env with your API keys
# Run the demo
python main.py
# Or use the interactive CLI
python cli.py

Create a .env file with your API keys:
DOORDASH_API_KEY=your_key_here
UBER_API_KEY=your_key_here
EXPEDIA_API_KEY=your_key_here
PLAID_CLIENT_ID=your_id_here
PLAID_SECRET=your_secret_here

# Start the CLI
python cli.py
# Available commands:
LAM> help # Show all commands
LAM> demo # Run full demonstration
LAM> add_task "Buy milk" 1 # Add high-priority task
LAM> show_tasks # List all tasks
LAM> ai_chat "Hello!" # Chat with AI
LAM> emotion "I'm happy!" # Analyze emotion
LAM> record 5 # Record 5 seconds of audio
LAM> status # Show system status
LAM> quit # Exit CLI

from agent import Agent
from datetime import datetime, timedelta
# Create agent
agent = Agent("My Assistant")
# Add tasks
agent.add_task("Meeting at 3 PM", priority=1,
due_time=datetime.now() + timedelta(hours=2))
# Generate AI response
response = agent.generate_response("What should I do today?")
# Record and analyze audio
transcription = agent.record_and_transcribe_audio(duration=5)
emotion = agent.recognize_emotion(transcription)
# Use real-world APIs
agent.order_food("New York")
agent.request_ride(40.7128, -74.0060, 40.730610, -73.935242)

from config import Config
# Customize AI models
Config.TEXT_MODEL_NAME = "your-preferred-model"
Config.SPEECH_MODEL_NAME = "your-speech-model"
Config.EMOTION_MODEL_NAME = "your-emotion-model"
# Audio settings
Config.DEFAULT_AUDIO_DURATION = 10
Config.DEFAULT_SAMPLE_RATE = 22050

# Check API status
Config.print_config_status()
# Validate keys
validation = Config.validate_api_keys()
missing = Config.get_missing_api_keys()

# Run tests
pytest
# Code formatting
black .
flake8 .
# Type checking
mypy .

- API Depth: Basic API implementations (enhancement needed for production)
- Model Loading: Large models require significant memory and GPU resources
- Error Handling: Some edge cases may need additional robustness
- Testing: Comprehensive test suite in development
We welcome contributions! Here's how to help:
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
# Install development dependencies
pip install -r requirements.txt
# Set up pre-commit hooks
pre-commit install
# Run tests before committing
pytest
black .
flake8 .

- API Reference: See docstrings in source code
- Examples: Check main.py and cli.py for usage patterns
- Architecture: Review class diagrams and flow charts
"DirectML not available"
- Ensure you have a DirectML-compatible GPU
- Fallback to CPU will occur automatically
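A quick way to check which device the models will land on; this assumes the optional torch-directml package and is only an illustrative snippet:

```python
# Minimal device-selection check with automatic CPU fallback.
import torch

try:
    import torch_directml                      # optional, Windows only
    device = torch_directml.device()
except ImportError:
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

print(f"Models will run on: {device}")
```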
"Model loading failed"
- Check internet connection for model downloads
- Ensure sufficient disk space for model storage
- Verify GPU memory availability
"API calls failing"
- Verify API keys in the .env file
- Check API service status
- Ensure proper API permissions
This project is licensed under the MIT License - see the LICENSE file for details.
This project is not affiliated with Rabbit Inc. or any other company. (But hey, if they're looking for some pointers on real LAM development, we're happy to chat.)
- Hugging Face for the amazing transformer models
- BindsNET for spiking neural network capabilities
- OpenAI for Whisper speech recognition
- The AI community for pushing the boundaries of what's possible
Let's Hop to It! 🐰
This repository is open for collaboration. We encourage you to join the conversation and help us build a future filled with awesome LAMs!