📖 View Complete Documentation - Comprehensive guides for setup, operation, and development
This project implements a complete robotics platform built on the Pololu Romi Chassis Kit, featuring a custom carrier PCB that integrates all components into a compact, efficient design. The platform combines hardware, firmware, and software to create a fully functional autonomous robot.
Hardware Components:
- Pololu Romi Chassis Kit - Robust base platform with encoders and caster
- Custom PARTS CRP Board - Integrated carrier PCB with all connections
- Teensy 4.0 Microcontroller - High-performance ARM Cortex-M7 processor
- TB9051FTG Motor Drivers - Dual brushed DC motor control
- MPU-9250 9DOF IMU - Nine-axis motion sensing for navigation
- Raspberry Pi with AI HAT - Main computer for ROS2 and onboard autonomous vision inference at 26 TOPS
- Power Management - 5V regulator and battery voltage monitoring
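For illustration, the battery voltage monitoring step amounts to scaling a raw ADC sample back up through the resistor divider. The divider ratio, ADC resolution, and reference voltage below are assumptions for the sketch, not values taken from the PARTS CRP schematic:

```python
# Hypothetical sketch of battery voltage monitoring via a resistor divider.
# All constants here are assumed, not read from the CRP board design.

ADC_MAX = 1023        # assumed 10-bit ADC
V_REF = 3.3           # assumed ADC reference voltage (V)
DIVIDER_RATIO = 3.0   # assumed divider scaling battery volts into ADC range

def battery_voltage(adc_reading: int) -> float:
    """Return the estimated battery voltage for a raw ADC sample."""
    v_adc = (adc_reading / ADC_MAX) * V_REF
    return v_adc * DIVIDER_RATIO
```

Under these assumptions, a full-scale reading of 1023 corresponds to 9.9 V, near the top of a 6xAA pack.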
Current Implementation: The platform now features a complete ROS2 integration with micro-ROS running on the Teensy 4.0, enabling real-time communication between the Raspberry Pi and the robot's sensors and actuators. The system supports autonomous navigation, sensor fusion, and closed-loop control, making it an ideal platform for robotics education and development in the Portland Area Robotics Society.
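As an illustration of the closed-loop control mentioned above, a per-wheel velocity loop can be sketched as a PI controller. The gains, update rate, and output limit below are illustrative, not the firmware's actual values:

```python
# Minimal sketch of a per-wheel PI velocity loop; gains and limits are
# illustrative, not the values used in the Teensy firmware.

class PIController:
    """Simple PI controller for one wheel's velocity loop."""

    def __init__(self, kp: float, ki: float, dt: float, limit: float):
        self.kp, self.ki, self.dt, self.limit = kp, ki, dt, limit
        self.integral = 0.0

    def update(self, target: float, measured: float) -> float:
        """Return a motor command clamped to the driver's output range."""
        error = target - measured
        self.integral += error * self.dt
        command = self.kp * error + self.ki * self.integral
        return max(-self.limit, min(self.limit, command))

pid = PIController(kp=0.8, ki=2.0, dt=0.01, limit=255.0)
command = pid.update(target=0.5, measured=0.3)  # wheel speeds in m/s
```

In practice the measured speed would come from the Romi encoders and the command would drive the TB9051FTG's PWM input.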
Connect your micro HDMI cable to your Pi and monitor, and a keyboard to the USB-A port.
OR
`ssh rcr@192.168.1.n` (replace `n` with your robot number)
- User: `rcr`
- Password: `siliconforest`
📚 Complete Documentation - Comprehensive guides for setup, operation, and development
- 🚀 Getting Started - Complete setup from hardware assembly to first robot movement
- ⚙️ Hardware Assembly - Step-by-step soldering and assembly instructions
- 💻 Software Setup - Host configuration, ROS2 installation, and Teensy programming
- 🎮 Robot Operations - Startup, shutdown, teleoperation, and navigation procedures
- 🧪 Testing & Validation - Smoke tests and system verification
- 🔧 Troubleshooting - Common issues and recovery procedures
- Hardware: Custom PARTS CRP board with integrated motor drivers and Teensy 4.0
- Software: ROS2 with micro-ROS for real-time communication
- Sensors: LiDAR, IMU, and camera support for autonomous navigation
- Programming: Arduino CLI for Teensy firmware development
- Testing: Basic smoke test to verify hardware functionality
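The chassis encoders feed the navigation stack; as a rough sketch (not the firmware's actual implementation), differential-drive odometry accumulates pose from encoder tick deltas. The tick count and geometry below are typical Romi values but are assumptions here; check the chassis documentation for the exact numbers:

```python
import math

# Sketch of differential-drive odometry from wheel encoder ticks.
# TICKS_PER_REV, WHEEL_DIAMETER, and WHEEL_BASE are assumed typical
# Romi values, not taken from this project's firmware.

TICKS_PER_REV = 1440      # encoder counts per wheel revolution (assumed)
WHEEL_DIAMETER = 0.070    # wheel diameter in meters (assumed)
WHEEL_BASE = 0.149        # distance between wheel centers in meters (assumed)

def update_pose(x, y, theta, left_ticks, right_ticks):
    """Advance the robot pose (x, y, theta) given encoder tick deltas."""
    meters_per_tick = math.pi * WHEEL_DIAMETER / TICKS_PER_REV
    d_left = left_ticks * meters_per_tick
    d_right = right_ticks * meters_per_tick
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE
    # Integrate at the midpoint heading for better accuracy over one step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```

One full revolution of both wheels moves the robot straight ahead by one wheel circumference with no change in heading.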
The platform includes a complete PyTorch training pipeline for robot maze navigation using imitation learning with a transformer-based architecture. For detailed documentation, see scripts/imitation_learning/src/imitation_learning/README.md.
If you have a previously trained model without the Z variable (CVAE style variable) and want to initialize a new model with Z using those weights, you can use transfer learning:
```bash
# Navigate to the imitation learning directory
cd scripts/imitation_learning/src/imitation_learning

# Train with transfer learning from a no-Z model checkpoint
python train.py \
    --config config.yaml \
    --transfer_from_noz /path/to/best_model_without_z.pth
```

What gets transferred:
- ✅ Vision encoder weights
- ✅ Temporal encoder weights
- ✅ Action decoder weights
- ✅ Action queries and action head weights
- 🆕 New CVAE components (`action_encoder`, `z_projection`) are randomly initialized
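The partial loading described above can be sketched framework-agnostically: copy every checkpoint entry whose name matches a parameter in the new model, and leave the new components at their fresh initialization. In PyTorch this is typically done with `model.load_state_dict(checkpoint, strict=False)`; the plain-dictionary version below is only an illustration, with made-up parameter names:

```python
# Framework-agnostic sketch of partial weight transfer: parameters present
# in the checkpoint are copied; new components (e.g. action_encoder,
# z_projection) keep their random initialization. Names are illustrative.

def transfer_weights(new_params: dict, checkpoint: dict) -> list:
    """Copy matching entries from checkpoint into new_params; return skipped keys."""
    skipped = []
    for name in new_params:
        if name in checkpoint:
            new_params[name] = checkpoint[name]
        else:
            skipped.append(name)  # newly added component, stays random-init
    return skipped

model = {"vision_encoder.w": 0.0, "action_decoder.w": 0.0, "z_projection.w": 0.5}
ckpt = {"vision_encoder.w": 1.2, "action_decoder.w": -0.7}
missing = transfer_weights(model, ckpt)
```

Here `missing` would report only `z_projection.w`, the hypothetical new CVAE parameter absent from the no-Z checkpoint.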
Optional: Freeze layers during transfer learning
You can freeze specific layers while training the new CVAE components by enabling transfer learning in your config:
```yaml
# In config.yaml
transfer_learning:
  enabled: true
  freeze_vision: true    # Freeze vision encoder
  freeze_encoder: false  # Keep temporal encoder trainable
  freeze_decoder: false  # Keep action decoder trainable
```

Example: Freeze vision encoder only

```bash
python train.py \
    --config config.yaml \
    --transfer_from_noz checkpoints/best_model_noz.pth
```

With `freeze_vision: true` in the config, the vision encoder will remain frozen while the new CVAE components and other layers are trained.
For more training options and examples, see the imitation learning README.
Mobile base platform - unassembled components
Assembled chassis with controller
Custom PCB showing integrated motor drivers and controller connections
Completed chassis - top view with Raspberry Pi AI HAT+
Raspberry Pi 5 with AI HAT+
Side view with LiDAR sensor mounted
See the open issues for a full list of proposed features (and known issues).
Portland Area Robotics Society (PARTS). Our work is based on the PARTS Common Platform.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License and the Solderpad Hardware License v2.1. See LICENSE.txt for more information.






