This repository provides a Docker-based development environment for robot navigation using ROS 2 Navigation Stack (Nav2). It includes configurations for both simulated TurtleBot3 examples and real sensor integration with LiDAR.
- Development Workflow
- Prerequisites
- Quick Start: TurtleBot3 Simulation
- Real Sensor Integration
- Project Structure
- Troubleshooting
- `develop` - Use this branch for feature development and solid implementations
- `master` - Competition-ready, stable codebase only
- Pull the latest changes from the repository
- Build the Docker image
- Run the Docker container with mounted source code (`app/`)
- Develop on your host machine - changes reflect instantly in the container
- Test and run scripts inside the container
- Commit solid features to the `develop` branch
Remember: Develop on your host, run in the container!
- Docker installed on your system
- X11 server for GUI applications
- (Optional) Livox LiDAR sensor for real sensor integration
Before running any GUI applications, allow Docker to access your display:
```bash
xhost +local:docker
```
From the repository root directory:
```bash
docker build -t corra09/nav2_docker:dev .
```
```bash
docker run -it --privileged --net=host \
    --env="DISPLAY=$DISPLAY" \
    --env="QT_X11_NO_MITSHM=1" \
    --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    --volume="./app/nav2_ws/src:/root/nav2_ws/src" \
    --volume="./app/initialize:/root/initialize" \
    --device /dev/dri:/dev/dri \
    corra09/nav2_docker:dev
```
Note: Some options may need adjustment depending on your system configuration.
Inside the Docker container:
```bash
ros2 launch nav2_bringup tb3_simulation_launch.py headless:=False
```
Congratulations! You can now interact with the default TurtleBot3 Nav2 example in Gazebo.
This section describes how to integrate real sensor data (LiDAR) with the Nav2 stack, replacing simulated sensor data.
Follow steps 1-2 from the Quick Start section to build and run the Docker container.
Power on your Livox LiDAR and connect it to your computer via Ethernet.
Troubleshooting:
- Check network connection to LiDAR
- Verify sensor IP configuration matches your network setup
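The checks above can be scripted from the host. A minimal sketch, assuming the sensor is on a 192.168.1.x subnet (the IP below is a placeholder; substitute the address your Livox unit is actually configured with):

```shell
# Placeholder address -- substitute your LiDAR's configured IP
LIDAR_IP=192.168.1.100

# Show the host interface configuration (the host must be on the sensor's subnet)
ip addr show 2>/dev/null | grep "inet " || true

# Single ping probe with a 1-second timeout
if ping -c 1 -W 1 "$LIDAR_IP" > /dev/null 2>&1; then
    echo "LiDAR reachable at $LIDAR_IP"
else
    echo "LiDAR unreachable -- check cabling and IP configuration"
fi
```

If the sensor is unreachable, configure a static IP on the Ethernet interface in the sensor's subnet before retrying.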
The following modifications disable simulation components and prepare the system to receive real sensor data:
File: app/nav2_ws/src/navigation2/nav2_bringup/launch/tb3_simulation_launch.py
Set use_sim_time to False:
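The argument is declared near the top of the launch file; an illustrative fragment (variable names may differ between Nav2 versions, so match against your checkout rather than copying verbatim):

```python
# Fragment of tb3_simulation_launch.py (names may vary by Nav2 version)
declare_use_sim_time_cmd = DeclareLaunchArgument(
    'use_sim_time',
    default_value='False',   # was 'True'; real sensors stamp data with wall-clock time
    description='Use simulation (Gazebo) clock if true')
```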
File: app/nav2_ws/src/navigation2/nav2_bringup/launch/tb3_simulation_launch.py
Comment out or remove the Gazebo launch configuration:
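Gazebo is typically started via `ExecuteProcess` actions; comment out both the actions and their `ld.add_action(...)` registrations. An illustrative sketch (exact names depend on your Nav2 version):

```python
# Fragment of tb3_simulation_launch.py -- disable the simulator (names may vary)
# start_gazebo_server_cmd = ExecuteProcess(
#     cmd=['gzserver', '-s', 'libgazebo_ros_init.so', world],
#     output='screen')
#
# start_gazebo_client_cmd = ExecuteProcess(
#     cmd=['gzclient'],
#     output='screen')
#
# ...and further down, the corresponding registrations:
# ld.add_action(start_gazebo_server_cmd)
# ld.add_action(start_gazebo_client_cmd)
```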
File: app/nav2_ws/src/navigation2/nav2_bringup/worlds/waffle.model
Remove or disable the odometry plugin to prevent the simulated model from publishing on /odom:
File: app/nav2_ws/src/navigation2/nav2_bringup/worlds/waffle.model
Remove or disable the LiDAR plugin to prevent publishing on /scan:
File: app/nav2_ws/src/navigation2/nav2_bringup/worlds/waffle.model
Remove or disable the IMU plugin:
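All three plugin edits follow the same pattern: wrap the `<plugin>` element in an SDF/XML comment. A sketch for the IMU plugin (the plugin and element names here are illustrative; match them against the actual contents of waffle.model):

```xml
<!-- Disabled for real-sensor operation: the simulated IMU must not publish
<plugin name="turtlebot3_imu" filename="libgazebo_ros_imu_sensor.so">
  ...
</plugin>
-->
```

Note that XML comments cannot be nested, so remove any existing comments inside the block before wrapping it.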
Inside the Docker container, start the sensor launcher to publish real sensor data:
```bash
ros2 launch sensor_launcher launch.py
```
This will publish the essential topics: /scan, /odom, /tf, /tf_static, etc.
In a new terminal (inside the container):
```bash
ros2 launch nav2_bringup tb3_simulation_launch.py headless:=False
```
Nav2 will now consume data from your real sensors instead of simulation.
To fully integrate real sensors with Nav2, you need to address the following:
- Replace the default TurtleBot3 map with a map of your actual environment
- Update the map file path in the launch configuration
- File: app/nav2_ws/src/navigation2/nav2_bringup/params/nav2_params.yaml
- Tune parameters for your specific robot and environment
- The Waffle robot URDF is still loaded by default; this is not a major issue for initial testing
- Verify anyway that sensor frames are correctly transformed relative to the base frame
- Use `rqt_tf_tree` to visualize and debug the transform tree:
```bash
ros2 run rqt_tf_tree rqt_tf_tree
```
- Ensure proper frame relationships: `base_link` → `sensor_frame`
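For the map replacement above, a map is a pair of files: an occupancy image plus a YAML descriptor in the standard `map_server` format. A minimal example with placeholder filenames and typical default thresholds:

```yaml
# my_map.yaml -- standard map_server descriptor (placeholder values)
image: my_map.pgm            # occupancy image, path relative to this file
resolution: 0.05             # meters per pixel
origin: [-10.0, -10.0, 0.0]  # [x, y, yaw] of the lower-left pixel in the map frame
negate: 0                    # 0: lighter pixels are free, darker are occupied
occupied_thresh: 0.65        # occupancy probability above this is occupied
free_thresh: 0.196           # occupancy probability below this is free
```

The path to this YAML file is what the map setting in the launch configuration should point to.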
Only after completing the above configurations should you test the AMCL (Adaptive Monte Carlo Localization) module for robot localization.
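When you reach the AMCL step, the relevant block in nav2_params.yaml contains the frame and topic names that must match your real sensor setup. An abbreviated, illustrative excerpt (values shown are common defaults, not a tuned configuration):

```yaml
amcl:
  ros__parameters:
    use_sim_time: False          # must match the launch-file setting
    base_frame_id: "base_footprint"
    odom_frame_id: "odom"
    global_frame_id: "map"
    scan_topic: scan             # must match what sensor_launcher publishes
    max_particles: 2000
    min_particles: 500
```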
```
roboto_navigation/
├── app/
│   ├── initialize/              # Initialization scripts and Gazebo models
│   └── nav2_ws/
│       └── src/
│           ├── navigation2/              # Nav2 stack (modified)
│           ├── sensor_launcher/          # Real sensor integration package
│           ├── FAST_LIO_ROS2/            # LiDAR odometry
│           ├── livox_converter/          # Livox sensor converter
│           ├── Livox-SDK2/               # Livox SDK
│           └── pointcloud_to_laserscan/  # Point cloud to laser scan converter
├── notes/                       # Documentation and reference images
├── Dockerfile                   # Docker environment definition
└── README.md                    # This file
```
This project integrates several third-party ROS 2 packages alongside custom-developed packages:
The following packages are based on open-source GitHub repositories:
- `navigation2` - ROS 2 Navigation Stack
  Repository: https://github.com/ros-planning/navigation2
- `FAST_LIO_ROS2` - Fast LiDAR-Inertial Odometry
  Repository: https://github.com/Ericsii/FAST_LIO_ROS2
- `Livox-SDK2` - Livox LiDAR SDK
  Repository: https://github.com/Livox-SDK/Livox-SDK2
- `pointcloud_to_laserscan` - Point cloud to laser scan converter
  Repository: https://github.com/ros-perception/pointcloud_to_laserscan
- `ws_livox` - Livox ROS 2 driver workspace
  Repository: https://github.com/Livox-SDK/livox_ros_driver2
The following packages were developed specifically for this project:
- livox_converter - Custom converter for Livox sensor data processing
- sensor_launcher - Custom launch package for real sensor integration
When contributing to this repository:
- Create a feature branch from `develop`
- Make your changes and test thoroughly
- Commit with clear, descriptive messages
- Merge to `develop` for review
- Only merge to `master` when competition-ready




