🔹 Background of Topic Selection
As autonomous driving technology continues to expand, the importance of camera-based visual recognition has increased. This project was initiated with the goal of implementing lane detection and driving control using image processing techniques.
🔹 Objective
Simulate real-world road environments using TurtleBot3 and ROS2. Implement key autonomous driving functionalities such as lane detection, speed control, obstacle avoidance, and adaptation to lighting changes.
🔹 Equipment Used
TurtleBot3 Waffle Pi, DRGO Webcam, Dynamixel Manipulator, Nvidia Jetson Nano, ArUco Marker, OpenCV, ROS2 Humble
🔹 Expected Outcomes
Develop driving technologies applicable to various service robots such as logistics, guidance, and surveillance. Enhance understanding of image processing and autonomous driving algorithms.
In turtle_pubimg.py, the camera input undergoes the following steps:
HSV Conversion → Bird's Eye View Transformation → Masking → Polynomial Fitting → Center Line Calculation → ROS2 Topic Publishing → Control Logic Execution
▶ HSV Color Space Conversion
Used to detect white and yellow lanes. Applied color filtering using cv2.inRange().
▶ Trackbar (Control Panel)
GUI to adjust HSV ranges and Region of Interest (ROI) in real-time. Allows users to tune optimal lane detection parameters interactively.
▶ Bird's Eye View Transformation
Removes perspective distortion for lane alignment. Uses cv2.findHomography and cv2.warpPerspective. Improves the accuracy of curvature calculation and path estimation.
▶ Histogram Equalization
V channel (brightness): Enhances contrast.
S channel (saturation): Improves color clarity.
▶ CLAHE (Contrast Limited Adaptive Histogram Equalization)
Avoids the over-brightening caused by global equalization by applying localized corrections.
Ensures robust recognition under varying lighting and reflections.
▶ Overlapping Mask Removal
If yellow and white masks overlap due to reflections, prioritize the yellow mask.
▶ Morphology – Erosion
Removes small noise (e.g., crosswalks) using a structural kernel to erode unnecessary areas.
▶ Labeling
Keeps only the largest connected component, effectively removing noise missed by morphology.
Used second-degree polynomial curve fitting of the form x = Ay² + By + C via np.polyfit(). To enable smoother, gentler cornering, we slightly reduced the A coefficient and increased the B coefficient, making the turning behavior more gradual and stable.
The red and light blue lanes represent the curve before coefficient adjustment, while the orange and light blue lanes represent the curve after applying the adjusted coefficients.
If both lanes are detected, the path center is the midpoint between them. If only one lane is detected, the path is estimated 350 px from that lane.
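A sketch of the fit and center-line calculation under these rules; which side the 350 px offset is applied to (and its sign) is an assumption for illustration.

```python
import numpy as np

LANE_OFFSET_PX = 350  # single-lane offset stated in the report

def fit_lane(ys, xs):
    """Fit x = A*y**2 + B*y + C through masked lane pixels; returns [A, B, C]."""
    return np.polyfit(ys, xs, 2)

def center_x(y, left=None, right=None):
    """Path center at row y: midpoint of both fits, or one fit offset by 350 px."""
    if left is not None and right is not None:
        return (np.polyval(left, y) + np.polyval(right, y)) / 2
    if left is not None:
        return np.polyval(left, y) + LANE_OFFSET_PX   # assumed direction
    if right is not None:
        return np.polyval(right, y) - LANE_OFFSET_PX  # assumed direction
    return None
```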
▶ Adaptive Brightness Adjustment
Automatically adjusts the V (brightness) value based on the number of masked pixels. Ensures stable lane recognition even under changing lighting conditions.
▶ ROS2-Based Control Node Configuration
Subscribers: /control/lane, /control/max_vel, /avoid_control, /robot_state
Publisher: /control/cmd_vel (velocity command)
▶ PD Controller-Based Lane Following
Error Calculation
error = center - 500 # based on image center
Control Formula
angular.z = Kp * error + Kd * (error - last_error)
Kp = 0.0025, Kd = 0.007
→ Ensures smooth turning and fast directional adjustment.
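The PD law above, with the report's gains and the 500 px image center, reduces to a few lines:

```python
# PD steering from the report: error is the lane center offset from the
# image midpoint (500 px); gains Kp = 0.0025, Kd = 0.007.
KP, KD = 0.0025, 0.007

def pd_steer(center, last_error):
    """Return (angular.z command, current error) for one control step."""
    error = center - 500
    angular_z = KP * error + KD * (error - last_error)
    return angular_z, error
```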
🔹 Marker Detection
Utilized OpenCV's ArUco library. Detected markers → Pose Estimation → Homography → Transformation to robot base coordinates.
🔹 Manipulator Control
Upon marker detection:
Issue /robot_state = slow if the marker is at a distance.
Issue /robot_state = stop if the marker is close.
Perform pick & place with arm_client.send_request.
Resume driving: /robot_state = drive.
Captured a total of 13 test images under different driving environments.
Designed to evaluate performance under lighting reflections and shadows.
Adjusted HSV ranges, applied histogram equalization, CLAHE, and removed mask overlaps.
linear.x = min(((1 - abs(error)/500) ** 2.2), 0.1)
→ As error increases, speed decreases sharply to ensure safe cornering.
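The speed law above can be written as a small function. One assumption added here: the base term is clamped at zero so that |error| > 500 cannot produce a complex-valued power; the report's formula does not state this explicitly.

```python
def lane_speed(error, max_vel=0.1):
    """linear.x = min((1 - |error|/500)**2.2, max_vel), clamped to stay real."""
    base = max(1.0 - abs(error) / 500.0, 0.0)  # clamp added as an assumption
    return min(base ** 2.2, max_vel)
```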
When an ArUco marker is detected:
→ /robot_state = slow   # Reduce speed at a distance
→ /robot_state = stop   # Stop near the marker
→ /robot_state = drive  # Resume driving after manipulation
https://drive.google.com/file/d/1BRyIH8yQ3AaU35Ih1dRi5DRQY31I70fs/view?usp=drive_link
https://drive.google.com/file/d/1hqbYXowimGbrBbTyqLsK_fHKh3g4IQIR/view?usp=drive_link
ROS2 Humble + TurtleBot3 Waffle Pi manipulator environment
# 1. Launch the turtlebot3_manipulation_bringup
$ ros2 launch turtlebot3_manipulation_bringup hardware.launch.py
# 2. Launch the turtlebot3_manipulation_moveit_config
$ ros2 launch turtlebot3_manipulation_moveit_config moveit_core.launch.py
# 3. Run the arm_controller
$ ros2 run turtlebot_moveit turtlebot_arm_controller
# 4. Execute Lane Detection
$ python3 turtle_pubimg.py
# 5. Execute Lane Control
$ ros2 launch turtlebot3_autorace_mission control_lane.launch.py
# 6. Run Aruco_detector
$ ros2 run aruco_yolo aruco_detector
# 7. Run task_aruco
$ python3 task_aruco.py