This repository contains two coursework projects for my robotics course, implemented in the Webots simulator. The projects focus on robotic control and computer vision: pick-and-place with a robot manipulator, and perception and behaviour coordination for an autonomous vehicle.
- Overview
- Coursework 1: Pick-and-place with a robot manipulator
- Coursework 2: Perception and Behaviour Coordination
## Overview

This repository showcases two robotics coursework projects, both implemented in the Webots simulator:
- Coursework 1: Controlling a UR5e robotic arm to pick up cubes and place them in a crate using depth camera data and kinematic control.
- Coursework 2: Navigating an autonomous vehicle through a road network, following lanes, detecting traffic lights, and handling crossroads.
The projects demonstrate skills in robot kinematics, computer vision, and state-based control for robotic manipulation and autonomous navigation.
## Coursework 1: Pick-and-Place with a Robot Manipulator
A UR5e robotic arm, equipped with a depth camera at its end-effector, detects and picks up five cubes randomly distributed in its workspace and places them into a fixed crate. The system uses the depth camera to locate cubes, computes inverse kinematics to position the arm, and controls the gripper to grasp and release objects. A state-based controller manages the sequence of actions, including searching, grasping, and placing.
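The action sequence can be sketched as a small state machine. The state names below match those listed later in this README; the transition flags (`cube_visible`, `cube_reached`, `cube_grasped`) are illustrative assumptions, not the controller's actual variables:

```python
# Illustrative sketch of the state-based pick-and-place loop.
# State names mirror the controller's states; the handler logic is simplified.
def next_state(state, cube_visible, cube_reached, cube_grasped):
    """Advance the pick-and-place state machine by one step."""
    if state == "moving_to_home":
        return "searching_cube"
    if state == "searching_cube":
        return "moving_to_cube" if cube_visible else "searching_cube"
    if state == "moving_to_cube":
        return "grasping_cube" if cube_reached else "moving_to_cube"
    if state == "grasping_cube":
        return "moving_to_basket" if cube_grasped else "grasping_cube"
    if state == "moving_to_basket":
        return "moving_to_home"  # release the cube over the crate, start over
    raise ValueError(f"unknown state: {state}")
```

The loop repeats until all five cubes are placed; the real controller additionally handles the random grid search when no cube is visible.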
Video:
1.mp4
Stacked cube:
1_1.mp4
Learning Outcomes:
- Implement forward and inverse kinematics for robotic control.
- Apply computer vision for object detection in a robotic context.
- Develop a state-based controller for pick-and-place with a robot manipulator.
- Cube Detection: Identifies cubes using depth camera images, filtering for square shapes and depth values.
- State Machine: Manages states like `moving_to_home`, `searching_cube`, `moving_to_cube`, `grasping_cube`, and `moving_to_basket`.
- Kinematic Control: Uses the `kinpy` library for forward and inverse kinematics to position the arm accurately.
- Gripper Control: Opens and closes the gripper to pick up and release cubes.
- Search Strategy: Employs a grid-based random search to locate cubes when none are detected.
- Camera Calibration: Converts pixel coordinates to real-world coordinates using camera parameters.
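The camera-calibration step can be illustrated with the standard pinhole back-projection; the intrinsic values used in the example call (`fx`, `fy`, `cx`, `cy`) are placeholders, not the project's calibrated parameters:

```python
import numpy as np

def pixel_to_camera(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with measured depth to a 3D point in the
    camera frame, using the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# Example with placeholder intrinsics: the principal-point pixel maps
# straight onto the optical axis.
p = pixel_to_camera(320, 240, 0.5, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```

A further fixed transform from camera frame to robot base frame (depending on the end-effector pose) is then needed before the arm can move to the cube.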
- Webots Simulator: For running the simulation environment (version R2023b).
- Python: Version 3.10 recommended.
- Libraries:
  - `numpy`: Numerical computations.
  - `scipy`: Rotation transformations.
  - `kinpy`: Kinematic calculations.
  - `opencv-python`: Image processing and cube detection.
- URDF File: `ur5e_2f85_camera.urdf` defines the robot's kinematic chain (included in `RAS_coursework_1/resources`).
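For intuition on the forward/inverse kinematics that `kinpy` handles for the full UR5e chain from the URDF, here is a self-contained planar 2-link sketch; the link lengths and solver settings are arbitrary illustrative choices, not the coursework solver:

```python
import numpy as np

L1, L2 = 0.4, 0.3  # arbitrary link lengths in metres (not the UR5e's)

def fk(theta):
    """Forward kinematics of a planar 2-link arm: joint angles -> (x, y)."""
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def ik(target, theta0=(0.3, 0.3), iters=200, tol=1e-8, max_step=0.5):
    """Newton-style inverse kinematics with a step-size clip for robustness."""
    theta = np.array(theta0, dtype=float)
    for _ in range(iters):
        err = np.asarray(target, dtype=float) - fk(theta)
        if np.linalg.norm(err) < tol:
            break
        t1, t2 = theta
        # Jacobian of fk with respect to the joint angles
        J = np.array([[-L1*np.sin(t1) - L2*np.sin(t1+t2), -L2*np.sin(t1+t2)],
                      [ L1*np.cos(t1) + L2*np.cos(t1+t2),  L2*np.cos(t1+t2)]])
        step = np.linalg.pinv(J) @ err
        n = np.linalg.norm(step)
        if n > max_step:  # clip overly large Newton steps near singularities
            step *= max_step / n
        theta += step
    return theta
```

The same idea, extended to six joints and full 3D poses, is what the `kinpy` chain built from `ur5e_2f85_camera.urdf` provides.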
- Install Webots:
  - Download and install Webots from https://cyberbotics.com/.
  - Ensure Webots is configured to use Python 3.10.
- Create a Conda Environment:
  ```
  conda create -n robotics python=3.10
  conda activate robotics
  ```
- Install Python Dependencies:
  ```
  pip install numpy scipy kinpy opencv-python
  ```
- Clone the Repository:
  ```
  git clone https://github.com/ahrzeroday/Robotics-Coursework.git
  cd Robotics-Coursework
  ```
- Set Up Webots Project:
  - Copy the contents of the `RAS_coursework_1` folder to your Webots project directory.
  - Ensure the `resources/ur5e_2f85_camera.urdf` file is in the correct relative path (`../../resources/`).
- Launch Webots:
  - Open Webots and load the world file for the UR5e robot (`../../worlds/`).
- Run the Controller:
  - In Webots, set the controller to use `ras.py` from the `RAS_coursework_1` folder.
  - Alternatively, run the script directly if Webots is configured for external controllers:
    ```
    python RAS_coursework_1/controllers/ras/ras.py
    ```
- Simulation:
  - The robot moves to the home position and opens the gripper.
  - It searches for cubes using the depth camera, centers detected cubes in the view, adjusts orientation, grasps, and places them in the crate.
  - If no cubes are found, it moves to a random grid position to continue searching.
  - The process repeats until all cubes are placed.
- Debugging:
  - Set `DEBUG = True` in `ras.py` to enable debug messages (state transitions, kinematics).
## Coursework 2: Perception and Behaviour Coordination
An autonomous vehicle navigates a road network in Webots, following lanes, detecting traffic lights (red, yellow, green), and handling crossroads. The vehicle uses a front-facing RGB and depth camera to detect the road and traffic lights. A lane controller adjusts steering and speed based on road center detection, while a state machine manages stopping at red lights and turning at crossroads.
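The stop-and-resume behaviour at traffic lights can be reduced to a simple decision rule; the function below is an illustrative sketch, not the controller's actual code:

```python
# Illustrative stop/go rule: `light` is the detected traffic-light colour
# ("red", "yellow", "green") or None when no light is in view.
def update_mode(mode, light):
    """Decide the driving mode from the current mode and detected light."""
    if light == "red":
        return "stopping"
    if mode == "stopping" and light in ("yellow", "green"):
        return "driving"  # resume once the light changes
    return mode  # otherwise keep doing what we were doing
```

The real state machine also includes the timed `turning_left` / `turning_right` states entered at crossroads.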
Video:
2.mp4
Learning Outcomes:
- Implement computer vision for lane detection and traffic light recognition.
- Develop a reactive controller for lane following and speed adjustment.
- Manage complex navigation behaviors using a state-based approach.
- Lane Detection: Identifies the road using HSV color filtering for tarmac and yellow markings.
- Road Center Tracking: Computes the road center to guide steering, with memory to smooth transitions.
- Traffic Light Detection: Detects red, yellow, and green lights using HSV color segmentation and depth filtering.
- Crossroads Handling: Recognizes crossroads and executes timed left or right turns.
- State Machine: Manages states like `driving`, `stopping`, `turning_left`, and `turning_right`.
- Speed Control: Adjusts speed based on steering angle to prevent tipping during turns.
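Road-centre tracking and speed control can be illustrated with a proportional rule on one row of a binary road mask; the gains, mask layout, and speed limits below are assumptions, not the project's tuned values:

```python
import numpy as np

def steer_and_speed(mask_row, k_p=0.005, v_max=30.0, v_min=10.0):
    """Given one row of a binary road mask (1 = road pixel), steer toward
    the road centre and reduce speed as the steering angle grows."""
    cols = np.flatnonzero(mask_row)
    if cols.size == 0:
        return 0.0, v_min  # no road visible: hold heading, slow down
    center = cols.mean()
    offset = center - (len(mask_row) - 1) / 2.0  # pixels off image centre
    steering = k_p * offset                      # proportional steering
    speed = max(v_min, v_max * (1.0 - min(abs(steering), 1.0)))
    return steering, speed

# Example: road patch shifted right of centre -> steer right, slow down.
row = np.zeros(101)
row[60:81] = 1
steering, speed = steer_and_speed(row)
```

The actual controller adds memory of the previous road centre to smooth transitions at gaps and crossroads.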
- Webots Simulator: For running the simulation environment (version R2023b).
- Python: Version 3.10 recommended.
- Libraries:
  - `numpy`: Numerical computations.
  - `opencv-python`: Image processing for lane and traffic light detection.
- Webots Vehicle Controller: Provided by `ras.py` (included in `RAS_coursework_2`).
- Install Webots:
  - Download and install Webots from https://cyberbotics.com/.
  - Ensure Webots is configured to use Python 3.10.
- Create a Conda Environment (if not already done for Coursework 1):
  ```
  conda create -n robotics python=3.10
  conda activate robotics
  ```
- Install Python Dependencies:
  ```
  pip install numpy opencv-python
  ```
- Clone the Repository (if not already done):
  ```
  git clone https://github.com/ahrzeroday/Robotics-Coursework.git
  cd Robotics-Coursework
  ```
- Set Up Webots Project:
  - Copy the contents of the `RAS_coursework_2` folder to your Webots project directory.
- Launch Webots:
  - Open Webots and load the world file for the autonomous vehicle (`../../worlds/`).
- Run the Controller:
  - In Webots, set the controller to use `ras.py` from the `RAS_coursework_2` folder.
  - Alternatively, run the script directly if Webots is configured for external controllers:
    ```
    python RAS_coursework_2/controllers/ras/ras.py
    ```
- Simulation:
  - The vehicle starts in `driving` mode, following the road by tracking the center.
  - It detects traffic lights and stops at red lights, resuming when the light turns yellow or green.
  - At crossroads, it executes a timed right turn (configurable to a left turn by modifying the code).
  - The vehicle adjusts speed based on steering angle to maintain stability.
- Debugging:
  - Set `debug = True` in `ras.py` to enable visualization and logs.
  - Visualizations include the processed road image (with steering and crossroad rows marked) and bounding boxes around detected traffic lights or signs.
  - Console logs report traffic light status, crossroad detection, and mode changes.