This repository is based on
UniversalRobots/Universal_Robots_ROS2_Gazebo_Simulation
and extends the UR Gazebo simulation with an offline sensor placement optimizer,
e.g. for ToF sensors mounted around a UR robot.
The optimizer works on
- a voxelized workspace,
- a set of candidate sensor poses, and
- multiple robot poses,
and selects a single set of sensors that gives good visibility across all poses.
Since Gazebo Classic is no longer supported from ROS 2 Jazzy onwards, this repository is configured for ROS 2 Humble.
| Humble | |
|---|---|
| Branch | `humble` |
| Build status | |
```bash
export COLCON_WS=~/workspaces/ur_gazebo
mkdir -p $COLCON_WS/src
```

Feel free to change `~/workspaces/ur_gazebo` to any absolute path.
It is usually convenient to keep multiple ROS workspaces in a dedicated `workspaces/` folder and
include the ROS version in the workspace name.
```bash
cd $COLCON_WS/src
git clone -b humble https://github.com/janAlonola/ur_sensor_sim.git
# or your fork/branch
rosdep update && rosdep install --ignore-src --from-paths . -y
cd $COLCON_WS
colcon build --symlink-install
```

Then source the workspace as usual.
Start UR robot + Gazebo Classic:

```bash
ros2 launch ur_simulation_gazebo ur_sim_control.launch.py
```

Move the robot using the test script from `ur_robot_driver` (if installed):

```bash
ros2 launch ur_robot_driver test_joint_trajectory_controller.launch.py
```

Example using MoveIt with the simulated robot:

```bash
ros2 launch ur_simulation_gazebo ur_sim_moveit.launch.py
```

ToDo: Sensor Meshing and Xacro Generation for Validation

The high-level workflow is:
- Create a voxel map of the workspace.
- Define a set of robot poses (`poses.yaml`).
- Compute per-voxel weights for each pose (distance to robot, etc.).
- Compute sensor visibility with occlusion for all poses.
- Run a multi-pose GRASP optimizer to choose a single set of sensors that performs well across all poses.
The optimizer awards no extra score when the same voxel is seen by multiple sensors in the same pose: per pose, it only matters whether a voxel is covered at least once.
- `capsule.yaml`: Voxel description of the workspace (output of `voxel_grid.py`).
- `poses.yaml`: List of robot joint configurations. Each entry has a `name` (e.g. `b1_w1`) and `joints_deg`/`joints_rad` for all UR joints.
- `selected_candidates.yaml`: Candidate sensor poses (reduced set from `mesh_sampling`/`sensor_picker`).
- `weighted_poses_*.yaml` & `weighted_poses_*.npy` (ToDo: Update): Per-pose voxel weights. The YAML files contain metadata + weights; the `.npy` files contain just the weight vector for fast loading.
- `occlusion_heatmaps/*.yaml` (ToDo: Update): Per-pose visibility: for each voxel, which sensors see it (`visible_by`), including occlusion by the robot based on ray casting.
- `optimizer_all_poses.py`: GRASP-based optimizer that selects one global sensor set over all poses.
Script: `voxel_grid.py`

- Input: workspace definition / URDF
- Output: `capsule.yaml` with e.g. `voxels` (list of `[x, y, z]`), `voxel_size_m`, `voxel_count`

This can be visualized in RViz or a custom visualizer if desired.
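Based on the field list above, a `capsule.yaml` might look like the following sketch (the field names come from this README; the values are purely illustrative):

```yaml
voxel_size_m: 0.05
voxel_count: 3
voxels:
  - [0.40, 0.00, 0.10]
  - [0.40, 0.00, 0.15]
  - [0.45, 0.00, 0.10]
```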
File: `poses.yaml`

- Contains:
  - `joint_names`: order of UR joints
  - `poses`: list of entries, each with a `name` (e.g. `b1_w1`) and `joints_deg`/`joints_rad` joint angles
- These poses are used both for voxel weighting and for the visibility computation.
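A minimal `poses.yaml` consistent with this description could look like this (the joint names are the standard UR joint names; the angle values are illustrative):

```yaml
joint_names: [shoulder_pan_joint, shoulder_lift_joint, elbow_joint,
              wrist_1_joint, wrist_2_joint, wrist_3_joint]
poses:
  - name: b1_w1
    joints_deg: [0.0, -90.0, 90.0, -90.0, -90.0, 0.0]
```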
Scripts: e.g. `weighted_voxel.py` / `weighted_voxel_robot.py`

- Inputs:
  - `capsule.yaml`
  - `poses.yaml`
  - URDF (`ur10.urdf`)
  - robot collision meshes via `trimesh.closest_point`
- Idea:
  - For each pose and voxel, compute the distance to the robot surface.
  - Map this distance to a weight using a falloff function (linear, gamma, exp, or a sigmoid/saturation curve):
    - within a minimum distance (sensor blind zone) → weight 0
    - around a preferred distance band → high weight
    - very far away → weight decreases again
- Output:
  - One weighted voxel YAML per pose, e.g. `weighted_poses_sigmoid/b1_w1.yaml` containing `weights: [...]` and metadata.

Optionally, visualize the weighted voxels.
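The sigmoid/saturation variant of the falloff could be sketched as below. This is not the repository's implementation; the function name, parameters, and default values are assumptions chosen to match the three regimes described above (blind zone, preferred band, far falloff).

```python
import numpy as np

def distance_weight(d, d_min=0.05, d_pref=0.30, d_far=1.0, k=25.0):
    """Map voxel-to-robot-surface distance d (metres) to a weight in [0, 1].

    Illustrative sigmoid/saturation falloff; parameter names and defaults
    are assumptions, not the repository's actual values.
    """
    d = np.asarray(d, dtype=float)
    # Rising sigmoid: ~0 near the sensor blind zone, ~1 around d_pref.
    rise = 1.0 / (1.0 + np.exp(-k * (d - (d_min + d_pref) / 2)))
    # Falling sigmoid: weight decays again for voxels far from the robot.
    fall = 1.0 / (1.0 + np.exp(k * (d - d_far) / 5))
    # Hard zero inside the minimum distance (sensor blind zone).
    return np.where(d < d_min, 0.0, rise * fall)
```

Swapping `rise`/`fall` for linear, gamma, or exponential ramps gives the other falloff modes the README mentions.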
Script: `weights_yaml_to_npy.py`

- Reads each `weighted_poses_*.yaml`
- Writes `weighted_poses_*.npy` containing a single 1-D NumPy array (length = `voxel_count`)

The optimizer uses these NumPy files because they are much faster to load.
Script: e.g. `compute_visibility_all_poses.py`

- Inputs:
  - `poses.yaml`
  - `capsule.yaml`
  - `selected_candidates.yaml`
  - URDF + collision meshes (via `trimesh` and ray casting)
- For each pose:
  - Transform all candidate sensors into the world frame.
  - For each voxel:
    - check if it is within sensor FOV and range,
    - cast a ray and test for intersections with the robot mesh (`trimesh.ray.intersects_location` or similar).
  - Store for each voxel the list of sensor indices that can see it: `visible_by`.
- Output:
  - Heatmap YAMLs in `occlusion_heatmaps/`, one per pose (or per group of poses), each containing `voxel_count`, `sensor_count`, and `visible_by` as `[pose][voxel] -> list[int]` or `[voxel] -> list[int]`.

These can be visualized as coverage/heatmap plots.
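The per-voxel test described above (FOV and range check, then an occlusion ray cast) can be sketched like this. The function and parameter names are assumptions; `ray_blocked` stands in for the `trimesh` ray cast against the robot mesh, and the FOV is modelled as a simple cone:

```python
import numpy as np

def sensor_sees_voxel(voxel, sensor_pos, sensor_dir,
                      fov_deg=90.0, max_range=2.0, ray_blocked=None):
    """Illustrative visibility test for one voxel against one sensor.

    sensor_dir is assumed to be a unit vector; ray_blocked is a callable
    (origin, direction, length) -> bool standing in for the trimesh
    robot-mesh ray cast. All names here are assumptions, not the repo's API.
    """
    v = np.asarray(voxel, float) - np.asarray(sensor_pos, float)
    dist = np.linalg.norm(v)
    # Range check (also rejects a voxel exactly at the sensor origin).
    if dist == 0.0 or dist > max_range:
        return False
    # FOV check: angle to the sensor axis must be within half the cone angle.
    cos_angle = np.dot(v / dist, np.asarray(sensor_dir, float))
    if cos_angle < np.cos(np.deg2rad(fov_deg / 2)):
        return False
    # Occlusion check: the sensor-to-voxel ray must not hit the robot mesh.
    if ray_blocked is not None and ray_blocked(sensor_pos, v / dist, dist):
        return False
    return True
```

The sensor indices for which this returns `True` would form a voxel's `visible_by` list.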
Script: `optimize_across_poses_multipose.py`

- Inputs:
  - `--heatmaps`: directory with `occlusion_heatmaps/*.yaml`
  - `--weights-list`: list or glob for `weighted_poses_*.npy` (one per pose)
- Internally:
  - For each pose, builds a sparse CSR matrix `A` of shape `(S, V)`: rows = sensors, columns = voxels, entries = 1 if the sensor sees the voxel.
  - Loads pose-specific weights `W` of shape `(V, P)`: `V` = voxel count, `P` = pose count.
  - Runs a GRASP algorithm:
    - greedy construction with a Restricted Candidate List (`--rcl-size`),
    - multiple restarts (`--iters`),
    - then 1-swap local search (`--local-rounds`).
- Objective modes (`--objective`):
  - `sum`: Maximize `sum_p sum_v W[v,p] * covered[p,v]` (no extra credit for seeing the same voxel with multiple sensors in one pose).
  - `softmin`: "Robust" objective using a soft minimum across poses (`--softmin-temp`).
  - `frac`: A voxel only counts if it is covered in at least an `alpha` fraction of poses (`--frac-alpha`).
- Important note: For a given pose, if several sensors see the same voxel, the voxel is counted once. Extra redundant coverage does not increase the score in that pose.
- Output:
  - Printed list of selected sensor indices.
  - JSON file (via `--export-json`) including:
    - `selected_sensors`
    - objective value and mode
    - numbers of sensors, voxels, poses
    - source heatmap and weight files
    - optimizer parameters (k, iters, etc.)

These selected sensors can then be used to generate Xacro/URDF files and visualized in Gazebo / RViz.
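The GRASP construction phase with the `sum` objective can be sketched as follows. This is a minimal illustration, not the repository's optimizer: it uses dense 0/1 matrices instead of CSR, omits the 1-swap local search, and all names are assumptions. Note how `.any(axis=0)` implements the "a voxel counts at most once per pose" rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def grasp_select(A_list, W, k, rcl_size=3, iters=20):
    """Select k sensors maximising the 'sum' objective across poses.

    A_list[p]: dense (S, V) 0/1 visibility matrix for pose p.
    W: (V, P) per-pose voxel weights.
    Minimal GRASP sketch (greedy + RCL + restarts, no local search).
    """
    S = A_list[0].shape[0]

    def objective(sel):
        # Per pose, a voxel is covered if ANY selected sensor sees it,
        # so redundant coverage within a pose earns no extra score.
        return sum(
            (W[:, p] * A_list[p][list(sel)].any(axis=0)).sum()
            for p in range(len(A_list))
        )

    best_sel, best_val = None, -np.inf
    for _ in range(iters):
        sel = set()
        while len(sel) < k:
            # Score every unused sensor by the objective after adding it.
            gains = sorted(
                ((objective(sel | {s}), s) for s in range(S) if s not in sel),
                reverse=True,
            )
            # Restricted Candidate List: pick randomly among the top few.
            _, chosen = gains[rng.integers(min(rcl_size, len(gains)))]
            sel.add(chosen)
        val = objective(sel)
        if val > best_val:
            best_sel, best_val = sel, val
    return sorted(best_sel), best_val
```

With `rcl_size=1` this degenerates to plain greedy set cover; larger RCL sizes trade per-restart quality for diversity across restarts.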
- Voxel map: `voxel_grid.py` → `capsule.yaml`
- Poses: edit `poses.yaml`
- Weights: `weighted_voxel*.py` → `weighted_poses_*.yaml` → `weights_yaml_to_npy.py` → `weighted_poses_*.npy`
- Visibility: `compute_visibility_all_poses.py` → `occlusion_heatmaps/*.yaml`
- Optimization: `optimize_across_poses_multipose.py` → final list of selected sensor indices
- Visualization / export: build Xacro/URDF with the selected sensors and inspect in Gazebo / RViz.