
| Deliverable | Due Date |
| --- | --- |
| Part A individual submission (gradescope) | Friday, March 20th at 11:59PM EST |
| Part B team submission (gradescope) | Monday, April 6th at 11:59PM EST |
| OPTIONAL: Parts D and E | Monday, April 6th at 11:59PM EST |
| Pushed code | Monday, April 6th at 11:59PM EST |
| Briefing (8 min presentation + 3 min Q&A) | Monday, April 6th at 1:00PM EST |
| Report (on team github pages website) | Friday, April 10th at 11:59PM EST |
| Checkoff | Wednesday, April 8th at 7:00PM EST |
| Team Member Assessment | Wednesday, April 8th at 11:59PM EST |

Note: We will assume the lowest score if the team member assessment form is not submitted on time

Note: Late checkoffs will incur a percentage penalty to your participation grade. Missed checkoffs will result in a 1% deduction from your overall grade.

Notebook with detailed instructions for each module: README.ipynb

Lab 5: Monte Carlo Localization (MCL)

Introduction

Determining a robot’s orientation and position in a known environment, also known as localization, is a critical problem in the field of robotics. As is common in robotics, this seemingly simple problem is surprisingly difficult, and remains an active research area. In this lab, you will solve robotic localization by implementing Monte Carlo Localization (aka MCL or particle filter).

Localization is one major aspect of navigation; the next lab will serve as an extension of this one and cover the other major aspect, path planning. This is a challenging lab and we'd recommend starting early and moving fast, as your code/analysis for this lab will lay the groundwork for the next one.

This lab consists of a number of modules, some required and some extra credit, some individual and some with your team - for details on submission, grading, and deliverables, see the next section.

Submission and Grading

This section describes the grading scheme for Lab 5, while the instructions to get started on the lab are available in the instructions notebook. The deliverables for this lab consist of five parts (two of which are optional) for a total of 10 points, with 2 possible extra credit points. Parts A and D must be submitted individually, while the rest are teamwork. Parts A, B, and C are required and will be graded out of 10 points. Parts D and E are optional; 2 extra credit points are possible, for a maximum grade of 12/10. Note that part A is due sooner than the others to encourage getting an early start on it - see the deliverables table at the top of this handout for all deadlines. In addition, due to the individual nature of parts A and D, overall lab grades may differ across teammates.

There will also be a report and briefing for this lab, and each teammate must submit the team member assessment form. The lab 5 report will serve as a basis for your next lab, and you will be expected to revise and update sections in the future when writing the lab 6 report.

This grade out of 10 points is then combined with the report and briefing grades (each also out of 10 points - same rubrics used as in the previous labs for reports and briefings). The grades will be weighted according to the table below for an overall lab grade out of 10 points.

| Deliverable | Grade Weighting |
| --- | --- |
| briefing grade (out of 10) | 20% |
| report grade (out of 10) | 40% |
| grade for parts A-E (out of 10, up to 12/10 with extra credit) | 40% |
  • Part A - (Writing Assignment, 3 pts) Understand the motion and sensor models.
  • Part B - (Programming Assignment, 4 pts) Develop and test the particle filter algorithm in the 2D racecar simulation environment.
  • Part C - (Localization, 3 pts) Adapt your solution from part B to work in your car and conduct experimental analysis for your report and briefing.
  • Part D - (OPTIONAL: Extra Credit, 1 pt) Derive the Bayes' Filter presented in Lecture 10.
  • Part E - (OPTIONAL: Extra Credit, 1 pt) From localization to SLAM: exploring SLAM and visualizing with Foxglove!

Initial Setup

In order to build this package, you will need a few dependencies that are not already installed on the car. Note that this does not affect your work in the simulator.

Please pull the new docker image using the command sudo docker pull staffmitrss/racecar2026.

Part A: Grading for writing assignment (3 points) - INDIVIDUAL EFFORT, REQUIRED

Submit your answers individually to the writing assignment on gradescope, preferably LaTeXed. You must show work (derivations, formulas used, etc.) to receive full credit. You may collaborate on problems but you cannot simply share answers - please note collaborators in your submission. You must write up your solutions independently. The 3 points of part A will be assigned based on the submitted written exercises:

  • 1 point for part (i) in question 1
  • 1 point for part (ii) in question 1
  • 1 point for question 2

Submit your numeric answers along with your justifications to the gradescope assignment Lab 5 Part A: Individual Submission. You may check your numeric results by putting your answers in /autograder/solutions_go_here.py and uploading to the gradescope autograder Lab 5 Part A: OPTIONAL, but your grade will be based only on your explanations and answers submitted for the written portion. Writing only the final answer will not earn full credit, even if it is correct. These questions will help you understand the algorithm before diving into coding.

Part B: Grading for simulation implementation (4 points) - TEAMWORK, REQUIRED

Implement MCL in the simulator. Augment the simulated odometry data with various types of noise and compare how your solution performs against the ground truth odometry. Points will be assigned based on completion and the convergence of your solution in the 2D racecar simulation environment.

You should submit your implementation to the Lab 5 Part B: Localization in Simulation assignment on gradescope as a zip of your localization package. We will expect to see your implementation in simulation during checkoffs.

We have provided a few unit tests for you to test your sensor model and motion model. To run these tests, do:

# ====== motion model ======
ros2 launch localization motion_model_test.launch.py
# ==========================
# ====== sensor model ======
ros2 launch localization sensor_model_test.launch.py
# this will wait for you to run test_map.launch.xml in another terminal
ros2 launch localization test_map.launch.xml
# ==========================

If your code errors out or fails, the console will indicate that. Otherwise, you should see a few messages indicating that the tests have passed, and the script will exit. If you are curious about what might have gone wrong, please inspect the localization/test/*.py files.

A couple notes about the tests, should you wish to use them:

  • In the motion model test, we assume a deterministic motion model to keep things simple. For this reason, please have your motion model's behavior be controlled by a self.deterministic field. If self.deterministic = True, the motion model's evaluate should not add noise to the odometry. If self.deterministic = False, the motion model should add noise to the odometry (needed for localization). A minimal sketch of this pattern appears after these notes.

  • You may notice that the sensor_model precompute test is difficult to debug. To make this process easier, we have included a file assets/debug_precomputed_table.pkl containing the precomputed tables you should get when one of alpha_hit, alpha_rand, alpha_max, alpha_short is set to 1 and the others to 0. For example, if you load the dict into the variable results_each, results_each['hit'] gives the table for alpha_hit = 1 and alpha_rand = alpha_short = alpha_max = 0.

  • Please note that for the 'short' component, the first column may be nan if you are trying to normalize the columns. As np.nan != np.nan, we followed the convention that 0/0 = 0. This will not be an issue in practice, since you won't have alpha_short = 1.
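
The sketch below shows one way to honor the self.deterministic switch described above. It is only an illustrative sketch: the noise scales, array shapes, and helper names are assumptions, not a required interface beyond the self.deterministic flag itself.

```python
import numpy as np

class MotionModel:
    """Illustrative motion model honoring the deterministic switch used by the tests."""

    def __init__(self):
        # The unit test flips this to True so that no noise is injected.
        self.deterministic = False

    def evaluate(self, particles, odometry):
        """particles: (N, 3) array of [x, y, theta]; odometry: [dx, dy, dtheta]
        expressed in the robot frame. Returns the propagated (N, 3) array."""
        dx, dy, dtheta = odometry
        if not self.deterministic:
            # Hypothetical noise scales -- tune these for your own car.
            n = len(particles)
            dx = dx + np.random.normal(0.0, 0.05, size=n)
            dy = dy + np.random.normal(0.0, 0.05, size=n)
            dtheta = dtheta + np.random.normal(0.0, 0.02, size=n)
        cos_t = np.cos(particles[:, 2])
        sin_t = np.sin(particles[:, 2])
        # Rotate the body-frame odometry delta into each particle's map frame.
        particles[:, 0] += cos_t * dx - sin_t * dy
        particles[:, 1] += sin_t * dx + cos_t * dy
        particles[:, 2] += dtheta
        return particles
```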

Part C: Grading for localization in ROBOT (3 points) - TEAMWORK, REQUIRED

For this part you will need to adapt your MCL implementation from part B to work in your car, and conduct experimental analysis of your algorithm's performance for your report and briefing. See part C of the instructions notebook for more details on how to adapt your code to run in your car.

In your report and briefing, make sure to provide:

  • Numerical evidence that your algorithm is working in the form of charts / data
    • Convergence rates, cross track error, etc.
    • Show how robust your solution is in the presence of noisy odometry, using ground truth odometry for comparison (see the short metrics sketch after this list).
  • An illustrative video of your particle filter working, overlaying
    • Visualization of inferred position
    • Visualization of the particle distribution
    • The known map
    • Laser scan data in the coordinate frame of your inferred position (it should align fairly well with the walls in the known map)
    • Any other byproducts of your algorithm which you find worthy of visualization.
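
As a concrete (and entirely optional) example of producing the numerical evidence above: if you log estimated and ground-truth positions at matching timestamps, a per-timestep position error plot takes only a few lines of numpy. All of the names and the stand-in data below are placeholders for whatever logging you set up.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data: (T,) timestamps plus (T, 2) estimated and ground-truth
# positions. In practice these would come from your own rosbag or logging.
times = np.linspace(0.0, 30.0, 300)
gt_xy = np.column_stack([np.cos(times), np.sin(times)])
est_xy = gt_xy + np.random.normal(0.0, 0.05, size=gt_xy.shape)

# Per-timestep Euclidean position error between the estimate and ground truth.
error = np.linalg.norm(est_xy - gt_xy, axis=1)

plt.plot(times, error)
plt.xlabel("time [s]")
plt.ylabel("position error [m]")
plt.title("Estimated pose vs. ground truth odometry")
plt.show()
```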

Part B & C: Tips and Tricks

As the algorithm must run in real time with a large number of particles, an efficient implementation is a requirement for success. There are a few tricks you can use (a short numpy sketch follows this list), primarily:

  • Use numpy arrays for absolutely everything
    • Use numpy functions on numpy arrays to do any computations.
    • Avoid Python for loops like the plague.
    • Slice indexing is your (best) friend.
    • Cache and reuse important numpy arrays by setting them to the right size during initialization of your particle filter as “self” variables.
  • Downsample your laser scan: your lidar has > 1000 beams but many of them are redundant. Downsample to ~100 for good performance (you can try lower as well). This will make the probability distribution over your state space less "peaked" and increase the number of particles you can maintain in real time.
  • "Squash" your sensor model output probability by raising it to a power of less than one (1/3 for example) to make your distribution even less peaked. If you are confused by this paragraph, look at [4,5]
  • Start with ~200 particles; don't go crazy with particles. You can probably get your code running with thousands of particles but it will take some well crafted code to run in real time.
  • Your sensor model and motion model don't need to run at the same rate! The motion model is probably much faster, and over short periods of time it will accurately track the motion of the car. The sensor model can correct the drift of the motion model at a slower rate if necessary.
  • Use ros2 topic hz to check the rate at which you are publishing the expected transformation from the map to the car's position. It should be greater than 20 Hz for realtime performance.
  • Use the smallest number of operations required to perform your arithmetic, avoid unnecessary memory allocations, and avoid excessive function calls
  • Identify your critical code paths, and keep them clean. Conversely, don’t worry too much about code that is called infrequently.
  • Don't publish visualization messages unless someone is subscribed to those topics; doing so can slow your system down.
  • Use a profiler to identify good candidates for optimization, but also try a teammate's computer; some computers are just slower.
  • On the real car, make sure your Jetson is running in Max-N mode for best performance
  • If you want an even faster (albeit more complicated to interface with) ray tracer, check out range_libc. This was written by RSS TA Corey Walsh and it is heavily optimized.
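
To make a few of the tips above concrete, here is a minimal numpy sketch of downsampling by slicing, probability "squashing", and publishing only when someone is subscribed. The variable and helper names are purely illustrative, not a required interface.

```python
import numpy as np

# 1) Downsample the laser scan with slice indexing (no Python loops).
ranges = np.random.uniform(0.1, 10.0, size=1081)  # stand-in for scan_msg.ranges
num_beams_desired = 100
step = max(1, len(ranges) // num_beams_desired)
downsampled = ranges[::step]                       # keep every `step`-th beam

# 2) "Squash" the sensor model output by raising probabilities to a power < 1.
#    prod(p) ** (1/3) == prod(p ** (1/3)); applying the power per beam before
#    taking the product is friendlier to floating point than exponentiating
#    one tiny product.
per_beam_probs = np.random.uniform(0.01, 0.2, size=(200, len(downsampled)))
weights = np.prod(per_beam_probs ** (1.0 / 3.0), axis=1)
weights /= np.sum(weights)                         # normalize into a distribution

# 3) Inside your node, skip visualization work when nobody is listening, e.g.:
#    if self.particle_pub.get_subscription_count() > 0:
#        self.particle_pub.publish(make_particle_marker_array(particles))
```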

Part D: Grading for the Bayes' filter derivation (1 bonus point) - INDIVIDUAL EFFORT, OPTIONAL EXTRA-CREDIT

Derive the form of the Bayes' Filter presented in Lecture 10. Submit as a typed PDF uploaded to the Lab 5 Part D: OPTIONAL gradescope assignment.

Part E: Grading for SLAM (1 bonus point) - TEAMWORK, OPTIONAL EXTRA-CREDIT (This part is very cool so try to have everyone onboard with what's happening here.)

Running this part requires an installation of the ROS rtabmap package (installed with apt install ros-humble-rtabmap-ros) and the Foxglove bridge package (apt install ros-humble-foxglove-bridge). Luckily we have already done this for you, but keep this in mind in case you ever want to deploy this on a different machine or robot.

Assignment Goal: To earn the points for this part, we would like to see three things:

  • A video recording of your localization module from part C running side by side with a video of the car navigating in real life.
  • A screen recording of movement and mapping occurring in the Foxglove visualization platform, including a loop closure optimization happening (does not have to be real-time).
  • A brief write-up of how a visual SLAM localization method differs from your Monte Carlo Localization.

You can try using the non-mapping mode to see it in action, but it's not necessary.

Step 1: Getting familiar with the Foxglove viewer (it's like Rviz but better and has nicer Colors)

The reason we are choosing not to use RViz for this part is that, with our current setup, RViz renders on the car, which consumes a lot of resources; we want to dedicate as much of the racecar computer's resources as possible to the SLAM package and the ZED camera.

TBW (To be written)

Perfect, now that you are familiar with this visualization software, let's move on

Step 2: Running SLAM and saving a map!

Check out what RTABMAP SLAM is all about at this link and then follow these instructions:

TBD (To be done... soon)

  • Enable the saved layout, or manually make a layout, for mapping visualization purposes in Foxglove
  • You will need to have these 3 processes running before you begin: ZED camera, teleop, and the foxglove bridge.
  • Now you are ready to start
  • Run the provided launch file. Feel free to experiment with parameters (if you can find and work through the documentation LOL good luck) hint: source code >:) or claude code >:(
  • Once you feel your map is complete (loop closures done, looks complete with no extreme distortions), save your map with the provided command.
    • For this part, don't close anything until you save your map this way. The map and data you collected with SLAM will still live on in a .db file, but for our purposes we can quickly save the 2D map that you see on your screen.
  • Now you can view it as a png, and it also comes with a yaml file; note where you saved these.
ros2 run nav2_map_server map_saver_cli -f my_map --ros-args -r /map:=/rtabmap/grid_map

Step 3: Part C...2... (Code Geass reference?)

  • Now modify the params files of the racecar simulator and launch it with the map you just created!
  • Now film a video demonstrating your localization solution (for the hardware) on your newly created map alongside a video of the robot moving in the space you mapped.
  • Submit this on gradescope

Lab Modules

The instructions to get started with Lab 5 Parts A-C are available in the instructions notebook.
