ROS Navigation from Scratch

Description

This repository contains files that implement odometry and EKF SLAM for a differential drive robot, as well as various supporting libraries and testing nodes. Currently, there is no path planning implementation. The repository also contains files to run everything on the TurtleBot3 Burger.

Summary Videos:

Packages

Here is a high-level description of each package; more details on the nodes and libraries can be found in the API.

Select libraries and functions also have accompanying test files using gtest and rostest.

How to use this repo:

Most likely, the entirety of this repo will not be plug and play, since most of the real-world implementations are configured for our lab's specific TurtleBots with stripped-down firmware targeted at this project. However, everything should work out of the box if you use the simulation options instead of the real-world options.

1) Get all of the necessary files.

Use the nuturtle.rosinstall file to clone this repo as well as a peripheral one that contains some custom messages.

2) Launch something!

The main launch files:

Parameters:

nuturtle_robot/follow_waypoints.launch: This file will run a waypoint-following script that uses only odometry to estimate the robot's pose as it follows a list of waypoints, and compares that pose to the 'perfect' robot (nodes in the fake namespace). Once launched, call the /start service to begin sending velocity commands. Once the path has been completed, call /start again to complete another loop. Currently, only proportional control is used to follow the waypoints.
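The proportional control mentioned above can be sketched as follows. This is a minimal illustration, not the actual node; the gains, the 0.2 rad turn-in-place threshold, and the function name are assumed values for demonstration only.

```python
import math

def prop_control(pose, waypoint, kp_lin=0.5, kp_ang=1.0):
    """Proportional controller: returns (v, w) commands toward a waypoint.
    pose = (x, y, theta); waypoint = (wx, wy). The gains kp_lin/kp_ang are
    illustrative, not the values used in follow_waypoints.launch."""
    x, y, theta = pose
    wx, wy = waypoint
    # Heading error, wrapped to [-pi, pi]
    heading = math.atan2(wy - y, wx - x)
    ang_err = math.atan2(math.sin(heading - theta), math.cos(heading - theta))
    dist = math.hypot(wx - x, wy - y)
    # Turn in place when far off heading, otherwise drive toward the waypoint
    if abs(ang_err) > 0.2:
        return 0.0, kp_ang * ang_err
    return kp_lin * dist, kp_ang * ang_err
```

Each command is simply the error (distance or heading) scaled by a gain, which is why the robot slows as it approaches each waypoint.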

To use this file with the simulated robot, just launch nuturtle_gazebo/gazebo_waypoints.launch. No need to pass any arguments.

Parameters:

The other launch files:

nuslam:

nuturtle_gazebo:

nuturtle_robot:

tsim:

Under the hood:

All of the odometry calculations are built on the conversions from the desired body velocity to the individual wheel velocity commands that are actually sent to the robot. The derivation for this can be found here in the rigid2d package.
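The conversion in both directions can be sketched with standard differential drive kinematics. The default geometry values below approximate the TurtleBot3 Burger (wheel radius 0.033 m, wheel separation 0.160 m) but are assumptions here, not values read from this repo's config:

```python
def twist_to_wheels(v, w, wheel_radius=0.033, wheel_sep=0.160):
    """Convert a body twist (v [m/s], w [rad/s]) into left/right wheel
    angular velocities [rad/s] for a differential drive robot."""
    d = wheel_sep / 2.0
    ul = (v - w * d) / wheel_radius  # left wheel slows during a left turn
    ur = (v + w * d) / wheel_radius
    return ul, ur

def wheels_to_twist(ul, ur, wheel_radius=0.033, wheel_sep=0.160):
    """Inverse map used by odometry: recover the body twist from
    measured wheel angular velocities."""
    v = wheel_radius * (ur + ul) / 2.0
    w = wheel_radius * (ur - ul) / wheel_sep
    return v, w
```

Odometry integrates the twist recovered by the second function from encoder readings, so the two maps must be exact inverses of each other.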

This SLAM implementation is using an EKF to perform the pose estimation for the robot and each landmark. Here is a detailed resource for practically implementing the EKF.
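The predict/update cycle at the core of the EKF can be sketched generically. This is a textbook form with hypothetical function names, not the repo's implementation; in the SLAM case the state vector stacks the robot pose with every landmark position.

```python
import numpy as np

def ekf_predict(mu, Sigma, g, G, Q):
    """Prediction step: push the mean through the motion model g and the
    covariance through its Jacobian G, inflating by process noise Q."""
    mu_bar = g(mu)
    Sigma_bar = G @ Sigma @ G.T + Q
    return mu_bar, Sigma_bar

def ekf_update(mu_bar, Sigma_bar, z, h, H, R):
    """Measurement update: correct the prediction using measurement z,
    measurement model h, its Jacobian H, and sensor noise R."""
    S = H @ Sigma_bar @ H.T + R              # innovation covariance
    K = Sigma_bar @ H.T @ np.linalg.inv(S)   # Kalman gain
    mu = mu_bar + K @ (z - h(mu_bar))
    Sigma = (np.eye(len(mu_bar)) - K @ H) @ Sigma_bar
    return mu, Sigma
```

For a static landmark the motion model is the identity, so only the robot pose entries change during prediction while every landmark's covariance still grows through correlations with the pose.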

Simulation results using the groundtruth data from gazebo:

GT_data.gif
gt_data_slam

Since there is no noise in the groundtruth data, the landmark position estimates stay virtually still. This results in a near-perfect robot pose estimate from the EKF SLAM algorithm.

Simulation results using the laser scan data from the simulated sensor:

RSim_data.gif
sim_data_slam

Due to sensor noise, the landmark detections now experience variance in the data fed to the SLAM measurement update. Also, the laser scan data is not adjusted for the robot's movement while the scan is taking place, which also contributes to the shifting of the map.

This implementation has the constraint that all of the landmarks it expects to see are cylindrical pillars of a uniform radius. The landmarks are identified using laser scan data reported by the simulated/real robot. First, the laser scan data is divided into clusters based on the range values reported by the scanner. If a cluster has more than 3 data points, it is then processed using a circle-fitting algorithm based on this practical guide to identify the center and estimated radius. For more information on the circle fitting, see this paper and related website. After fitting the circle, any fit with a radius greater than the threshold parameter is discarded. Initially, a classification algorithm based on this paper was also implemented, but it yielded worse results than screening by radius in this application, since the approximate size of each landmark is known. A more advanced classification scheme would be more useful when running the robot in the real world, as seen by all of the false-positive readings in the summary video.
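The fitting step can be illustrated with a simple algebraic least-squares (Kåsa) circle fit. This is a stand-in for demonstration, not necessarily the exact algorithm the repo's guide describes; the radius screen shown mirrors the threshold check above with an assumed threshold value.

```python
import numpy as np

def fit_circle(points):
    """Kasa algebraic circle fit: returns (cx, cy, r) for a set of 2D points.
    Solves the linear system  x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones(len(pts))])
    b = x**2 + y**2
    c, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c[0] / 2.0, c[1] / 2.0
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return cx, cy, r

def screen_by_radius(clusters, max_radius=0.1):
    """Fit each cluster and keep only circles small enough to be a pillar.
    max_radius is an assumed threshold, not the repo's parameter value."""
    fits = [fit_circle(cl) for cl in clusters if len(cl) > 3]
    return [f for f in fits if f[2] <= max_radius]
```

Because a laser only sees an arc of each pillar, the fit must work from partial circles, which is exactly what the least-squares formulation handles.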

landmark_fitting.gif
landmark_fitting

In order to associate incoming data with the current estimates of the landmark states, the Mahalanobis distance is used. While this method is more complex than just comparing the physical distance, it has the advantage of taking into account the covariance of the estimated pose. See this resource for how to implement this type of data association. If the distance between a data point and an estimated landmark is under a minimum threshold, it is considered a match to that existing landmark. If the distance between a data point and all estimated landmarks is greater than a maximum threshold, it is considered a new landmark. These parameters will likely need tuning based on the environment the robot is operating in to yield optimal results.
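The two-threshold rule above can be sketched as follows. The function names and the threshold values are placeholders, not the repo's parameters; measurements between the two thresholds are left ambiguous and ignored, one common way to handle that middle band.

```python
import numpy as np

def mahalanobis_sq(z, z_hat, S):
    """Squared Mahalanobis distance between a measurement z and a predicted
    landmark measurement z_hat with innovation covariance S."""
    d = z - z_hat
    return float(d @ np.linalg.solve(S, d))

def associate(z, predictions, d_min=2.0, d_max=9.0):
    """predictions is a list of (z_hat, S) pairs, one per known landmark.
    Returns the matched landmark index, 'new' if z is far from everything,
    or None when the measurement is ambiguous."""
    if not predictions:
        return "new"
    dists = [mahalanobis_sq(z, zh, S) for zh, S in predictions]
    best = int(np.argmin(dists))
    if dists[best] < d_min:
        return best          # close enough: update this landmark
    if dists[best] > d_max:
        return "new"         # far from all landmarks: initialize a new one
    return None              # in between: too risky to decide
```

Scaling the innovation by S is what lets an uncertain landmark "reach further" to claim a measurement than a well-localized one at the same physical distance.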

Future Development