
MOBILE ROBOT FOR BOTH INDOOR AND OUTDOOR USE
Funded by IEDC (Innovation and Entrepreneurship Development Centre)

Objective: To build a mobile robot that can be used both indoors and outdoors to perform autonomous navigation and mapping.

Motivation: There is a rapid increase in the demand for mobile robots, for both industrial and personal use, as the need for autonomous navigation rises. Industries are moving towards fully autonomous robots, since robots that rely on marker-based localisation need the environment to be fitted with markers beforehand. The mobile robots already on the market are costly and lack the adaptability needed for both indoor and outdoor use, and many commercial mobile robots do not support open-source code, so functionality that is already freely available cannot be reused.
The mobile robot can be divided into three parts:
1. Hardware components.
2. Software packages.
3. Working algorithm.

1. Hardware Components:


1.1. Arduino: This microcontroller reads the wheel encoders for odometry and converts the required RPM of each motor into a corresponding PWM signal.
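As a rough illustration of this conversion, the sketch below maps a required wheel speed in RPM to a PWM duty cycle. The pin numbers and the simple linear feedforward mapping are assumptions made for the sketch, not the project's actual calibration.

    // Convert a required wheel speed (RPM) into a PWM duty cycle.
    // MOTOR_PWM_PIN, MOTOR_DIR_PIN and the linear mapping are assumed
    // values for illustration; a real robot would calibrate per motor.
    const int MOTOR_PWM_PIN = 5;
    const int MOTOR_DIR_PIN = 4;
    const float MAX_RPM = 300.0;  // the drive motors are rated at 300 RPM

    void setMotorRpm(float requiredRpm) {
      // Direction comes from the sign of the requested speed.
      digitalWrite(MOTOR_DIR_PIN, requiredRpm >= 0 ? HIGH : LOW);
      // Linear feedforward: scale |RPM| into the 0-255 PWM range.
      float duty = fabs(requiredRpm) / MAX_RPM * 255.0;
      analogWrite(MOTOR_PWM_PIN, (int)constrain(duty, 0, 255));
    }

    void setup() {
      pinMode(MOTOR_PWM_PIN, OUTPUT);
      pinMode(MOTOR_DIR_PIN, OUTPUT);
    }

    void loop() {
      setMotorRpm(150.0);  // e.g. run the wheel at half of full speed
    }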
1.2. Motor driver: Since the current supplied by the Arduino is not sufficient to drive the 300 RPM motors, a motor driver was used so that the motors run smoothly.
1.3. Gyro sensor: This was used to measure the angular velocity of the robot, so as to compensate for the variance between the wheels in the odometry estimate.
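For instance, the yaw rate reported by the gyro can be integrated to track the robot's heading instead of relying only on the difference between the wheel speeds. The sketch below shows the idea; readGyroZ() is a hypothetical stand-in for the real gyro driver.

    // Estimate heading (yaw) by integrating the gyro's z-axis rate.
    // readGyroZ() is a hypothetical stand-in for the real gyro driver.
    float readGyroZ() { return 0.0; }  // placeholder: yaw rate in rad/s

    float theta = 0.0;         // heading estimate in radians
    unsigned long lastUs = 0;

    void setup() { lastUs = micros(); }

    void loop() {
      unsigned long nowUs = micros();
      float dt = (nowUs - lastUs) * 1e-6;  // elapsed time in seconds
      lastUs = nowUs;
      // A gyro-based heading drifts slowly, but it is immune to wheel
      // slip and to small differences between the two wheels.
      theta += readGyroZ() * dt;
    }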
1.4. Motor and encoder: 300 RPM motors were used, with an encoder attached to each wheel. The main function of the encoder is to track the number of revolutions each wheel has made.
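A minimal sketch of how the encoder ticks are typically counted on the Arduino is shown below; the pin number and the ticks-per-revolution constant are assumed values for illustration.

    // Count encoder ticks in an interrupt and convert them to wheel
    // revolutions. ENCODER_PIN and TICKS_PER_REV are assumed values.
    const int ENCODER_PIN = 2;        // must be an interrupt-capable pin
    const long TICKS_PER_REV = 540;   // encoder ticks per wheel revolution

    volatile long tickCount = 0;

    void onEncoderTick() { tickCount++; }

    void setup() {
      pinMode(ENCODER_PIN, INPUT_PULLUP);
      attachInterrupt(digitalPinToInterrupt(ENCODER_PIN), onEncoderTick, RISING);
      Serial.begin(57600);
    }

    void loop() {
      // Revolutions made so far; differentiating this over time gives
      // the wheel speed, and hence the odometry.
      Serial.println((float)tickCount / TICKS_PER_REV);
      delay(100);
    }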
1.5. Kinect: This RGB-D sensor captures depth images of the surroundings, which are later converted into 2D laser scans.
1.6. Laptop: This is used for tele-operating the mobile robot, besides running ROS and enabling communication between the nodes.
1.7. LiPo battery: This is used as the power source for the robot.
1.8. Voltage regulator: This was used to step down the 24 V supplied by the LiPo battery to the 12 V required by the Kinect.

2. Software Packages:

2.1. Robot Operating System (ROS): This is the meta-operating system over which the communication between the sensors, the actuators and the planning nodes takes place.
2.2. rosserial: rosserial is a protocol for wrapping standard ROS serialized messages and multiplexing multiple topics and services over a character device such as a serial port or network socket.
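On this robot, rosserial is what lets the Arduino publish its encoder counts to the laptop. Below is a minimal sketch of a rosserial_arduino node publishing one wheel's tick count; the topic name lwheel_ticks is an assumption for illustration.

    // Publish an encoder tick count from the Arduino over rosserial.
    // The topic name "lwheel_ticks" is an assumed name for illustration.
    #include <ros.h>
    #include <std_msgs/Int32.h>

    ros::NodeHandle nh;
    std_msgs::Int32 ticksMsg;
    ros::Publisher ticksPub("lwheel_ticks", &ticksMsg);

    volatile long tickCount = 0;  // incremented by the encoder interrupt

    void setup() {
      nh.initNode();
      nh.advertise(ticksPub);
    }

    void loop() {
      ticksMsg.data = tickCount;
      ticksPub.publish(&ticksMsg);
      nh.spinOnce();  // let rosserial service the serial link
      delay(50);      // publish at roughly 20 Hz
    }

On the laptop side, the serial_node.py script from rosserial_python bridges the serial port into regular ROS topics.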
2.3. depthimage_to_laserscan: This package takes a depth image and generates a 2D laser scan based on the provided parameters.
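The geometry behind the conversion can be sketched in a few lines: for one row of the depth image, each pixel's depth is turned into a range and a bearing using the camera intrinsics. The snippet below is only a sketch of that idea, assuming pinhole intrinsics fx and cx, not the package's actual implementation.

    #include <cmath>
    #include <vector>

    // Sketch: turn one row of a depth image into laser-scan ranges.
    // fx (focal length in pixels) and cx (principal point) are assumed
    // pinhole-camera intrinsics; depthRow holds metric depths.
    std::vector<float> rowToRanges(const std::vector<float>& depthRow,
                                   float fx, float cx) {
      std::vector<float> ranges(depthRow.size());
      for (size_t u = 0; u < depthRow.size(); ++u) {
        float z = depthRow[u];                 // depth along the optical axis
        float x = ((float)u - cx) * z / fx;    // lateral offset of the 3D point
        // The scan "range" is the planar distance to the point; its
        // bearing would be atan2(x, z).
        ranges[u] = std::sqrt(x * x + z * z);
      }
      return ranges;
    }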

2.4. Gmapping: The gmapping package provides laser-based SLAM (Simultaneous Localization and Mapping). Using gmapping, you can create a 2-D occupancy grid map from laser and pose data collected by a mobile robot.
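Besides the scan, gmapping needs the odom → base_link transform, so the wheel odometry has to be broadcast on TF. A minimal roscpp sketch of that broadcaster is shown below; x, y and theta stand for the pose integrated from the encoders and are placeholders here.

    #include <ros/ros.h>
    #include <tf/transform_broadcaster.h>

    // Broadcast the odom -> base_link transform that gmapping expects.
    // x, y and theta would come from integrating the wheel encoders;
    // here they are placeholders for illustration.
    int main(int argc, char** argv) {
      ros::init(argc, argv, "odom_tf_broadcaster");
      ros::NodeHandle nh;
      tf::TransformBroadcaster br;
      double x = 0.0, y = 0.0, theta = 0.0;

      ros::Rate rate(20);  // broadcast at 20 Hz
      while (ros::ok()) {
        tf::Transform t;
        t.setOrigin(tf::Vector3(x, y, 0.0));
        tf::Quaternion q;
        q.setRPY(0.0, 0.0, theta);
        t.setRotation(q);
        br.sendTransform(
            tf::StampedTransform(t, ros::Time::now(), "odom", "base_link"));
        rate.sleep();
      }
      return 0;
    }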
2.5. Local planner: The base_local_planner package provides a controller that drives a mobile base in the plane. This controller serves to connect the path planner to the robot. Using a map, the planner creates a kinematic trajectory for the robot to get from a start to a goal location. Along the way, the planner creates a value function, represented as a grid map. This value function encodes the costs of traversing through the grid cells. The controller's job is to use this value function to determine dx, dy, dtheta velocities to send to the robot.
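The controller publishes these velocities as a geometry_msgs/Twist message on the cmd_vel topic; for a differential-drive base like this one they must then be split into left and right wheel speeds. The sketch below shows that conversion; the wheel separation and wheel radius are assumed example values, not the robot's measured dimensions.

    #include <cmath>
    #include <ros/ros.h>
    #include <geometry_msgs/Twist.h>

    // Split the planner's cmd_vel into per-wheel speeds for a
    // differential-drive base. WHEEL_SEPARATION and WHEEL_RADIUS are
    // assumed example values.
    const double WHEEL_SEPARATION = 0.30;  // metres between the wheels
    const double WHEEL_RADIUS = 0.05;      // wheel radius in metres

    void cmdVelCallback(const geometry_msgs::Twist& cmd) {
      double v = cmd.linear.x;   // forward velocity (m/s), i.e. dx
      double w = cmd.angular.z;  // yaw rate (rad/s), i.e. dtheta
      // Standard differential-drive kinematics.
      double vLeft  = v - w * WHEEL_SEPARATION / 2.0;
      double vRight = v + w * WHEEL_SEPARATION / 2.0;
      // Wheel speeds in RPM, which the Arduino turns into PWM.
      double rpmLeft  = vLeft  / (2.0 * M_PI * WHEEL_RADIUS) * 60.0;
      double rpmRight = vRight / (2.0 * M_PI * WHEEL_RADIUS) * 60.0;
      ROS_INFO("left: %.1f RPM, right: %.1f RPM", rpmLeft, rpmRight);
    }

    int main(int argc, char** argv) {
      ros::init(argc, argv, "cmd_vel_to_wheels");
      ros::NodeHandle nh;
      ros::Subscriber sub = nh.subscribe("cmd_vel", 10, cmdVelCallback);
      ros::spin();
      return 0;
    }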
2.6. Global planner: This package provides an implementation of a fast, interpolated global planner for navigation.
2.7. AMCL: amcl takes in a laser-based map, laser scans and transform messages, and outputs pose estimates.

3. Working Algorithm:
Gmapping needs laser scan data and wheel odometry data to develop the map. The Kinect does not publish laser scan data directly; it publishes raw depth images of the surroundings, which need to be converted into laser scan data. The depthimage_to_laserscan node subscribes to the depth image published by OpenNI and publishes the LaserScan data. The odometry data of the wheels is published from the Arduino through rosserial. From these two sources of data, the map can be generated, and the map can then be used for autonomous navigation.
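Once the map is available, a navigation goal can be sent through move_base, assuming the standard setup in which move_base ties the global planner, the local planner and AMCL together. A minimal roscpp sketch of sending one goal is shown below; the goal coordinates are placeholders.

    #include <ros/ros.h>
    #include <actionlib/client/simple_action_client.h>
    #include <move_base_msgs/MoveBaseAction.h>

    // Send a single navigation goal, expressed in the map frame, to
    // move_base. The goal coordinates are placeholders for illustration.
    int main(int argc, char** argv) {
      ros::init(argc, argv, "send_goal");
      actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction>
          client("move_base", true);
      client.waitForServer();  // block until move_base is up

      move_base_msgs::MoveBaseGoal goal;
      goal.target_pose.header.frame_id = "map";
      goal.target_pose.header.stamp = ros::Time::now();
      goal.target_pose.pose.position.x = 1.0;  // placeholder goal pose
      goal.target_pose.pose.orientation.w = 1.0;

      client.sendGoal(goal);
      client.waitForResult();
      if (client.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
        ROS_INFO("Goal reached");
      return 0;
    }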
Results:


Maps generated using gmapping


Actual Area Mapped
