
 PROJECTS

1. S.L.A.M


(Simultaneous Localization And Mapping)

        Our objective is to develop a mobile robot that can perform S.L.A.M in an unknown environment, produce a map of that environment, and use path-planning algorithms to navigate autonomously.

               Robots need a sense of their location in an environment to navigate autonomously. Take the example of a robot serving as a waiter in a hotel: it needs a map of the hotel to determine its path, and it needs to know its own position on that map. If a robot is deployed in an unknown environment, it must construct the map and localize itself simultaneously while navigating, just as human beings do. In this project we use a Kinect as the depth sensor along with wheel odometry from an iRobot Roomba. Subscribing to and publishing these data is handled by R.O.S (Robot Operating System).
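Wheel odometry supplies the motion-model half of SLAM: between sensor updates, the robot dead-reckons its pose from how far each wheel has turned. A minimal sketch for a differential-drive base like the Roomba (the function name and wheel-base value are illustrative, not taken from our code):

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckon a differential-drive pose from wheel odometry.

    d_left / d_right are the distances each wheel travelled since the
    last update; wheel_base is the distance between the two wheels.
    """
    d_center = (d_left + d_right) / 2.0        # forward motion of the robot
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Integrate along the average heading over the step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta

# Both wheels moving 1 m drives straight: pose becomes (1.0, 0.0, 0.0).
pose = update_pose(0.0, 0.0, 0.0, 1.0, 1.0, 0.3)
```

In the full system this estimate drifts over time, which is exactly why the Kinect's depth measurements are fused in to correct it.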

2. HUMANOID

      The main aim of this project is to study the dynamics and kinematics of a biped humanoid in different postures and its balancing during walking and running.

      Humanoid robotics is an emerging and challenging research field which has received significant attention in recent years and will continue to play a central role in robotics research and in many applications of the 21st century.

      We are also working on the development of human-shaped robots in order to study their dynamics and movements. We first built a human-shaped robot using servo motors; wood and thin metal sheets were used to make the structure steady and rigid. The robot was given 17 DOF (Degrees Of Freedom).

      To begin with, we are working on its biped movements to check that it can balance properly by itself. A MATLAB model of the humanoid was designed to facilitate simulation and calculations. The robot is controlled by an Arduino Mega; special circuitry was designed for proper voltage regulation, and a cooling unit was attached to cool the processors and regulators.
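Balance checks of this kind ultimately come down to leg kinematics: given the hip and knee servo angles, where is the foot relative to the body? A sketch for a planar two-link leg (the link lengths and angle conventions here are illustrative assumptions, not taken from our MATLAB model):

```python
import math

def foot_position(hip_angle, knee_angle, thigh_len, shin_len):
    """Forward kinematics of a planar 2-link leg (hip + knee).

    Angles are in radians, measured from the downward vertical;
    returns the foot position relative to the hip joint.
    """
    # Position of the knee joint.
    kx = thigh_len * math.sin(hip_angle)
    ky = -thigh_len * math.cos(hip_angle)
    # Position of the foot; the knee angle adds to the hip angle.
    fx = kx + shin_len * math.sin(hip_angle + knee_angle)
    fy = ky - shin_len * math.cos(hip_angle + knee_angle)
    return fx, fy

# A fully straight leg puts the foot directly below the hip.
print(foot_position(0.0, 0.0, 0.10, 0.10))  # -> (0.0, -0.2)
```

Evaluating this for both legs lets a simulation check whether the projected centre of mass stays over the support foot during a step.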


This project is still under development, and a lot of work remains to be done.

Future scope:

  • Exactly replicate human movements

  • Adding Feedback Methods

  • Autonomous Working

3. 3D Scanner

             Our objective is to develop an affordable, high-resolution 3D scanner for personal use.

      A 3D scanner is a device that analyses a real-world object or environment to collect data on its shape and possibly its appearance. The collected data can then be used to construct a digital three-dimensional model. Our 3D scanner works by illuminating an object with a line laser and using triangulation to generate a point cloud at each location where the laser hits the object, which sits on a turntable. Neighbouring points are then connected as triangles to form a 3D model. Our scanner uses the FreeLSS software to collect the data and construct the 3D model.
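The triangulation step can be sketched for a single laser dot. The geometry below assumes a pinhole camera with the laser mounted a known baseline to one side and angled toward the optical axis; the symbols are illustrative, not FreeLSS internals:

```python
import math

def laser_depth(u, f, baseline, laser_angle):
    """Depth of a laser hit by triangulation.

    u: horizontal pixel offset of the laser dot from the image centre,
    f: focal length in pixels,
    baseline: horizontal camera-to-laser distance,
    laser_angle: angle of the laser beam toward the optical axis (rad).
    """
    # The camera ray x = z * u / f meets the laser line
    # x = baseline - z * tan(laser_angle); solve for z.
    return baseline / (u / f + math.tan(laser_angle))

# With f = 1000 px, a 0.2 m baseline, a parallel laser, and a dot
# 100 px off-centre, the surface is 2.0 m away.
depth = laser_depth(100, 1000, 0.2, 0.0)
```

Sweeping the turntable one step at a time and repeating this for every lit pixel in the laser line is what builds up the point cloud.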


      All of the software runs on board the Raspberry Pi, so there are no drivers or software packages to install. A web browser is used to communicate with the scanner over your home network, and once a scan is complete, the resulting models are downloaded through the same browser.


      Our 3D scanner consists of a turntable driven by a stepper motor, one camera module, two laser modules, and the FreeLSS processing software running on a Raspberry Pi.

4. Driverless Car

      Our objective is to develop a scaled-down prototype of a driverless car that can detect the road, follow the path, and also recognize traffic signals.

            The car consists of a Raspberry Pi with a camera and an ultrasonic sensor as inputs. The neural network is trained in OpenCV using the back-propagation method. Once training is done, the weights are saved. To generate predictions, the same neural network is constructed and loaded with the trained XML file. This project adopted the shape-based approach and used Haar feature-based cascade classifiers for object detection. Since each object requires its own classifier and follows the same process in training and detection, this project focused only on stop-sign and traffic-light detection.
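Once the weights are loaded, prediction is just a forward pass through the network. A toy illustration of that forward pass in plain Python (the real project uses OpenCV's neural network with weights from the saved XML file; the layer shapes and sigmoid activation here are assumptions):

```python
import math

def predict(weights, biases, inputs):
    """Forward pass of a tiny fully connected network with sigmoid units.

    weights/biases hold one entry per layer; in the real project these
    values would come from the XML file saved after training.
    """
    activations = inputs
    for W, b in zip(weights, biases):
        activations = [
            # Weighted sum of the previous layer, plus bias, through a sigmoid.
            1.0 / (1.0 + math.exp(-(sum(w * a for w, a in zip(row, activations)) + bb)))
            for row, bb in zip(W, b)
        ]
    return activations

# One layer, one unit, zero weight and bias: sigmoid(0) = 0.5.
out = predict([[[0.0]]], [[0.0]], [1.0])
```

In the car, the output units correspond to steering decisions (e.g. left, right, forward), and the largest activation wins.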

      OpenCV provides both a trainer and a detector.

      For distance measurement, the ultrasonic sensor is used only to determine the distance to an obstacle directly in front of the RC car, and it provides accurate results. The Pi camera, on the other hand, provides “good enough” measurements: as long as we know how the measured value corresponds to the actual distance, we know when to stop the RC car.
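The camera's "good enough" distance comes from the pinhole model: after a one-off calibration against a known distance, similar triangles map an object's apparent pixel height to range. A sketch (the focal length and sign size below are illustrative values):

```python
def camera_distance(focal_px, real_height_m, pixel_height):
    """Pinhole-camera estimate of distance to an object of known height.

    focal_px: focal length in pixels (found once by calibration),
    real_height_m: the object's physical height,
    pixel_height: its measured height in the image.
    Similar triangles give d = f * H / h.
    """
    return focal_px * real_height_m / pixel_height

# A 0.3 m stop sign appearing 100 px tall with f = 500 px is about 1.5 m away.
d = camera_distance(500, 0.3, 100)
```

This is why precise calibration matters less than consistency: the car only needs a reliable threshold at which to stop.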

 

5. Teleoperation using Leap Motion

                 Our objective is to develop a system that can reproduce the gestures of our hand for teleoperation using the Leap Motion controller.

            Teleoperation means operating a machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academic, and technical environments. It is most commonly associated with robotics and mobile robots, but it can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance. We used the Leap Motion as the interface device. The Leap Motion is a sensor that captures hand and finger motions to a great degree of accuracy and requires no contact with the device. The captured hand movements can be read using common programming languages, and the resulting values are sent to a WebSocket and interfaced with a microcontroller.

      A robotic hand was built using servo motors so as to obtain precise finger movements.

      The Leap Motion data are sent to a WebSocket interfaced with a microcontroller, and the microcontroller commands each servo motor of the robotic hand so as to produce human-hand-like movements.
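On the microcontroller side, each finger's tracked bend angle has to be rescaled to the servo's command range. A linear-mapping sketch (the bend and servo ranges below are assumptions, not values from our firmware):

```python
def bend_to_servo(bend_deg, bend_min=0.0, bend_max=90.0,
                  servo_min=0, servo_max=180):
    """Map a tracked finger-bend angle to a servo command angle.

    Clamps the input to the expected bend range, then rescales it
    linearly onto the servo's range.
    """
    bend_deg = max(bend_min, min(bend_max, bend_deg))  # clamp out-of-range data
    span = (bend_deg - bend_min) / (bend_max - bend_min)
    return int(round(servo_min + span * (servo_max - servo_min)))

# A half-bent finger (45 of 90 degrees) maps to the servo mid-point, 90.
angle = bend_to_servo(45)
```

Clamping matters in practice: tracking glitches can report impossible bend angles, and sending those straight to a servo would strain the linkage.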

6. Vision-based PnP robotic arm

               Our objective is to develop a robotic arm that performs pick-and-place operations using OpenCV.

Most industrial robots need to recognise objects and determine their coordinates for pick-and-place operations. In this project, we are implementing a vision system on the robotic arm that recognises objects and their coordinates using OpenCV. The position of the object is communicated over serial to the servos of the robotic arm, and the pick-and-place operation is performed.
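The coordinate sent to the arm can be derived from the detected object's pixel centroid. A minimal sketch assuming an overhead camera whose image exactly covers a rectangular workspace (the image and workspace dimensions are illustrative, and lens distortion is ignored):

```python
def pixel_to_workspace(px, py, img_w, img_h, work_w_mm, work_h_mm):
    """Map an object's pixel centroid to workspace coordinates in mm.

    Assumes the camera looks straight down on a rectangular workspace
    that exactly fills the image frame.
    """
    x_mm = px / img_w * work_w_mm
    y_mm = py / img_h * work_h_mm
    return x_mm, y_mm

# The centre of a 640x480 image maps to the centre of a 300x200 mm
# workspace: (150.0, 100.0).
target = pixel_to_workspace(320, 240, 640, 480, 300, 200)
```

These millimetre coordinates are what an inverse-kinematics routine would then convert into the individual servo angles of the arm.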
