Adaptive Cruise Control and Autonomous Lane Keeping

The main objective of this project was to maintain a safe distance from obstacles, using ultrasonic sensors mounted on the front and sides of the vehicle. A Kalman filter was implemented to obtain accurate distance measurements from the sensors. Steering control was handled by a PID controller: the difference between the left and right sensor readings was taken as the error input, and the steering angle command for the servo motor was computed from it. To maintain a safe distance from obstacles, the front sensor reading was monitored, and whenever it fell below a set distance the throttle input was set to zero.
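
A minimal sketch of this control logic is shown below, written in Python for illustration (the vehicle itself was driven by an Arduino); the sensor-reading names, gains, and distance threshold are placeholders rather than the values tuned on the car.

    # Minimal sketch of the control loop (illustrative only; the project ran on an Arduino).
    # left_cm / right_cm / front_cm stand in for Kalman-filtered ultrasonic readings,
    # and the gains/threshold below are placeholders, not the tuned values.

    KP, KI, KD = 1.2, 0.01, 0.3      # illustrative PID gains
    SAFE_DISTANCE_CM = 50            # front-sensor cutoff distance

    integral, prev_error = 0.0, 0.0

    def steering_command(left_cm, right_cm, dt):
        """PID on the left/right distance difference -> servo steering angle."""
        global integral, prev_error
        error = left_cm - right_cm           # zero when the car is centered
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        return KP * error + KI * integral + KD * derivative

    def throttle_command(front_cm):
        """Cut the throttle to zero when an obstacle is closer than the safe distance."""
        return 0.0 if front_cm < SAFE_DISTANCE_CM else 1.0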

Tools and Technologies:
Ultrasonic Sensor, Kalman Filter, Arduino, PID Controller, Electronic Speed Controller and Servo Motor

Autonomous Navigation on Road Using F1/10th Vehicle

1. Autonomous Lane Keeping:
Autonomously track the lane clockwise, as fast as possible, using a camera and a laptop.
2. Recognize road signs:
Autonomously recognize a stop sign and a school-zone sign using a second camera.
3. Communications:
Send the road-sign information from the second laptop to the first laptop over Wi-Fi using the UDP protocol (a minimal sketch of this message flow follows the list).
4. Vehicle Controls:
Stop the vehicle at the stop sign for 2 seconds and then resume at high speed; reduce the speed by half at the school-zone sign.
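
The project used MATLAB for the UDP link; the sketch below is a minimal Python illustration of the same message flow, with the IP address, port, and message labels chosen only for illustration.

    import socket

    # Hypothetical address/port for the lane-keeping laptop; the actual project
    # used MATLAB's UDP interface, so names and values here are illustrative only.
    LANE_LAPTOP_IP, PORT = "192.168.1.10", 5005

    def send_sign(label: str):
        """Second laptop: send the detected road-sign label ('stop' or 'school_zone')."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(label.encode("utf-8"), (LANE_LAPTOP_IP, PORT))

    def receiver_loop(handle_sign):
        """First laptop: listen for road-sign messages and hand them to the controller."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.bind(("", PORT))
            while True:
                data, _ = sock.recvfrom(64)
                handle_sign(data.decode("utf-8"))   # e.g. stop for 2 s or halve the speed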

Tools and Technologies:
MATLAB, Convolutional Neural Network, Transfer Learning, HSV Transformation, Camera, Controller, Camera Calibration, UDP Protocol

Autonomous Navigation of Turtlebot3

Autonomous Navigation of Turtlebot3 in Simulated and Real Environments.
Task 1: Wall following/Obstacle avoidance
The Turtlebot successfully follows the wall and avoids obstacles until it reaches the yellow line; a map of the corridor is built along the way using a SLAM package (Gmapping).
Task 2: Line following
The Turtlebot must successfully follow the yellow line.
Task 3: Stop Sign Detection
While following the yellow line, the Turtlebot stops at the stop sign for 3 seconds before continuing; the stop sign is detected with Tiny YOLO (a sketch of the pause logic appears after this list).
Task 4: April Tag tracking
A second TB3, with an AprilTag attached, is spawned in the empty space past the yellow line and teleoperated by the user; the first Turtlebot3 tracks its motion.
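
Below is a minimal rospy-style sketch of the Task 3 stop-sign pause behaviour; the topic names and the string detection message are assumptions for illustration, since the actual detections came from the Tiny YOLO node.

    import rospy
    from std_msgs.msg import String
    from geometry_msgs.msg import Twist

    # Topic names and the String detection format are illustrative assumptions.
    cmd_pub = None
    stopped_once = False

    def on_detection(msg):
        """Pause for 3 seconds the first time a stop sign is reported."""
        global stopped_once
        if msg.data == "stop_sign" and not stopped_once:
            stopped_once = True
            cmd_pub.publish(Twist())   # zero velocities -> stop
            rospy.sleep(3.0)           # hold for 3 seconds, then line following resumes

    if __name__ == "__main__":
        rospy.init_node("stop_sign_behavior")
        cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/detected_objects", String, on_detection)
        rospy.spin()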

Tools and Technologies:
Robot Operating System (ROS), Gazebo, Tiny YOLO for Object Detection, HSV Transformation, Camera, LiDAR, OpenCV, SLAM (Gmapping)

Behavioural Cloning: End to End Learning for Self-Driving Cars

The goal of this project was to train an end-to-end deep learning model that lets a car drive itself around the track in a driving simulator.
Task 1: Data Collection and Balancing
Task 2: Data Augmentation and Pre-processing
Task 3: Model
I started with the model described in the Nvidia paper [End to End Learning for Self-Driving Cars] and kept simplifying and optimising it while making sure it performed well on both tracks (a condensed sketch of the starting architecture appears after this list).
Task 4: Testing the trained model
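
For reference, a condensed Keras sketch of the Nvidia-style starting architecture is shown below; the layer sizes follow the paper, while the simplified model I converged on differed.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_model(input_shape=(66, 200, 3)):
        """Nvidia-style end-to-end model: camera image in, steering angle out.
        Layer sizes follow the original paper; my final simplified model was smaller."""
        return models.Sequential([
            layers.Lambda(lambda x: x / 127.5 - 1.0, input_shape=input_shape),  # normalize pixels
            layers.Conv2D(24, 5, strides=2, activation="elu"),
            layers.Conv2D(36, 5, strides=2, activation="elu"),
            layers.Conv2D(48, 5, strides=2, activation="elu"),
            layers.Conv2D(64, 3, activation="elu"),
            layers.Conv2D(64, 3, activation="elu"),
            layers.Flatten(),
            layers.Dense(100, activation="elu"),
            layers.Dense(50, activation="elu"),
            layers.Dense(10, activation="elu"),
            layers.Dense(1),                      # predicted steering angle
        ])

    model = build_model()
    model.compile(optimizer="adam", loss="mse")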

Tools and Technologies:
Convolutional Neural Network, Udacity Simulator, End to End learning

Anomaly Detection in Manufacturing Data Using RNN

• Built an RNN model to classify text and an LSTM model for anomaly (outlier) detection on temperature sensor data.
In manufacturing, anomaly detection is used to predict abnormal machine behaviour from sensor data. In machine learning and data mining, anomaly detection is the task of identifying rare items, events, or observations that look suspicious and differ from the majority of the data. In this project, possible system failures are predicted from temperature data: a failure is detected by checking whether the readings follow the trend of the majority of the data. The dataset (ambient_temperature_system_failure.csv) is part of the Numenta Anomaly Benchmark (NAB), a benchmark for evaluating machine learning algorithms on anomaly detection.
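
A condensed sketch of the LSTM approach is shown below: the model predicts the next temperature from a sliding window of past readings, and readings with unusually large prediction error are flagged as anomalies. The window size, threshold, and function names are illustrative, not the exact settings used.

    import numpy as np
    from tensorflow.keras import layers, models

    WINDOW = 24  # illustrative window of past readings

    def make_windows(series):
        """Turn a 1-D series into (samples, WINDOW, 1) inputs and next-step targets."""
        X = np.array([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
        return X[..., None], series[WINDOW:]

    def build_lstm():
        model = models.Sequential([
            layers.LSTM(32, input_shape=(WINDOW, 1)),
            layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")
        return model

    def find_anomalies(temps, n_sigma=3.0):
        """temps: scaled temperature readings from the NAB csv; returns anomaly indices."""
        X, y = make_windows(temps)
        model = build_lstm()
        model.fit(X, y, epochs=10, batch_size=64, verbose=0)
        errors = np.abs(model.predict(X, verbose=0).ravel() - y)
        return np.where(errors > errors.mean() + n_sigma * errors.std())[0] + WINDOW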

Tools and Technologies:
Recurrent Neural Network, TensorFlow, Palmetto Cluster (HPC)


Multi-Task Learning of Deep Neural Networks in Vehicle Perception

In this project, we implemented a multi-task learning-based neural network for autonomous vehicle and mobile robot applications. The architecture performs multiple tasks simultaneously and trains on asymmetric datasets with uneven numbers of annotations per modality. We showcase how the model performs two tasks on two datasets at once, producing depth estimation and semantic segmentation from a single network. This sort of network is essential in autonomous vehicles.
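
A bare-bones PyTorch sketch of the shared-encoder, two-head idea is shown below; the 1x1-convolution heads are simplified stand-ins for the Light-Weight RefineNet decoder used in the actual project.

    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision.models import mobilenet_v2

    class MultiTaskNet(nn.Module):
        """Shared MobileNet-v2 encoder with two task heads (segmentation and depth).
        The 1x1-conv heads are simple stand-ins for the Light-Weight RefineNet decoder."""
        def __init__(self, num_classes=40):
            super().__init__()
            self.encoder = mobilenet_v2().features           # shared backbone
            self.seg_head = nn.Conv2d(1280, num_classes, 1)  # per-pixel class logits
            self.depth_head = nn.Conv2d(1280, 1, 1)          # per-pixel depth

        def forward(self, x):
            feats = self.encoder(x)
            seg = F.interpolate(self.seg_head(feats), size=x.shape[2:],
                                mode="bilinear", align_corners=False)
            depth = F.interpolate(self.depth_head(feats), size=x.shape[2:],
                                  mode="bilinear", align_corners=False)
            return seg, depth

    # During training, batches from the two datasets are alternated and only the loss
    # whose annotation is present is applied, which handles the uneven annotations.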

Tools and Technologies:
Encoder-Decoder Network, Light-Weight RefineNet on top of MobileNet-v2, PyTorch, TensorBoard, NYUD and KITTI datasets

Mini Projects

My other mini projects in deep learning and robotics technologies can be seen on my GitHub profile.

Phone

+1 (864) 553-4965

Address

1461, Kerley Dr
San Jose, California - 95112
United States of America