I am a first-year graduate student at New York University, pursuing a Master's in Mechatronics, Robotics and Automation Engineering. I am currently a graduate research assistant at the CILVR Lab, NYU, under Dr. Lerrel Pinto, where I work on self-supervised and reinforcement learning for dexterous manipulation. Previously, I was a researcher in the Movement Generation and Control Group at the Max Planck Institute for Intelligent Systems, Tübingen, Germany, where I worked on learning highly dynamic locomotion policies for a quadrupedal robot (Solo12) using model-free deep reinforcement learning and imitation learning, and on bridging the sim-to-real gap, under the supervision of Dr. Majid Khadiv and Dr. Ludovic Righetti.
My research interests revolve around robot learning, perception, and locomotion. In my previous work, I have explored theoretical aspects of reinforcement learning as well as domain adaptation. More broadly, my research focuses on learning-based approaches for autonomous robot locomotion.
Apart from my research activities, I am a core coordinator of IvLabs, where I mentor students on various research projects. I also conducted several IEEE workshops on robotics in my sophomore year, and served as the secretary of the IEEE Student Branch, VNIT Nagpur (Bombay Section), conducting and volunteering at various workshops in and around my college.
September 2022: Started as a graduate student at New York University.
May 2022: Started a Researcher position at the Max Planck Institute for Intelligent Systems, Tübingen, Germany.
October 2021: My paper "Open-Set Multi-Source Multi-Target Domain Adaptation" was accepted at the NeurIPS 2021 Pre-Registration Workshop.
July 2021: Started a Guest Researcher position at the Movement Generation and Control Group, Max Planck Institute for Intelligent Systems, Tübingen, Germany.
June 2021: My patent "Navigation System for a Vehicle and Method for Navigation" was published.
March 2021: Started a research internship at the Movement Generation and Control Group, Max Planck Institute for Intelligent Systems, Tübingen, Germany.
Research
My research interests are reinforcement learning, deep learning, and robot learning; I work at the intersection of machine learning and robotics. My major works are highlighted below.
Worked on imitation learning of robust policies generated by a long-horizon nonlinear MPC, deployable on a quadruped robot (Solo12) for multiple highly dynamic motions, and devised a robust sim-to-real transfer approach.
Worked on a two-stage online reinforcement learning approach for going from a single demonstration trajectory to a robust goal-conditioned policy, deployable on a quadruped robot (Solo12) for multiple highly dynamic motions, and devised a robust sim-to-real transfer approach.
A novel, generic domain adaptation (DA) setting with a graph-attention-based framework named DEGAA, which can capture information from multiple source and target domains without knowing the exact label set of the target, making it applicable in the real world.
A two-stage mechanism to learn an optimal staircase-alignment policy. Trained a UNet model to obtain segmented images, and devised a custom Gym environment along with a simulation environment in Gazebo.
A device prototype for easy two-wheeler navigation. It provides all Google Maps commands in a single device and requires no custom app, relying only on the free voice-pack feature of Google Maps, which makes it easy to use. It is built on an Arduino Pro Mini microcontroller, and directions are displayed on a MAX7219 LED display.
Trained an agent to solve the OpenAI Gym CartPole-v0 and MountainCar-v0 environments using value-based methods such as DQN and policy-gradient approaches such as REINFORCE and Actor-Critic. Results were compared graphically.
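The core idea of the REINFORCE algorithm mentioned above can be illustrated with a minimal sketch. This toy example uses a two-armed bandit as a stand-in for CartPole (which requires the gym package); the softmax policy, reward values, and learning rate here are illustrative assumptions, not the original project's code.

```python
import numpy as np

# Minimal REINFORCE sketch: a softmax policy over two actions with
# learnable logits theta, trained on a toy two-armed bandit where
# action 1 has a higher expected reward.
rng = np.random.default_rng(0)
theta = np.zeros(2)            # action preferences (logits)
true_rewards = [-0.5, 0.5]     # action 1 is better in expectation

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

alpha = 0.1
for _ in range(2000):
    probs = softmax(theta)
    a = rng.choice(2, p=probs)
    r = rng.normal(true_rewards[a], 0.1)   # stochastic reward
    # REINFORCE update: theta += alpha * r * grad log pi(a)
    grad_log = -probs
    grad_log[a] += 1.0
    theta += alpha * r * grad_log
```

After training, the policy should strongly prefer the better action; the same update rule, with a neural network in place of the logits table, underlies policy-gradient training on CartPole.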
Trained an agent to find the optimal policy in CliffWorld, a custom-made gridworld, using tabular RL algorithms. Implemented SARSA, Q-learning, and Expected SARSA from scratch and compared their results graphically.
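As a rough illustration of the tabular Q-learning used in this project, the following sketch runs Q-learning with an epsilon-greedy policy on a tiny cliff-like gridworld; the grid layout, rewards, and hyperparameters are invented for illustration and do not reproduce the original CliffWorld environment.

```python
import numpy as np

# Tiny 4x3 cliff-like gridworld: stepping onto the cliff (bottom row,
# middle cells) gives -100 and resets to start; reaching the goal ends
# the episode; every other step costs -1.
rng = np.random.default_rng(0)
W, H = 4, 3
start, goal = (0, 2), (3, 2)
cliff = {(1, 2), (2, 2)}
actions = [(0, -1), (0, 1), (-1, 0), (1, 0)]  # up, down, left, right

Q = np.zeros((W, H, 4))
alpha, gamma, eps = 0.5, 0.95, 0.1

def step(s, a):
    dx, dy = actions[a]
    ns = (min(max(s[0] + dx, 0), W - 1), min(max(s[1] + dy, 0), H - 1))
    if ns in cliff:
        return start, -100.0, False
    if ns == goal:
        return ns, 0.0, True
    return ns, -1.0, False

for _ in range(500):
    s = start
    for _ in range(200):  # cap steps per episode
        # epsilon-greedy action selection
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[s[0], s[1]]))
        ns, r, done = step(s, a)
        # Q-learning update: bootstrap from the greedy next-state value
        target = r + (0.0 if done else gamma * Q[ns[0], ns[1]].max())
        Q[s[0], s[1], a] += alpha * (target - Q[s[0], s[1], a])
        s = ns
        if done:
            break
```

SARSA and Expected SARSA differ only in the `target`: SARSA bootstraps from the action actually taken next, and Expected SARSA from the expectation over the epsilon-greedy policy.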