Simon Stepputtis

Assistant Professor | Virginia Tech

  • I will start as Assistant Professor at Virginia Tech in Fall 2025

    In my work, I create intelligent robots and systems that can effectively learn, autonomously adapt to, and operate in unstructured, human-centric environments. Find out more on my Research Page
    • Paper: ShapeGrasp: Zero-Shot Task-Oriented Grasping with Large Language Models through Geometric Decomposition

      A novel method enables robots to intuitively grasp unfamiliar objects by decomposing their shapes and utilizing large language models, achieving high success rates in experimental trials. Paper on arXiv
    • Paper: A Comparison of Imitation Learning Algorithms for Bimanual Manipulation

      Explore how different imitation learning algorithms tackle complex industrial tasks, revealing key strengths and weaknesses in precision, efficiency, and adaptability. Paper on arXiv
    • University of Washington: Invited Talk

      I am excited to give a talk at the University of Washington about Neuro-Symbolic Robot Intelligence!
    • Multiple ICRA Workshop Papers!

      I will be at ICRA 2024 to present some of our most recent work. Check out the Publications!
    • Multiple New Papers (NeurIPS, EMNLP, CoLLAs, AURO, CVPR)

      I updated the website with multiple new papers, including EMNLP 2023, NeurIPS 2023, CVPR 2024, CoLLAs 2024, and the Autonomous Robots journal.
    • Paper: Sigma: Siamese Mamba Network for Multi-Modal Semantic Segmentation

      Sigma is a Siamese Mamba network that improves scene understanding by combining thermal, depth, and RGB data for more accurate semantic segmentation in challenging environments. Paper on arXiv
    • Paper: Sample-Efficient Learning of Novel Visual Concepts

      Sample-efficient extraction of novel objects, affordances, and attributes from images using symbolic domain knowledge; this work will be presented at CoLLAs 2023.
    • Paper: Introspective Action Advising for Interpretable Transfer Learning

      We propose an alternative approach to transfer learning between tasks based on action advising; this work will be presented at CoLLAs 2023!
    • RSS 2023: Articulate Robots Workshop

      I am organizing a workshop at RSS 2023 in Daegu, Republic of Korea, on Articulate Robots: Utilizing Language for Robot Learning.
    • Paper: Explainable Action Advising for Multi-Agent Reinforcement Learning

      Our new paper will be presented at ICRA 2023 in London, England! Paper on arXiv
    • Paper: Modularity through Attention: Efficient Training and Transfer of Language-Conditioned Policies for Robot Manipulation

      Our new paper with our collaborators at Intel will be presented at CoRL 2022 in Auckland, New Zealand! Paper on OpenReview
    • Paper: Concept Learning for Interpretable Multi-Agent Reinforcement Learning

      Our paper on interpretable concept learning for multi-agent robot systems will be presented at CoRL 2022 in Auckland, New Zealand! Paper on OpenReview
    • Paper: A System for Imitation Learning of Contact-Rich Bimanual Manipulation Policies

      Our paper, in collaboration with Intrinsic, was accepted to IROS 2022, and we will be presenting our work in Kyoto, Japan. View full paper
    • IROS 2022: TOM4HAT Workshop

      I organized a workshop at IROS 2022 in Kyoto, Japan, on Theory of Mind.
    • Workshop: RSS Pioneers

      I was accepted to the RSS Pioneers Workshop 2022 with my work on Language-Conditioned Human-Agent Teaming.
    • Postdoctoral Fellow at Carnegie Mellon University

      I started as a postdoctoral fellow at Carnegie Mellon University (CMU) with Prof. Katia Sycara.
    • Graduation: Ph.D. in Computer Science

      I completed my Ph.D. in Computer Science at Arizona State University with Prof. Heni Ben Amor!
    • Imperial College London: Invited Talk

      I will be giving a brief summary and outlook of the work presented in our NeurIPS 2020 paper at Imperial College London!
    • Resident @ X, The Moonshot Factory

      Over the summer, I will be a resident at X, The Moonshot Factory, where I will be working on industrial manipulation tasks for Intrinsic, a robotics software and AI project at X.
    • Video: Language Conditioned Imitation Learning

      We contributed a video to the robot expo at IJCAI 2021 that is a direct extension of our NeurIPS 2020 paper. You can check out the video here!
    • Paper: Language-Conditioned Imitation Learning for Robot Manipulation Tasks

      We published a new paper at NeurIPS 2020! It was accepted as a spotlight presentation (top ~4% of accepted papers). View full paper
    • Intel AI Labs: Invited Talk

      I am excited to give a talk, Language for Robotics, at Intel AI Labs, summarizing our efforts on learning robot policies from natural language instructions.
    • Teaching Introduction to Theoretical Computer Science at ASU

      I will be teaching CSE 355: Introduction to Theoretical Computer Science at Arizona State University as the main instructor during the upcoming Summer 2020 semester!
    • Intel AI: Talk at the Deep Learning Community

      I will be giving a talk at the Deep Learning Community of Practice titled Imitation Learning for Adaptive Robot Control Policies from Language, Vision, and Motion.
    • Workshop Paper: Imitation Learning of Robot Policies by Combining Language, Vision and Demonstration

      We contributed a workshop paper to the Workshop on Robot Learning at NeurIPS 2019!
    • Best Poster Award

      I received the Best Poster Award from Nvidia at the Southwest Robotics Symposium for my work on Neural Policy Translation for Robot Control!
    • Robotics Intern @BOSCH

      I will be joining Bosch in Sunnyvale for an internship to work on semantic data analysis with a focus on time-series segmentation.
    • Paper: Extrinsic Dexterity through Active Slip Control using Deep Predictive Models

      Our paper was accepted to ICRA 2018, and I will be presenting our work in Brisbane, Australia! Paper on IEEE Xplore
    • Best Video Award

      Awarded at the International Conference on Humanoid Robots (Humanoids) 2016 for our work on Learning human-robot interactions from human-human demonstrations. Video on YouTube