Hongyu Li (李鸿宇)

Welcome! I am a first-year Ph.D. student in Computer Science at Brown University. I work with Prof. Srinath Sridhar at the Interactive 3D Vision & Learning Lab.

My research interests lie at the intersection of computer vision, machine learning, and robotics, particularly robot perception (vision and touch) and planning. Perception and planning are crucial across many robotics domains, and I am currently focused on developing deep learning models for environment and object interaction.

Before joining Brown University, I worked with Prof. Huaizu Jiang and Prof. Taskin Padir at Northeastern University. I also did two research internships at Honda Research Institute, focusing on visuotactile perception under the guidance of Dr. Nawid Jamali and Dr. Soshi Iba.

News

Mar 23, 2024 I will begin a research internship at Amazon Robotics in the summer of 2024, advised by Prof. Taskin Padir.
Jan 20, 2024 Our work E(2)-Equivariant Graph Planning for Navigation is accepted to RA-L. See you in Abu Dhabi :camel: (IROS 2024)!
Aug 25, 2023 Our work ViHOPE is accepted to RA-L. See you in Yokohama :jp: (ICRA 2024)!
May 15, 2023 I received a $1,300 RAS travel grant for ICRA 2023.
Apr 17, 2023 I will present our work in progress, StereoNavNet: Learning to Navigate using Stereo Camera with Auxiliary Occupancy Voxels, at CVPR 2023 3D Vision and Robotics in Vancouver :canada:.

Selected Publications

* denotes equal contribution; † denotes equal advising.

2024

  1. ODTFormer: Efficient Obstacle Detection and Tracking with Stereo Cameras Based on Transformer
    Tianye Ding*, Hongyu Li*, and Huaizu Jiang
    Under Review
    We propose ODTFormer, a Transformer-based model that addresses both obstacle detection and tracking.
  2. HyperTaxel: Hyper-Resolution for Taxel-Based Tactile Signal Through Contrastive Learning
    Under Review
    This work was done during my second internship at Honda Research Institute.
  3. StereoNavNet: Learning to Navigate using Stereo Cameras with Auxiliary Occupancy Voxels
    Hongyu Li, Taskin Padir, and Huaizu Jiang
    Under Review
    We propose StereoNavNet, which leverages 3D voxel occupancy grids from stereo images to predict navigation actions.
  4. RA-L
    E(2)-Equivariant Graph Planning for Navigation
    IEEE Robotics and Automation Letters, 2024
    To be presented at IROS 2024 in Abu Dhabi, UAE :camel:.
    We study E(2) Euclidean equivariance for navigation on geometric graphs and develop a message-passing network to achieve it.

2023

  1. RA-L
    ViHOPE: Visuotactile In-Hand Object 6D Pose Estimation with Shape Completion
    Hongyu Li, Snehal Dikhale, Soshi Iba, and Nawid Jamali
    IEEE Robotics and Automation Letters, 2023
    To be presented at ICRA 2024 in Yokohama, Japan :jp:.
    Presented at NeurIPS 2023 Workshop on Touch Processing in New Orleans, LA :us:.
  2. ICRA
    StereoVoxelNet: Real-Time Obstacle Detection Based on Occupancy Voxels from a Stereo Camera Using Deep Neural Networks
    In IEEE International Conference on Robotics and Automation (ICRA), 2023
    Presented at IROS 2022 Agile Robotics Workshop in Kyoto, Japan :jp:.

2022

  1. IROS
    Deep Reinforcement Learning based Robot Navigation in Dynamic Environments using Occupancy Values of Motion Primitives
    Neset Unver Akmandor, Hongyu Li, Gary Lvov, Eric Dusel, and Taskin Padir
    In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022