Ruihai Wu

I am a fourth-year PhD candidate in the Center on Frontiers of Computing Studies (CFCS) at Peking University, advised by Professor Hao Dong. Currently, I am a visiting PhD student at the University of Illinois Urbana-Champaign (UIUC), advised by Professor Yunzhu Li.

My research interests include computer vision and robotics.

Email: wuruihai [at] pku.edu.cn

Email  /  Google Scholar  /  Github

profile photo
Publications      (* denotes equal contribution)
RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation
Hanxiao Jiang, Binghao Huang, Ruihai Wu, Zhuoran Li, Shubham Garg, Hooshang Nayyeri, Shenlong Wang, Yunzhu Li
arXiv 2024
project page / paper / code / video

We formulate interactive exploration as an action-conditioned 3D scene graph (ACSG) construction and traversal problem. Our ACSG is an actionable, spatial-topological representation that models objects and their interactive and spatial relations in a scene, capturing both the high-level graph and corresponding low-level memory.

Learning Dense Visual Correspondence for Category-level Garment Manipulation
Ruihai Wu*, Haoran Lu*, Yiyan Wang, Yubo Wang, Hao Dong
CVPR 2024
project page / paper (coming soon) / code (coming soon) / video (coming soon)

We propose to learn dense visual correspondence for diverse garment manipulation tasks with category-level generalization using only one- or few-shot human demonstrations.

Leveraging SE(3) Equivariance for Learning 3D Geometric Shape Assembly
Ruihai Wu*, Chenrui Tie*, Yushi Du, Yan Shen, Hao Dong
ICCV 2023
project page / paper / code / video

We study geometric shape assembly by leveraging SE(3) equivariance, which disentangles the poses and shapes of fractured parts.

Learning Foresightful Dense Visual Affordance for Deformable Object Manipulation
Ruihai Wu*, Chuanruo Ning*, Hao Dong
ICCV 2023
project page / paper / code / video / video (real world)

We study deformable object manipulation using dense visual affordance with generalization to diverse states, and propose a novel kind of foresightful dense affordance that avoids local optima by estimating states' values for long-term manipulation.

DualAfford: Learning Collaborative Visual Affordance for Dual-gripper Object Manipulation
Yan Zhao*, Ruihai Wu*, Zhehuan Chen, Yourong Zhang, Qingnan Fan, Kaichun Mo, Hao Dong
ICLR 2023
project page / paper / code / video

We study collaborative affordance for dual-gripper manipulation. The core idea is to decompose the quadratic problem of two grippers into two disentangled yet interconnected subtasks.

Learning Part Motion of Articulated Objects Using Spatially Continuous Neural Implicit Representations
Yushi Du*, Ruihai Wu*, Yan Shen, Hao Dong
BMVC 2023
project page / paper / code / video

We introduce a novel framework that explicitly disentangles the part motion of articulated objects, predicting the movements of articulated parts using spatially continuous neural implicit representations.

AdaAfford: Learning to Adapt Manipulation Affordance for 3D Articulated Objects via Few-shot Interactions
Yian Wang*, Ruihai Wu*, Kaichun Mo*, Jiaqi Ke, Qingnan Fan, Leonidas J. Guibas, Hao Dong
ECCV 2022
project page / paper / code / video

We study how to perform very few test-time interactions to quickly adapt the affordance priors to more accurate instance-specific posteriors.

VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects
Ruihai Wu*, Yan Zhao*, Kaichun Mo*, Zizheng Guo, Yian Wang, Tianhao Wu, Qingnan Fan, Xuelin Chen, Leonidas J. Guibas, Hao Dong
ICLR 2022
project page / paper / code / video

We study dense geometry-aware, interaction-aware, and task-aware visual action affordance and trajectory proposals for manipulating articulated objects.

DMotion: Robotic Visuomotor Control with Unsupervised Forward Model Learned from Videos
Haoqi Yuan*, Ruihai Wu*, Andrew Zhao*, Haipeng Zhang, Zihan Ding, Hao Dong
IROS 2021
project page / paper / code

We train a forward model from video data only, disentangling the motion of the controllable agent to model the transition dynamics.

Unpaired Image-to-Image Translation using Adversarial Consistency Loss
Yihao Zhao, Ruihai Wu, Hao Dong
ECCV 2020
project page / paper / code / video

We propose adversarial consistency loss for image-to-image translation that does not require the translated image to be translated back to the source image.

Localize, Assemble, and Predicate: Contextual Object Proposal Embedding for Visual Relation Detection
Ruihai Wu, Kehan Xu, Chenchen Liu, Nan Zhuang, Yadong Mu
AAAI 2020 (Oral presentation)
paper

We propose the Localize-Assemble-Predicate Network (LAP-Net), which decomposes visual relation detection (VRD) into three sub-tasks to tackle the long-tailed data distribution problem.

TDMPNet: Prototype Network with Recurrent Top-Down Modulation for Robust Object Classification under Partial Occlusion
Mingqing Xiao, Adam Kortylewski, Ruihai Wu, Siyuan Qiao, Wei Shen, Alan Yuille
ECCV 2020 Visual Inductive Priors for Data-Efficient Deep Learning Workshop
paper

We introduce prototype learning, partial matching, and convolution layers with top-down modulation into feature extraction to purposefully reduce contamination by occlusion.

Other Projects
TensorLayer

A deep learning and reinforcement learning library designed for researchers and engineers.
ACM MM Best Open Source Software Award, 2017.
I am one of the main contributors to its 2.0 release.

GitHub / star (7000+) / fork (1600+) / contributors

Spreadsheet Intelligence (Microsoft Research Asia)

Umbrella research project behind Ideas in Excel in the Microsoft Office 365 product.
The feature was announced at the Microsoft Ignite Conference and released in March 2019.
Star of Tomorrow Excellent Intern, 2019.

project page

Services
Reviewer: ICCV 2021, CVPR 2023 Workshop on 3D Vision and Robotics (3DVR), ICRA 2023
Volunteer: WINE 2020
Teaching Assistant
Deep Generative Models, 2020, 2022
Invited Talks
Visual Representations for Embodied Agent,     China3DV, 2024
Honors and Awards
Nominee, Apple Scholarship (~9 in China),     Worldwide, 2024
Finalist, ByteDance Scholarship (~3 in robotics track in China),     China, 2023
Jiukun Scholarship (10 in School of CS),     Peking University, 2023
Research Excellence Award,     Peking University, 2022, 2023
Excellent Graduate,     Peking University, 2020
Peking University’s Third-class Scholarship,     Peking University, 2019
Research Excellence Award,     Peking University, 2019
Star of Tomorrow Excellent Intern,     Microsoft Research Asia, 2019
May Fourth Scholarship,     Peking University, 2018
Academic Excellence Award,     Peking University, 2018
Bronze medal in National Olympiad in Informatics (NOI),     China Computer Federation, 2015
First prize in National Olympiad in Informatics in Provinces (NOIP),     China Computer Federation, 2013, 2014, 2015

Website template comes from Jon Barron
Last update: October 2023