Tianhe (Kevin) Yu
tianhe dot yu at berkeley dot edu

I am a fourth-year undergraduate student at UC Berkeley, studying Computer Science, Applied Mathematics, and Statistics. I currently work with Professors Pieter Abbeel, Sergey Levine, and Alexei Efros as an undergraduate researcher in the Berkeley Artificial Intelligence Research (BAIR) Lab.

LinkedIn  /  GitHub

Research

My research interests lie at the intersection of machine learning, perception, and control for robotics, specifically deep reinforcement learning, imitation learning, and meta-learning.


One-Shot Visual Imitation Learning via Meta-Learning
Chelsea Finn*, Tianhe Yu*, Tianhao Zhang, Pieter Abbeel, Sergey Levine
Conference on Robot Learning (CoRL), 2017 (Long Talk)
Oral presentation at the NIPS 2017 Deep Reinforcement Learning Symposium
arXiv / video / talk / code

We present a meta-imitation learning method that enables a robot to learn to acquire new skills from just a single visual demonstration. Our method requires data from significantly fewer prior tasks to learn new skills effectively, and it can also learn from a raw video as the single demonstration, without access to trajectories of robot configurations such as joint angles.
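
As a rough illustration of the meta-learning structure involved, here is a minimal MAML-style inner/outer loop for imitation in PyTorch. The network size, behavioral-cloning loss, learning rates, and the task_batches loader are all illustrative assumptions for this sketch, not the paper's actual architecture or data.

```python
import torch
import torch.nn as nn

# Small stand-in policy: 64-dim observations to 7-dim actions (e.g. joint torques).
policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 7))
meta_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
inner_lr = 0.01  # assumed inner-loop step size

def bc_loss(params, obs, actions):
    # Behavioral-cloning loss, written against an explicit parameter dict
    # so we can differentiate through the inner adaptation step.
    pred = torch.func.functional_call(policy, params, (obs,))
    return ((pred - actions) ** 2).mean()

def task_batches(n_tasks=100):
    # Hypothetical stand-in for a loader of per-task (demo, held-out) splits.
    for _ in range(n_tasks):
        yield (torch.randn(16, 64), torch.randn(16, 7),
               torch.randn(16, 64), torch.randn(16, 7))

for demo_obs, demo_act, test_obs, test_act in task_batches():
    params = dict(policy.named_parameters())
    # Inner step: adapt the policy to a single demonstration of the task.
    grads = torch.autograd.grad(bc_loss(params, demo_obs, demo_act),
                                params.values(), create_graph=True)
    adapted = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    # Outer step: update the initialization so that one-step adaptation
    # performs well on held-out data from the same task.
    meta_opt.zero_grad()
    bc_loss(adapted, test_obs, test_act).backward()
    meta_opt.step()
```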


Real-Time User-Guided Image Colorization with Learned Deep Priors
Richard Zhang*, Jun-Yan Zhu*, Phillip Isola, Xinyang Geng, Angela S. Lin, Tianhe Yu, Alexei A. Efros
ACM Transactions on Graphics (SIGGRAPH), 2017
arXiv / project website / video / slides / talk / code

We propose a deep learning approach for user-guided image colorization. Our system uses a deep convolutional neural network to directly map a grayscale image, along with sparse, local user “hints,” to an output colorization.
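
To make the interface concrete, here is a hedged sketch of a network with that input/output signature: the grayscale (L) channel concatenated with sparse hint-color (ab) channels and a hint mask, mapped to the two ab channels. The layer sizes and channel layout are assumptions for brevity, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HintColorizer(nn.Module):
    # Maps L channel + sparse ab hints + hint mask to predicted ab channels.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),  # ab values in [-1, 1]
        )

    def forward(self, gray, hint_ab, hint_mask):
        # Concatenate grayscale input, hint colors, and mask along channels.
        return self.net(torch.cat([gray, hint_ab, hint_mask], dim=1))

# A 256x256 grayscale image with one user hint placed at pixel (100, 120).
gray = torch.randn(1, 1, 256, 256)
hint_ab = torch.zeros(1, 2, 256, 256)
hint_mask = torch.zeros(1, 1, 256, 256)
hint_ab[0, :, 100, 120] = torch.tensor([0.4, -0.2])
hint_mask[0, 0, 100, 120] = 1.0
ab_pred = HintColorizer()(gray, hint_ab, hint_mask)  # shape (1, 2, 256, 256)
```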


Generalizing Skills with Semi-Supervised Reinforcement Learning
Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine
International Conference on Learning Representations (ICLR), 2017
arXiv / video / code

We formalize the problem of semi-supervised reinforcement learning (SSRL), in which the reward signal is available only in a small set of real-world environments, such as instrumented laboratories, and the robot needs to leverage experience from these environments to continue learning in settings where the reward signal is unavailable. We propose a simple SSRL algorithm based on inverse reinforcement learning and show that it can improve performance in 'unlabeled' environments by using experience from both 'labeled' and 'unlabeled' environments.
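
Schematically, that recipe can be pictured as the two-phase loop below. Everything here (collect_rollouts, rl_update, RewardModel) is a hypothetical stand-in to show the data flow, not the paper's actual algorithm.

```python
import random

class RewardModel:
    def fit(self, positives, negatives):
        pass  # stand-in: fit a reward function via inverse RL

    def predict(self, traj):
        return random.random()  # stand-in reward estimate

def collect_rollouts(env, policy, n=10):
    return [f"traj-from-{env}" for _ in range(n)]  # stand-in rollouts

def rl_update(policy, trajs, reward_fn=None):
    pass  # stand-in policy update (true reward if reward_fn is None)

def semi_supervised_rl(labeled_envs, unlabeled_envs, policy, n_iters=10):
    reward_model = RewardModel()
    labeled_experience = []
    # Phase 1: train in 'labeled' environments, e.g. an instrumented
    # laboratory where the true reward is observed, and keep that experience.
    for env in labeled_envs:
        trajs = collect_rollouts(env, policy)
        labeled_experience += trajs
        rl_update(policy, trajs)
    # Phase 2: in 'unlabeled' environments, infer a reward from the labeled
    # experience via inverse RL and keep improving the policy with it.
    for _ in range(n_iters):
        for env in unlabeled_envs:
            trajs = collect_rollouts(env, policy)
            reward_model.fit(positives=labeled_experience, negatives=trajs)
            rl_update(policy, trajs, reward_fn=reward_model.predict)
    return policy

policy = object()  # stand-in policy
semi_supervised_rl(["lab"], ["home"], policy)
```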

