UROP Openings


Towards Sample-Efficient, Flexible AI: Tasks, Benchmarks, and Models (Remote)


Term:

Fall

Department:

9: Brain and Cognitive Sciences

Faculty Supervisor:

Josh Tenenbaum

Faculty email:

jbt@mit.edu

Apply by:

09/30/2020

Contact:

Nick Watters, nwatters@mit.edu

Project Description

The field of deep reinforcement learning has advanced by leaps and bounds in recent years as RL agents have surpassed human performance on complex games like Go and StarCraft. However, RL algorithms pale in comparison to humans when it comes to sample-efficient learning and transfer to new scenarios, two hallmarks of intelligence. Moreover, the field of deep RL lacks established tasks and benchmarks for transfer.

We have implemented a Python-based game engine and are now developing a cognitively inspired task suite consisting of games with levels, where the levels systematically test a player's ability to transfer and reuse knowledge learned in previous levels in a new context. We plan to write a paper about this task suite (and to open-source the game engine), and are looking for a UROP student to (i) invent cool games with us, and (ii) run deep RL baselines on these games. If you want, you can also be involved in other aspects of the project, such as (i) developing new algorithms that beat the baselines, (ii) collecting human data through MTurk, or (iii) helping with paper writing.

You will work mainly with Nick Watters, who can help with all things technical and will be readily available for meetings on Zoom. You will be advised by Josh Tenenbaum, who will play a high-level strategic role.
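To make the transfer-across-levels idea concrete, here is a minimal hypothetical sketch in Python. This is not the actual game engine or task suite (which are not yet public); the environment, its cue-to-arm rule, and the baseline loop are all illustrative assumptions. The toy game is a contextual bandit whose reward rule is shared across levels, so an agent that learns the rule in one level could, in principle, transfer it to the next.

```python
import random


class ToyTransferEnv:
    """Hypothetical stand-in for one game in a level-based task suite: a
    k-armed bandit whose rewarding arm is determined by a visible cue.
    Each level shifts the cue-to-arm rule by a fixed offset, so knowledge
    of the rule learned in earlier levels is reusable in later ones."""

    def __init__(self, n_arms=4, level=0, seed=0):
        rng = random.Random(seed + level)
        self.n_arms = n_arms
        self.cue = rng.randrange(n_arms)             # observation shown to the agent
        self.best_arm = (self.cue + level) % n_arms  # rule shifts per level

    def reset(self):
        return self.cue

    def step(self, action):
        reward = 1.0 if action == self.best_arm else 0.0
        return self.cue, reward, True, {}            # single-step episodes


def run_baseline(policy, levels=range(3), episodes=100, seed=0):
    """Average per-episode return of a policy on each level."""
    returns = []
    for level in levels:
        total = 0.0
        for ep in range(episodes):
            env = ToyTransferEnv(level=level, seed=seed + ep)
            obs = env.reset()
            _, reward, done, _ = env.step(policy(obs, env.n_arms))
            total += reward
        returns.append(total / episodes)
    return returns


# A random baseline: the kind of reference point one would report
# before running deep RL agents on the suite.
random_policy = lambda obs, n_arms: random.randrange(n_arms)
print(run_baseline(random_policy))  # roughly 0.25 per level with 4 arms
```

A learning agent would replace `random_policy` with something that conditions on the cue; comparing its per-level scores against the random baseline (and against an agent trained from scratch on each level) is one simple way to quantify transfer.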

Prerequisites

Experience with deep reinforcement learning. Proficiency in Python. Experience with TensorFlow or PyTorch.