Project Description

Image-based 3D reconstruction is an active research area in machine learning, with applications ranging from autonomous driving to augmented reality. While large datasets for training these machine learning systems exist, they are designed to be general and are not easily tailored to a specific learning task of interest. In this project, students will use the Unity game engine to design tools for generating and visualizing scene-level training data for image-based 3D reconstruction tasks, with the goal of making the tools generalizable to any existing 3D dataset. Using these tools, students will generate training data from a variety of available 3D datasets and explore training machine learning systems on single-view, multi-view, and video-based reconstruction tasks.

Team Members

  • Irene Cho
  • Bryan Zamora Flores
  • Amey Walimbe
  • Kelly Lin

Professor and Mentors

  • Prof. Tobias Hollerer
  • Alex Rich

Meeting Time

  • Research group meeting
    • Time and Location: Mondays 3:30PM - 4:30PM
  • ERSP meeting with central mentors (Zoom for the first two weeks, in person after that)
    • Chinmay: Saturdays, 12:00-12:30 PM
    • Diba: TBD

Links to Proposals and Presentation

  • Proposal (first draft): link
  • Proposal (after peer review): link
  • Final proposal (after instructor feedback): link
  • Final presentation: link

Individual Logs

Peer Review

Project Documentation and Resources