Project Description

Neural Networks (NNs) have been successful in many areas, including computer vision, speech recognition, and natural language processing. However, due to the increasing adoption of NNs in safety-critical and socially sensitive domains such as self-driving cars, robotics, computer security, criminal justice, and medical diagnosis, there is a pressing need to develop verification techniques that can provide guarantees about the dependability and safety of NN applications.

When analyzing whether an NN is robust to small perturbations or fair with respect to sensitive fields, determining whether a single input value triggers an undesired response may not be sufficient. We may want to know how many input values trigger an undesired response, or the ratio of the input values that trigger an undesired response to the whole input domain. Quantitative analysis techniques enable us to answer such questions. The goal of this project is to apply symbolic quantitative analysis techniques to NNs. In particular, the goal is to develop techniques for probabilistic analysis of NNs based on symbolic execution, volume computation, and model counting.
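To make the "ratio of undesired inputs to the whole input domain" question concrete, here is a minimal sketch in Python. It uses a hypothetical two-input ReLU network with made-up weights (not any model from this project) and estimates the violation ratio by Monte Carlo sampling; the symbolic techniques the project targets would instead compute this quantity exactly via region enumeration, volume computation, or model counting.

```python
import random

def tiny_nn(x1, x2):
    """A hypothetical 2-input, 1-output ReLU network standing in for a
    model under analysis. The weights are illustrative, not from a
    trained model."""
    h1 = max(0.0, 0.8 * x1 - 0.5 * x2 + 0.1)
    h2 = max(0.0, -0.3 * x1 + 0.9 * x2 - 0.2)
    return 1.2 * h1 - 0.7 * h2

def violation_ratio(num_samples=100_000, threshold=1.0, seed=0):
    """Monte Carlo estimate of the fraction of the input domain
    [0,1]^2 on which the output exceeds `threshold` (treated here as
    the 'undesired response'). This approximates the quantity that
    volume-computation and model-counting approaches compute exactly."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(num_samples):
        x1, x2 = rng.random(), rng.random()
        if tiny_nn(x1, x2) > threshold:
            violations += 1
    return violations / num_samples

print(f"estimated violation ratio: {violation_ratio():.4f}")
```

Because a ReLU network is piecewise linear, the set of violating inputs is a union of polytopes, which is what makes exact symbolic volume computation feasible; the sampler above is only a baseline for comparison.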

Team Members

  • Anushka Lodha
  • Brian Ozawa Burns
  • Erin DeLong

Professor and Mentors

  • Prof. Tevfik Bultan
  • Mara Downing

Meeting Time

  • Research group meeting
    • Time and Location: Tuesdays 3PM-5PM on Zoom
  • Meeting with Mara
    • Thursdays 4PM - 4:30PM
  • ERSP team meeting
    • Thursdays 4:30PM - 6:30PM
  • ERSP meeting with central mentors (Zoom for the first two weeks, in person after that)
    • Chinmay: Thursdays 11AM - 11:30AM
    • Diba: TBD

Links to Proposals and Presentation

  • Proposal (first draft): link
  • Proposal (after peer review): link
  • Final proposal (after instructor feedback): link
  • Final presentation: link 

Individual Logs

Peer Review

Project Documentation and Resources