Project Description

The growing popularity of networked devices and applications imposes increasingly stringent security and performance requirements on the underlying communication fabric. Satisfying these ever-increasing demands with limited infrastructure and operational budgets is challenging for network operators. Network researchers have demonstrated that using machine learning (ML) to automate different network operations tasks can help alleviate these problems. Still, network operators remain reluctant to deploy ML-based solutions in production settings. This reluctance stems from a lack of trust: operators do not trust these models to make the right decisions once deployed. Our inability to develop trustworthy ML artifacts threatens to leave future communication networks insecure and unreliable. This proposal explores an answer to the question, _**how can we develop next-generation ML artifacts that network operators can trust?**_

Establishing trust in an ML model requires looking beyond its predictive performance (e.g., *F1-score*). It entails demonstrating that the ML model is (1) _credible_ (i.e., it generalizes as expected to different deployment settings) and (2) _explainable_ (i.e., it is possible to explain the decision-making process of the black-box model). Unfortunately, the current breed of ML-based networking solutions is generally neither credible nor explainable. Specifically, standard ML pipelines are `one-off`; i.e., they typically rely on a single dataset whose quality is either poor or unknown. This one-off nature makes the resulting models prone to `underspecification` (i.e., many distinct models fit the training data equally well yet behave very differently in deployment), and therefore not credible. Moreover, standard ML pipelines neither explain how a selected black-box model makes its decisions nor reveal whether it suffers from underspecification. All these factors impede the development of trustworthy ML artifacts for networking, which in turn prevents their deployment in production settings.
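
To make `underspecification` concrete, below is a minimal sketch of one way to probe for it, assuming a Python/scikit-learn workflow (the synthetic dataset, the simulated distribution shift, and the random-forest learner are illustrative assumptions, not our actual pipeline): train several models that differ only in random seed, confirm they score near-identically in-distribution, then measure how often their predictions diverge on shifted data.

```python
# Sketch: probing for underspecification by training several models that
# differ only in random seed, then checking whether their predictions
# diverge on a "shifted" deployment-like set. Data and shift are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulate a deployment distribution shift by perturbing the test features.
X_shifted = X_test + np.random.default_rng(0).normal(0, 0.5, X_test.shape)

models = [
    RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_train, y_train)
    for seed in range(5)
]

# All seeds score nearly identically in-distribution...
for m in models:
    print(f"in-distribution accuracy: {m.score(X_test, y_test):.3f}")

# ...but may disagree on shifted data, a symptom of underspecification.
preds = np.stack([m.predict(X_shifted) for m in models])
disagreement = (preds != preds[0]).any(axis=0).mean()
print(f"fraction of shifted samples with seed-dependent predictions: {disagreement:.3f}")
```

If the models agree in-distribution but disagree substantially on shifted data, the training data alone did not pin down a single credible model, which is the hallmark of underspecification.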

Team Members

  • Aditi Phatak
  • Abel Atnafu
  • Emily Hu
  • Luis Bravo
  • Michael Yang

Professor and Mentors

  • Prof. Arpit Gupta
  • Grad mentor: Roman Beltiukov

Meeting Time

  • Meeting with the Professor
    • Research group meeting: 3–5 pm
  • Meeting with Grad mentor
    • TBD
  • ERSP meeting with central mentors
    • Chinmay: TBD
    • Diba: TBD

Links to Proposals and Presentation

  • Proposal (first draft): link
  • Proposal (after peer review): link
  • Final Proposal (after instructor's feedback): link
  • Final presentation: link

Individual Logs

Peer Review

Project Documentation and Resources