Reading Note of the Paper "Survey - Leakage and Privacy at Inference Time"

Original paper: Jegorova, M., Kaul, C., Mayor, C., O’Neil, A. Q., Weir, A., Murray-Smith, R., & Tsaftaris, S. A. (2022). Survey: Leakage and privacy at inference time. IEEE Transactions on Pattern Analysis and Machine Intelligence.

Brief Intro

Leakage of data from publicly available ML models is a problem of growing significance, since such models can draw on multiple sources of data, potentially including users’ sensitive data.

Inference-time leakage is the most likely leakage scenario for publicly available models.

Topics:

  • What leakage is in the context of different data, tasks, and model architectures;
  • A taxonomy spanning involuntary leakage (data leakage that occurs naturally in ML models) and malicious leakage (caused by privacy attacks);
  • Current defence mechanisms, assessment metrics, and applications.
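To make the "malicious leakage" category above concrete, the sketch below illustrates one classic inference-time privacy attack: confidence-thresholding membership inference. All function names, thresholds, and numbers are my own illustrative assumptions, not from the paper; the simulated confidences merely stand in for queries to a deployed, overfitted model.

```python
# Hypothetical sketch of a confidence-thresholding membership inference
# attack, one family of inference-time privacy attacks the survey covers.
# Everything here is illustrative: no real model or dataset is involved.
import random

random.seed(0)

def model_confidence(is_member: bool) -> float:
    """Stand-in for querying a deployed model.

    Overfitted models tend to return higher confidence on samples
    that were in their training set (members).
    """
    base = 0.9 if is_member else 0.6  # assumed member/non-member gap
    return min(1.0, max(0.0, random.gauss(base, 0.1)))

def infer_membership(confidence: float, threshold: float = 0.75) -> bool:
    # The attacker guesses "member" whenever the model looks too confident.
    return confidence > threshold

# Evaluate the attack on simulated member and non-member queries.
samples = [(model_confidence(m), m) for m in [True, False] * 500]
correct = sum(infer_membership(c) == m for c, m in samples)
accuracy = correct / len(samples)
print(f"attack accuracy: {accuracy:.2f}")  # well above the 0.5 random-guess baseline
```

If the attack accuracy is meaningfully above 0.5, the model is leaking membership information at inference time; defences such as those surveyed (e.g., regularisation or differential privacy) aim to push this back toward chance.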

Keywords: Data Leakage, Privacy Attacks and Defences, Inference-Time Attacks

Content Access

Please click here to access the full content of this post on my GitBook.

Yanyun Wang
Research Assistant

My research interests include adversarial attacks, robust machine learning, and trustworthy AI.