EITAN BORGNIA


A. Bansal, E. Borgnia, H. Chu, J. Li, H. Kazemi, F. Huang, M. Goldblum, J. Geiping, T. Goldstein
Diffusion models remain effective when the forward process applies deterministic degradations instead of Gaussian noise. Under review at ICLR 2023.


A. Bansal, A. Schwarzschild, E. Borgnia, Z. Emam, F. Huang, M. Goldblum, T. Goldstein
An improved architecture and loss function solve the overthinking problem of DeepThinking networks. Published at NeurIPS 2022.


R. Levin*, M. Shu*, E. Borgnia*, F. Huang, M. Goldblum, T. Goldstein
We identify the filters responsible for a misclassification by examining the parameter saliency profile. Published at NeurIPS 2022.


A. Schwarzschild, E. Borgnia, A. Gupta, F. Huang, U. Vishkin, M. Goldblum, T. Goldstein
Weight sharing in iterated residual blocks of a CNN can solve a benchmark suite of algorithmic problems. Published at NeurIPS 2021.


E. Borgnia, J. Geiping, V. Cherepanova, L. Fowl, A. Gupta, A. Ghiasi, F. Huang, M. Goldblum, T. Goldstein
Rigorous DP guarantees for the combination of mixup and Laplacian noise. Published at the ICLR 2021 Workshop on Security and Safety in Machine Learning Systems.


E. Borgnia, J. Geiping, V. Cherepanova, L. Fowl, A. Gupta, A. Ghiasi, F. Huang, M. Goldblum, T. Goldstein
Data augmentations offer a state-of-the-art defense against data poisoning and backdoor attacks. Published at ICASSP 2021.