Research: I primarily work on self-supervised learning for images. My focus so far has been improving the quality of features learned by self-supervised models, using knowledge distillation to improve smaller models and designing better loss functions for semantic-level grouping. Recently, I have become interested in analyzing the failure modes of these methods, specifically how changing the input dataset affects the representations of the final model. I am also interested in vision-language pre-training.
Previously: I obtained my MS in Computer Science from UMBC in 2020. I interned at Meta AI in the summers of 2021 and 2022, and at Matroid in the summer of 2020. Before that, I was a Software Engineer at Tavisca from 2014 to 2017.
(*) denotes shared first authorship
- [CVPR] Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning. In Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2023.
- [ECCV] Constrained Mean Shift Using Distant Yet Related Neighbors for Representation Learning. In European Conference on Computer Vision (ECCV), Oct 2022.
- [CVPR Oral] Backdoor Attacks on Self-supervised Learning. In Conference on Computer Vision and Pattern Recognition (CVPR), Jun 2022.
- [NeurIPS Workshop] Can we train vision and language zero-shot classification models without syntax? In NeurIPS SSL Theory and Practice Workshop, Dec 2022.
- [BMVC] SimReg: Regression as a Simple Yet Effective Tool for Self-supervised Knowledge Distillation. In British Machine Vision Conference (BMVC), Nov 2021.
- [ICCV Oral] Mean Shift for Self-Supervised Learning. In International Conference on Computer Vision (ICCV), Oct 2021.
- [ICCV] ISD: Self-Supervised Learning by Iterative Similarity Distillation. In International Conference on Computer Vision (ICCV), Oct 2021.
- [NeurIPS] CompRess: Self-Supervised Learning by Compressing Representations. In Advances in Neural Information Processing Systems (NeurIPS), Dec 2020.