I am a Master’s student in Computer Science (2024–2026) at the University of Massachusetts Amherst, where my research spans AI security, robust machine learning, and deep learning, with applications in natural language processing and computer vision. I hold a B.Tech in Computer Science and Engineering from the Indian Institute of Information Technology Guwahati.

My work focuses on improving the robustness, efficiency, and generalization of machine learning systems. I have contributed to research on backdoor defenses, model ensembling, large-scale zero-shot classification, and secure biometric recognition, combining insights from NLP and vision.

I have worked with research and development teams at Thales (Identity & Biometrics R&D), University College London, and MAQ Software, and have collaborated with researchers at Google DeepMind. My research has been published at venues such as ACL, has contributed to multiple patent filings in biometric systems, and has been supported by competitive awards, including the DAAD-WISE Scholarship.

Publications

  • Arora, Ansh, et al. (2024). Here’s a Free Lunch: Sanitizing Backdoored Models with Model Merge. ACL 2024.
    View Paper

🗞 Recent News

  • Aug 2025 — My internship at Thales (Pasadena, CA) was extended through the fall.
    Continued work in the Identity & Biometrics R&D team on large-scale fingerprint recognition, cross-attention–based CNN models, scalable evaluation pipelines, and deployment-ready inference systems, contributing to multiple patent filings.

  • Apr 2025 — Started as a Graduate Researcher at Google DeepMind, working on meta-optimization and adaptive model ensembling for efficient and generalizable large-scale training.

  • Oct 2024 — Published a blog post on Medium explaining our ACL 2024 paper, covering motivation, methodology, and real-world implications.
    ➡️ Here’s a Free Lunch: Sanitizing Backdoored Models with Model Merge

  • Feb 2024 — Our paper “Here’s a Free Lunch: Sanitizing Backdoored Models with Model Merge” was accepted at ACL 2024 (Findings), introducing an inference-time model merging defense that reduces NLP backdoor attack success by over 75% while preserving clean accuracy.