Previously, I was a visiting researcher at ServiceNow Research
in the Multimodal Foundation Models
team, building foundation models for structured document understanding. I was also a Research Intern on the PROSE team at Microsoft, working on
designing algorithms and evaluation setups for email classification in real-world (online) settings. I was fortunate to work on topological data analysis at
Adobe Research, India and on explainability in pre-trained language models at
INK-Lab, University of Southern California
under Prof. Xiang Ren.
Equivariant Adaptation of Large Pretrained Models
Arnab Kumar Mondal*, Siba Smarak Panigrahi*, Sékou-Oumar Kaba, Sai Rajeswar, Siamak Ravanbakhsh
Conference on Neural Information Processing Systems (NeurIPS) 2023
Leveraging Pretrained Language Models for Key Point Matching
Manav Nitin Kapadnis*, Sohan Patnaik*, Siba Smarak Panigrahi*, Varun Madhavan*, Abhilash Nandy
8th Workshop on Argument Mining at Empirical Methods in Natural Language Processing (EMNLP) 2021