I am a third-year CS PhD Candidate at the University of Southern California and an Amazon ML Fellow, advised by Prof. Xiang Ren as part of the NLP Group and INK Research Lab. My research interests lie in LLM explainability and the human-centered operationalization of LLMs: how can humans effectively use LLMs, and in turn, how can LLMs benefit from human intervention? I often use rationales, or model explanations, as the medium of communication between LLMs and humans.
Prior to this, I was an Analyst at Goldman Sachs, where I worked with the Regulatory Engineering team. Before that, I graduated with a Bachelor's in Computer Science from IIIT Delhi in December 2020, where I completed my thesis as a founding member of the Laboratory for Computational Social Systems (LCS2).
Apart from pursuing my academic interests, I am extremely passionate about serving the community. For more than five years, I co-led the Delhi chapter of Women Who Code, an organization that works toward the diversity and inclusion of women in tech. Currently, I am one of the organizers of NLP with Friends, an online NLP seminar for and by students. I am a huge advocate of diversity in CS research and am always looking for opportunities to support it! Head over to the outreach & mentorship tab for more details! I am also a musician by passion and absolutely love singing and composing music. I've tried my hand at 3 instruments (and I plan to master some, explore more).
|Nov 2023||Excited to share our new work led by Sahana, on improving the quality of rationales generated by small LMs!|
|Oct 2023||Passed my PhD Quals! Officially a PhD Candidate now :D|
|Oct 2023||Gave a talk at the USC-Amazon Center on Trusted AI about Human-Centered Operationalization of Model Explanations!|
|Jul 2023||Honored to be awarded the Amazon ML PhD Fellowship for 2023-24! 🥺|
|May 2023||Started as an Applied Scientist Intern at Amazon Alexa 🎉 Excited to explore the greater Boston area and work with the NLG Team!|
|May 2023||XMD got accepted to ACL Demo Track 2023! 🎉|
|May 2023||Our human utility project got accepted to ACL (Main Conference) 2023! See you in Toronto! 🎉🍾|
|Dec 2022||Excited to share a new preprint on measuring human utility of free-text rationales, that I co-led with Ziyi!|
|Dec 2022||New preprint, 🔪 KNIFE, on learning from free-text rationales is out! Happy to have collaborated with Aaron and Zhiyuan who led the work!|
|Oct 2022||ER-Test is now accepted at EMNLP 2022 as a Findings Paper! I will be presenting it at the BlackboxNLP workshop on 12/08!|
|Oct 2022||Our proposal with Aaron and Xiang, on Utilizing Explanations for Model Refinement got the Alexa: Fairness in AI award!|
|Aug 2022||New demo paper XMD out!|
|May 2022||Our workshop Broadening Research Collaborations in ML was accepted at NeurIPS 2022!|
|May 2022||Full version of ER-Test: Evaluating Explanation Regularization Methods for NLP Models is out! Super excited to share my first work as a PhD student! 😩|
|May 2022||ER-Test is accepted to the TrustNLP workshop @ NAACL 2022! Excited to present this in Seattle!|