Prasanna Parthasarathi
Cited by
Extending neural generative conversational model using external knowledge sources
P Parthasarathi, J Pineau
arXiv preprint arXiv:1809.05524, 2018
Unnatural language inference
K Sinha, P Parthasarathi, J Pineau, A Williams
arXiv preprint arXiv:2101.00010, 2020
Learning an unreferenced metric for online dialogue evaluation
K Sinha, P Parthasarathi, J Wang, R Lowe, WL Hamilton, J Pineau
arXiv preprint arXiv:2005.00583, 2020
Attend, adapt and transfer: Attentive deep architecture for adaptive transfer from multiple sources in the same domain
J Rajendran, A Srinivas, MM Khapra, P Prasanna, B Ravindran
arXiv preprint arXiv:1510.02879, 2015
Neural assistant: Joint action prediction, response generation, and latent knowledge reasoning
A Neelakantan, S Yavuz, S Narang, V Prasad, B Goodrich, D Duckworth, ...
arXiv preprint arXiv:1910.14613, 2019
Local structure matters most: Perturbation study in NLU
L Clouatre, P Parthasarathi, A Zouaq, S Chandar
arXiv preprint arXiv:2107.13955, 2021
Sometimes we want ungrammatical translations
P Parthasarathi, K Sinha, J Pineau, A Williams
Findings of the Association for Computational Linguistics: EMNLP 2021, 3205-3227, 2021
Deep learning on a healthy data diet: Finding important examples for fairness
A Zayed, P Parthasarathi, G Mordido, H Palangi, S Shabanian, S Chandar
Proceedings of the AAAI Conference on Artificial Intelligence 37 (12), 14593 …, 2023
Adaapt: A deep architecture for adaptive policy transfer from multiple sources
J Rajendran, P Prasanna, B Ravindran, MM Khapra
arXiv preprint arXiv:1510, 2015
Maca: A modular architecture for conversational agents
HP Truong, P Parthasarathi, J Pineau
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, 93-102, 2017
Demystifying neural language models’ insensitivity to word-order
L Clouatre, P Parthasarathi, A Zouaq, S Chandar
arXiv preprint arXiv:2107.13955, 2021
Do Encoder Representations of Generative Dialogue Models have sufficient summary of the Information about the task?
P Parthasarathi, J Pineau, S Chandar
Proceedings of the 22nd Annual Meeting of the Special Interest Group on …, 2021
Memory augmented optimizers for deep learning
PA McRae*, P Parthasarathi*, M Assran, S Chandar
arXiv preprint arXiv:2106.10708, 2021
Measuring the knowledge acquisition-utilization gap in pretrained language models
A Kazemnejad, M Rezagholizadeh, P Parthasarathi, S Chandar
arXiv preprint arXiv:2305.14775, 2023
Practical Takes on Federated Learning with Pretrained Language Models
A Agarwal, M Rezagholizadeh, P Parthasarathi
Findings of the Association for Computational Linguistics: EACL 2023, 454-471, 2023
On task-level dialogue composition of generative transformer model
P Parthasarathi, A Neelakantan, S Narang
arXiv preprint arXiv:2010.04826, 2020
A brief study on the effects of training generative dialogue models with a semantic loss
P Parthasarathi, M Abdelsalam, J Pineau, S Chandar
arXiv preprint arXiv:2106.10619, 2021
Variational encoder decoder for image generation conditioned on captions
J Romoff, N Angelard-Gontier, P Parthasarathi
Towards Practical Tool Usage for Continually Learning LLMs
J Huang, P Parthasarathi, M Rezagholizadeh, S Chandar
arXiv preprint arXiv:2404.09339, 2024
Language Model-In-The-Loop: Data Optimal Approach to Learn-To-Recommend Actions in Text Games
A Vaithilingam Sudhakar, P Parthasarathi, J Rajendran, S Chandar
arXiv preprint arXiv:2311.07687, 2023