Ramakrishna Vedantam wants to build machine learning systems that mimic human capabilities for reasoning, compositional generalization, and concept learning. Methodologically, he is interested in studying these problems through the lenses of robust representation learning and out-of-distribution generalization. Previously, he obtained his Ph.D. at Georgia Tech under the supervision of Prof. Devi Parikh, studying problems at the intersection of probabilistic modeling, computer vision, and natural language processing. During his Ph.D., he also spent time at Google Research, Microsoft Research, INRIA, and Facebook, working on related problems.
July 17, 2020
We propose a new class of probabilistic neural-symbolic models that have symbolic functional programs as a latent, stochastic variable. Instantiated in the context of…
Ramakrishna Vedantam, Karan Desai, Stefan Lee, Marcus Rohrbach, Dhruv Batra, Devi Parikh
July 17, 2020
We propose a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more…
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra