I am a Machine Learning Researcher on the Foundation Models team at Apple in New York City.
Previously, I was a research engineer at Facebook AI Research (Meta AI/FAIR), working on AI for Creativity, specifically human motion modeling and generative modeling for character animation. Before that, I was on the Language and Translation Technologies team at Meta AI, where I co-developed and deployed neural machine translation models for the Facebook family of apps, supporting over 7 billion translations per day.
Earlier in my career, I worked as a software engineer at LinkedIn and interned at Facebook AI, Microsoft Research, and Amazon.
I obtained a master’s degree from the Language Technologies Institute at Carnegie Mellon University, where I worked on multimodal machine learning with Prof. LP Morency at the MultiComp Lab, and a bachelor’s degree from BITS Pilani, India.
More details about my work experience are in my CV. A full list of my publications can be found here and on Google Scholar.
Our work “CIRCLE: Capture In Rich Contextual Environments” (website) was presented at CVPR 2023!
Our paper “Physics-based Character Controllers using Conditional VAEs” has been accepted at SIGGRAPH 2022!
We open-sourced fairmotion, a library that gives AI researchers tools to load, process, and visualize motion capture data, along with common motion operators (a minimal usage sketch follows these updates). [Meta AI tweet]
Our paper “A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters” has been accepted at SIGGRAPH 2020!
[2018] We used multilingual translation models (a single model trained on multiple languages from the same language family) to expand translation services to Indian and African languages. [Facebook AI blogpost] [Press coverage: VentureBeat]
[2017] Our work on training and deploying neural network-based translation models was featured on the Facebook AI blog. [Press coverage: The Verge, Slator]
[2011] I was part of a small team of undergraduate students at BITS Pilani that built Acyut, India’s first indigenously built autonomous humanoid robot. [Press coverage: The Hindu, The Times of India]
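The minimal fairmotion usage sketch referenced above is shown here. It assumes the package is installed (e.g. via pip) and follows the BVH loader and motion-operator interfaces described in the project README; the file paths are placeholders, and exact signatures may differ across released versions.

```python
# Minimal sketch, assuming fairmotion exposes the BVH loader and motion
# operators described in its README; "input.bvh" and "clip.bvh" are placeholder paths.
import numpy as np

from fairmotion.data import bvh
from fairmotion.ops import motion as motion_ops

# Load a motion sequence from a BVH motion capture file.
motion = bvh.load("input.bvh")

# Apply simple operators: shift the whole sequence, then slice out a window of frames.
translated = motion_ops.translate(motion, np.array([1.0, 0.0, 0.0]))
clip = motion_ops.cut(translated, 10, 50)

# Write the processed clip back out as BVH.
bvh.save(clip, "clip.bvh")
```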
Simulation and Retargeting of Complex Multi-Character Interactions
Y Zhang, D Gopinath, Y Ye, J Hodgins, G Turk, J Won
ACM SIGGRAPH Asia Posters, 2022
[PDF]
CIRCLE: Capture In Rich Contextual Environments
JP Araújo, J Li, K Vetrivel, R Agarwal, J Wu, D Gopinath, AW Clegg, K Liu
CVPR, 2023
[PDF] [Project Page]
Motion In-betweening for Physically Simulated Characters
D Gopinath, H Joo, J Won
ACM SIGGRAPH Asia Posters, 2022
[PDF]
Physics-based Character Controllers using Conditional VAEs
J Won, D Gopinath, and J Hodgins
ACM SIGGRAPH, 2022
[PDF] [Project Page]
Control Strategies for Physically Simulated Characters Performing Two-player Competitive Sports
J Won, D Gopinath, and J Hodgins
ACM SIGGRAPH, 2021
[PDF] [Project Page]
A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters
J Won, D Gopinath, and J Hodgins
ACM SIGGRAPH, 2020
[PDF] [Project Page]
Fairmotion: Tools to Load, Process and Visualize Motion Capture Data
D Gopinath and J Won
GitHub, 2020
[Code]
Harnessing Indirect Training Data for End-to-End Automatic Speech Translation: Tricks of the Trade
J Pino, L Puzon, J Gu, X Ma, AD McCarthy, and D Gopinath
IWSLT, 2019
[PDF]
Deep Multimodal Fusion for Persuasiveness Prediction
B Nojavanasghari, D Gopinath, J Koushik, T Baltrušaitis, and LP Morency
ACM ICMI, 2016
[PDF]
Open Domain Video Description Alignment
J Koushik, D Gopinath, CY Li, and LP Morency
Submitted to CVPR, 2016
[PDF]