I am a Machine Learning Researcher on the Foundation Models team at Apple, the core team building the Large Language Models that power Apple Intelligence and other AI products across the company. I lead several efforts in LLM pretraining and evaluation, focused on model training recipes, data research, and multilinguality.
I have nearly a decade of AI research experience. Previously, I was a research engineer at Facebook AI Research (Meta AI/FAIR), where I worked on AI for Creativity, specifically in multimodal generative AI for character animation. At Meta AI, I also worked in the Language and Translation Technologies team, where I developed and deployed Neural Machine Translation models for the Meta family of apps, including Facebook, Instagram, and WhatsApp. The models I developed were the first neural network-based translation systems at Meta and served tens of billions of translations per day.
In the past, I’ve worked as a software engineer at LinkedIn, and as an intern at Facebook AI, Microsoft Research, and Amazon.
I have a master’s degree from the Language Technologies Institute at Carnegie Mellon University, where I worked on multimodal machine learning with Prof. LP Morency at the MultiComp Lab, and a bachelor’s degree from BITS Pilani, India.
I am best reached by email at dpacgopinath@gmail.com. More details about my work experience are in my CV. A full list of my publications can be found here and on Google Scholar.
[June 2025] Launched new versions of Apple’s On-Device and Server Foundation Language Models supporting several new capabilities and languages [blogpost]
[May 2025] Our paper “Interleaved Reasoning for Large Language Models via Reinforcement Learning” significantly reduces time to first token and improves accuracy on reasoning tasks [paper]
[2024] Read the Apple Foundation Models technical report and accompanying blogpost
[2023] Our paper titled “Simulation and Retargeting of Complex Multi-Character Interactions” has been accepted at SIGGRAPH 2023!
[2023] Our work “CIRCLE: Capture In Rich Contextual Environments” (website) was presented at CVPR 2023!
[2022] Our paper on “Physics-based Character Controllers using Conditional VAEs” has been accepted at SIGGRAPH 2022!
[2020] We open-sourced fairmotion, a library that provides AI researchers with tools, visualizers, and operators for working with motion capture data. [Meta AI tweet]
[2018] We used multilingual translation models (a single model trained on multiple languages from the same language family) to expand translation services to Indian and African languages. [Facebook AI blogpost] [Press coverage: Venture Beat]
[2017] Our work on training and deploying neural network-based translation models was featured in the Facebook AI blog and received press coverage [Press coverage: The Verge, Slator]
[2011] I was part of a small team of undergraduate students that built Acyut, India’s first indigenously built autonomous humanoid robot, at BITS Pilani. [Press coverage: The Hindu, The Times of India]
Interleaved Reasoning for Large Language Models via Reinforcement Learning
R Xie, D Qiu, D Gopinath, D Lin, Y Sun, C Wang, S Potdar, B Dhingra
arXiv preprint, 2025
[PDF]
Apple Intelligence Foundation Language Models
T Gunter, Z Wang, C Wang, R Pang, A Narayanan, A Zhang, B Zhang, C Chen, C Chiu, D Qiu, D Gopinath, …
Technical Report, 2024
[PDF]
Simulation and Retargeting of Complex Multi-Character Interactions
Y Zhang, D Gopinath, Y Ye, J Hodgins, G Turk, J Won
ACM SIGGRAPH, 2023
[PDF]
CIRCLE: Capture In Rich Contextual Environments
JP Araújo, J Li, K Vetrivel, R Agarwal, J Wu, D Gopinath, AW Clegg, K Liu
CVPR, 2023
[PDF] [Project Page]
Motion In-betweening for Physically Simulated Characters
D Gopinath, H Joo, J Won
ACM SIGGRAPH Asia Posters, 2022
[PDF]
Transformer Inertial Poser: Real-time Human Motion Reconstruction from Sparse IMUs with Simultaneous Terrain Generation
Y Jiang, Y Ye, D Gopinath, J Won, AW Winkler, CK Liu
ACM SIGGRAPH Asia, 2022
[PDF]
Leveraging Demonstrations with Latent Space Priors
J Gehring, D Gopinath, J Won, A Krause, G Synnaeve, N Usunier
[PDF]
Physics-based Character Controllers using Conditional VAEs
J Won, D Gopinath, and J Hodgins
ACM SIGGRAPH, 2022
[PDF] [Project Page]
Control Strategies for Physically Simulated Characters Performing Two-player Competitive Sports
J Won, D Gopinath, and J Hodgins
ACM SIGGRAPH, 2021
[PDF] [Project Page]
A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters
J Won, D Gopinath, and J Hodgins
ACM SIGGRAPH, 2020
[PDF] [Project Page]
Fairmotion - Tools to Load, Process and Visualize Motion Capture Data
D Gopinath and J Won
GitHub, 2020
[Code]
Harnessing Indirect Training Data for End-to-End Automatic Speech Translation: Tricks of the Trade
J Pino, L Puzon, J Gu, X Ma, AD McCarthy, and D Gopinath
IWSLT, 2019
[PDF]
Deep Multimodal Fusion for Persuasiveness Prediction
B Nojavanasghari, D Gopinath, J Koushik, T Baltrušaitis, and LP Morency
ACM ICMI, 2016
[PDF]
Open Domain Video Description Alignment
J Koushik, D Gopinath, CY Li, and LP Morency
Submitted to CVPR, 2016
[PDF]