Deepak Gopinath

I am a Machine Learning Researcher on the Foundation Models team at Apple, the core team building the Large Language Models that power Apple Intelligence and other AI products across the company. I lead several efforts in LLM pretraining and evaluation, focused on model training recipes, data research, and multilinguality.

I have nearly a decade of AI research experience. Previously, I was a research engineer at Facebook AI Research (Meta AI/FAIR), where I worked on AI for Creativity, specifically in multimodal generative AI for character animation. At Meta AI, I also worked in the Language and Translation Technologies team, where I developed and deployed Neural Machine Translation models for the Meta family of apps, including Facebook, Instagram, and WhatsApp. The models I developed were the first neural network-based translation systems at Meta and served tens of billions of translations per day.

In the past, I’ve worked as a software engineer at LinkedIn, and as an intern at Facebook AI, Microsoft Research, and Amazon.

I have a master’s degree from the Language Technologies Institute at Carnegie Mellon University, where I worked on Multimodal ML with Prof. LP Morency in the MultiComp Lab, and a bachelor’s degree from BITS Pilani, India.

I am best reached by email at dpacgopinath@gmail.com. More details about my work experience are in my CV. A full list of my publications can be found below and on Google Scholar.


Publications

Interleaved Reasoning for Large Language Models via Reinforcement Learning
R Xie, D Qiu, D Gopinath, D Lin, Y Sun, C Wang, S Potdar, B Dhingra
[PDF]

Apple Intelligence Foundation Language Models
T Gunter, Z Wang, C Wang, R Pang, A Narayanan, A Zhang, B Zhang, C Chen, C Chiu, D Qiu, D Gopinath, …
[PDF]

Simulation and Retargeting of Complex Multi-Character Interactions
Y Zhang, D Gopinath, Y Ye, J Hodgins, G Turk, J Won
ACM SIGGRAPH Asia Posters, 2022
[PDF]

CIRCLE: Capture In Rich Contextual Environments
JP Araújo, J Li, K Vetrivel, R Agarwal, J Wu, D Gopinath, AW Clegg, K Liu
CVPR, 2023
[PDF] [Project Page]

Motion In-betweening for Physically Simulated Characters
D Gopinath, H Joo, J Won
ACM SIGGRAPH Asia Posters, 2022
[PDF]

Transformer Inertial Poser: Real-time Human Motion Reconstruction from Sparse IMUs with Simultaneous Terrain Generation
Y Jiang, Y Ye, D Gopinath, J Won, AW Winkler, CK Liu
ACM SIGGRAPH Asia, 2022
[PDF]

Leveraging Demonstrations with Latent Space Priors
J Gehring, D Gopinath, J Won, A Krause, G Synnaeve, N Usunier
[PDF]

Physics-based Character Controllers using Conditional VAEs
J Won, D Gopinath, and J Hodgins
ACM SIGGRAPH, 2022
[PDF] [Project Page]

Control Strategies for Physically Simulated Characters Performing Two-player Competitive Sports
J Won, D Gopinath, and J Hodgins
ACM SIGGRAPH, 2021
[PDF] [Project Page]

A Scalable Approach to Control Diverse Behaviors for Physically Simulated Characters
J Won, D Gopinath, and J Hodgins
ACM SIGGRAPH, 2020
[PDF] [Project Page]

Fairmotion - Tools to Load, Process and Visualize Motion Capture Data
D Gopinath and J Won
GitHub, 2020
[Code]

Harnessing Indirect Training Data for End-to-End Automatic Speech Translation: Tricks of the Trade
J Pino, L Puzon, J Gu, X Ma, AD McCarthy, and D Gopinath
IWSLT, 2019
[PDF]

Deep Multimodal Fusion for Persuasiveness Prediction
B Nojavanasghari, D Gopinath, J Koushik, T Baltrušaitis, and LP Morency
ACM ICMI, 2016
[PDF]

Open Domain Video Description Alignment
J Koushik, D Gopinath, CY Li, and LP Morency
Submitted to CVPR, 2016
[PDF]