Seungyeon Kim

ML Research Engineer


Seungyeon Kim is a machine learning researcher with a Ph.D. in Computer Science from the Georgia Institute of Technology. He currently works as an L7 Research Engineer at Google DeepMind on Rob Fergus's team, focusing on memorization in large language models (LLMs), adaptive computation, extreme classification, and model distillation. His work aims to advance the capabilities and efficiency of machine learning systems.

Recent Publications



[new] Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA, arXiv 2024


[new] Analysis of Plan-based Retrieval for Grounded Text Generation, EMNLP 2024


Faster Cascades via Speculative Decoding, arXiv preprint 2024


USTAD: Unified Single-model Training Achieving Diverse Scores for Information Retrieval, ICML 2024


Efficient Training of Language Models using Few-Shot Learning, ICML 2023