Doyoung Kim (김도영)
I am an MS student studying AI and a member of the Language & Knowledge Lab at KAIST AI, advised by Minjoon Seo. Before studying AI, I completed my BS in Mathematics & Computer Science (double major) at KAIST.
Despite the massive corpus of data on which modern AI systems are trained, they still struggle with tasks that humans, even young children, can easily perform. I believe that by incorporating key aspects of human cognitive processes, we can create AI systems capable of robust decision-making. My research focuses on narrowing the gap between human and artificial intelligence in complex scenarios. Specifically, I aim to tackle two key challenges:
- Extrapolability: Humans effortlessly generalize knowledge from simple scenarios to navigate complex situations. How can we develop AI agents that, after learning from a few simple demonstrations, can extrapolate to more complex scenarios?
- Semiparametric generation: Unlike purely parametric systems, humans rely on both learned patterns and direct interactions with memory, tools, and the physical world. Can we design AI systems that similarly combine internal models with external information sources in a cohesive semiparametric framework?
Email / Google Scholar / X / Github / LinkedIn / CV
How language models extrapolate outside the training data: A case study in Textualized Gridworld
Doyoung Kim, Jongwon Lee, Jinho Park, Minjoon Seo
NeurIPS 2024 Compositional Learning Workshop
[paper]
[blog]
While humans can learn complex reasoning from a few examples, AI systems struggle to generalize beyond their training data. We enable language models to generate "cognitive maps", tree-structured expansions of future states, before planning. In maze-solving tasks, this cognitive mapping approach proves to be the only effective method for helping language models extrapolate their planning abilities to larger, unseen mazes.
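The core idea can be illustrated with a short, self-contained Python sketch: expand the reachable future states of a small gridworld into a tree (the "cognitive map") before reading off a plan. This only illustrates the general principle, not the paper's prompting format; the maze encoding and function names below are my own.

```python
from collections import deque

def cognitive_map(grid, start):
    """Breadth-first expansion of reachable future states (a "cognitive map")
    before any plan is committed to. grid[r][c] == 1 marks a wall."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}              # tree structure: child -> parent
    frontier = deque([start])
    while frontier:
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = (r, c)
                frontier.append(nxt)
    return parent

def read_off_plan(parent, goal):
    """Extract a plan from the expanded map by walking parent links backwards."""
    if goal not in parent:
        return None                     # goal unreachable
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]

# Tiny 3x3 maze: 0 = free, 1 = wall.
maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
plan = read_off_plan(cognitive_map(maze, (0, 0)), (2, 0))
print(plan)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```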
Self-Explore: Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards
Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, Minjoon Seo
EMNLP 2024 Findings
[paper]
We propose a self-training method that helps LLMs identify their first incorrect reasoning step ("pit") and use it as a reward signal. Through preference optimization, this method enables LLMs to improve their reasoning process, leading to enhanced mathematical performance.
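As a rough illustration of the idea rather than the paper's exact pipeline, the sketch below locates the first step after which no sampled continuation reaches the correct answer and turns it into a preference pair; `sample_continuations` and `is_correct` are hypothetical stand-ins for the model sampler and the answer checker.

```python
def find_first_pit(question, steps, sample_continuations, is_correct, k=4):
    """Return the index of the first reasoning step after which none of k
    sampled continuations reaches the correct answer (the "pit")."""
    for i in range(len(steps)):
        prefix = "\n".join(steps[: i + 1])
        rollouts = sample_continuations(question, prefix, n=k)
        if not any(is_correct(question, prefix + "\n" + r) for r in rollouts):
            return i
    return None  # no pit found: the whole trajectory looks fine

def build_preference_pair(question, steps, good_continuation, pit_index):
    """Turn the located pit into a (chosen, rejected) pair for preference
    optimization (e.g. DPO) on the shared prefix before the pit."""
    prefix = "\n".join(steps[:pit_index])
    return {
        "prompt": question + "\n" + prefix,
        "chosen": good_continuation,               # continuation reaching the answer
        "rejected": "\n".join(steps[pit_index:]),  # trajectory starting at the pit
    }
```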
Semiparametric Token-Sequence Co-Supervision
Hyunji Lee*, Doyoung Kim*, Jihoon Jun, Sejune Joo, Joel Jang, Kyoung-Woon On, Minjoon Seo
ACL 2024
[paper]
We introduce a semiparametric model superposing two embedding spaces: parametric token embeddings and nonparametric sequence embeddings. The model is co-trained using weighted cross-entropy loss for language modeling and InfoNCE loss for sequence retrieval to enable generation with citations.
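A minimal PyTorch-style sketch of what such a joint objective could look like follows; the loss weighting, temperature, and in-batch-negative setup are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def co_supervision_loss(token_logits, token_labels,
                        query_emb, passage_emb, lm_weight=1.0, tau=0.05):
    """Weighted cross-entropy for next-token prediction plus InfoNCE that
    aligns each sequence embedding with its gold passage embedding."""
    # Parametric supervision: standard language-modeling cross-entropy.
    lm_loss = F.cross_entropy(
        token_logits.reshape(-1, token_logits.size(-1)),
        token_labels.reshape(-1),
        ignore_index=-100,
    )

    # Nonparametric supervision: InfoNCE with in-batch negatives over
    # normalized sequence embeddings.
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(passage_emb, dim=-1)
    sim = q @ p.T / tau                                   # (batch, batch)
    targets = torch.arange(sim.size(0), device=sim.device)
    nce_loss = F.cross_entropy(sim, targets)

    return lm_weight * lm_loss + nce_loss
```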
FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets
Seonghyeon Ye*, Doyoung Kim*, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, James Thorne, Juho Kim, Minjoon Seo
ICLR 2024 Spotlight
[paper]
We propose a fine-grained evaluation framework for generative language models based on 12 alignment skill sets; the resulting skill-level scores show a strong correlation between model-based and human-based evaluations.
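The sketch below conveys the flavor of per-skill, rather than single-scalar, evaluation, using an illustrative subset of skills and a hypothetical `judge` callable; see the paper for the actual 12-skill taxonomy and rubrics.

```python
from statistics import mean

# Illustrative subset of fine-grained skills, not the paper's full taxonomy.
SKILLS = ["logical correctness", "factuality", "completeness", "conciseness"]

def skill_level_report(responses, judge):
    """Score each (instruction, response) pair per skill with an external
    judge (a callable returning a 1-5 rating), then aggregate per skill
    instead of collapsing everything into one number."""
    per_skill = {skill: [] for skill in SKILLS}
    for instruction, response in responses:
        for skill in SKILLS:
            per_skill[skill].append(judge(instruction, response, skill))
    return {skill: mean(scores) for skill, scores in per_skill.items()}
```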
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
Seungone Kim, Sejune Joo, Doyoung Kim, Joel Jang, Seonghyeon Ye, Jamin Shin, Minjoon Seo
EMNLP 2023
[paper]
We introduce a new instruction-tuning dataset, the CoT Collection, containing 1.84 million rationales across 1,060 tasks. The rationales were generated from Flan Collection instances using OpenAI Codex with in-context learning (ICL). Fine-tuning Flan-T5 (3B & 11B) on the CoT Collection improves both zero-shot and few-shot performance.
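As a loose sketch of how an (instruction, rationale, answer) triple might be turned into a sequence-to-sequence training pair; the trigger phrase and answer prefix below are illustrative placeholders, not the dataset's released templates.

```python
def to_seq2seq_example(instruction, rationale, answer,
                       cot_trigger="Let's think step by step.",
                       answer_prefix="Therefore, the answer is"):
    """Format one (instruction, rationale, answer) triple as an input/target
    pair for an encoder-decoder model such as Flan-T5."""
    source = f"{instruction}\n{cot_trigger}"
    target = f"{rationale} {answer_prefix} {answer}."
    return {"input": source, "target": target}

example = to_seq2seq_example(
    instruction="Is 17 a prime number? Answer yes or no.",
    rationale="17 has no divisors other than 1 and itself, so it is prime.",
    answer="yes",
)
print(example["target"])
# 17 has no divisors other than 1 and itself, so it is prime. Therefore, the answer is yes.
```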
All Publications
How Well Do Large Language Models Truly Ground?
Hyunji Lee, Sejune Joo, Chaeeun Kim, Doyoung Kim, Kyoung-Woon On, Minjoon Seo
NAACL 2024
[paper]
Exploring the Benefits of Training Expert Language Models over Instruction Tuning
Joel Jang, Seungone Kim, Seonghyeon Ye, Doyoung Kim, Lajanugen Logeswaran, Moontae Lee, Kyungjae Lee, Minjoon Seo
ICML 2023
[paper]
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo
ICLR 2023
[paper]
Retrieval of Soft Prompt Enhances Zero-Shot Task Generalization
Seonghyeon Ye, Joel Jang, Doyoung Kim, Yongrae Jo, Minjoon Seo
EMNLP 2023 Findings
[paper]
* denotes equal contribution.

Projects