I am an AI researcher based in San Francisco.

I currently work at OpenAI. I helped launch o1, a frontier model trained via reinforcement learning to do chain-of-thought reasoning. I gave a video demo of o1 on writing puzzles and video game coding, and you can watch OpenAI researchers talk about o1 here.

From 2020 to 2023, I was a research scientist at Google Brain, where my work popularized chain-of-thought prompting, instruction tuning, and emergent abilities of large language models.

Twitter / CV / Google Scholar / Email

Papers (all)
2024 Oct Measuring short-form factuality in large language models. (blog)
2022 Oct Scaling instruction-finetuned language models. (blog)
2022 Jun Emergent abilities of large language models. (blog)
2022 Jan Chain-of-thought prompting elicits reasoning in language models. (blog)
2021 Sep Finetuned language models are zero-shot learners. (blog)
2019 Jan Easy data augmentation techniques for text classification tasks.

Talks
2024 Nov Talk, UC Berkeley AI Summit.
2024 Oct Keynote, OpenAI DevDay SF.
2024 Sep Keynote, The AI Conference.
2024 Aug Talk, Step SF Conference.
2024 May Stanford NLP Seminar.
2024 May Keynote, LLM day at WebConf.
2024 Apr Guest lecture, Stanford CS25 (video).
2024 Apr UMass Amherst NLP Seminar.
2023 Nov Guest lecture, Stanford CS330.
2023 Nov Guest lecture, Harvard CS249r.
2023 Nov Talk, Samsung AI Forum.
2023 Oct Talk, ML at UC Berkeley.
2023 Aug Keynote, KDD LLM day.
2023 Jun Vanderbilt ML Seminar.
2023 May Guest lecture, Dartmouth QBS 108.
2023 Apr Guest lecture, NYU CSCI-GA.2590.
2023 Mar Guest lecture, MIT MAS.S68.
2023 Jan Guest lecture, Stanford CS25.
2023 Jan USC NLG Seminar.
2022 Dec Berkeley NLP Seminar.
2022 Nov Guest lecture, Stanford CS224v.
2022 Nov Guest lecture, NYU DS-GA 1011.
2022 Nov Guest lecture, JHU CSCI 601.771.
2022 Oct Talk, Amazon AWS AI Research.
2022 Feb Talk, Princeton NLP Group.
2022 Jan Stanford NLP Seminar.