Ekin Akyürek

searching for the true nature of intelligence

Research Scientist at OpenAI

[email protected]


I am a research scientist at OpenAI, where I work on novel reinforcement learning algorithms for training general-purpose agents. I did my PhD in Computer Science at MIT, advised by Jacob Andreas. I studied how neural networks acquire human-like generalization in language, uncovering algorithmic structures underlying in-context learning in language models (PhD Thesis: Inference-Time Learning Algorithms of Language Models).

biography

I was born in the small city of Soma in Manisa, Türkiye, and attended Izmir Science High School. During high school, I was selected to represent Türkiye at the International Physics Olympiad, where I won a bronze medal. I then earned Bachelor's degrees in Electrical & Electronics Engineering and Physics from Koç University, where I actively contributed to the KUIS AI Lab with Prof. Deniz Yuret.

In the final year of my undergraduate studies, I was a visiting student at MIT CSAIL, collaborating with Prof. Alan Edelman on a linear algebraic approach to backpropagation and with John Fisher on compute-efficient algorithms for Bayesian nonparametrics. Subsequently, I began my PhD at MIT with Jacob Andreas. During my PhD, I did two internships at Google: the first at Google Research, collaborating with Kelvin Guu and Keith Hall on fact attribution for large language models using influence functions, and the second with the Google Brain team (now Google DeepMind), working on understanding in-context learning under the mentorship of Denny Zhou, Tengyu Ma, and Dale Schuurmans.

I am married to my lovely wife Afra Feyza Akyürek, and we live in Boston, MA.

I am an outdoor and summer person: I like swimming, biking, sailing, and hiking whenever Boston weather allows. My wife and I enjoy traveling together to discover new places around the world. I play guitar to relax, and I like cooking and learning new recipes (e.g., I can make a Texas-style brisket at home).

selected publications

news

Jul, 2023 Our paper LexSym: Compositionality as Lexical Symmetry won the lexical semantics Area Chair Award at ACL 2023.
Jul, 2023 I gave a talk about our paper What learning algorithm is in-context learning? Investigations with linear models to researchers at the Naval Warfare Center.
Jun, 2023 I gave a talk about our paper What learning algorithm is in-context learning? Investigations with linear models at the MIT Mechanistic Interpretability Conference.
May, 2023 I attended a meeting of the Philosophy of AI Reading Group at Oxford to discuss our paper What learning algorithm is in-context learning? Investigations with linear models, hosted by Raphaël Millière.
May, 2023 At the Google NLP Reading Group, I presented our paper What learning algorithm is in-context learning? Investigations with linear models, hosted by Peter Chen.
Dec, 2022 At the Munich NLP community meeting, I presented our paper What learning algorithm is in-context learning? Investigations with linear models, hosted by Muhtasham Oblokulov.
Nov, 2022 At KUIS AI, I presented our paper What learning algorithm is in-context learning? Investigations with linear models, hosted by Gözde Gül Şahin.
Jul, 2022 I gave a talk about our paper LexSym: Compositionality as Lexical Symmetry at EML Tübingen, hosted by Zeynep Akata.
Oct, 2021 I presented our paper Lexicon Learning for Few-Shot Neural Sequence Modeling at IBM Research, hosted by Yang Zhang.
Sep, 2021 I presented our paper Lexicon Learning for Few-Shot Neural Sequence Modeling at Boston University AIR Seminar.