I am a fifth-year PhD student in Decision Sciences at INSEAD. I am immensely fortunate to be advised by Hamsa Bastani and Spyros Zoumpoulis.

My research examines how we should design and govern algorithms and AI to improve performance while preserving human skills. Methodologically, I study human-AI systems by integrating sequential decision-making models with behavioral experiments and industry collaborations. My goal is to answer fundamental questions about the design and governance of human-AI systems, advancing both theory and managerial practice.

Prior to my PhD, I worked as an entrepreneur and was named to the Forbes 30 Under 30 Greece list. I also contributed to women's economic development through my public policy and education roles at MExoxo.

I hold a B.S. and M.S. in Electrical and Computer Engineering from the National Technical University of Athens.

đź“„ View my CV

Research interests

behavioral operations, human-AI collaboration, AI governance, service operations, human capital development

📢 I will be presenting at the following sessions at the 2025 INFORMS Annual Meeting:
  • Sunday, Oct 26, 1:40–2:05 pm (Building B, Level 3, B310)
    Session SC50: Recent Advances in Data-Driven & AI-Guided Decision-Making
    Self-Regulated AI Use Hinders Long-Term Learning
  • Monday, Oct 27, 4:30–4:50 pm (Building B, Level 3, B306)
    Session ME46: Decision Analysis Society Student Paper Award
    Action vs. Attention Signals for Human-AI Collaboration: Evidence from Chess

Research

Action vs. Attention Signals for Human-AI Collaboration: Evidence from Chess

with Haosen Ge, Hamsa Bastani, and Osbert Bastani

Major Revision, Management Science

  • 1st Place, Decision Analysis Society Student Paper Award, 2025
  • Finalist, TIMES Best Working Paper Award, 2025 (winner TBA)

Working Paper

Algorithmic advice increasingly supports human decision-making in high-stakes domains such as healthcare, law, and finance. While prior work has mostly studied action signals, which recommend specific actions, many practical implementations actually rely on attention signals, which highlight critical decisions without prescribing a course of action—e.g., in hospitals, attention signals may trigger upon encountering high-risk patients, while action signals may additionally suggest specific treatments for those patients. Naïvely, if both kinds of signals are reliable, then action signals may be clearly preferable since they provide significantly more information to the decision-maker. To assess this hypothesis, we study the impact of these signals on human decision-making via an extensive behavioral experiment in the context of chess, a challenging and well-studied decision-making problem where experts frequently rely on algorithmic advice. We find that both signal types can effectively improve decision-making, with attention signals achieving at least 40% of the benefits of action signals. However, we find that action signals only improve decision-making in the specific states where they are provided, and can even guide decision-makers into "uncharted waters" where they are unsure how to make effective decisions, thereby degrading subsequent performance. In contrast, attention signals improve decision-making quality not only in states where they are given, but also in subsequent states. Our findings suggest that action signals act as substitutes for human thinking, whereas attention signals act as complements—thus, attention signals may be preferable to action signals even in settings where both kinds of signals are considered reliable.

Self-Regulated AI Use Hinders Long-Term Learning

with Hamsa Bastani and Osbert Bastani

Working Paper

There has been significant recent interest in leveraging artificial intelligence (AI) tutors to aid student learning. Current systems enable students to control the timing and nature of AI assistance; however, this student-directed access risks short-circuiting the effortful practice essential for lasting expertise. To understand these risks, we conducted a long-term field experiment with over 200 chess club students training on a custom AI-assisted chess platform. Students were randomly assigned to either a system-regulated condition, where the platform automatically provided AI tips at key moments, or a self-regulated condition, where students could additionally request help at any time by clicking a button. After 12 weeks of training, we find that both groups improved their chess skills, but students in the self-regulated condition achieved less than half the performance gains of students in the system-regulated condition (30% vs. 64%). We identify two potential mechanisms for these adverse effects: reduced engagement and diminished productive struggle—students in the self-regulated condition trained less, reported a lower sense of accomplishment, and became increasingly reliant on AI even though they were aware of its harms. We also show that these effects are mitigated among highly motivated students, but not among highly skilled students. Our findings demonstrate that while scaffolded AI assistance can accelerate learning, unrestricted access can undermine it.

Generative AI Workflows at Work

with Hamsa Bastani and Spyros Zoumpoulis

To Alert or to Tell? Optimizing AI Signals for Decision-Making

Fieldwork