Summary
AI-driven systems are rapidly moving from decision support to directing human behavior through rules, recommendations, and compliance requests. This shift expands everyday human–AI interaction and raises the possibility that AI may function as an authority figure. However, the behavioral consequences of AI acting as an authority remain poorly understood. We investigate whether individuals differ in their willingness to comply with arbitrary rules depending on whether those rules are attributed to an AI agent (ChatGPT) or to a fellow human. In a between-subjects design, 977 US-based Prolific participants completed the coins task: they could earn a monetary payoff by stopping the disappearance of coins at any time, but a rule instructed them to wait for a signal before doing so. There are no conventional reasons to follow this rule: complying is costly, and nobody is harmed by non-compliance. Despite this, we find high rates of rule following: 64.3% followed the rule set by ChatGPT and 63.9% complied with the human-set rule. Descriptive and normative beliefs about rule following, as well as compliance conditional on these beliefs, are also largely unaffected by the rule's origin. However, subjective social closeness to the rule setter significantly predicts how participants condition their behavior on social expectations: when participants perceive the rule setter as subjectively closer, conditional compliance is higher and the associated beliefs are stronger, irrespective of whether the rule setter is human or AI.
Download the paper in PDF format
Authors
Dominik Suri, Simon Gächter and Sebastian Kube
Posted on Wednesday 4th February 2026