Measuring the Impact of AI on Information Worker Productivity. SSRN Working Paper 4648686. With Donald Ngwe and Sida Peng.
This paper reports the results of two randomized controlled trials evaluating the performance and user satisfaction of a new AI product on common information worker tasks. We designed workplace scenarios covering three task types: retrieving information from files, emails, and calendars; catching up after a missed online meeting; and drafting prose. We assigned these tasks to 310 subjects, who were asked to find relevant information, answer multiple-choice questions about what they found, and write marketing content. In both studies, users with the AI tool were statistically significantly faster, a difference that holds both on its own and when controlling for accuracy and quality. Furthermore, users who tried the AI tool reported higher willingness to pay than users who merely heard about it but did not get to try it, indicating that the product exceeded expectations.
(Also summarized in What Can Copilot’s Earliest Users Teach Us About Generative AI at Work? under “A day in the life” and “The strain of searching,” and in the AI and Productivity Report under “Copilot Common Tasks Study” and “Copilot Information Retrieval Study.”)