I am a Ph.D. Candidate at Stanford Graduate School of Business. My research is in applied microeconomics. I use tools from economic theory, statistics, and machine learning to analyze interactions between economic agents in marketplaces and other strategic environments. Fields of application of my work include dynamic budget allocation and causal inference problems in advertising, strategic communication and media, and experimentation in complex environments.
During my time at Stanford, I also received an M.S. in Statistics. In my third year as a Ph.D. candidate, I worked on Facebook’s Core Data Science team as a research intern. Prior to joining Stanford, I worked in the finance industry in London. I received an M.Phil. in Economics from the University of Cambridge, and a B.A. in Economics from Bogazici University.
Here is an incomplete list of my research.
- Measuring Investigative Journalism in Local Newspapers, joint work with Anish Saha, Rhett C. Owen, Greg J. Martin, and Shoshana Vasserman
We develop a machine learning algorithm to measure the investigative content of news articles. Our method combines an unsupervised document influence model with supervised classification using text data. We use our method to examine over-time and cross-sectional patterns in news production by local newspapers in the United States between 2010 and 2020. We find surprising stability in the quantity of investigative articles produced over most of the time period examined, but a notable decline in the last two years of the decade, corresponding to a recent wave of newsroom layoffs.
- Electoral Campaigns as Dynamic Contests, joint work with Avidit Acharya, Edoardo Grillo, and Takuo Sugaya
(Under review, last updated April 2021)
We build a game-theoretic model of electoral campaigns as dynamic contests in which two candidates allocate their advertising budgets over time to affect their relative popularity (i.e., odds of winning), which evolves as a mean-reverting stochastic process. We show that time-dependent regulations—for example, those that prohibit spending in the final stages of a campaign—can be welfare-enhancing and outperform static regulations—specifically, aggregate spending caps. Finally, we use the one-to-one relationship between the speed of reversion of the popularity process and the equilibrium spending path to recover estimates of the rate of decay in the effectiveness of advertising in actual elections. We use these estimates to examine the effects of dynamic regulations in races that include incumbents.
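As a purely illustrative sketch (not the paper's actual model or estimator), a mean-reverting popularity process of this kind can be simulated as a discretized Ornstein-Uhlenbeck-style process in which each candidate's spending shifts the drift; all parameter names and values below are hypothetical.

```python
import numpy as np

def simulate_popularity(T=100, kappa=0.1, sigma=0.5, x0=0.0,
                        spend_a=None, spend_b=None, seed=0):
    """Discretized mean-reverting popularity gap (illustrative sketch).

    x[t] is candidate A's popularity lead over candidate B. kappa is the
    speed of mean reversion (the rate at which advertising effects decay),
    sigma is the volatility, and spend_a / spend_b are hypothetical
    per-period spending paths that shift the drift.
    """
    rng = np.random.default_rng(seed)
    spend_a = np.zeros(T) if spend_a is None else np.asarray(spend_a, dtype=float)
    spend_b = np.zeros(T) if spend_b is None else np.asarray(spend_b, dtype=float)
    x = np.empty(T + 1)
    x[0] = x0
    for t in range(T):
        # Mean reversion pulls the gap toward zero; spending shifts it.
        drift = -kappa * x[t] + spend_a[t] - spend_b[t]
        x[t + 1] = x[t] + drift + sigma * rng.standard_normal()
    return x

# A spending blackout in the final 20 periods: once spending stops,
# earlier popularity gains decay toward zero at rate kappa.
path = simulate_popularity(T=100, spend_a=np.r_[np.full(80, 0.05), np.zeros(20)])
```

In this toy version, the faster the mean reversion, the less a late-campaign blackout matters, which is the intuition behind identifying the decay rate from equilibrium spending paths.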
- Persuasion with Coarse Communication, joint work with Yunus Can Aybas
(Under review, last updated July 2020)
Why does Amazon use a five-star rating system, while Netflix prefers a simple thumbs-up or down? Why do financial rating agencies use letter grades to describe the riskiness of assets, instead of a continuous scale? Motivated by these questions, we analyze a game-theoretic model of persuasion in which a sender communicates with a receiver using coarse signals. We provide a tractable and novel way of modeling information design problems with a focus on the complexity of communication between agents. We characterize the sender’s willingness to pay for increasing the precision of communication, and provide upper bounds for this value. We analyze games of advice seeking, in which a receiver asks an expert sender for advice and can choose to ask for ‘simple’ advice consisting of fewer possible action recommendations. We show that it can be optimal to ask for simple advice when the sender’s and receiver’s preferences are misaligned. Our work has implications for rating systems, certification and grading of goods and services, and the design of grading schemes.
Posters and Presentations
- Contamination-Aware Experimentation on Networks
Joint work with Mine Su Erturk
We study a setting in which a decision maker conducts experiments in a network environment. We assume the existence of multiple analysts conducting experiments on the same network, as is the case in many online platforms. An experiment creates negative externalities on other ongoing experiments by contaminating their results. We analyze an experimenter’s decision-making problem in this setting, where the goal is to learn an optimal treatment regime over the network while limiting contamination of other experiments. We provide theoretical regret bounds and study the performance of our suggested policy through simulations.