About Me

I am a Ph.D. candidate at Stanford Graduate School of Business. My research is in applied microeconomics. I use tools from economic theory, statistics, and machine learning to analyze interactions between economic agents in marketplaces and other strategic environments. Applications of my work include dynamic budget allocation and causal inference problems in advertising, strategic communication and media, and experimentation in complex environments.

During my time at Stanford, I also received an M.S. in Statistics. In my third year as a Ph.D. candidate, I worked as a research intern on Facebook’s Core Data Science team. Prior to joining Stanford, I worked in the finance industry in London. I received an M.Phil. in Economics from the University of Cambridge and a B.A. in Economics from Bogazici University.

Here is an incomplete list of my research.


Joint work with Anish Saha, Rhett C. Owen, Greg J. Martin, and Shoshana Vasserman.

(Dataset and Code) (Supplementary Information)

We develop a machine learning algorithm to measure the investigative content of news articles. Our method combines an unsupervised document influence model with supervised classification using text data. We use our method to examine over-time and cross-sectional patterns in news production by local newspapers in the United States between 2010 and 2020. We find surprising stability in the quantity of (predicted) investigative articles produced over most of the time period examined, but a notable decline in the last two years of the decade, corresponding to a recent wave of newsroom layoffs.
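The supervised-classification step can be sketched in miniature. The example below is purely illustrative: the headlines, labels, and bag-of-words logistic regression are hypothetical stand-ins, not the paper’s document influence model, features, or data.

```python
import numpy as np

# Toy labeled headlines; 1 = investigative (hypothetical examples).
docs = [
    "records request reveals city contract irregularities",
    "high school team wins regional championship",
    "probe uncovers misuse of public funds",
    "weekend weather forecast calls for rain",
]
labels = np.array([1, 0, 1, 0])

# Bag-of-words features over the training vocabulary.
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

# Plain gradient descent on the logistic loss.
w, b = np.zeros(len(vocab)), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad = p - labels
    w -= 0.5 * X.T @ grad / len(docs)
    b -= 0.5 * grad.mean()

# Score an unseen headline: predicted probability that it is investigative.
headline = "probe reveals misuse of city funds".split()
x = np.array([headline.count(v) for v in vocab], float)
score = 1 / (1 + np.exp(-(x @ w + b)))
```

Aggregating per-article scores like this one by newspaper and year is what makes the over-time and cross-sectional comparisons in the paragraph above possible.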

Working Papers

(Under review, last updated September 2021)

We study games of Bayesian persuasion where communication is coarse. This model captures interactions between a sender and a receiver in which the sender is unable to fully describe the state or recommend all possible actions. The sender always weakly benefits from more signals, as they increase the sender’s ability to persuade. However, more signals do not always lead to more information being sent, and the receiver might prefer outcomes with coarse communication. As a motivating example, we study an advertising setting in which a larger signal space corresponds to better targeting ability for the advertiser, and show that customers may prefer less targeting. In a class of games where the sender’s utility is independent of the state, we show that an additional signal is more valuable to the sender when the receiver is more difficult to persuade. More generally, we characterize optimal ways to send information using limited signals, show that the sender’s optimization problem can be solved by searching within a finite set, and prove an upper bound on the marginal value of a signal. Finally, we show how our approach can be applied to settings with cheap talk and heterogeneous priors.
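The logic of persuasion with a limited signal space can be illustrated with the canonical two-state example (a prosecutor persuading a judge). This is a standard textbook illustration with made-up numbers, not the model from the paper: here a binary signal is enough to achieve the optimum, found by a grid search over signal structures.

```python
import numpy as np

prior = 0.3  # P(state = guilty); illustrative value

# A binary signal is a pair (p_g, p_i): the probability of sending
# "convict" in the guilty and innocent states. The judge convicts iff
# her posterior belief that the state is guilty is at least 0.5; the
# sender's payoff is the probability of conviction.
best_value, best_signal = 0.0, None
for p_g in np.linspace(0, 1, 101):
    for p_i in np.linspace(0, 1, 101):
        prob_msg = prior * p_g + (1 - prior) * p_i  # P("convict" sent)
        if prob_msg == 0:
            continue
        posterior = prior * p_g / prob_msg
        value = prob_msg if posterior >= 0.5 else 0.0
        if value > best_value:
            best_value, best_signal = value, (p_g, p_i)

# The grid optimum approximates the known solution: always recommend
# conviction when guilty, and with probability 3/7 when innocent,
# for a conviction probability of 0.6 (double the prior).
```

The same brute-force idea generalizes only because, as the abstract notes, the sender’s problem can be reduced to a search over a finite set.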

(Under review, last updated September 2021)

We build a game-theoretic model of electoral campaigns as dynamic contests in which two candidates allocate their advertising budgets over time to affect their relative popularity (i.e., odds of winning), which evolves as a mean-reverting stochastic process. We show that time-dependent regulations—for example, those that prohibit spending in the final stages of a campaign—can be welfare-enhancing and outperform static regulations—specifically, aggregate spending caps. Finally, we use the one-to-one relationship between the speed of reversion of the popularity process and the equilibrium spending path to recover estimates of the rate of decay in the effectiveness of advertising in actual elections. We use these estimates to examine the effects of dynamic regulations in races that include incumbents.
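A mean-reverting popularity process of this kind can be simulated in a few lines. The sketch below uses a discrete-time Ornstein-Uhlenbeck process with made-up parameter values, not the paper’s estimates, and omits the candidates’ spending controls, which would enter as additional drift terms.

```python
import numpy as np

def simulate_popularity(kappa=0.1, sigma=0.5, x0=0.0, T=100, seed=0):
    """Popularity gap X_t between the candidates, reverting to 0 at
    speed kappa with i.i.d. Gaussian shocks of scale sigma."""
    rng = np.random.default_rng(seed)
    x = np.empty(T + 1)
    x[0] = x0
    for t in range(T):
        # Candidates' advertising spending would add an extra drift term here.
        x[t + 1] = x[t] + kappa * (0.0 - x[t]) + sigma * rng.normal()
    return x

# With sigma = 0, a one-time popularity shock of size x0 decays
# geometrically at rate (1 - kappa): early advertising gains are
# eroded by reversion, which is what ties the speed of reversion
# to the equilibrium timing of spending.
shock_path = simulate_popularity(kappa=0.1, sigma=0.0, x0=5.0, T=50)
```

Faster reversion (larger `kappa`) makes early spending decay faster, so equilibrium spending shifts toward the end of the campaign; this is the one-to-one relationship the estimation strategy exploits.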

Posters and Presentations

  • Contamination-Aware Experimentation on Networks

Joint work with Mine Su Erturk

(MIT Conference on Digital Experimentation, November 2020)

We study a setting in which a decision maker conducts experiments in a network environment. We assume the existence of multiple analysts conducting experiments on the same network, as is the case in many online platforms. An experiment creates negative externalities for other ongoing experiments by contaminating their results. We analyze an experimenter’s decision-making problem in this setting, where the goal is to learn an optimal treatment regime over the network while limiting the contamination imposed on other experimenters. We provide theoretical regret bounds and study the performance of our suggested policy through simulations.
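The contamination externality can be made concrete with a toy two-experimenter simulation. This is purely illustrative, with a linear outcome model and independent randomization in place of the paper’s network model and policy: a concurrent experiment leaves the naive estimate unbiased here but inflates its variance, degrading the other experimenter’s results.

```python
import numpy as np

def naive_estimate(contamination, rng, n=2000):
    """Difference-in-means estimate of experimenter A's treatment effect
    while experimenter B runs an independent experiment on the same units."""
    t_a = rng.integers(0, 2, n)   # A's random assignment
    t_b = rng.integers(0, 2, n)   # B's concurrent assignment
    # Outcome: true effect of A's treatment is 1.0; B's experiment
    # shifts outcomes by `contamination` for its treated units.
    y = 1.0 * t_a + contamination * t_b + rng.normal(0, 1, n)
    return y[t_a == 1].mean() - y[t_a == 0].mean()

rng = np.random.default_rng(0)
clean = np.array([naive_estimate(0.0, rng) for _ in range(500)])
noisy = np.array([naive_estimate(2.0, rng) for _ in range(500)])

# Both estimators are centered on the true effect (1.0), but the
# concurrent experiment inflates the estimator's variance -- the
# contamination cost borne by other experimenters.
```

With network interference instead of independent assignment, contamination can also bias the estimate, which is why the experimenter’s policy must trade off its own learning against the externality it creates.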