The Dynamic Discrete Choice Workbook

A Crash Course for Undergraduates on John Rust’s NFXP Algorithm

Author

Colleen O’Briant

Published

November 11, 2022

Artwork by DALL-E / Courtesy of OpenAI

Preface

In this workbook, I build the intuition and tools required to solve John Rust’s dynamic discrete choice bus engine replacement problem. Your challenge, should you choose to accept it, is to complete all of the exercises in the following 5 chapters of this workbook.

What is DDC (dynamic discrete choice)?

Think about your favorite strategic board game. Every time it’s your turn to move, you try to make the best choice you can, balancing two objectives: earn as many points as you can this round, and put yourself in the best possible position to earn more points in later rounds. If you balance those objectives well enough and get lucky enough, you’ll win.

This isn’t just how to win board games; it also sums up how we live our lives and make all our decisions! We always have those two competing objectives to balance: maximize our happiness today while also setting ourselves up to be as happy as possible later on. Some people are more patient than others, but everyone thinks about the future to some extent. DDC lets you take real-world data on an agent acting this way and estimate features of the data generating process, like how much an action benefits or costs the agent relative to the other actions they could have taken.
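To preview the formal object behind this intuition: the later chapters build up to a Bellman equation, which says that the value of being in a state today is the flow payoff of the best action you can take now plus the discounted expected value of wherever that action leads. The notation below is only the standard convention ($u$ for the flow payoff, $\beta$ for the discount factor, $s'$ for next period's state), shown here as a preview rather than something you need to follow yet:

$$
V(s) = \max_{a} \left\{ u(s, a) + \beta \, \mathbb{E}\big[ V(s') \mid s, a \big] \right\}
$$

By the time you reach the chapter on dynamics, you’ll be computing this object yourself via value iteration.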

What You Can Expect

  • The first chapter is on Binary Choice. Here, I introduce the econometrics of modeling people’s choices between two alternatives. There are a few tools you can use: a linear probability model, a probit, or a logit. I discuss the differences between them and explore the logit in some depth.
  • In the second chapter, I explain how to estimate a logit using sample data, which requires a technique called Maximum Likelihood Estimation.
  • In the third chapter, I introduce dynamics with Markov Chains.
  • The fourth chapter is on Dynamics. I guide you through an example of a Markov Decision Process, solving it first by simulation and then by value iteration (see the sketch after this list).
  • Finally, in the fifth chapter, I put everything together for an optimal stopping problem from Rust (1987). By the end of this chapter, you will replicate Rust’s seminal bus engine replacement paper using his Nested Fixed Point (NFXP) algorithm.
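To give you a feel for the kind of algorithm you’ll be implementing, here is a minimal value iteration sketch in R. Everything about the problem itself (two states, two actions, the payoffs, the transition matrices, and the discount factor) is a made-up placeholder, not anything from the workbook or from Rust’s model; only the shape of the loop matters.

```r
# Flow payoffs u[s, a]: 2 states (rows) by 2 actions (columns), made-up numbers
u <- cbind(c(1, 0),   # action 1 pays 1 in state 1, 0 in state 2
           c(0, 2))   # action 2 pays 0 in state 1, 2 in state 2

# Transition probabilities: P[[a]][s, s'] = chance of moving from s to s' under action a
P <- list(
  matrix(c(0.9, 0.1,
           0.5, 0.5), nrow = 2, byrow = TRUE),
  matrix(c(0.2, 0.8,
           0.1, 0.9), nrow = 2, byrow = TRUE)
)

beta <- 0.95   # discount factor: how much the agent cares about the future
V <- c(0, 0)   # initial guess for the value of being in each state

repeat {
  # For each action: flow payoff today + discounted expected value of tomorrow's state
  Q <- sapply(1:2, function(a) u[, a] + beta * P[[a]] %*% V)
  V_new <- apply(Q, 1, max)               # in each state, take the best action
  if (max(abs(V_new - V)) < 1e-10) break  # stop once the guess stops changing
  V <- V_new
}

V  # the value of behaving optimally in each state, from today onward
```

The loop repeatedly updates a guess for the value function until it stops changing, which is the same idea as the inner “fixed point” in the Nested Fixed Point algorithm.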

Each chapter has analytical exercises in teal boxes for you to check your understanding. Also, at the end of chapters 1-3, you’ll find a programming exercise to practice the imperative programming skills you’ll need in order to implement the NFXP algorithm in the final chapter.

I encourage you to practice imperative programming in R throughout this workbook using Project Euler problems. This is a big shift from the previous workbook, where we practiced declarative programming in R using the tidyverse. You’ll want to keep these two programming paradigms as separate as possible, both in your mind and in practice. Declarative programming is well suited to data analytics problems; imperative programming is well suited to implementing algorithms from scratch. Since estimating a dynamic discrete choice model requires implementing an algorithm, you need to learn to program imperatively. But whenever you’re programming, you should know which paradigm you’re using, and you should avoid mixing the two without clear delineation (“this is the data analytics part of the problem, so I’ll use the tidyverse; this is where I need to implement an algorithm, so I’ll switch to imperative programming”). I discuss more about these programming paradigms here.
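To make the distinction concrete, here is a small sketch of the same task written both ways. The tiny data frame and its mpg column are invented for illustration; the point is only the contrast in style.

```r
library(dplyr)

# A toy data frame standing in for whatever data you're analyzing
df <- data.frame(mpg = c(21, 22.8, 18.7, 24.4))

# Declarative (tidyverse): describe *what* you want -- the mean of mpg
df |> summarize(avg_mpg = mean(mpg))

# Imperative (base R): spell out *how* to compute it, one step at a time
total <- 0
for (x in df$mpg) {
  total <- total + x
}
total / nrow(df)  # same number as above
```

In the first version you say what you want and let dplyr handle the how; in the second you manage the state (total) yourself, and that hands-on style is the one implementing the NFXP algorithm will demand.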