The Idea

The need for accurate quantitative predictions of human decision making is becoming increasingly prominent in many real-world applications. For example, to increase revenue and provide value to their customers, online retailers need to predict which products users may be interested in purchasing. Similarly, to increase safety and efficiency, autonomous vehicles need to predict the decisions made by other drivers and by pedestrians.

Behavioral decision research highlights interesting choice anomalies and proposes elegant models that can explain these phenomena. Thus, it seems reasonable that this type of research would provide a solid basis for the development of useful predictive models of choice. In practice, however, it is often easier to predict behavior with data-driven machine learning tools that do not rely on the behavioral decision-making literature.

One reason it is difficult to derive general predictions from these elegant behavioral models is that different models are often proposed to explain different phenomena, so it is unclear which model should be used to address a new task. Indeed, Economics Nobel laureate Alvin E. Roth made this point vividly by suggesting that the authors of different papers accompany each one with a toll-free 1-800 number that interested readers could call to find out the model's predictions for a specific task.

A recent study (Erev, Ert, Plonsky, Cohen, & Cohen, 2017) tried to address this prediction difficulty by making three observations. First, it is possible to define a single space of choice problems and a single paradigm in which 14 of the classical phenomena (including the St. Petersburg paradox (Bernoulli, 1954), the Allais paradox (Allais, 1953), and the Ellsberg paradox (Ellsberg, 1961)) emerge. Second, it is possible to derive a single behavioral model that both captures all 14 classical phenomena and provides useful predictions of choice behavior in other settings within the same space of problems. Specifically, the model, BEAST (an acronym for Best Estimate And Simulation Tools), assumes choice is sensitive to the expected values and to four additional behavioral tendencies: (a) minimization of the probability of immediate regret, (b) a bias toward equal weighting of possible outcomes, (c) sensitivity to the payoff sign, and (d) pessimism. The third observation of Erev et al. (2017) follows from a choice prediction competition (hereinafter CPC15) in which other researchers were challenged to develop and submit models that outperform BEAST in predicting a completely withheld dataset. The results of the competition showed that the best predictive models were variants of BEAST. Specifically, theory-free statistical learning models, as well as more traditional behavioral models such as variants of prospect theory (Kahneman & Tversky, 1979), did not predict well.
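To make these tendencies concrete, the following Python sketch shows one way a BEAST-like valuation could be simulated: a prospect's value averages its expected value (the "best estimate") with mentally simulated draws produced by biased "simulation tools". This is a simplified illustration under our own assumptions; it omits, among other things, the regret-minimization contingencies and the published parameterization, and should not be read as the actual BEAST specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def beast_like_value(outcomes, probs, n_samples=10):
    """Toy BEAST-style valuation: average the expected value (the "best
    estimate") with simulated draws from biased "simulation tools".
    A simplified sketch; the published BEAST model differs in its details
    (e.g., the regret-minimization tendency is omitted here)."""
    outcomes = np.asarray(outcomes, dtype=float)
    probs = np.asarray(probs, dtype=float)
    ev = float(outcomes @ probs)  # the unbiased "best estimate"
    samples = []
    for _ in range(n_samples):
        tool = rng.choice(["uniform", "pessimism", "sign", "unbiased"])
        if tool == "uniform":      # (b) equal weighting of possible outcomes
            samples.append(rng.choice(outcomes))
        elif tool == "pessimism":  # (d) assume the worst outcome
            samples.append(outcomes.min())
        elif tool == "sign":       # (c) payoff sign matters, not magnitude
            samples.append(np.sign(rng.choice(outcomes, p=probs)))
        else:                      # an unbiased draw from the true distribution
            samples.append(rng.choice(outcomes, p=probs))
    return (ev + float(np.mean(samples))) / 2

# Example: a safe prospect (3 for sure) vs. a risky one (4 with p=.8, else 0)
safe, risky = ([3.0], [1.0]), ([4.0, 0.0], [0.8, 0.2])
print("A" if beast_like_value(*safe) >= beast_like_value(*risky) else "B")
```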

However, Plonsky, Erev, Hazan, and Tennenholtz (2017) have since shown that a model integrating insights from behavioral decision research with a machine learning algorithm can provide state-of-the-art predictive performance on the CPC15 data. Specifically, the model, called Psychological Forest, uses the basic theoretical building blocks of BEAST (as well as the predictions of BEAST itself) as features (covariates or predictors) in a supervised learning setting with a random forest algorithm (Breiman, 2001). Note, however, that Psychological Forest, unlike BEAST, was developed after the test data were already available, which makes it possible to fine-tune the model to improve prediction accuracy.
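The following sketch illustrates this general recipe: compute theory-driven features for each choice problem, then fit a random forest on observed choice rates. The feature definitions and training targets here are made up for illustration; the actual Psychological Forest feature set is richer and includes BEAST's own predictions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def behavioral_features(a_out, a_p, b_out, b_p):
    """Features in the spirit of Psychological Forest: quantities suggested
    by BEAST's building blocks. These four are illustrative stand-ins,
    not the published feature set."""
    ev_a, ev_b = np.dot(a_out, a_p), np.dot(b_out, b_p)
    return [
        ev_a - ev_b,                      # expected-value difference
        min(a_out) - min(b_out),          # worst-case gap (pessimism)
        np.mean(a_out) - np.mean(b_out),  # equal weighting of outcomes
        np.sign(ev_a) - np.sign(ev_b),    # payoff-sign sensitivity
    ]

# Made-up training problems and choice rates, purely for illustration.
X = np.array([behavioral_features([3], [1.0], [4, 0], [0.8, 0.2]),
              behavioral_features([0], [1.0], [10, -5], [0.5, 0.5])])
y = np.array([0.45, 0.60])  # observed rates of choosing option B

forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
print(forest.predict([behavioral_features([2], [1.0], [5, 0], [0.5, 0.5])]))
```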

In addition to presenting Psychological Forest, Plonsky et al. (2017) tested a variety of machine learning algorithms on the CPC15 data, and their results affirm Erev et al.'s (2017) third observation above: these algorithms, without incorporating knowledge from behavioral research, cannot outperform BEAST. Yet Plonsky et al. tested mainly off-the-shelf machine learning tools. It is possible that tailoring an algorithm to the data, or using more sophisticated techniques, could improve performance. Indeed, and in contrast to the aforementioned results, Hartford, Wright, and Leyton-Brown (2016) showed that for a similar task, predicting human behavior in strategic settings, a deep neural net approach that does not use expert behavioral research knowledge outperforms the best available models that do make use of such expert knowledge. Note again, however, that Hartford et al. developed their model when the test set was already available. Moreover, to the best of our knowledge, they did not compare their deep learning algorithm to a model that integrates behavioral research insights with machine learning tools.
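For contrast with the feature-engineered approach above, a theory-free model of the kind discussed here feeds raw problem parameters directly into a learner with no behavioral quantities computed. The toy sketch below, using a small multilayer perceptron on a hypothetical flattened encoding and made-up targets, conveys only the flavor of this approach; it is not Hartford et al.'s architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# A theory-free encoding: each row flattens the raw problem parameters
# (hypothetically: safe payoff, risky payoff 1, its probability,
# risky payoff 2). No behavioral features are derived.
X = np.array([[3.0, 4.0, 0.8, 0.0],
              [0.0, 10.0, 0.5, -5.0],
              [2.0, 5.0, 0.5, 0.0]])
y = np.array([0.45, 0.60, 0.52])  # made-up choice rates, for illustration only

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(X, y)
print(net.predict(X[:1]))
```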

In sum, at least three quite different approaches to the development of predictive models of human choice have proven successful in recent years: pure behavioral models like BEAST, pure machine learning tools like Hartford et al.'s deep learning architecture, and mixtures of behavioral and data-driven approaches like Psychological Forest. The main goal of this study is to test which approach is likely to provide the best performance. To provide fair grounds for this comparison, we plan to hold a new choice prediction competition (with two parallel tracks, see below). As different researchers favor different modeling approaches, we hope to receive many candidate models from each approach. The relative success of each approach would then approximate its usefulness for the development of predictive models of human choice.


A fuller description of CPC18 can be found here: Plonsky et al., CPC18_March2018

A PPT summarizing the main ideas can be found here: Erev_Comp_2_2018