The Pinch-hitter Problem

Using Markov Chains to Analyze Outcomes in Pitcher-Batter Matchups

Photo by Adam Klepsteen


Abstract

In baseball, consistently winning the matchup between pitcher and batter is integral to winning games. This intuitive conclusion drives most strategy and decision making in the game, as teams are constantly looking to gain an advantage in these matchups using pinch hitters, relief pitchers, and lineup changes. If managers and teams are able to better understand the probabilities of achieving a certain outcome from a certain pitcher-batter matchup, it will greatly help them in making these important decisions. In this analysis, a new model of predicting the outcomes of an at-bat is developed using a Markov chain, where each state in the chain represents a certain count or a certain outcome. Statcast pitch-by-pitch data scraped from MLB’s online database is used to plug real-world data for pitchers and batters into the model and simulate how their expected performance changes along with the count. By combining these transition matrices for a given hitter and pitcher, the model gives the expected probabilities of outcomes in a hypothetical at-bat between the two. This model is useful in multiple ways, namely in that it allows teams and researchers to determine which counts are particularly strong for a certain player, take a closer look at performance splits (e.g. home/away, vs. RHP/LHP), and calculate expected statistics for a player (e.g. wOBA, OPS) by count or by matchup.

1. Introduction

“Baseball is a game of failure.” This is perhaps the biggest cliché in a sport that is notorious for them. It is something that is said to every Little Leaguer growing up, and something that you consistently hear on any given baseball broadcast. But what often gets overlooked is that baseball is also a game of uncertainty. Every hitter stepping up to the plate, every pitcher toeing the rubber, and every manager making an adjustment does so knowing that they can’t predict the outcome of their actions. On one hand, they could get the clutch hit, shut down the opposing offense, win the game, and be a hero. On the other, they could fail, and the rest of the people watching will wonder what they were thinking and where they went wrong. And because baseball is a game of failure, it is much more likely that the latter will happen than the former.

However, even though the odds might always be against them, it doesn’t mean that they can’t find a way to make those odds friendlier. Particularly in the case of the manager, it is very important for them to use whatever they have at their disposal to get as close to an accurate prediction as possible. To this end, managers have a good amount of options at their disposal, including large databases of analytics, a deep understanding of the players in the dugout, and often decades of baseball experience to draw from. Here, this paper looks to add a new tool to the manager’s utility belt: a model that uses Markov chains to give the expected outcomes of an at-bat between a specific pitcher and hitter. 

Over the course of this paper, this model will be developed by reviewing previous literature on the subjects in question, explaining the methodology and data used to build it, showing various ways of interpreting its results, going through an example at-bat, finding areas in which it may be useful, discussing its limitations, and finding the next steps to be taken.

2. Background

Trying to predict the outcomes of plate appearances has always been at the heart of baseball strategy, and almost every decision to change a lineup, send in a pinch hitter, or call for a relief pitcher depends on an accurate prediction of the success rate for that move. Because of this, sabermetricians have put a lot of time and effort into creating models that can make those accurate predictions. As with many topics in sabermetrics, the application of statistics to this specific context started with Bill James, whose log5 model (James, 1984) became perhaps the most widely used format for predicting at-bat outcomes. Healey, for example, used this model to test which factors in an at-bat had the greatest effects on strikeout rates (Healey, 2015). Similarly, Doo and Kim used a Bayesian hierarchical version of James’ model and applied it to outcomes in the Korean Baseball Organization (Doo and Kim, 2018). But of course, James’ model has not been the only attempt to quantify outcome probabilities. For example, Pemstein’s Outcome Machine uses a straightforward linear regression model that takes into account strikeout, hit, and walk rates as well as BABIP (Batting Average on Balls In Play) for a specific hitter and pitcher to predict the probabilities of these outcomes occurring (Pemstein, 2014), and Powers uses a nuclear-penalized multinomial regression model to calculate the probability of these various at-bat outcomes simultaneously (Powers, 2018). However, even though Markov chains have received wide use across the world of sports, and baseball in particular, no study to date has attempted to use them to simulate or predict individual at-bat outcomes.

Markov chain models have been used in sports analysis for years, with applications being implemented for predicting the NCAA Men’s Basketball Tournament (Kvam and Sokol, 2006), making the decision of when to pull a goalie in a hockey game (Zaman, 2001), and ranking college football teams (Kolbush and Sokol, 2017), among many other examples. These types of models tend to be well-suited to the context of sports, as the outcome of most major sporting events depends on the probability of a team or player going from one state to another – think of the probability of entering your opponent’s territory in a football game, converting a match point in tennis, or creating a turnover in basketball. This is especially the case in baseball – perhaps more than any other sport – as the game is constantly fluctuating from state to state in terms of count, outs, and runners on base.

As such, Markov chain analysis has consistently played a role in sabermetric literature, but not in the way that it is used here. Instead, they have mainly been used to determine expected team run production. The use of Markov chains to study baseball in this way began with Howard in 1960, who used a simulated inning as an illustrative example in his book Dynamic Programming and Markov Processes (Howard, 1960). At first, these models would prove difficult to build and to use, largely due to a lack of reliable and abundant statistics as well as computing power. After all, in order to create a Markov chain that could take every variable into account, it would need to cover 2592 (9 lineup positions x 8 baserunner positions x 3 outs x 4 balls x 3 strikes) different states that the batter could find themselves in (Bellman, 1976). However, starting in the 1970s, simplified Markov models of baseball began taking a standardized shape and statistics started becoming more and more available, making way for a growing body of literature.

This standard model is achieved through the construction of a 25x25 transition matrix, which uses a combination of the number of runners on base (2^3 = 8 different situations) and the number of outs (0, 1, or 2) as the 24 transient states of the Markov chain and a 25th column representing the end of an inning. By simulating multiple innings through this model, sabermetricians can predict a team’s expected run outputs for a given inning, a game, or even an entire season (Sokol, 2004 in particular does a great job of explaining how this standard model is used to answer both informational and prescriptive questions in baseball). This ability has led to two main areas of baseball research where Markov chains have typically been used. The first is optimization, particularly when it comes to the structure of a lineup. In 1974, Freeze wrote the seminal paper on this subject, in which he showed that a traditional baseball lineup (where the best hitters are clustered near the third, fourth, and fifth spots) creates more runs than a lineup sorted in descending order of ability (Freeze, 1974). Other research has since built upon this work, taking into account factors like uncertainty (Sokol, 2003) and models for runner advancement (Bukiet et al, 1997). The second area of research is simulation, typically for evaluating expected team performance. By taking into account transition matrices for both run production and run allowance against certain opponents, one can effectively simulate through entire seasons for each team and player in MLB and use these simulations to predict their performance at the end of the year (see Barry and Hartigan, 1993; Ursin, 2014).

In this paper, a third application of Markov chain models will be discussed: the prediction of at-bat outcomes. This research differs from other applications of these models in two main ways. First, it focuses on individual outcomes, rather than those of the team as a whole. Second, rather than using the standard transition matrix that uses outs and runners on base as the states and each at-bat as a transition, my matrix will be focused on outcomes within each at-bat—using counts and potential outcomes as states in the chain and each pitch as a transition. By using this approach, this paper seeks to achieve a simulated at-bat that is more true-to-life than previous predictive models by taking into account how changes in the count affect the probabilities of certain outcomes occurring.

3. Methodology and Data

Markov Chains

The term Markov chain refers to a process with multiple states s_i, where one can transition from one state to another with a certain probability p. An example of a simple Markov chain is shown in Figure 3.1. The various states of the chain and their transition probabilities are formalized using a transition matrix T, where each row and column of the matrix represents a specific state in the chain and each entry represents the probability of going from one state (represented by a specific row) to another (represented by a specific column). In other words, the probability of going from s_i (the state represented in row i) to s_j (the state represented in column j) is given by p_ij. In each iteration of the chain, one must either transition from one state to another or return to the current state. Because of this property, all transition matrices are stochastic, meaning that the p values in each row add up to one. An example transition matrix for the sample Markov chain from Figure 3.1 is shown in Table 3.1 below.

Figure 3.1: An Example Markov Chain


Table 3.1: An Example Transition Matrix

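These mechanics can be sketched in a few lines of code. The three-state chain and its probabilities below are hypothetical, chosen only to illustrate the stochastic-row property and how iterating the chain works:

```python
import numpy as np

# A small illustrative transition matrix (hypothetical probabilities)
# for a three-state chain with states A, B, and C.
T = np.array([
    [0.5, 0.3, 0.2],   # transitions out of state A
    [0.1, 0.6, 0.3],   # transitions out of state B
    [0.4, 0.4, 0.2],   # transitions out of state C
])

# Every transition matrix is stochastic: each row sums to one.
assert np.allclose(T.sum(axis=1), 1.0)

# Starting in state A, the distribution after one iteration is the first
# row of T; after two iterations it is the first row of T squared.
start = np.array([1.0, 0.0, 0.0])
after_two = start @ np.linalg.matrix_power(T, 2)
print(after_two.sum())  # still a probability distribution: 1.0
```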

For this application, the model will be using an absorbing Markov chain. In this type of chain, there are two kinds of states: transient states, from which one can move to another state with probability p_ij, and absorbing (or recurrent) states, where once the chain reaches that state, it remains there in perpetuity. To put this in the context of other sports, think of a Markov chain in hockey where the transient states are which area of the rink the puck is currently in and scoring a goal is the absorbing state of the chain. Transition matrices for this type of Markov chain take the form shown in Figure 3.2 below. This is an (n + a) x (n + a) matrix, where n represents the number of transient states and a represents the number of absorbing states. It is made up of four submatrices:

Figure 3.2: A Generic Transition Matrix for an Absorbing Markov Chain


R : an n x n matrix giving the transition probabilities between each transient state

A : an n x a matrix giving the transition probabilities from each transient state to an absorbing state

N : an a x n zero matrix

I : an a x a identity matrix

If k represents the number of iterations of the chain, the matrix of transition probabilities p_ij after k iterations takes the form T^k. For example, the transition matrix of a chain after two iterations is given by T x T = T^2. After multiple iterations of the chain, the transition matrix eventually converges on a steady state of transition probabilities, where for a certain value of k:

T^(k+1) = T^k

In a normal Markov chain with no absorbing states, this steady-state distribution takes the form of equal transition probabilities to a certain state, regardless of the starting state. In other words, the row vectors of T^k are all equal to one another. But with an absorbing Markov chain, things are a bit different. First, the steady-state matrix of an absorbing chain is collapsible, meaning that as k increases, all p_ij in submatrix R converge toward zero, leaving values only in submatrix A in the steady state. Second, instead of measuring the steady transition probabilities from one state to another, the steady-state matrix for an absorbing chain measures the probabilities of landing in a certain absorbing state given the transient state you started in. Because of this, each row has different values based on the characteristics and transition probabilities associated with each starting state.
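This collapse toward the steady state can be checked numerically. Below is a minimal sketch using a hypothetical chain with two transient states and one absorbing state, labeled with the submatrix names used above; the closed-form limit via the fundamental matrix (I - R)^-1 is a standard result, included here as a cross-check:

```python
import numpy as np

# Hypothetical absorbing chain, using the notation above: R holds the
# transient-to-transient probabilities, A the transient-to-absorbing ones.
R = np.array([[0.5, 0.3],
              [0.2, 0.4]])
A = np.array([[0.2],
              [0.4]])
n, a = R.shape[0], A.shape[1]

# Assemble the full transition matrix T = [[R, A], [N, I]].
T = np.block([[R, A],
              [np.zeros((a, n)), np.eye(a)]])

# Raise T to a large power: the R block collapses toward zero and the
# A block converges to the absorption probabilities from each start state.
Tk = np.linalg.matrix_power(T, 200)

# The same limit follows in closed form from the fundamental matrix.
B = np.linalg.inv(np.eye(n) - R) @ A
print(np.allclose(Tk[:n, n:], B))  # True
```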

The At-Bat Model

Applying the characteristics listed in the previous section, plate appearances can be modeled in an absorbing Markov chain where the states of the chain are the counts and potential outcomes of that plate appearance. An illustration of this chain and the generic model of its transition matrix are shown in Figure 3.3 and Table 3.2, respectively. This model includes 19 unique states. The first 12 are the transient states, which represent the 12 potential counts a batter can face during an at-bat. The other 7 are the absorbing states, which represent the 7 most common outcomes of a plate appearance: a single (1B), a double (2B), a triple (3B), a home run (HR), an out on a ball in play (BIP), a walk (BB), and a strikeout (K). In the transition matrix below, the rows of the matrix represent the count the batter faces before the pitch and the columns represent the count afterwards. Each iteration of the chain represents one pitch and transitioning from one state to another is based on the outcome of the current pitch. If this pitch results in a strike, a ball, or a foul ball, the batter transitions from one count to another, and the at-bat continues to the next pitch. However, if that pitch results in a ball in play, a strikeout, or a walk, the at-bat is over, and it is “absorbed” into that state of the chain. 

Figure 3.3: Markov Chain for a Generic At-Bat


Table 3.2: Transition Matrix for the At-Bat Model

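The pitch-by-pitch state logic behind this chain can be sketched as follows. The helper function and state names here are illustrative, not taken from the paper's code:

```python
# Each pitch either moves the at-bat to a new count or absorbs it into
# one of the seven terminal outcomes of the model.
ABSORBING = {"1B", "2B", "3B", "HR", "BIP", "BB", "K"}

def next_state(balls, strikes, pitch_result):
    """Advance the count one pitch; return a count tuple or an absorbing state."""
    if pitch_result in ABSORBING:          # ball in play or terminal outcome
        return pitch_result
    if pitch_result == "ball":
        return "BB" if balls == 3 else (balls + 1, strikes)
    if pitch_result == "strike":
        return "K" if strikes == 2 else (balls, strikes + 1)
    if pitch_result == "foul":             # a foul cannot produce strike three
        return (balls, min(strikes + 1, 2))
    raise ValueError(pitch_result)

print(next_state(3, 1, "ball"))   # 'BB'
print(next_state(0, 2, "foul"))   # (0, 2): the count holds on a two-strike foul
```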

In order to find the transition probabilities necessary to build these matrices, the model uses real-world data on the past performance of specific players. This data is gathered using the aptly named R package baseballr to scrape baseballsavant.mlb.com (MLB’s online advanced metrics database), and fetch the data for every pitch that a player has seen or thrown in a certain time period (for the purposes of this paper, only data from the full 2019 MLB season is being used). From this raw data, plate appearances that have outlier outcomes (such as a hit by pitch, HBP) are deleted, pitch observations are grouped together by count, the outcomes of every pitch are recorded using binary variables, and the average of these variables are used to create the probabilities that go into the matrix. Once the data is turned into a working matrix, this matrix is run through multiple iterations until it reaches its steady state. Finally, by studying the steady state matrix, one can see how the expected probabilities of certain outcomes vary as the count progresses for a certain pitcher or hitter.
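The estimation step described above can be sketched as follows. The paper's actual pipeline uses R's baseballr; this is a minimal Python illustration with hypothetical pitch records:

```python
from collections import Counter, defaultdict

# Each record is one pitch: the count it was thrown in and the state it
# led to (another count or a terminal outcome). Records are hypothetical.
pitches = [
    ("0-0", "0-1"), ("0-0", "1-0"), ("0-0", "BIP"), ("0-0", "0-1"),
    ("0-1", "K"),   ("0-1", "1-1"),
]

# Group pitches by starting count and take the share of each observed next
# state: these empirical frequencies become the rows of the transition matrix.
by_count = defaultdict(Counter)
for count, nxt in pitches:
    by_count[count][nxt] += 1

probs = {c: {s: n / sum(cnt.values()) for s, n in cnt.items()}
         for c, cnt in by_count.items()}
print(probs["0-0"])  # {'0-1': 0.5, '1-0': 0.25, 'BIP': 0.25}
```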

To bring this analysis up another level and examine a specific pitcher-batter matchup, all that is needed is to create a new transition matrix that combines the individual matrices for pitcher and batter. This is achieved by taking the simple average of both matrices—adding them together and dividing each value by 2. Using this simple average has two advantages. First, it does not bias the outcomes toward one player or the other. For example, using a weighted average by number of pitches would bias outcomes toward the pitcher, as they tend to throw more pitches in a season than hitters are able to see. Second, it is a simple approach that allows us to quickly and directly determine how the tendencies of a pitcher affect the batter’s expected outcomes, and vice versa. Once this combined transition matrix is in place, it can be run through multiple iterations, and the expected outcomes can be examined in the same way as the individual matrices. 
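The combination step is a one-liner. A minimal sketch, using hypothetical 2x2 stand-ins for the model's full 19x19 matrices:

```python
import numpy as np

# Hypothetical individual transition matrices for a batter and a pitcher.
T_batter  = np.array([[0.6, 0.4],
                      [0.0, 1.0]])
T_pitcher = np.array([[0.8, 0.2],
                      [0.0, 1.0]])

# The unweighted average of two stochastic matrices is still stochastic,
# and it biases the matchup toward neither player.
T_matchup = (T_batter + T_pitcher) / 2
print(T_matchup[0])  # [0.7 0.3]
assert np.allclose(T_matchup.sum(axis=1), 1.0)
```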

4. Interpreting the Model

Now that this model can show the steady-state distributions for each matchup, it is important to learn how to understand and interpret these numbers in order to analyze these matchups. There are a couple of ways that this can be done. The first would be to simply read through the expected probabilities for each outcome and make inferences based on that. While this is an intuitive approach and works well for answering certain questions (e.g. which hitters would be most likely to strikeout against a certain pitcher), other questions require a deeper level of analysis. For example, which pitchers would be most successful at keeping hitters off base? What hitters are most likely to drive in a runner standing at first base? Which hitters would produce the most runs against a certain pitcher? To help answer these questions, one can leverage the expected outcome probabilities from the model by using them to measure more traditional baseball metrics. In this section, I will show a few basic examples of how this could be done and how each of these expected metric values can be useful.

On-base Percentage

On-base percentage (OBP) is exactly what it sounds like: it measures how often a hitter gets on base safely. This is similar to batting average in that it takes the sum of all types of hits into account, but it differs in that OBP also includes walks (and HBP, which is not considered in the model) in its calculation. The formula for OBP is shown below:

OBP = (H + BB + HBP) / (AB + BB + HBP + SF)

Using the expected outcome probabilities given by the model, calculating expected OBP is quite simple. Since we are looking at only one plate appearance, the denominator equals one, so the expected OBP is just the sum of the probabilities for the outcomes listed in the numerator of the formula.
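As a sketch, with hypothetical outcome probabilities from a steady-state row:

```python
# Hypothetical expected outcome probabilities for one plate appearance
# (the seven absorbing states of the model; they sum to 1).
p = {"1B": 0.15, "2B": 0.05, "3B": 0.005, "HR": 0.045,
     "BIP": 0.40, "BB": 0.10, "K": 0.25}

# With a single PA the denominator is 1, so expected OBP is just the sum
# of the on-base outcome probabilities (HBP is excluded from the model).
obp = p["1B"] + p["2B"] + p["3B"] + p["HR"] + p["BB"]
print(round(obp, 3))  # 0.35
```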

Slugging Percentage

Slugging percentage (SLG) is a version of batting average that is weighted by the number of bases each hit is worth. In this case, a single is worth one base, a double is worth two, and so on. Unlike OBP, walks are not included in the calculation. Thus, instead of dividing by total plate appearances, we divide by the total number of at-bats, which is the number of plate appearances not ending in a walk or HBP. This is shown in the formula below:

SLG = (1B + 2×2B + 3×3B + 4×HR) / AB

The calculation for the model is done in a similar way to OBP, with the only major difference being that the denominator is now the probability that the plate appearance ends in an at-bat, i.e. 1 − P(BB). This metric helps people to better understand what kinds of outcomes hitters tend to generate and gives managers an idea of who would be the likeliest to get an extra-base hit, both of which are useful to know in the context of a lineup change or a pinch-hitting scenario. Alternatively, we can also combine the expected SLG with the expected OBP to get an expected OPS (on-base plus slugging), which gives a better picture of a player’s expected production than SLG or OBP on their own.
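A sketch of the expected SLG and OPS calculations, with the same hypothetical outcome probabilities:

```python
# Hypothetical expected outcome probabilities for one plate appearance.
p = {"1B": 0.15, "2B": 0.05, "3B": 0.005, "HR": 0.045,
     "BIP": 0.40, "BB": 0.10, "K": 0.25}

# SLG weights each hit by total bases and divides by the probability that
# the plate appearance is an at-bat (i.e. does not end in a walk).
total_bases = 1 * p["1B"] + 2 * p["2B"] + 3 * p["3B"] + 4 * p["HR"]
slg = total_bases / (1 - p["BB"])

# OPS is simply expected OBP plus expected SLG.
obp = p["1B"] + p["2B"] + p["3B"] + p["HR"] + p["BB"]
ops = obp + slg
print(round(slg, 3), round(ops, 3))  # 0.494 0.844
```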

Weighted On-base Average

Weighted on-base average (wOBA) is a metric that shows how productive a hitter’s plate appearances are on average. Originally developed by noted sabermetrician Tom Tango, it provides a much more accurate representation of an individual player’s contributions than OBP, SLG, or even OPS. It does this by assigning a weight to each offensive outcome based on how many runs that outcome is worth on average. These weights change every year due to factors like variations in scoring, changes in how the game is played, and scaling adjustments (as wOBA is set on the same scale as OBP). For the 2019 season, the formula is as follows:

wOBA = (0.690×uBB + 0.719×HBP + 0.870×1B + 1.217×2B + 1.529×3B + 1.940×HR) / (AB + BB − IBB + SF + HBP)

As with the previous metrics, this calculation simplifies in the context of the model. Intentional walks are not considered in the data used, and as such, they are removed from the formula. Once again, the denominator reduces to one, so the wOBA for a given matchup is just the weighted sum of the expected outcome probabilities.
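A sketch of the expected wOBA calculation. The linear weights below are the 2019 values published on FanGraphs' Guts! page; the outcome probabilities remain hypothetical:

```python
# Hypothetical expected outcome probabilities for one plate appearance.
p = {"1B": 0.15, "2B": 0.05, "3B": 0.005, "HR": 0.045,
     "BIP": 0.40, "BB": 0.10, "K": 0.25}

# With HBP, IBB, and SF excluded and a single PA, the denominator
# (AB + BB) reduces to 1, leaving a weighted sum of the probabilities.
# Weights: 2019 linear weights from FanGraphs' Guts! page.
woba = (0.690 * p["BB"] + 0.870 * p["1B"] + 1.217 * p["2B"]
        + 1.529 * p["3B"] + 1.940 * p["HR"])
print(round(woba, 3))  # 0.355
```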

5. Example: Trout vs. Verlander

To show how this model is put together in context, let us look at the expected outcomes of an at-bat between Mike Trout (the 2019 American League MVP) and Justin Verlander (the 2019 AL Cy Young Award winner). In order to figure out how they would perform against each other, we must first understand how they each perform separately. To do this, we must first create a transition matrix for each of the players in question, whose steady states are shown below. From these distributions, some key patterns in the matchup begin to emerge.

Table 5.1: Steady State Matrix for Trout


Table 5.2: Steady State Matrix for Verlander


First, Trout is prolific at getting on base, and walking in particular. According to the model, nearly one out of every five of his plate appearances ends in a walk. This, combined with his other offensive outputs, puts him at an expected on-base percentage of .410, which is incredibly good. Second, as great as Trout is at getting on base, Verlander is just as good at keeping people off of the base paths. This is especially the case with his expected strikeout rate, which sits at 35.8%. Considering he also ends 42.9% of his matchups with an out on a ball in play, hitters have an average OBP of just .213 when they face Verlander.

As one might expect, given that they were calculated using the players’ 2019 data, these steady states are quite reflective of their individual performances. Below, I compare the generic-plate-appearance outcomes from the model to the players’ actual percentages from the 2019 season.

Table 5.3: Expected vs. Actual Probabilities for 2019 Mike Trout (%)


Table 5.4: Expected vs. Actual Probabilities for 2019 Justin Verlander (%)


Once the transition matrices are completed for both players, it is now time to combine the two together. As mentioned previously, this is done using the average of each transition probability, so as not to put too much weight on the performance of the hitter or the pitcher. This transition matrix is shown below, alongside the steady state distributions of that matrix. 

Table 5.5: Combined Transition Matrix for Verlander vs. Trout


Table 5.6: Steady State of Combined Transition Matrix


From the steady state, some notable observations can be made about this specific matchup. First, Verlander has a huge effect on Trout in terms of the strikeout. Whereas Trout has an expected strikeout rate of 21% against a generic pitcher, the model implies that this rate goes up to nearly 30% against Verlander, quite a large jump. Second, it seems that Verlander also has an effect on the rest of Trout’s offensive production. According to the model, Trout’s expected OBP goes down to .295 against Verlander. To put this in context, the average OBP in 2019 was .323. This is a pretty surprising result – reducing Trout to a below-average hitter is quite a feat after all – but it does make sense considering just how dominant Verlander was in 2019. So while there was a big drop-off for Trout, it is not hard to believe that this drop-off would be much worse for the other hitters in the league. Finally, the model shows that on average, Verlander (and the defense behind him) should be more worried about singles and home runs than they should be about extra base hits in the gap. In fact, it appears that the former outcomes are almost four times as likely to occur compared to the latter ones. With strategic defensive positioning becoming more common in the modern MLB, having access to information like this may be very useful in making defensive adjustments on the field.

6. Applications

Being able to interpret results of the model is half of the process; the other half is applying this knowledge to specific areas of the game where it can lead to better outcomes for the team that is using it. Below are some examples of areas within the game where this model may be useful for analysis or optimization.

Evaluating Lineups, Relief Pitchers, and Pinch-hitters

Almost every decision a manager makes regarding their team is based on how they can gain an advantage over the other team’s best pitchers or hitters. Lineups are built to maximize favorable matchups against the opposing starting pitcher, pinch hitters are sent to the plate to deliver in key situations, and relief pitchers are brought in to stifle specific hitters in the opposing lineup. This model can help managers make these kinds of decisions by giving them an idea of which players would be best paired against the opponent’s pitchers and hitters.

Evaluating Performance Splits

In the context of this paper, the model has been used in the general case. However, it can also be used to compare how players react to certain situations. For example, we can use the model to show how a batter fares against right- and left-handed pitchers or how a pitcher performs at home compared to when they are on the road. This is an integral part of performance evaluation, and one that also ties into the concepts of lineup and matchup optimization discussed in the previous section.

Evaluating Performance Starting at Different Counts

One of the most important contributions of this model is that it not only gives the predicted outcome probabilities for a generic plate appearance, but it also shows how these expected outcomes shift as the count changes in the at-bat. For example, in the Trout vs. Verlander simulation, the probability of a strikeout at the beginning of the at-bat was 28.8%. However, this probability skyrockets to 37.5% if the first pitch is a strike and a massive 53.1% if the count goes to 0-2. By understanding how these probabilities tend to shift, pitchers and hitters can learn which counts are stronger for them and adjust their strategies accordingly to put themselves in more favorable counts.
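This count-dependence is visible directly in the model's mathematics: each row of the steady-state matrix gives the absorption probabilities conditioned on a different starting count. A toy illustration with a hypothetical three-count chain (0-0, 0-1, 0-2) and two outcome columns, showing the strikeout probability rising with the strike count:

```python
import numpy as np

# Hypothetical transient-to-transient (R) and transient-to-absorbing (A)
# blocks; the two absorbing columns stand for "K" and "any other outcome".
R = np.array([[0.0, 0.5, 0.0],
              [0.0, 0.0, 0.5],
              [0.0, 0.0, 0.3]])
A = np.array([[0.1, 0.4],    # from 0-0: P(K), P(other)
              [0.2, 0.3],    # from 0-1
              [0.4, 0.3]])   # from 0-2

# Absorption probabilities by starting count, via the fundamental matrix.
B = np.linalg.inv(np.eye(3) - R) @ A
for count, row in zip(["0-0", "0-1", "0-2"], B):
    print(count, round(row[0], 3))   # P(strikeout) given the starting count
```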

7. Discussion

As has been shown throughout the course of this paper, this Markov chain model can serve as a powerful tool when it comes to analyzing player performance, simulating specific scenarios, and making managerial decisions on the field. However, this model is not without its limits. The main issue that can arise is that of sample size, which is due to the fact that the model groups pitches together by count. This splits the data in a way where, if a pitcher or hitter does not have enough plate appearances, the sample sizes for many of the counts will be too small to support reliable estimates. And without the stability of a large sample size for each count, it is difficult to measure the accuracy of the expected outcomes given by the model. This is particularly a problem when looking at performance splits, as this kind of analysis cuts the sample sizes at least in half.

With all that being said, having one count with a low sample size should not be a huge problem. For example, if a hitter has 500 plate appearances for the time period in question, he will have a sample size of 500 pitches at 0-0, and it is very likely that he will have seen enough pitches in most other counts as well. However, it could also be the case that only 20 of these plate appearances included a pitch on a 3-0 count. This is a fairly common case, as 3-0 is by far the rarest count for a hitter to see. Because of this rarity, any inaccuracies in what the hitter would do on a 3-0 count should not have much of an effect on the overall accuracy of the steady-state probabilities, given that the samples for every other count are above a certain threshold. In this case, a good rule of thumb is to make sure that at least 30 pitches have been thrown in each count so that the Central Limit Theorem is applicable.
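A simple way to operationalize this rule of thumb (the threshold and the pitch totals below are illustrative):

```python
from collections import Counter

# Hypothetical pitch totals per count for one hitter; flag any count with
# fewer than 30 observed pitches before trusting its transition row.
pitch_counts = Counter({"0-0": 500, "0-1": 240, "3-0": 21})

MIN_PITCHES = 30  # rule-of-thumb threshold for the CLT to apply
flagged = [c for c, n in pitch_counts.items() if n < MIN_PITCHES]
print(flagged)  # ['3-0']
```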

Another limitation of the model is that there is currently no way to account for certain exogenous changes that inherently affect the accuracy of the model. The model’s probabilistic predictions are based on the assumption that both players are mostly healthy and in a clear state of mind. Unfortunately, it is not possible to accurately quantify how these expected outcomes may change if, say, one of the players was playing through an injury or a situation causing severe mental stress. If another researcher were able to come up with an accurate measurement of how certain injuries affect performance, these kinds of considerations could eventually be added into the model. But for now, this is a next step that is beyond the scope of this current paper.

Speaking of next steps, the ongoing development of this model could provide the sabermetric community with ample opportunities for future research and applications. For example, it may be helpful to see what effect the opposing starting pitcher has on lineup optimization for a team. This could be accomplished by using this model in some way to gather the inputs for a standard Markov baseball model run across multiple lineup combinations for a given team. Similarly, this model could potentially be put to the test using a cross-validating simulation. However, doing so would require many combinations of one batter and one pitcher to run multiple at-bats against each other to see if the distribution of outcomes matches the predicted values of the model, which would be difficult to organize and time-consuming to record. But even if this field test is infeasible, this will not stop future researchers from trying to find other ways of testing the model and improving its accuracy. Additionally, other factors may be added into the model in the future, including possibly pitch types and/or locations. The potential applications are certainly not limited to the ideas given above, but these examples give a small sample of what could be accomplished through the future evolution of this model. Hopefully, this will be another step forward in a long journey toward making baseball less of a game of failure. 

References

Barry, Daniel, and J. A. Hartigan. “Choice Models for Predicting Divisional Winners in Major League Baseball.” Journal of the American Statistical Association, vol. 88, no. 423, 1993, pp. 766–774., doi:10.1080/01621459.1993.10476337.

Bukiet, Bruce, et al. “A Markov Chain Approach to Baseball.” Operations Research, vol. 45, no. 1, 1997, pp. 14–23., doi:10.1287/opre.45.1.14.

Doo, Woojin, and Heeyoung Kim. Modeling the Probability of a Batter/Pitcher Matchup Event: A Bayesian Approach, vol. 13, no. 10, 17 Oct. 2018, doi:10.1371/journal.pone.0204874.

Freeze, R. Allan. “An Analysis of Baseball Batting Order by Monte Carlo Simulation.” Operations Research, vol. 22, no. 4, 1974, pp. 728–735., doi:10.1287/opre.22.4.728.

“Guts!: FanGraphs Baseball.” Guts! | FanGraphs Baseball, FanGraphs, www.fangraphs.com/guts.aspx?type=cn.

Healey, Glenn. “Modeling the Probability of a Strikeout for a Batter/Pitcher Matchup.” IEEE Transactions on Knowledge and Data Engineering, vol. 27, no. 9, 2015, pp. 2415–2423., doi:10.1109/tkde.2015.2416735.

Howard, Ronald A. Dynamic Programming and Markov Processes. M.I.T. Press, 1960.

James, Bill. The Bill James Baseball Abstract 1987. Ballantine Books, 1987.

“Justin Verlander.” Justin Verlander - Stats - Pitching | FanGraphs Baseball, FanGraphs, www.fangraphs.com/players/justin-verlander/8700/stats?position=P.

Kolbush, J., and J. Sokol. “A Logistic Regression/Markov Chain Model for American College Football.” International Journal of Computer Science in Sport, vol. 16, no. 3, 2017, pp. 185–196., doi:10.1515/ijcss-2017-0014.

Kvam, Paul, and Joel Sokol. “A Logistic Regression/Markov Chain Model For NCAA Basketball.” Naval Research Logistics, vol. 53, 2006.

“Mike Trout.” Mike Trout - Stats - Batting | FanGraphs Baseball, FanGraphs, www.fangraphs.com/players/mike-trout/10155/stats?position=OF.

Pemstein, Jonah. “The Outcome Machine: Predicting At Bats Before They Happen.” Community Blog, FanGraphs, 2014, community.fangraphs.com/the-outcome-machine-predicting-at-bats-before-they-happen/.

Powers, Scott, et al. “Nuclear Penalized Multinomial Regression with an Application to Predicting at Bat Outcomes in Baseball.” Statistical Modelling, vol. 18, no. 5-6, 2018, pp. 388–410., doi:10.1177/1471082x18777669.

Slowinski, Steve. “WOBA.” WOBA | Sabermetrics Library, FanGraphs, library.fangraphs.com/offense/woba/.

Sokol, Joel S. “A Robust Heuristic for Batting Order Optimization Under Uncertainty.” Journal of Heuristics, vol. 9, 2003, pp. 353–370.

Sokol, Joel S. “An Intuitive Markov Chain Lesson from Baseball.” INFORMS Transactions on Education, vol. 5, no. 1, 2004, pp. 47–55., doi:10.1287/ited.5.1.47.

Ursin, Daniel Joseph. “A Markov Model for Baseball with Applications.” Theses and Dissertations, 964, 2014, dc.uwm.edu/etd/964.

Zaman, Zia. “Coach Markov Pulls Goalie Poisson.” Chance, vol. 14, no. 2, 2001, pp. 31–35., doi:10.1080/09332480.2001.10542266.
