3-2 last selection YTD 107-85-1 +36.24 units
Working on it.
Here's an interesting read by Ron Shandler, written in January 2005.
Ashley-Perry Statistical Axiom #3: Skill in manipulating numbers is a talent, not evidence of divine guidance.
Ashley-Perry Statistical Axiom #5: The product of an arithmetical computation is the answer to an equation; it is not the solution to a problem.
Merkin's Maxim: When in doubt, predict that the present trend will continue.
The quest continues for the most accurate baseball forecasting system.
I've been publishing player projections for the better part of two decades. During that time, I have been made privy to the work of many fine analysts and many fine forecasting systems. But through all their efforts at predicting the future, there have been certain constants. The core of every system has been composed of pretty much the same elements:
Players will perform within the framework of their past history and/or trends.
Skills will develop and decline according to age.
Statistics will be shaped by a player's health, expected role and home ballpark.
These are the elements that keep all projections within a range of believability. This is what prevents us from predicting a 40-HR season out of Juan Pierre or 40 SBs for David Ortiz. However, within this range of believability is a great black hole where any semblance of precision seems to disappear. Yes, we know that Albert Pujols is a leading power hitter, but whether he is going to hit 40 HRs, or 45, or 35, or 50, is a mystery.
You see, while all these systems are built upon the same basic elements, they also are constrained by the same global limitations. We are all still trying to project...
a bunch of human beings
each with their own individual skill sets
each with their own individual rates of growth and decline
each with different abilities to resist and recover from injury
each limited to opportunities determined by other people
and each generating a group of statistics largely affected by tons of external noise.
As much as we all acknowledge these limitations as being intuitive, we continue to resist them because the game is so darned measurable. The problem is that we do have some success at predicting the future and that limited success whets our desire, luring us into believing that a better, more accurate system awaits just beyond the next revelation. So we work feverishly to try to find the missing link to success, creating vast, complex models that track obscure trends and relationships, and attempt to bring us ever closer to perfection. But for many of us fine analysts, all that work only takes us deeper and deeper into the abyss.
Why? Because perfection is impossible, and nobody seems to have a really clear vision of what success is.
Measuring success
Is reasonable predictive accuracy even an attainable goal? Most agree that, given external variables such as injuries, managerial decisions and the like, only about 65-70% of the player population is even marginally predictable in any given year. But even within that group, you cannot get two analysts to agree about what it means to be accurate.
In truth, the only completely accurate projection would be one that looks like this:
AB HR RBI SB BA OBA SLG OPS
=== === === === === ==== ==== ====
PROJ 500 25 95 15 .280 .330 .450 .780
ACT 500 25 95 15 .280 .330 .450 .780
Clearly, we would all be overjoyed if every projection yielded perfect results. But it is impossible to be on target with all of these individual categories, each moving more or less independently over the 180 days of a baseball season.
An alternative might be to focus only on the most important statistical gauges. After all, each raw data category measures only an isolated element, and some stats like batting average are flawed. Perhaps a better measure of accuracy can be gleaned by using a gauge of overall talent, like OPS.
It sounds reasonable in theory. However, if I projected a player to have an OPS of about .868, for instance, he could post any of the following 2004 stat lines and my projection would still be considered a success:
AB HR RBI SB BA OBA SLG OPS
=== === === === === ==== ==== =====
A 240 13 40 0 .296 .365 .504 .8688
B 573 32 67 13 .255 .371 .497 .8685
C 467 19 76 10 .298 .380 .488 .8682
D 438 26 71 0 .279 .329 .539 .8679
E 704 8 60 36 .372 .413 .455 .8676
I suppose, for simulation gamers and pure scientists, these players are all comparable. And with my .868 OPS projection, any of these results would have made for a perfect success story. But I'd hardly think that, if I projected Brad Wilkerson (B) to have Jason Varitek's stats (C), you'd consider me a heck of a prognosticator. Kevin Mench (D) and Ichiro Suzuki (E) are hardly comparable either, even though OPS says they are.
Admittedly, John Mabry (A) should not be in this group, but aggregate gauges like OPS make no distinction for playing time. Even if we were to separate out full-timers from bench players, OPS still can't reflect the impact that Brad Wilkerson's additional 100-plus ABs have over Varitek or Mench.
Despite their similarity under a gauge of aggregate performance, these are very different skill sets for most fantasy applications.
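Since OPS is simply on-base average plus slugging average, a few lines of Python make the point: wildly different stat lines collapse to the same number. This is a minimal check using the OBA and SLG columns from the table above.

```python
# OPS = OBA + SLG; note that playing time never enters the formula.
# (AB, HR, SB, OBA, SLG) taken from the table above.
players = {
    "Mabry":     (240, 13,  0, .365, .504),
    "Wilkerson": (573, 32, 13, .371, .497),
    "Varitek":   (467, 19, 10, .380, .488),
    "Mench":     (438, 26,  0, .329, .539),
    "Suzuki":    (704,  8, 36, .413, .455),
}

for name, (ab, hr, sb, oba, slg) in players.items():
    print(f"{name:10s} AB={ab:3d} HR={hr:2d} SB={sb:2d} OPS={oba + slg:.3f}")
# All five print an OPS of .868 or .869, yet the AB/HR/SB profiles
# could hardly be more different for fantasy purposes.
```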
One way to resolve this issue might be to use a more fantasy-friendly gauge. Rotisserie dollar values can serve a dual purpose here. First, they measure only those categories that we are interested in. A second benefit is that they incorporate the importance of playing time - which OPS does not - and eliminate the problem of a John Mabry being included in this group. And in fact...
AB HR RBI SB BA 5x5
=== === === == === ===
Mabry,J 240 13 40 0 .296 $7
Wilkerson,B 573 32 67 13 .255 $22
Varitek,J 467 19 76 10 .298 $18
Mench,K 438 26 71 0 .279 $15
Suzuki,I 704 8 60 36 .372 $35
...now this group is no longer cut from the same cloth. But Rotisserie values still do not negate the underlying problem with comparing sets of numbers. Varitek is a nice $18 player, but $18 doesn't always buy you the same type of statistics:
AB HR RBI SB BA 5x5
=== === === == === ===
Varitek,J 467 19 76 10 .298 $18
Grissom,M 562 22 90 3 .279 $18
Wilson,J 652 11 59 8 .308 $18
Lugo,J 581 7 75 21 .275 $18
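As an aside, for readers wondering where figures like $18 come from: below is a minimal z-score sketch of the mechanics. Real valuation formulas (standings-gain points, full league pools, replacement levels, budget splits) vary widely; the pool, weights and budget here are invented for illustration and are not the formula behind the tables above.

```python
# Toy 5x5 hitting valuation: z-scores per category, summed, then mapped
# onto a dollar budget. Everything here is illustrative, not a real formula.
# (Runs are omitted because the tables above don't show them.)
from statistics import mean, pstdev

pool = {  # name: (AB, HR, RBI, SB, BA)
    "Wilkerson": (573, 32, 67, 13, .255),
    "Varitek":   (467, 19, 76, 10, .298),
    "Mench":     (438, 26, 71,  0, .279),
    "Suzuki":    (704,  8, 60, 36, .372),
}

def zscores(vals):
    m, s = mean(vals), pstdev(vals)
    return [(v - m) / s if s else 0.0 for v in vals]

# Category columns: HR, RBI, SB, and hits (AB*BA) as a playing-time-aware BA proxy.
cols = list(zip(*[(hr, rbi, sb, ab * ba) for ab, hr, rbi, sb, ba in pool.values()]))
totals = [sum(t) for t in zip(*(zscores(c) for c in cols))]

budget = 90.0                                     # invented dollars for this tiny pool
shares = [t - min(totals) + 1.0 for t in totals]  # shift so the worst player is > $0
for name, s in zip(pool, shares):
    print(f"{name:10s} ${budget * s / sum(shares):5.2f}")
```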
So, using dollar values doesn't work either. The last thing that a power-rich, speed-starved team needed was Marquis Grissom's numbers when you thought you were paying for Julio Lugo's. With all these obstacles to using aggregate performance gauges, perhaps we need to refocus on projecting individual stat categories. Can this provide any better hope for defining prognosticating success?
Here is where it gets "personal."
If I were to project that Albert Pujols is going to hit 45 HRs this year and he only hits 44, you will probably accept that level of inaccuracy. But what if he hits only 43? Or 42? Or 40? Or 39? At what point do we cross that imaginary line where the projection is "officially" deemed a failure?
You might say "40." I might say, "Okay, so if Pujols has 39 HRs on the final day of the season, and he hits a long fly ball that Corey Patterson makes an amazing over-the-wall leap to rob him of #40, has that one event been the difference between success and failure?" We have to draw a line between success and failure somewhere, but there is always going to be a grey area where it can go either way. You might consider the grey area as representing "inaccuracy." But more important is the fact that the size of this grey area is different for everybody.
In early 2003, we asked this type of question in two online polls at BaseballHQ.com. Here were the results:
If I were to project 35 HRs for Hideki Matsui this year, what is the threshold of actual HRs at which you would perceive that my projection had failed?
34 2%
32 3%
30 18%
28 31%
26 24%
24 14%
22 5%
20 3%
If I were to project 15 wins for Tom Glavine this year, what is the threshold of actual wins at which you would perceive that my projection had failed?
14 4%
13 10%
12 33%
11 27%
10 17%
9 3%
8 2%
7 3%
There is no clear consensus in either poll. That's why this is "personal." Accuracy can only be assessed based on your own subjective tolerance for error.
But you might say, "Shandler, there has to be some type of benchmark I can use. There has to be some way to gauge accuracy."
I'm not so sure. Some people might consider a broad-stroke approach to be sufficient: a flat percentage benchmark across all categories. For instance, you might be satisfied if a projection was off by only 10% across the board. Doesn't that seem reasonable? But a casual "eyeball test" can be deceiving. To wit:
AB R H HR RBI SB BA
=== == === == === == ====
PROJ 550 79 169 29 113 13 .307
ACT 599 70 169 26 100 10 .282
At first glance, this looks like a pretty good projection, at least one that you wouldn't be too unhappy with had you expected to purchase that first set of stats. Our eyeball test says that his overall productivity was pretty much on target. In reality, several of his statistics were mis-projected by more than 10%. Based on the "acceptable" 10% tolerance, this projection was a failure. Of course, I could just loosen that tolerance, perhaps to 15% or 20%, which would boost our perceived success rate, but the eyeball test would get much fuzzier.
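The flat-percentage test is easy to automate, which makes its harshness easy to see. A minimal sketch, using the exact numbers from the example above:

```python
# Flag every category where actual lands outside +/-10% of projected.
proj = {"AB": 550, "R": 79, "H": 169, "HR": 29, "RBI": 113, "SB": 13, "BA": .307}
act  = {"AB": 599, "R": 70, "H": 169, "HR": 26, "RBI": 100, "SB": 10, "BA": .282}

def misses(projected, actual, tol=0.10):
    return {c: abs(actual[c] - projected[c]) / projected[c]
            for c in projected
            if abs(actual[c] - projected[c]) > tol * projected[c]}

for cat, err in misses(proj, act).items():
    print(f"{cat}: off by {err:.0%}")
# R, HR, RBI and SB all blow through the 10% band, even though the
# eyeball test says this projection looks pretty good.
```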
Here is the same example, with actual results roughly 15-20% off the projection:
AB R H HR RBI SB BA
=== == === == === == ====
PROJ 550 79 169 29 113 13 .307
ACT 632 62 169 23 87 7 .267
My own eyeball test says that, while this projection was marginally in the ballpark, perhaps a 20% error is beyond the limits of my comfort level. But again, you might look at the above results and think these are perfectly fine within your tolerance for error. Can we agree on anything? Not likely.
The irony with the above examples is that, despite the shortfalls in batting average, both projections nailed this player's hit total. All of which begets other questions...
If a slugging average projection is dead on, but the player hits 10 fewer HRs than expected (and likely, 20 more doubles), is that a success or a failure?
If a projection of hits and walks allowed by a pitcher is on the mark, but the bullpen and defense implodes, and inflates his ERA by a run, is that a success or a failure?
If the projection of a speedster's rate of stolen base success is perfect, but his team replaces the manager in May with one that doesn't run, and the player ends up with half as many SBs as expected, is that a success or a failure?
If a batter is traded to Colorado and all the touts project an increase in production, but he posts exactly the statistical line that would have been projected had he not been traded, is that a success or a failure?
If the projection for a bullpen closer's ERA, WHIP and peripheral numbers is perfect, but he saves 20 games instead of 40 because the GM decided to bring in a high-priced free agent at the trading deadline, is that a success or a failure?
If I project a .272 batting average in 550 AB and the player only hits .249, is that a success or failure? Most will say "failure." But, wait a minute! The real difference is only two hits per month. That shortfall of 23 points in batting average is because a fielder might have made a spectacular play, or a screaming liner might have been hit right at someone, or a long shot to the outfield might have been held up by the wind... once every 14 games. Does that constitute "failure?"
Many questions, but all rhetorical.
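Rhetorical, yes, but the arithmetic in that last question is worth a quick check:

```python
# A 23-point BA shortfall over 550 AB, translated into hits.
ab = 550
hits_proj = round(.272 * ab)      # 150 hits
hits_act  = round(.249 * ab)      # 137 hits
shortfall = hits_proj - hits_act  # 13 hits over a full season
print(shortfall, round(shortfall / 6, 1))  # 13 total, about 2.2 per month
```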
When it comes down to it, perhaps the only thing we can really trust is the eyeball test and our own personal tolerance for error. A fantasy leaguer with a loaded bullpen doesn't care whether his third closer puts up 40 saves or 30. When you are leading your league in home runs by 25, it doesn't matter whether Jeff Bagwell hits 39 HR or 27. And when all the aggregates wash out come October, the fact that your $25 Barry Zito saw his ERA rise by over a run will only affect your team's bottom line by 0.15 - in most leagues, a loss of maybe 2-3 points at worst.
Obstacles to comparing different systems
It's tough enough to answer these questions when you are trying to measure the accuracy of a single set of projections. When you open things up and begin to look at multiple prognosticators, then there are even more issues to address.
The number of published projections that appear in print and online has been rising annually, and with them, expectations, questions and unbearable hype. How can there be equivalent credibility with so many different sets of numbers?
I've been asked to prove my prognosticating prowess more often than ever before. There have also been several recent analyses published that compare the Baseball Forecaster and Baseball HQ numbers to those of other touts, but the same thing happens time and time again:
1. We never finish first.
2. The purveyor of the study always does.
Is it any wonder? How can there be so many different "objective analyses" out there, and all of them so allegedly accurate?
Peter "Ask Rotoman" Kreutzer from mlb.com has this take: "Someone who tries to sell you projections that are "much better" than any others is bulls****ing you. The important thing for you as a consumer to understand is what system your prognosticator is using, what biases that introduces, and learn to make the necessary adjustments to incorporate risk evaluation into the process. Only then can you get the players who fit your league's rules best."
Ah, biases. The truth is, there is an inherent bias that exists in any comparative analysis that includes the author as one of its subjects. It's impossible to avoid. The reason is obvious: A tout is not going to publish such an analysis unless he can present himself in a favorable light. And the only way to do this is to instill some level of bias into the structure of the study.
Here are some of the ways this is done:
Selection of the study group: Some of the analyses I've seen contained perhaps a half dozen or so prognosticators, but I can easily count at least 20 books, magazines and websites that published projections last year. How do we know whether there were other touts not chosen for the study that might have fared better?
I've seen qualifiers such as: "We evaluated only those players who had a forecast provided by each of the seven projection systems." This means that the addition or omission of any one prognosticator could change the composition of the players studied, and thus the results of the study.
As such, unless the study is exhaustive, it cannot be completely objective.
Selection of the study variables: We've already discussed the limitations inherent in choosing a study variable. However, those who conduct comparative analysis have to select something to compare. Will it be an overall aggregate gauge like OPS or Win Shares? Will it be a fantasy-relevant gauge like dollar values or fantasy points? Will it be a raw, traditional measure like ERA or batting average? And most important, how do we know that the measuring gauge chosen isn't one that just happens to yield the most favorable results?
As such, unless the study uses a viable test variable, it cannot be completely objective.
Selection of the study methodology: Even if a comparative analysis included all relevant test subjects and somehow found a study variable that made sense, there is still a concern about how the study is conducted. Does it use a recognized, statistically valid methodology for validating or discounting variances? Or does it use a faulty system like the ranking methodology used by Elias to determine Type A, B or C free agents? Such a system -- which ironically is the basis for Rotisserie scoring -- distorts the truth because it can magnify tiny differences in the numbers and minimize huge variances.
As such, unless the study uses a proven, accurate methodology, it cannot be completely objective.
And bias immediately enters into the picture. You simply cannot trust the results.
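To see how an Elias-style ranking methodology can magnify tiny differences and flatten huge ones, consider this toy example (the raw values are invented):

```python
# Rank-based scoring: each player earns points equal to his rank position.
stats = {"A": 100.0, "B": 99.9, "C": 50.0}

ranked = sorted(stats, key=stats.get)             # worst to best
points = {p: i + 1 for i, p in enumerate(ranked)}
print(points)                                     # {'C': 1, 'B': 2, 'A': 3}
# A beats B by 0.1 in raw terms and gains a full point on him;
# B beats C by 49.9 and gains... exactly the same single point.
```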
The only legitimate, objective analysis that can filter out the biases is one that is conducted by an independent third party. But the challenge of conducting such a study is finding a level playing field that all participants can agree on. Given that different touts have different goals for their numbers, that playing field might not exist. And even if one should be found, there will undoubtedly be some participants reluctant to run the risk of finishing last, which could skew the results as well.
Other challenges to assessing projections
Ashley-Perry Statistical Axiom #4: Like other occult techniques of divination, the statistical method has a private jargon deliberately contrived to obscure its methods from non-practitioners.
As users of player projections, we are in a hurry to make decisions; we want answers, and quickly. We want to find a trusted source, let them do all the heavy lifting, and then partake of the fruits of their labor. The truth is, the greater the perceived weight of that lifting, the greater the perceived credibility of the source. Only the small percentage of users who speak that "private jargon" can validate the true credibility. The rest of us have to take it on faith that the existence of experts proficient in these occult techniques is proof enough.
Well, so what? That's why we rely on experts in the first place, isn't it? What is the real problem here?
Complexity for complexity's sake
One of the growing themes I've been writing about these past few years is the embracing of imprecision in our analyses. This seems counter-intuitive given the growth in our knowledge. But the game is played by human beings affected by random, external variables; the thought that we can create complex systems to accurately measure these unpredictable creatures is what is really counter-intuitive.
And so, what ends up happening in this world of growing complexity and precision is that we obsess over hundredths of percentage points and treat minute variances as absolute gospel. When George W. Bush proclaimed that his 3.3 million vote margin was a "mandate," the fact was, in terms of popular vote, the margin of victory was only 2.8%. That's like saying the Yankees' 3-game victory over the Red Sox for the A.L. East title -- also about a 3% margin -- was a resounding triumph. Yes, the Yankees did clearly win, but suggesting that a 3% margin is significant is a bit of quantitative spin.
Two buddies go to the ballpark and are stocking up at the concession stand. Their orders arrive but one notices that he was given fewer nachos on his plate than his friend. He takes offense, and to prove his point, starts counting the chips. In the end, for want of confirming what turned out to be a variance of two chips, he missed out on two important facts:
1. Both plates were delicious.
2. The beer was missing.
And we also forget "hard" baseball facts such as these, each of which checks out with simple arithmetic (see the sketch after this list):
The difference between a .250 hitter and a .300 hitter is fewer than 5 hits per month.
A true .290 hitter can bat .254 one year and .326 the next and still be within a statistically valid range for .290.
A pitcher allowing 5 runs in 2 innings will see a different ERA impact than one allowing 8 runs in 5 innings, even though, for all intents and purposes, both got rocked.
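Here is that arithmetic, assuming a 550-AB season spread over six months and a simple binomial model for hits (the assumptions are mine, chosen for illustration):

```python
import math

# Fact 1: the gap between a .250 and a .300 hitter, 550 AB over six months.
print((0.300 - 0.250) * 550 / 6)        # ~4.6 hits per month -- fewer than 5

# Fact 2: a true .290 hitter's single-season BA, modeled as 550 binomial trials.
sd = math.sqrt(0.290 * 0.710 / 550)     # ~.019 standard deviation
print(0.290 - 2 * sd, 0.290 + 2 * sd)   # ~.251 to ~.329; .254 and .326 both fit

# Fact 3: two "got rocked" outings, very different ERA damage.
print(5 * 9 / 2, 8 * 9 / 5)             # 22.5 vs 14.4 ERA for those innings
```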
And finally, there is the issue of "Marcel the Monkey." This is the assertion by folks on some of the sabermetric blogs that a "chimp forecasting method" - a simple averaging of the last few seasons with minor adjustments for age - is nearly as good as any other, more comprehensive system.
Well... this is mostly true. If 70% accuracy is the best that we can reasonably expect, Marcel gets us about 65% of the way there. All of our "advanced" systems are fighting for occupation of that last 5%.
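For the curious, a "chimp forecast" in this spirit might look like the sketch below. The recency weights, the regression amount, and the age adjustment are illustrative stand-ins, not the actual published Marcel parameters:

```python
# Toy "chimp" projection: recency-weighted average of the last three
# seasons, regressed toward the league mean, nudged for age.
# Every constant here is invented for illustration.
def chimp_forecast(last3_hr, age, league_mean=18.0,
                   weights=(3, 4, 5), regress=0.2):
    w_avg = sum(w * hr for w, hr in zip(weights, last3_hr)) / sum(weights)
    toward_mean = (1 - regress) * w_avg + regress * league_mean
    age_bump = 1 + 0.01 * (27 - age) if age < 27 else 1.0
    return toward_mean * age_bump

print(round(chimp_forecast([34, 43, 46], age=25)))  # -> 38
```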
Gall's Law: A complex system that works is invariably found to have evolved from a simple system that works.
Occam's Razor: When you have two competing theories which make exactly the same predictions, the one that is simpler is preferred.
Even if it was created by a monkey, I suppose.
Married to the model
It's one thing if the model has a name like Claudia Schiffer, but quite another if a tout is so betrothed to his forecasting model that "it" becomes more important than the projections.
Whenever I see a tout write, "Well, the model spit out these numbers, but I think it's being overly optimistic," I cringe. Well then, change the numbers! The mindset is that you have to cling to the model, for better or for worse, in order to legitimize its existence. The only way to change the numbers is to change the model.
On occasion, I will take a look at one of my projections and admit that I think it's wrong. Usually, it's because I see things in the BPIs that I overlooked the first time through. Then I change the numbers.
In the end, is the goal to have the best model or to have the best projections? That should be a no-brainer.
Hedging and the comfort zone
Given the variability in player performance, a "real world" forecast should not yield black-or-white results. Some touts accomplish this by providing forecast ranges, others by providing decile levels. We provide a single statistical projection, for simplicity's sake, and then color it in our player commentaries. In fact, most touts do this; however, many use the commentary as a hedge against the numbers they've committed to. But when does a hedge negatively impact your ability to assess the accuracy of a projection?
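For readers unfamiliar with decile-style forecasts, here is a minimal Monte Carlo sketch of how a range might be produced. The distribution and every parameter are assumptions for illustration, not anyone's actual method:

```python
import random

# Simulate a hitter's HR total: 550 AB at a ~5% true HR rate, with some
# season-to-season uncertainty in that rate. All parameters invented.
def hr_range(trials=2000, ab=550, rate=0.05, rate_sd=0.008):
    totals = []
    for _ in range(trials):
        season_rate = max(0.0, random.gauss(rate, rate_sd))
        totals.append(sum(random.random() < season_rate for _ in range(ab)))
    totals.sort()
    return {p: totals[trials * p // 100] for p in (10, 50, 90)}

print(hr_range())  # e.g. {10: 19, 50: 27, 90: 36} -- a range, not a point
```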
One of the best examples was Ben Sheets last year. Here was a pitcher coming off a 4.46 ERA who nonetheless had incredible leading indicators. The typical forecast would never venture into uncharted, sub-4.00 ERA territory, because straight computer-generated projections would neither find the history nor see a trend pointing in that direction.
Still, four of us did break ranks. But which of these projections, and comments, was the most committed?
Tout 1 - Us (3.94 ERA projection here last year, updated to 3.82 on BaseballHQ.com): "BPIs are developing nicely, but a 5% drop in his strand rate served to hide those gains in a higher ERA. Keep a close eye on this one. He's at the prime spot to post a breakout season. Major sleeper."
Tout 2 (3.97 ERA): "He's got better stuff than his numbers would indicate but his upside is limited pitching for the Brewers. He would warrant fantasy consideration in NL-only leagues but probably not elsewhere."
Tout 3 (3.83 ERA): "Fewer walks, fewer strikeouts. Hard to say what to make of that. Besides hooray if you're in a 4x4 league, boo if you're in a 5x5 league. Basically the same season."
Tout 4 (3.79 ERA): "Sheets has been a very good pitcher but lacks the consistency to reach the next level. He relies too much on just his curve and fastball... If he can find a third pitch and stay away from the gopher ball then he has a chance to post a sub 4.00 ERA next season."
The other three touts offered skittish recommendations, even though their "official" published projections all broke the 4.00 ERA barrier. It's a common hedge. Did they truly believe Sheets had the potential to post a sub-4.00 ERA? You wouldn't know it from their comments alone. That makes it difficult to pin down the "official line" on their projections. In the past, some authors used this tactic as a means of playing both sides so that they always had a winning projection to promote the following year. Thankfully, that level of deception is rare these days.
But also notice that none of us four touts came anywhere close to projecting the season that Sheets really did put up, even though the evidence in his BPIs was strong and supported such a breakout performance.
As a group, pundits have a strong tendency to provide numbers that are more palatable than realistic. That's because committing to either far end of a range of expectation poses a high risk. Few touts will put their credibility on the line like that, even though we all know that those outliers are inevitable. The easy road is often just to split the difference.
I handle this phenomenon in the Baseball Forecaster by offering up the possibility of outlying performances in the commentary. Occasionally, I do commit to "official" outlying projections when I feel the data supports it. But on the whole, most projections are going to be within close range of the mean or median expectation of a player's performance.
I like to call this the comfort zone, a range bordered by the outer tolerances of public acceptability of a projection. In most cases, even if the evidence is outstanding, published pundits will not stray from within the zone.
For instance, nearly everyone in 2004 assumed that a healthy Randy Johnson would be a vintage Randy Johnson, yet not one tout had him down for a 20 win, 2.50 ERA season. Most touts doubted Esteban Loaiza's ability to repeat his 2003 numbers, but nobody was willing to risk the possibility that he might revert to his pre-2003 form. In fact, in a survey of 10 touts last April, eight of them projected an ERA between 3.50 and 3.93, even though Loaiza had never posted an ERA in that range in his entire career.
They say that the winners in any fantasy league are those who have the most outliers on their teams. There is an element of truth to this. It is likely that owners who rostered surprises like Johan Santana and Adrian Beltre fared well in the standings this past year. The problem is, these types of performances are the most difficult to project. Still, the prognosticators who fare best in this exercise should get their props, shouldn't they?
According to analyst John Burnson, the answer is no. He says: "The issue is not the success rate for one player, but the success rate for all players. No system is 100% reliable, and in trying to capture the outliers, you weaken the middle and thereby lose more predictive pull than you gain. At some level, everyone is an exception!"
Peter Kreutzer again: "Those projections that are outside the comfort zone, as Ron calls it, are flashy, but they're of little statistical use. What you want is to follow the predictor who gets the general flow (guys who improve, guys who fall off) more right than anyone else. If someone does that they'll make you money in almost any league."
Yes! That "general flow" is far more important than any pure accuracy level. And far more attainable. And perhaps, that is the study variable that makes the most sense.
Finding relevance
Berkeley's 17th Law: A great many problems do not have accurate answers, but do have approximate answers, from which sensible decisions can be made.
Maybe I'm a bit exasperated by this obsession with prognosticating accuracy because my own projections system is more prone to stray from the norm - by design - and thus potentially fares worse in any comparative analysis. My system is not a computer that just spits out numbers. I don't spend my waking hours tinkering with algorithms so that I can minimize my mean squared errors. My computer model only spits out an objective baseline; then the process becomes hands-on and highly subjective.
From the Projections Notes page at BaseballHQ.com:
"Skills performance baselines are created for every player beginning each fall. The process starts as a 5-year statistical trend analysis and includes all relevant performance data, including major league equivalents. The output from this process is a first-pass projection.
"Our computer model then generates a series of flags, highlighting relevant BPI data, such as high workload for pitchers, contact rate and PX levels trending in tandem, xERAs far apart from real ERAs, etc. These flags are examined for every player and subjective adjustments are made to all the baseline projections based on a series of "rules" that have been developed over time."
As an example, let's look at Pujols. After seasons of 37, 34, 43 and 46 HRs, his baseline projection called for 42, which represented a normal regression to the mean. However, our flags pointed out consistent upward trends in contact rate, fly ball ratio and batting eye, plus a second-half surge in his power index. Add in his alleged age (25) and a reliability rating of 94, and all signs pointed north for his power trend to continue. Our projection now calls for 50 HRs.
Why 50? I believe it is reasonable to expect Pujols to maintain his second half PX level for a full six months, given the trends in his skills. For some people, it might take a moment to accept 50, but the more you look at it, the more it passes the eyeball test. This is a player with no true comparables in history. All we have is our eyeballs and a general idea of what makes sense. Fifty makes sense to me.
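Incidentally, the 42-HR baseline cited above is consistent with a simple recency-weighted average of those four seasons. The weights here are my own illustration, not the actual BaseballHQ trend formula:

```python
# Recency-weighted HR baseline from the four season totals cited above.
hr = [37, 34, 43, 46]    # oldest to newest
weights = [1, 2, 3, 4]   # heavier weight on recent seasons (illustrative)
baseline = sum(w * h for w, h in zip(weights, hr)) / sum(weights)
print(baseline)          # 41.8 -- rounds to the 42-HR baseline in the text
```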
The end result of this system is not just a set of inert numbers. As I mentioned earlier, I consider the commentary that accompanies the numbers to be just as vital a part of the "projection," if not more so. Think of it this way... The numbers provide a foundation for our expectations, the "play-by-play," if you will. The commentary, driven by all the BPIs, provides the "color." Both, in tandem, create the complete picture.
Admittedly, a system with subjective elements tends to give classic sabermetricians fits. But that's okay because, at the end of the day, we're still dealing with...
a bunch of human beings
each with their own individual skill sets
each with their own individual rates of growth and decline
each with different abilities to resist and recover from injury
each limited to opportunities determined by other people
and each generating a group of statistics largely affected by tons of external noise.
Now here's the kicker... In the end, my primary goal is not accuracy. My goal is to shape the draft day behavior of fantasy leaguers. For certain players with marked BPI levels or trends, I often publish projections that are not designed to reflect a "most likely case" scenario but rather to present a "strong enough case to influence your decision-making." There are reasons to stray beyond the comfort zone.
For instance, sometimes, when my projection says $27, it is intended solely to make you say $22 when the bidding stops at $21 (assuming normal market conditions). If I had published a projection of $23 or $24, that's not enough of a psychological push for you to take that last leap of faith. I need a set of numbers that screams at you: "These BPIs could be HUGE! His upside could be far greater than any projection system would reasonably predict! It's worth the risk. Yes, SAY $22!"
And I want you to make these decisions with a minimum of hesitation. That lack of hesitation comes from a trust I try to build between us, from sound analysis and a 19-year track record that has been shown to work.
How can I play so loose with dollar values? Because they are entirely market-driven anyway. If you are convinced that Eric Chavez is worth $26 and land him for $21, you will have overpaid if the rest of the league sees him as no more than a $17 player. Even if he is really worth $35. So my goal is to get you into the mode of playing off that volatility with the knowledge of where your profit opportunities really lie.
And that answers the question, "For any player, what is the one piece of information that is far more important than the most accurate projection?" That information is how the other owners in your league value that player. If you know that, and have a sense of a player's potential, it doesn't matter a whit how accurate your projections are.
So our track record is not necessarily built on any given level of prognosticating accuracy. Our track record is built on a series of analytical tools and a decision-making process that has led to success in playing this game. And since your ultimate goal is to fare better in your fantasy competitions, I see this all as a justifiable means to an end.
I'm not publishing deliberately inaccurate projections. I'm just taking a potential reality from an upper or lower decile, based on strong underlying indicators, and engaging in a bit of behavior modification. If you are offended by the psychological implications, I apologize. If you now consider me a sabermetric hack, I've been called worse. But the users of this information seem to be winning their leagues so I'll accept the baggage that comes along with it.
It's all about winning. Reasonably accurate projections are important, but they will only get you part of the way there. The rest is knowing what to do with the information, especially at the draft table. Even if you had a crystal ball and knew exactly what every player's statistics were going to be next year, you could still lose at this game.
I believe your goal is to win. As such, you should not worry if my analysis says that David Ortiz is going to hit 39 HRs and the other prognosticator says 42. Even if his projection is powered by the latest shiny, new computer model, by next October 2, the difference between his and mine may be three unexpected gusts of wind.
Baseball Variation of Harvard Law: Under the most rigorously observed conditions of skill, age, environment, statistical rules and other variables, a ballplayer will perform as he damn well pleases.
------------------------------------------------------------------
Will take an old, time-tested play with the Angels at the Texas Rangers. Did you know the fan forecast is 78% on the Angels today? Yet the money line has moved to Texas -135 from -130. Part of the reason this whipping-boy Angels team is getting the money today is what was said on their official web site about keeping the team healthy and resting players. The Rangers are one of those young, surging second-half teams with everything to prove. This might be a soft Los Angeles team on the field tonight.
Don't forget Santana is 1-9 on the road with an 8.33 ERA, 1.85 WHIP and a .401 opponents' OBP. When he takes the mound on road trips, the Angels are 3-10. What's not to like about the home team?
TEX -125 for 2 units (W)
1-0 +2.00 units