Craft beer has its studs, and then it has its seasonals, which give the beer-drinking community the opportunity to try new flavors on a trial basis. Call it the speed dating of beer drinking.
We won't commit to your pumpkin beer, not until it's won our hearts three autumns in a row. We don't know about blueberry in our beer, but give us another couple of shots at it, why not. Seasonals are how craft brewers can satisfy that constant desire for new beers without opening up their entire brand to the scrutiny they might invite if they launched a year-round beer that turned out to be a dud. It's part of why we have sites like Untappd, where we can record how many new beers we've tried.
In the seasonal beer, we have the intersection of some interesting phenomena. There's hype (New Beer! as DJ Clue would say), there's economics (does the hype factor help sell bad beers?), and there's a defined beginning and end date to our data. That sets the scene for easy analysis.
Do the scores for seasonal beers degrade noticeably over time?
That's step one. Step two would be trying to decide whether any drop in scores was due to the death of hops over time (this beer doesn't taste as good as it did a few weeks ago because of chemistry) or the death of hype (oh, this isn't as new and interesting as it was a month ago). I thought the best place to look would be a popular seasonal beer: Deschutes' Red Chair NWPA. It's an early-summer beer with a defined season, early June to early August. And people like it.
To wit: a graph of this year's scores over time for Red Chair. An apology! I couldn't get Tableau to save this graph to the web, so you have a dirty old screen grab. We asked the eighties what they thought of the best-fit line for Red Chair's scores over time, and we got this eight-bit response.
The size of each circle is determined by the number of check-ins at that score. This beer gets a lot of fours, so many that it kept up its near-four average right until the end. And yes, the slope of that line is negative, but no, it's not significant: the r-squared value was .004, which means that only 0.4% of the variance in score is explained by the date. So, no, there's no relationship here.
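For the curious, the line fit and the r-squared number can be reproduced with a few lines of Python. This is a minimal sketch, not the actual analysis: the check-in data below is made up for illustration, since I only have the real numbers inside Tableau.

```python
import numpy as np

# Hypothetical check-in data: days since the seasonal's release,
# and the score given at each check-in (NOT the real Red Chair data).
days = np.array([1, 5, 12, 20, 33, 41, 48, 55, 60], dtype=float)
scores = np.array([4.0, 3.75, 4.0, 4.25, 3.5, 4.0, 4.25, 3.75, 4.0])

# Best-fit line: score = slope * day + intercept
slope, intercept = np.polyfit(days, scores, 1)

# r-squared: the fraction of score variance explained by the date
predicted = slope * days + intercept
ss_res = np.sum((scores - predicted) ** 2)   # residual sum of squares
ss_tot = np.sum((scores - scores.mean()) ** 2)  # total sum of squares
r_squared = 1 - ss_res / ss_tot

print(f"slope = {slope:.4f} points/day, r^2 = {r_squared:.4f}")
```

A tiny r-squared like the one I got means the date tells you almost nothing about the score, which is exactly the null result described above.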
This sort of thing happens all the time. A null result. It's worth something to know that Red Chair's scores remained stable over the life of the seasonal release.
But in this case, we are sampling only one seasonal. And maybe it's good to the end because it's good.
What would happen if we used all seasonals, or at least a larger sample of them? If we found a relationship there, would that just be saying, "yeah, some seasonals are bad and it takes some time to figure that out"? Well, that's still an interesting result. It means that, as a craft community, we're willing to give higher scores to unknown beers at first, but not after we get to know them better. It means there is a hype factor that these seasonal brewers can take advantage of -- sell some beers and move on to the next seasonal before anyone notices it was a bad beer.
So, give this essay an incomplete. Or fail me -- that graphic is pretty poor. But at least put, in the grading comments along the side: "This was an interesting idea. Too bad it didn't work out."