A few days ago, Pauladin, who has taken over the LibStatLogic and TankPoints addons, left a comment indicating that he had a more complete data set that was giving him slightly more accurate coefficients for dodge diminishing returns. He was kind enough to share that data set with us, and we’ve analyzed it thoroughly in both MATLAB and Excel. As a result, we’ve managed to determine our dodge and parry DR coefficients to a significantly higher degree of accuracy than we’ve been able to achieve previously.

In this post, I’m going to provide the fits to the data and a little commentary on the logic that gets us to our final estimates. If you’re not interested in the fitting, skip to the end of the post for the full form of the diminishing returns equations including all of the constant values of interest.

**Parry**

First, let’s look at the parry fit:

```
General model:
     f(x,y) = 3+164/Q+(x/Q+y)/((x/Q+y)/C+k)
Coefficients (with 95% confidence bounds):
       C =       237.2  (237.2, 237.2)
       Q =       243.6  (243.6, 243.6)
       k =       0.886  (0.886, 0.886)

Goodness of fit:
  SSE: 5.852e-011
  R-square: 1
  Adjusted R-square: 1
  RMSE: 6.267e-007
```
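For readers who want to reproduce this kind of fit outside of MATLAB's curve fitting tool, here's a rough sketch in Python using SciPy's `curve_fit`. The data below is synthetic — generated from the fitted model itself, since the raw data set isn't reproduced in this post — so it only illustrates the fitting procedure, not the original analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

BASE_STR = 164.0  # paladin base strength at L85, per the fitted model above

def parry(X, C, Q, k):
    # Post-DR parry: 3 + baseStr/Q + t/(t/C + k), with t = x/Q + y,
    # where x = strength above base and y = pre-DR parry from rating
    x, y = X
    t = x / Q + y
    return 3.0 + BASE_STR / Q + t / (t / C + k)

# Synthetic data generated from the fitted constants; a stand-in for the
# real gear-swapping data set
rng = np.random.default_rng(0)
x = rng.uniform(0, 8000, 200)   # strength above base
y = rng.uniform(0, 20, 200)     # pre-DR parry percent from rating
z = parry((x, y), 237.1860403230, 243.6053629097, 0.886)

(C_fit, Q_fit, k_fit), _ = curve_fit(parry, (x, y), z, p0=(200.0, 250.0, 1.0))
print(C_fit, Q_fit, k_fit)
```

Because the synthetic data is noiseless, the fit recovers the input constants essentially exactly; with real tooltip data the confidence intervals come from the tooltip rounding instead.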

And this is what it looks like plotted:

This data set is *amazing*, because it’s so exact and covers a wide portion of the parameter space. And the fit we get from it is excellent, as you can tell from the error metrics. More importantly, look at the residuals, which are the differences between data and fit at each point:

The amazing thing here is that none of them are larger than 0.000005% parry. That’s how good this fit is – it beats the tooltip rounding by 3 orders of magnitude!

There are other hints here that tell us the fit is correct. We’ve known that the value of $Q_s$ was around 243.6 already; the fact that the fit correctly predicts it is further evidence in its favor. And the value we get for $C_p$ is consistent with what we’ve been seeing in fits based on warrior data, suggesting that $C_p$ is the same for both warriors and paladins. The value for $k$ is slightly different than the 0.885 we’ve been using based on our earlier, less accurate data sets, which explains the minor discrepancies we’ve been seeing in our fitted caps.

The MATLAB output only shows a few significant digits, but it keeps quite a few more. If I ask it for 10 decimal places, I get:

```
Cp = 237.1860845891
Qs = 243.6053468436
k  = 0.8860000602
```

The value of $k$ is too close to 0.886 to be a coincidence, in my opinion. If we fix that to be exactly 0.886, we get the following results from the fitting algorithm:

```
Cp = 237.1860403230
Qs = 243.6053629097
```

This value of $Q_s$ is a little different than what I’ve seen thrown around in other work. In fact, this was the issue that led to Pauladin and me analyzing the data in further detail. In the first pass, the residuals exhibited a distinct curvature, suggesting that something wasn’t right. Ideally, the residuals should be both *small* and *random*. But we were seeing relatively large residuals (on the order of 0.001%) that looked like they followed a functional form in both the parry and dodge data sets.

At first we thought this might be due to having the wrong functional form, because we simply couldn’t get that artifact to go away by only varying $k$ and $C_p$. However, after playing around with it in MATLAB, it was clear that $Q_s$ was to blame. We were using 243.608552799873, and the fact that it was off by about 0.003 was enough to show clear systematic error in the residuals. As such, now that we’ve eliminated that problem, I feel very confident in our values of $C_p$, $k$, and $Q_s$.
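To illustrate how sensitive the residuals are to $Q_s$, here's a small synthetic sketch (Python, with made-up gear points rather than the real data): it compares parry values generated with the corrected $Q_s$ against the same model evaluated with the old value. The error grows systematically with strength instead of scattering randomly about zero, which is exactly the signature we saw:

```python
import numpy as np

def parry(x, y, C, Q, k, base_str=164.0):
    # Post-DR parry from strength-above-base x and pre-DR parry y
    t = x / Q + y
    return 3.0 + base_str / Q + t / (t / C + k)

C, k = 237.1860403230, 0.886
Q_good, Q_bad = 243.6053629097, 243.608552799873  # corrected vs. old value

x = np.linspace(0.0, 8000.0, 100)   # strength above base (illustrative range)
y = np.full_like(x, 10.0)           # fixed pre-DR parry from rating

residual = parry(x, y, C, Q_good, k) - parry(x, y, C, Q_bad, k)
print(f"max residual: {residual.max():.2e} percent parry")
```

The residual climbs monotonically with strength — a clean functional form, not noise — which is why only a correction to $Q_s$ (and not to $k$ or $C_p$) could make it go away.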

**Dodge**

It gets even more interesting when we analyze the dodge data. This data set has only two points with agility much higher than base; the rest sit at either base agility or base agility +4 (from Kings). However, those two points at 480 agility ended up being crucial, because they were what allowed us to accurately nail down the value of $Q_a$, the agility-to-dodge conversion factor.

The other neat thing is that the dodge fit predicts the exact same $k$ value that the parry fit does, 0.886. To show that, here’s the fit I get while leaving all three parameters free, followed by the 10-decimal expansion:

```
General model:
     f(x,y) = 3+2+97/Q+(x/Q+y)/((x/Q+y)/C+k)
Coefficients (with 95% confidence bounds):
       C =       66.57  (66.57, 66.57)
       Q =      1e+004  (1e+004, 1e+004)
       k =       0.886  (0.886, 0.886)

Goodness of fit:
  SSE: 4.86e-011
  R-square: 1
  Adjusted R-square: 1
  RMSE: 5.711e-007

Cd = 66.5674515795
Qa = 9999.9310120191
k  = 0.8859999914
```

This is just more reinforcement that $k$ ought to be 0.886. If we fix that, we get slightly modified values for $C_d$ and $Q_a$:

```
General model:
     f(x,y) = 3+2+97/Q+(x/Q+y)/((x/Q+y)/C+0.886)
Coefficients (with 95% confidence bounds):
       C =       66.57  (66.57, 66.57)
       Q =      1e+004  (1e+004, 1e+004)

Goodness of fit:
  SSE: 4.139e-011
  R-square: 1
  Adjusted R-square: 1
  RMSE: 5.253e-007

Cd = 66.5674547339
Qa = 10000.1158515656
```

And by now, I think it’s pretty clear that $Q_a$ is likely to be exactly 10,000. So let’s fix that in place and see what we get:

```
General model:
     f(x,y) = 3+2+97/10000+(x/10000+y)/((x/10000+y)/C+0.886)
Coefficients (with 95% confidence bounds):
       C =       66.57  (66.57, 66.57)

Goodness of fit:
  SSE: 4.39e-011
  R-square: 1
  Adjusted R-square: 1
  RMSE: 5.392e-007

C = 66.5674461982
```
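As a quick sanity check on these constants, the dodge model is easy to evaluate directly. Here's a Python sketch with illustrative character-sheet numbers (480 agility as in the data set; the 8% pre-DR dodge from rating is made up for the example, not a value from the actual data):

```python
def dodge(agi, base_agi, pre_dodge,
          C=66.5674461982, Q=10000.0, k=0.886):
    # Post-DR dodge: 3 (base) + 2 (class bonus) + baseAgi/Q_a, plus the
    # diminished contribution of agility-above-base and dodge rating
    t = (agi - base_agi) / Q + pre_dodge
    return 3.0 + 2.0 + base_agi / Q + t / (t / C + k)

# Illustrative numbers only: 480 total agility, 97 base, 8% pre-DR dodge
post = dodge(agi=480.0, base_agi=97.0, pre_dodge=8.0)
print(round(post, 4))
```

Note how small the agility contribution is at $Q_a=10000$: 383 agility above base is worth less than 0.04% pre-DR dodge, which is why the two high-agility points were needed to pin the constant down at all.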

Here’s what that fit looks like:

And the residuals, this time plotted in 2-D (collapsing the agility axis):

As you can see, we’re doing a little better with the dodge residuals than we were with parry. Our error is no larger than about 0.000002% here. And the residuals do look more or less randomly distributed about the fit. I said earlier that it was the residuals plot that clued us into our errant values of $Q_s$ and $Q_a$, so to give you an idea of what that looks like, here’s the residuals plot with the systematic error included (i.e. using $Q_a=10025$):

You can definitely see the curvature I’m talking about. In addition, the two points towards the upper left are the high-agility data points, which the errant fit clearly gets wrong. Those two points allowed us to home in on 10,000 as the conversion factor, and relaxed the constraints on $k$ and $C_d$ that were causing the curvature.

It’s worth noting here that Mythor has been working hard on determining the warrior dodge cap to similar accuracy. So far we’ve got the warrior cap to $C_d^{\rm (warr)}=90.6425(0)$. We’re hoping to add a few more digits to the end of that with more mixed gear sets (i.e. high agility and high dodge rating).

**Block**

The block data was a *lot* more annoying, despite being a one-dimensional fit. Our initial fits seemed to get the functional form right, but had much larger residuals than our dodge and parry fits. For example, here’s a fit to the raw data just using our usual DR equation:

```
General model:
     f(x) = 3+10+x/(x/C+k)
Coefficients (with 95% confidence bounds):
       C =       150.4  (150.3, 150.5)
       k =       0.886  (0.8859, 0.8861)

Goodness of fit:
  SSE: 0.0005365
  R-square: 1
  Adjusted R-square: 1
  RMSE: 0.001904

Cb = 150.3606885994
k  = 0.8859892350
```

As you can see, this fit is still quite good, but the residual errors are now larger. Our max residual with this fit is around 0.004%, large enough that we see disagreements of ~0.01% with the tooltip value in certain gear sets. In fact, Jere from tankspot and I have been discussing this problem, as he noticed it before I did and brought it to my attention.

The confusing part was that the residuals were very evenly distributed about zero, with no apparent functional form. It looked like random noise on an excellent fit. But why would there be random noise in the function?

Well, there isn’t. Pauladin and Jere compiled more extensive data sets for mastery and block, and once that was done, the residual plot gave us a hint as to what was wrong. See if you can tell what it is:

As soon as I saw this, I knew the problem was rounding. Parallel lines on a residual plot like this are *usually* caused by discretization, and the curvature was due to the DR equation. The obvious answer is that there’s some rounding going on in the equation, and that’s causing our errors. It was just a matter of figuring out where and how to round to make them go away.

Which was a lot harder than it sounds, it turns out. The obvious first guess is to round mastery. And we did, over and over again, to different precisions, with no luck whatsoever. Rounding to the nearest 0.001 or lower had little to no effect, rounding to 0.01 or higher made the errors *worse*. There was something we were missing, and we didn’t know what. There’s a fun story behind this, but it’s rather long, so feel free to skip it if you’re not interested:

Pauladin and I puzzled over this for 5 or 6 hours yesterday, trying different rounding schemes (“What if we round (mastery/C)? What if we round the entire denominator?”) to no avail. Somewhere after midnight he gave up and went to bed. I soldiered on in MATLAB until around 1:30 AM before giving in to sleep.

But in that extra hour and a half, I made a breakthrough that would lead me to the answer. I was playing around with a “fake” data set in MATLAB (basically, I made an array of mastery values from 8 to 25 in steps of 0.0001) to try and figure out what part of the rounding determined the number and spacing of the features on the residual plot. From that, I determined that it came down to the rounding factor you used – it had to be a rational fraction times the point spacing. For example, my points were spaced by 1/10000, so I’d get nice clean lines for a rounding rule like

${\rm roundedMastery} = {\rm ROUND}((N*10000/M)*{\rm mastery})$

N and M would determine the number of lines and the amount of error. I could use this to tweak the number of lines and the size of the residual errors in my “fake” data set, which was nice, but it still didn’t line up with my *real* data. Defeated, I went to bed. It wasn’t until I was lying in bed thinking about the problem as I drifted to sleep that I stumbled across the idea that would get me to the correct function.

My “fake” data set used a point spacing of 0.0001. But in-game, we have a coarser discretization: 1 point of mastery rating. What if I discretized my “fake” mastery array in steps of exactly 1 mastery rating? As soon as I woke up this morning, I ran to the computer and tried it. And it gave me better results, but only sort-of. Rounding to the nearest $Q_m=179.280042052667$ didn’t quite work, but $Q_m/7$ gave me roughly the right number of lines, and $5*Q_m/7$ gave me roughly the right *amount* of error. But I couldn’t get both simultaneously.

Then it hit me. 5/7*179.280042052667 works out to 128.0572. That’s close to 128… what if they rounded in *binary*? 128 is 2^7, maybe they just rounded to the nearest 8-bit floating point value? So I tried rounding mastery by round(128*mastery)/128. And the residuals dropped by an order of magnitude. That was it! After that, some fine-tuning of the DR constant with the curve fitting tool dropped our residuals to the same level as our dodge and parry fits.
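In code, the rounding step that finally made the residuals collapse looks like this (a Python sketch; I use floor(x+0.5) rather than Python's built-in round(), since the latter rounds halves to even while MATLAB's round() rounds halves away from zero):

```python
import math

def rounded_mastery(mastery):
    # Snap mastery to the nearest 1/128 (i.e. 2^-7), the binary rounding
    # that the residual analysis pointed to
    return math.floor(128.0 * mastery + 0.5) / 128.0

def block_chance(mastery, C=150.375946929671870, k=0.886):
    # Paladin block: 3 (base) + 10 (class), plus DR applied to the
    # *rounded* mastery, not the raw value
    m = rounded_mastery(mastery)
    return 3.0 + 10.0 + m / (m / C + k)

print(rounded_mastery(20.123))  # -> 20.125, a multiple of 1/128
```

Rounding to a power of two is exactly the kind of discretization you'd expect from a fixed-point representation, which is why no amount of decimal rounding (0.01, 0.001, …) would reproduce the pattern.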

Since we’re already confident that $k=0.886$, we can fix that in the fitting algorithm to get a better estimate for the block cap $C_b$:

```
General model:
     f(x) = 3+10+round(128.*x)./128/(round(128.*x)./128/C+0.886)
Coefficients (with 95% confidence bounds):
       C =       150.4  (150.4, 150.4)

Goodness of fit:
  SSE: 7.897e-010
  R-square: 1
  Adjusted R-square: 1
  RMSE: 1.35e-006

Cb = 150.375946929671870
```

This block cap we’re getting is consistent with what I’ve seen in fits to warrior data, which is reassuring. This suggests that both warriors and paladins have the same block cap, even though our $k$ values differ (theirs is the old Cataclysm value of 0.956). And our residual error is now down to the point that we shouldn’t see any more of the 0.01% errors that were showing up before.

**Conclusions**

With this data, we can fairly confidently state the diminishing returns formulas for parry, dodge, and block:

**Parry Diminishing Returns**

${\rm Parry} = 3 + \frac{\rm baseStr}{Q_s} + \left(\frac{1}{C_p} + \frac{k}{({\rm Str}-{\rm baseStr})/Q_s+{\rm preParry}}\right)^{-1}$

**Dodge Diminishing Returns**

${\rm Dodge} = 3 + 2 + \frac{\rm baseAgi}{Q_a} + \left(\frac{1}{C_d} + \frac{k}{({\rm Agi}-{\rm baseAgi})/Q_a+{\rm preDodge}}\right)^{-1}$

**Block Diminishing Returns**

${\rm Block} = 3 + 10 + \left(\frac{1}{C_b}+\frac{k}{\rm roundedBlock}\right)^{-1}$

In these equations:

${\rm Parry}$, ${\rm Dodge}$, and ${\rm Block}$ are your post-DR parry, dodge, and block values (on the character sheet).

${\rm preParry}$ and ${\rm preDodge}$ are the pre-DR values given in the parry and dodge tooltips.

${\rm roundedBlock = ROUND(128*preBlock)/128}$ is the binary-rounded (nearest 1/128) block value used in the Block DR equation. ${\rm preBlock}$ is your pre-DR block percentage (for paladins this is just their mastery, i.e., if you have 20% mastery, ${\rm preBlock}=20$; for warriors it’s ${\rm masteryPercent}*0.5/2.2$).

${\rm Str}$ and ${\rm Agi}$ are your character-sheet strength and agility values. ${\rm baseStr}$ and ${\rm baseAgi}$ are your base (naked, unbuffed) strength and agility.

$C_p=237.1860(403230) \pm 0.00005478$ is the parry cap for paladins and warriors

$C_d=66.56744(61982) \pm 0.000006006$ is the dodge cap for paladins (warrior’s is $90.6425(0)$ according to latest estimates)

$C_b=150.3759(4692967) \pm 0.0000094316$ is the block cap for paladins and warriors

$k=0.886$ is the scale factor for paladins (the warrior value is $0.956$); these are exact

$Q_s=951.158596$ is the L90 strength-to-parry conversion factor for paladins and warriors (exact, given by Blizzard)

$Q_s=243.60536(29097) \pm 0.00000704$ is the L85 strength-to-parry conversion factor for paladins and warriors

$Q_a=10000$ is the L85 agility-to-dodge conversion factor for paladins and warriors, this is assumed to be exact based on fitting residuals.

And since those LaTeX expressions aren’t in easily copy/paste-able form, here are the constants in plain text:

```
Const   Nominal Value        95% CI         Notes
C_p  =  237.1860(403230)     0.00005478
C_d  =  66.56744(61982)      0.000006006    90.6425(0) for warriors
C_b  =  150.3759(4692967)    0.0000094316
k    =  0.886                (exact)
Q_s  =  951.158596           (exact)        L90
Q_s  =  243.60536(29097)     0.00000704     L85
Q_a  =  10000                (exact)
```
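Putting the equations and constants together, here's a Python sketch of the full set (a translation of the formulas above, not code from the game or the spreadsheet; the L85 paladin constants are baked in as defaults):

```python
import math

# Fitted/assumed constants (L85 values; paladin k)
C_P = 237.1860403230        # parry cap
C_D = 66.5674461982         # dodge cap (paladin)
C_B = 150.375946929671870   # block cap
K   = 0.886                 # paladin scale factor (warriors use 0.956)
Q_S = 243.6053629097        # L85 strength-to-parry conversion
Q_A = 10000.0               # L85 agility-to-dodge conversion (assumed exact)

def _dr(pre, cap, k=K):
    # Diminishing-returns map: pre-DR avoidance -> post-DR contribution
    return 0.0 if pre <= 0 else 1.0 / (1.0 / cap + k / pre)

def parry(strength, base_str, pre_parry):
    return 3.0 + base_str / Q_S + _dr((strength - base_str) / Q_S + pre_parry, C_P)

def dodge(agi, base_agi, pre_dodge):
    return 3.0 + 2.0 + base_agi / Q_A + _dr((agi - base_agi) / Q_A + pre_dodge, C_D)

def block(pre_block):
    # pre-DR block is snapped to the nearest 1/128 before DR is applied
    rounded = math.floor(128.0 * pre_block + 0.5) / 128.0
    return 3.0 + 10.0 + _dr(rounded, C_B)
```

For example, `block(20.0)` gives a paladin's character-sheet block chance at 20% mastery, and a naked character with no parry rating gets back exactly the $3 + {\rm baseStr}/Q_s$ floor.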

I’ve already updated my Tankadin spreadsheet with the updated $k$ and $C_i$ values. You may note that I haven’t bothered to add agility scaling to it, even though we’ve known about it for quite some time. I don’t think anyone will be stacking enough agility gear to make it relevant, so I don’t see a reason to needlessly complicate the spreadsheet.

As a final comment, I’d like to note that much of this post is directly attributable to the work of Pauladin, Mythor, and Jere, who deserve at least as much credit as I do (and probably more) for these results. Without their data sets and insights about the errors we were bumping into, these results would not have been possible.

Very nice find on the rounding scheme.

Can you explain how ROUND() works in MATLAB (a PM on MT is fine – xstratax)? Lua has no built-in rounding function, so we often have to write one like so:

math.floor(num1 * math.pow(10, decPlaces) + 0.5)/math.pow(10, decPlaces)

Where 'num1' is the number being rounded, and 'decPlaces' is the number of decimal places we need to round to (just in case that wasn't obvious at first glance :S ).

It's not a super awesome method, but it gets the job done.

It works just like you’d expect. ROUND(x) rounds x to the nearest integer. There’s a separate function for rounding to a different decimal place (e.g. roundn(x,-2) will round to 2 decimal places).

Since round(x) is functionally identical to floor(x+0.5), you can just use that in Lua, as you’ve suggested. Though in this particular case, you’d want to round to the nearest 1/128 (i.e. math.pow(2,7), or just hardcode 128).

So something like floor(x*128)/128 would provide for the expected result?

It would have to be floor(x*128+0.5)/128 to be strictly correct. Otherwise you’ll get small rounding errors here and there (i.e. $\pm$0.01% errors in block chance).

Ok, perfect.

Just amazing. Great work !

So what impact does this have on the stat priority? Still looking at Hit Cap/Expertise Cap > Mastery > Dodge > Parry > Haste?

From what I can tell, it should be minor. Most of what has been done here is refinement of values; the end result is fractional differences here and there.

The more I thought about it, I figure some developer must have done the mastery calculations in a `Decimal` data type, thinking it (with its fixed fractional precision of 8 bits) would be more accurate. Instead they ended up adding "round up on 0.5" errors.

They should have just left it in a floating point type, with its 13-15 significant figures.

My first thought was that their use of the 'decimal' data type might have something to do with easing the workload a little, but I admit I don't have enough programming knowledge to say whether decimal would actually have any performance advantage over floating point.

There is no need to calculate this often, nor to be very precise.

My guess would be that they query the stat and the database type is limited.


I read this a while ago and found it extremely helpful even as a DK tank. However, with the amounts of strength found on gear now (ilvl 511), I'm reaching a point where dodge may be how I need to reforge. But that depends on the value of $C_d$ for DKs – whether it's 66.56… like for paladins, or 90.62… like for warriors. Depending on that, I'll need to either gain or dump dodge. Have any sims been run on the dodge cap value for a DK?

It isn’t really a sim – we can determine it from in-game data. I can calculate the coefficients if you provide me with a data set.

Basically, here’s what you’d need to do:

1) Make a spreadsheet with the following columns:

Strength

Parry Rating

Dodge Rating

Parry percent (as read from character sheet)

Dodge percent (as read from character sheet)

Then, take off all of your gear and fill in the first row of columns (no buffs either). Next, put one piece of gear on and fill in the next row with the new values. Keep doing this for each piece of gear until you have everything on (and if possible, try and use gear with as much parry rating and strength as possible).

Then give me a link to the spreadsheet, and I’ll use the data to determine the diminishing returns coefficients just like I did in this post.

OK, I'll work on compiling that this weekend. I can make it even a bit more thorough if you like, as I have some slots with two or more gear choices.

Sure, the more data I have, the better the fit.