# Avoidance Diminishing Returns in MoP – Part 2

In Part 1, I analyzed a data set collected at level 85 and came up with equations that accurately describe avoidance diminishing returns at that level. In this installment, we’re going to return to the level 90 data set provided by Klaudandus. This data set was troublesome before because of its near-linearity, making it difficult to fit. But the level 85 data gave us a few key pieces of information that will help us nail down the level 90 equations. Those key points are:

1. The parry gained through strength is affected by diminishing returns
2. The dodge and parry equations at level 85 have the same form as the Cataclysm ones, but the constants have been changed slightly.
3. The scale factor $k$ for both dodge and parry has changed from 0.956 to 0.885.
4. The dodge cap $C_d$ hasn’t been changed from the Cataclysm value (65.631440).
5. The parry cap $C_p$ has been changed, and our best estimate is 235.5 with a range of $\pm$1 or so.

The file containing the data and calculations for this section is here: dr_eqn_data_L90.m. Klaud’s data set is fairly extensive, but very scattered. Rather than try to reproduce it all in one table, I’m just going to give the portions that are relevant to each section.

First, let’s start with what we already know. From Simcraft’s datamining, we know the rating conversion factors for dodge and parry at level 90 are both 1035. This value is very consistent with Klaud’s rating and pre-DR data, so I won’t bother to reproduce it here. In addition, we have a few base values from Klaud’s naked data set:

baseStr=176
baseDodge=3.01
baseParry=3.19

## Analysis – Dodge

First, we’ll look at dodge since that’s easy:

Dodge data, in %
pre-DR   post-DR
2.80     6.03
2.97     6.21
3.30     6.54
2.24     5.45
3.02     6.26
2.58     5.80
3.36     6.60
3.19     6.43
0.80     3.91
1.12     4.25
1.41     4.56
1.68     4.85
1.86     5.04
2.08     5.28
2.18     5.39
2.47     5.68
2.76     5.98
3.08     6.31
3.34     6.58
3.62     6.86
3.83     7.07

Remember that the post-DR values have base dodge built-in, which is why they’re so much higher. The pre-DR values are read directly from the dodge tooltip. If we pretend we don’t know anything about the constants, and just try to fit with the old DR formula plus base dodge $b$, we get this:

d1_fit = General model:
d1_fit(x) = b+1./(1./C+k./x)
Coefficients (with 95% confidence bounds):
C =       72.31  (58.09, 86.53)
b =       3.017  (2.999, 3.035)
k =      0.8914  (0.8781, 0.9047)

d1_gof =
sse: 4.1960e-004
rsquare: 1.0000
dfe: 18
rmse: 0.0048

Not bad, though the ranges for $k$, $b$, and $C$ are all pretty wide. We again note that the Cataclysm/MoP level 85 value for $C$ falls within the confidence interval. In addition, we know that $b=3.01$ (this was before the recent change to Sanctuary), so let’s try and fit again with those two constraints:

d2_fit = General model:
d2_fit(x) = 3.01+1./(1./65.631440+k./x)
Coefficients (with 95% confidence bounds):
k =       0.885  (0.8843, 0.8857)

d2_gof =
sse: 4.4994e-004
rsquare: 1.0000
dfe: 20
rmse: 0.0047

In other words, exactly like the L85 data. This is enough evidence to conclude that the formulas (or in particular, the values of $k$ and $C$) do not change with character level. We do expect that the strength-to-parry conversion factor in the parry equation will change, however, so we’ll have to find that through fitting.

Conclusion: The combination of this data and the level 85 data suggests that the dodge equation is exactly:

$\Large {\rm totalDodge} = {\rm baseDodge} + \left (\frac{1}{C_d}+\frac{k}{\rm preDodge} \right )^{-1}$

with

${\rm baseDodge}=3.01$ (5.01 with Sanctuary)
$C_d = 65.631440$
$k = 0.885$

and ${\rm preDodge}$ being the dodge from all other sources. For us, this is just dodge rating. Presumably for bears/monks, this will include the dodge gained from $({\rm totalAgi}-{\rm baseAgi})$. And of course, I’d expect the cap $C_d$ to have a different value for bears, since they won’t get any avoidance from parry. Monks will probably share the bear cap since even though they can parry, the leather gear they share with bears won’t have parry rating. Their parry will be limited to rings, necks, cloaks, and trinkets, so they won’t be able to take advantage of the much more forgiving parry DR curve.
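To make the equation concrete, here’s a minimal Python sketch of the level 90 dodge calculation (the function name `total_dodge` is my own; the constants are the fitted values above), spot-checked against a couple of rows of Klaud’s data:

```python
def total_dodge(pre_dodge, base_dodge=3.01, C_d=65.631440, k=0.885):
    """Post-DR dodge % at level 90, given pre-DR dodge % from rating."""
    return base_dodge + 1.0 / (1.0 / C_d + k / pre_dodge)

# Spot-check against Klaud's data (the game rounds the tooltip to 0.01%):
print(round(total_dodge(2.80), 2))  # 6.03
print(round(total_dodge(3.83), 2))  # 7.07
```

Note that a few data points (e.g. 0.80% pre-DR) land within rounding error of the observed value rather than matching exactly, for the reasons discussed in the residuals section below.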

## Analysis – Parry

Onward to parry. Klaud has provided us with a number of parry data sets that cover a pretty broad range of strength and parry rating. First, we’ll look at a subset of that data that keeps parry rating fixed. There are actually two data sets like this, at different parry ratings:

Set 1: fixed at 2.10% parry from rating
Str   post-DR parry
3638   9.50
4242  10.17
4752  10.74
5477  11.55
6115  12.25
6606  12.79
7210  13.44
7935  14.23
8779  15.13

Set 2: fixed at 1.15% parry from rating
Str     post-DR parry
1350    5.85
2236    6.87
2870    7.60
3631    8.46
4602    9.56
5236   10.27
5870   10.98
6389   11.55
7056   12.28
7817   13.12
8578   13.94
9548   14.99
10309   15.80

I attempted to fit these sets of data two different ways: once with a linear fit ${\rm totalParry} = b + {\rm Str}/a$, and once with the full DR equation:

$\large b+\frac{\rm baseStr}{a}+\left (\frac{1}{235.5}+\frac{0.885}{2.1+({\rm Str}-{\rm baseStr})/a} \right )^{-1}$

Here, I’ve used the level 85 values of $k$ and $C_p$, since the level 90 dodge data suggests that the constants don’t change. While it is possible, of course, that $C_p$ changes with level while $C_d$ does not, that seems very unlikely. What we ideally want to nail down with this data set is $a$, the strength-to-parry conversion factor, which is level-dependent. If we perform these fits on the first data set, these are the results:

ps1_fit = General model:
ps1_fit(x) = b+x./a
Coefficients (with 95% confidence bounds):
a =       911.9  (905.3, 918.6)
b =       5.529  (5.478, 5.579)

ps1_gof =
sse: 0.0019
rsquare: 0.9999
dfe: 7
rmse: 0.0164
ps2_fit =
General model:
ps2_fit(x) = b+176./a+1./(1./235.5+0.885./(2.1+(x-176)./a))
Coefficients (with 95% confidence bounds):
a =       952.3  (950.6, 954)
b =       3.006  (2.994, 3.018)

ps2_gof =
sse: 1.0177e-004
rsquare: 1.0000
dfe: 7
rmse: 0.0038

As you can see, both fits are pretty good, but the proper DR version is clearly better than the linear approximation. This also explains why I had so much trouble fitting the L90 data initially – we’re so low on the DR curve that everything still looks pretty linear, and our fit had so many free parameters that it was hard to get a tight confidence interval on any of them.
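If you want to sanity-check the DR fit without MATLAB, here’s a pure-Python sketch that recovers $a$ from the first data set by brute-force minimization of the sum of squared errors, with $b$, $k$, and $C_p$ held at 3.00, 0.885, and 235.5 (function names are mine; the .m file does the real fitting):

```python
# Set 1: strength vs. post-DR parry, with parry-from-rating fixed at 2.10%
data = [(3638, 9.50), (4242, 10.17), (4752, 10.74), (5477, 11.55),
        (6115, 12.25), (6606, 12.79), (7210, 13.44), (7935, 14.23),
        (8779, 15.13)]

def model(strength, a, base_str=176, pre_rating=2.10,
          k=0.885, C_p=235.5, b=3.00):
    # same form as the ps2_fit model, with b, k, and C_p fixed
    pre = pre_rating + (strength - base_str) / a
    return b + base_str / a + 1.0 / (1.0 / C_p + k / pre)

def sse(a):
    return sum((model(s, a) - obs) ** 2 for s, obs in data)

# brute-force scan of the strength-to-parry conversion factor
best_a = min((940 + 0.1 * i for i in range(301)), key=sse)
print(best_a)  # lands within a point or so of 952, consistent with ps2/ps4
```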

Nonetheless, the second fit gives us a pretty good estimate of $a$: roughly 952. $b$ is also consistent with what we expect for ${\rm baseParry}$, which should be 3.00 or 3.01 (unclear, since ${\rm baseDodge}=3.01$ despite not having any other sources of dodge). Fitting the second data set gives similar results:

ps3_fit =
General model:
ps3_fit(x) = b+x./a
Coefficients (with 95% confidence bounds):
a =       900.3  (893, 907.7)
b =       4.417  (4.359, 4.475)

ps3_gof =
sse: 0.0177
rsquare: 0.9998
dfe: 11
rmse: 0.0402
ps4_fit =
General model:
ps4_fit(x) = b+176./a+1./(1./235.5+0.885./(1.15+(x-176)./a))
Coefficients (with 95% confidence bounds):
a =       951.9  (951.2, 952.5)
b =           3  (2.995, 3.004)

ps4_gof =
sse: 1.0299e-004
rsquare: 1.0000
dfe: 11
rmse: 0.0031

From this fit, my best estimates of the two parameters are $b=3.00$ and $a=952$. At this point, we can try to tackle the entire parry data set, given below:

          Parry %
Str   pre-DR post-DR
4914    1.41   10.19
5308    1.51   10.73
5483    1.27   10.68
5956    1.26   11.19
6123    1.38   11.50
6371    1.28   11.67
6589    0.80   11.40
6751    0.80   11.58
6430    0.55   10.96
6430    0.81   11.24
6430    1.09   11.53
6430    1.33   11.79
6430    1.67   12.14
6430    1.93   12.41
6430    2.27   12.76
6430    2.57   13.07
6430    2.75   13.26
9354    0.76   14.38
9354    1.21   14.84
9354    1.46   15.10
9354    1.66   15.29
9354    1.84   15.48
9354    2.13   15.77
9354    2.34   15.98
9354    2.63   16.28
9354    3.00   16.65
3638    2.10    9.50
4242    2.10   10.17
4752    2.10   10.74
5477    2.10   11.55
6115    2.10   12.25
6606    2.10   12.79
7210    2.10   13.44
7935    2.10   14.23
8779    2.10   15.13
1350    1.15    5.85
2236    1.15    6.87
2870    1.15    7.60
3631    1.15    8.46
4602    1.15    9.56
5236    1.15   10.27
5870    1.15   10.98
6389    1.15   11.55
7056    1.15   12.28
7817    1.15   13.12
8578    1.15   13.94
9548    1.15   14.99
10300   1.15   15.80

Again, pre-DR is just the component of pre-DR parry provided by parry rating. We start with a very open expression, leaving all of the parameters free except for ${\rm baseStr}$, which we know is 176:

     General model:
p1_fit(x,y) = b+176./a+1./(1./C+k./((x-176)./a+y))
Coefficients (with 95% confidence bounds):
C =         234  (226.3, 241.6)
a =       951.7  (949.6, 953.8)
b =       3.001  (2.989, 3.013)
k =       0.885  (0.8819, 0.8882)

p1_gof =
sse: 7.5968e-004
rsquare: 1.0000
dfe: 44
rmse: 0.0042

Not bad at all. All four parameters ended up very close to the expected values. The ranges on $C$ and $a$ are still pretty broad, however. To tighten those up, we fix $k=0.885$ and $b=3.00$, since we’re pretty sure about those two. That gives us:

     General model:
p2_fit(x,y) = 3.00+176./a+1./(1./C+0.885./((x-176)./a+y))
Coefficients (with 95% confidence bounds):
C =       233.5  (230.4, 236.6)
a =       951.5  (950.8, 952.2)

p2_gof =
sse: 7.6093e-004
rsquare: 1.0000
dfe: 46
rmse: 0.0041

We’re getting closer. At this point, I decided to assume that our $C_p$ from the L85 data was exact. This seems pretty reasonable given that we don’t think it changes with level, and our L85 data was much better-suited to giving us an accurate value for $C_p$ since it covered more of the diminishing returns curve. So we set $C=235.5$ and see what the algorithm spits out for $a$:

     General model:
p3_fit(x,y) = 3.00+176./a+1./(1./235.5+0.885./((x-176)./a+y))
Coefficients (with 95% confidence bounds):
a =         952  (951.8, 952.1)

p3_gof =
sse: 7.8737e-004
rsquare: 1.0000
dfe: 47
rmse: 0.0041

Which is exactly what we expected based on our strength data subsets. The parry from base strength should then be 176/952=0.1849%, which is teetering on the edge of the 0.19% observed. It’s not clear whether ${\rm baseParry}$ should be 3.01 like it is for dodge, or whether our value of $a$ is a little on the heavy side (dropping it to 951.5 is enough to push us over the edge).

For the visually-inclined, here’s a surface plot showing the fit and the data. It might be hard to see, but if you spin the graph around you’d see that the data points all lie very close to the surface. That’s easier to see if we plot the residuals (the difference between each data point and the surface). The largest residuals here are ~0.008% – in other words, the largest difference between the equation’s prediction and the observed post-DR parry percentage is around 0.008%, less than the rounding error (because the game rounds to the nearest 0.01% parry).

In fact, these high-residual data points are probably cases where the fit predicts something near a rounding edge. For example, if the fit predicts 10.002% when it should be predicting 10.005%, the observed value will be 10.01% and the residual will be (10.01-10.002)=0.008%. So even though the error in the fit’s prediction is actually only (10.005-10.002)=0.003%, that error is exacerbated by the game’s rounding. Such errors are relatively rare, though – the standard deviation of the residuals is 0.0041%.
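Here’s a quick Python illustration of how the game’s display rounding inflates a residual, using the hypothetical 10.002%/10.005% numbers from the example above (I’m assuming the tooltip rounds half-up; `game_round` is my own name):

```python
from decimal import Decimal, ROUND_HALF_UP

def game_round(pct):
    # assumed: the tooltip rounds to the nearest 0.01%, halves rounding up
    return float(Decimal(str(pct)).quantize(Decimal("0.01"),
                                            rounding=ROUND_HALF_UP))

fit_prediction = 10.002   # what the fit predicts
true_value = 10.005       # hypothetical true value, just past a rounding edge
observed = game_round(true_value)        # displays as 10.01
residual = observed - fit_prediction     # 0.008, though the real error is 0.003
print(observed, round(residual, 3))
```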

Conclusion: This data suggests that the parry equation at level 90 is:

$\large {\rm totalParry} = {\rm baseParry} + \frac{\rm baseStr}{a} + \left (\frac{1}{C_p}+\frac{k}{\frac{{\rm totalStr}-{\rm baseStr}}{a} + {\rm preParry}} \right )^{-1}$

with

${\rm baseParry}=3.00$
${\rm baseStr}=176$ (for a BE, varies per race)
$a=952$
$C_p=235.5$
$k=0.885$

and ${\rm preParry}$ being the pre-DR parry granted by parry rating.
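As with dodge, here’s a minimal Python sketch of the full level 90 parry calculation (the function name `total_parry` is my own; the constants come from the fits above), spot-checked against a few rows of Klaud’s data:

```python
def total_parry(total_str, pre_parry, base_str=176, a=952.0,
                base_parry=3.00, C_p=235.5, k=0.885):
    """Post-DR parry % at level 90; pre_parry is pre-DR parry % from rating."""
    pre = (total_str - base_str) / a + pre_parry
    return base_parry + base_str / a + 1.0 / (1.0 / C_p + k / pre)

# Spot-checks against Klaud's data (the game rounds the tooltip to 0.01%):
print(round(total_parry(6430, 1.09), 2))   # 11.53
print(round(total_parry(9354, 3.00), 2))   # 16.65
print(round(total_parry(1350, 1.15), 2))   # 5.85
```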

## Looking Forward

In part 3 (Friday), I’ll discuss what these equations mean for tanks in a more practical sense. The new diminishing returns equations will change some of the rules of thumb we use for choosing gear and reforging, for example, so we’ll want to know what those new rules are.
