# Durbin Watson Statistic Definition


## What Is the Durbin Watson Statistic?

The Durbin Watson (DW) statistic is a test for autocorrelation in the residuals from a statistical model or regression analysis. The Durbin-Watson statistic always has a value between 0 and 4. A value of 2.0 indicates that no autocorrelation is detected in the sample. Values from 0 to less than 2 point to positive autocorrelation, and values from 2 to 4 indicate negative autocorrelation.

A stock price displaying positive autocorrelation would indicate that yesterday's price has a positive correlation with today's price, so if the stock fell yesterday, it is also likely to fall today. A security that has a negative autocorrelation, on the other hand, has a negative influence on itself over time, so if it fell yesterday, there is a greater likelihood it will rise today.

### Key Takeaways

- The Durbin Watson (DW) statistic is a test for autocorrelation in a regression model’s output.
- The DW statistic ranges from zero to four, with a value of 2.0 indicating zero autocorrelation.
- Values below 2.0 indicate positive autocorrelation, and values above 2.0 indicate negative autocorrelation.
- Autocorrelation can be useful in technical analysis, which is most concerned with the trends of security prices using charting techniques in lieu of a company's financial health or management.

## The Basics of the Durbin Watson Statistic

Autocorrelation, also known as serial correlation, can be a significant problem in analyzing historical data if one does not know to look out for it. For instance, since stock prices tend not to change too radically from one day to another, the prices from one day to the next could potentially be highly correlated, even though there is little useful information in this observation. In order to avoid autocorrelation issues, the easiest solution in finance is to simply convert a series of historical prices into a series of percentage-price changes from day to day.
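The conversion described above can be sketched in a few lines of Python. The price series here is hypothetical, chosen only to illustrate the transformation:

```python
# Convert a price series into day-over-day percentage changes,
# the usual way to sidestep spurious autocorrelation in price levels.
prices = [100.0, 102.0, 101.0, 103.5]  # hypothetical closing prices

# Each entry is the percent change from the previous day's close.
pct_changes = [(p1 - p0) / p0 * 100 for p0, p1 in zip(prices, prices[1:])]
print([round(c, 2) for c in pct_changes])  # [2.0, -0.98, 2.48]
```

The resulting series of returns, rather than the raw prices, is what is typically tested for autocorrelation.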

Autocorrelation can be useful for technical analysis, which is most concerned with the trends of, and relationships between, security prices using charting techniques in lieu of a company's financial health or management. Technical analysts can use autocorrelation to see how much of an impact past prices for a security have on its future price.

Autocorrelation can show if there is a momentum factor associated with a stock. For example, if you know that a stock historically has a high positive autocorrelation value and you witnessed the stock making solid gains over the past several days, then you might reasonably expect the movements over the upcoming several days (the leading time series) to match those of the lagging time series and to move upward.
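A minimal sketch of the lag-1 autocorrelation calculation a technical analyst might run, using a made-up series of daily returns (the data and the positive result are illustrative assumptions, not from the article):

```python
# Hypothetical daily percentage returns for a stock.
returns = [0.5, 0.7, 0.6, -0.2, -0.4, 0.1, 0.3]

n = len(returns)
mean = sum(returns) / n

# Variance term (denominator) and lag-1 covariance term (numerator).
var = sum((r - mean) ** 2 for r in returns)
cov = sum((returns[t] - mean) * (returns[t - 1] - mean) for t in range(1, n))

lag1_autocorr = cov / var
print(round(lag1_autocorr, 2))  # 0.47 -> positive, momentum-like behavior
```

A value near +1 would suggest strong momentum; a value near -1 would suggest mean-reverting behavior.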

The Durbin Watson statistic is named after statisticians James Durbin and Geoffrey Watson.

## Special Considerations

A rule of thumb is that DW test statistic values in the range of 1.5 to 2.5 are relatively normal. Values outside this range could, however, be a cause for concern. The Durbin-Watson statistic, while displayed by many regression analysis programs, is not applicable in certain situations.

For instance, when lagged dependent variables are included in the explanatory variables, then it is inappropriate to use this test.

## Example of the Durbin Watson Statistic

The formula for the Durbin Watson statistic is rather complex but involves the residuals from an ordinary least squares (OLS) regression on a set of data. The following example illustrates how to calculate this statistic.
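In the standard textbook notation, where $e_t$ is the residual for observation $t$ and $T$ is the number of observations, the statistic is:

$$DW = \frac{\sum_{t=2}^{T} \left( e_t - e_{t-1} \right)^2}{\sum_{t=1}^{T} e_t^2}$$

The numerator sums the squared differences between consecutive residuals, and the denominator sums the squared residuals themselves; the worked example below computes each piece in turn.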

Assume the following (x,y) data points:

$$
\begin{aligned}
&\text{Pair One} = (10,\ 1{,}100)\\
&\text{Pair Two} = (20,\ 1{,}200)\\
&\text{Pair Three} = (35,\ 985)\\
&\text{Pair Four} = (40,\ 750)\\
&\text{Pair Five} = (50,\ 1{,}215)\\
&\text{Pair Six} = (45,\ 1{,}000)
\end{aligned}
$$

Using the methods of a least squares regression to find the “line of best fit,” the equation for the best fit line of this data is:

$$Y = -2.6268x + 1{,}129.2$$

The first step in calculating the Durbin Watson statistic is to calculate the expected “y” values using the line of best fit equation. For this data set, the expected “y” values are:

$$
\begin{aligned}
&\text{Expected } Y(1) = (-2.6268 \times 10) + 1{,}129.2 = 1{,}102.9\\
&\text{Expected } Y(2) = (-2.6268 \times 20) + 1{,}129.2 = 1{,}076.7\\
&\text{Expected } Y(3) = (-2.6268 \times 35) + 1{,}129.2 = 1{,}037.3\\
&\text{Expected } Y(4) = (-2.6268 \times 40) + 1{,}129.2 = 1{,}024.1\\
&\text{Expected } Y(5) = (-2.6268 \times 50) + 1{,}129.2 = 997.9\\
&\text{Expected } Y(6) = (-2.6268 \times 45) + 1{,}129.2 = 1{,}011
\end{aligned}
$$

Next, the differences between the actual “y” values and the expected “y” values, the errors, are calculated:

$$
\begin{aligned}
&\text{Error}(1) = (1{,}100 - 1{,}102.9) = -2.9\\
&\text{Error}(2) = (1{,}200 - 1{,}076.7) = 123.3\\
&\text{Error}(3) = (985 - 1{,}037.3) = -52.3\\
&\text{Error}(4) = (750 - 1{,}024.1) = -274.1\\
&\text{Error}(5) = (1{,}215 - 997.9) = 217.1\\
&\text{Error}(6) = (1{,}000 - 1{,}011) = -11
\end{aligned}
$$

Next, these errors must be squared and summed:

$$
\begin{aligned}
\text{Sum of Errors Squared} &= (-2.9)^2 + (123.3)^2 + (-52.3)^2\\
&\quad + (-274.1)^2 + (217.1)^2 + (-11)^2\\
&= 140{,}330.81
\end{aligned}
$$

Next, each error minus the previous error is calculated, and the differences are squared and summed:

$$
\begin{aligned}
&\text{Difference}(1) = (123.3 - (-2.9)) = 126.2\\
&\text{Difference}(2) = (-52.3 - 123.3) = -175.6\\
&\text{Difference}(3) = (-274.1 - (-52.3)) = -221.9\\
&\text{Difference}(4) = (217.1 - (-274.1)) = 491.3\\
&\text{Difference}(5) = (-11 - 217.1) = -228.1\\
&\text{Sum of Differences Squared} = 389{,}406.71
\end{aligned}
$$

Finally, the Durbin Watson statistic is the sum of squared differences divided by the sum of squared errors:

$$\text{Durbin Watson} = 389{,}406.71 \div 140{,}330.81 = 2.77$$
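The whole worked example can be reproduced in a few lines of Python. Note that the slope and intercept below are the article's rounded OLS estimates, so the result matches the hand calculation to within rounding:

```python
# Data points from the worked example above.
xs = [10, 20, 35, 40, 50, 45]
ys = [1100, 1200, 985, 750, 1215, 1000]

# Rounded coefficients of the article's line of best fit.
slope, intercept = -2.6268, 1129.2

# Residuals: actual y minus fitted y for each observation.
residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]

# Denominator: sum of squared residuals.
sum_sq_errors = sum(e ** 2 for e in residuals)

# Numerator: sum of squared differences between consecutive residuals.
sum_sq_diffs = sum((residuals[t] - residuals[t - 1]) ** 2
                   for t in range(1, len(residuals)))

dw = sum_sq_diffs / sum_sq_errors
print(round(dw, 2))  # 2.77
```

A value of 2.77 is above 2.0, consistent with the mild negative autocorrelation described earlier, though still close to the 1.5 to 2.5 rule-of-thumb range.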


View more information: https://www.investopedia.com/terms/d/durbin-watson-statistic.asp
