Introduction to Probability and Statistics for Engineers and Scientists


9.11 Logistic Regression Models for Binary Output Data


[Figure: two panels plotting p(x) against x, one with b > 0 (p(x) increasing toward 1) and one with b < 0 (p(x) decreasing from 1), with x running from −∞.]

FIGURE 9.21 Logistic regression functions.


Writing p(x) = 1 − [1/(1 + e^(a+bx))] and differentiating gives that

    (∂/∂x) p(x) = b e^(a+bx) / (1 + e^(a+bx))^2 = b p(x)[1 − p(x)]

Thus the rate of change of p(x) depends on x and is largest at those values of x for which p(x) is near .5. For instance, at the value x such that p(x) = .5, the rate of change is (∂/∂x) p(x) = .25b, whereas at that value x for which p(x) = .8 the rate of change is .16b.
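The identity above can be checked numerically. The sketch below uses illustrative parameter values a = −1 and b = 2 (these particular numbers are not from the text) and compares a finite-difference estimate of the derivative with the closed form b p(x)[1 − p(x)]:

```python
import math

def p(x, a=-1.0, b=2.0):
    # logistic regression function p(x) = 1 - 1/(1 + e^(a+bx))
    return 1.0 - 1.0 / (1.0 + math.exp(a + b * x))

def dp_dx(x, a=-1.0, b=2.0):
    # closed form: b * p(x) * (1 - p(x))
    return b * p(x, a, b) * (1.0 - p(x, a, b))

# central finite-difference check of the derivative at an arbitrary point
h = 1e-6
x0 = 0.3
numeric = (p(x0 + h) - p(x0 - h)) / (2 * h)
print(abs(numeric - dp_dx(x0)) < 1e-6)  # → True

# at x = 0.5 we have a + bx = 0, so p(x) = .5 and the rate of change is .25b
print(dp_dx(0.5))  # → 0.5, i.e. .25 * 2
```

Note that the rate of change at p(x) = .8 is b(.8)(.2) = .16b, matching the text.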
If we let o(x) be the odds for success when the experiment is run at level x, then

    o(x) = p(x) / [1 − p(x)] = e^(a+bx)

Thus, when b > 0, the odds increase exponentially in the input level x; when b < 0, the odds decrease exponentially in the input level x. Taking logs of the preceding shows that the log odds, called the logit, is a linear function:


    log[o(x)] = a + bx
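Both properties of the odds can be verified directly; a one-unit increase in x multiplies the odds by the constant factor e^b, and the logit is linear in x. The sketch again uses the illustrative values a = −1, b = 2:

```python
import math

def p(x, a=-1.0, b=2.0):
    # logistic regression function with illustrative parameters
    return 1.0 - 1.0 / (1.0 + math.exp(a + b * x))

def odds(x, a=-1.0, b=2.0):
    # o(x) = p(x)/(1 - p(x)), which the algebra shows equals e^(a+bx)
    return p(x, a, b) / (1.0 - p(x, a, b))

# a one-unit increase in x multiplies the odds by e^b (here e^2)
ratio = odds(1.0) / odds(0.0)
print(math.isclose(ratio, math.exp(2.0)))  # → True

# the logit is linear: log o(x) = a + bx
print(math.isclose(math.log(odds(0.7)), -1.0 + 2.0 * 0.7))  # → True
```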

The parameters a and b of the logistic regression function are assumed to be unknown and need to be estimated. This can be accomplished by using the maximum likelihood approach. That is, suppose that the experiment is to be performed at levels x1, ..., xk.
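As a rough illustration of the maximum likelihood idea (the text's own derivation continues beyond this point), suppose hypothetically that at each level x_i the experiment is run n_i times and yields y_i successes; then the log likelihood is the sum over i of y_i(a + b x_i) − n_i log(1 + e^(a+bx_i)), and since this function is concave, a simple gradient ascent can locate its maximizers. The data values below are made up solely for the example:

```python
import math

# hypothetical experiment: at each level x_i, n_i trials yield y_i successes
xs = [0.0, 1.0, 2.0, 3.0]
ns = [20, 20, 20, 20]
ys = [4, 8, 14, 18]

def log_likelihood(a, b):
    # l(a, b) = sum_i [ y_i(a + b x_i) - n_i log(1 + e^(a + b x_i)) ]
    return sum(y * (a + b * x) - n * math.log1p(math.exp(a + b * x))
               for x, n, y in zip(xs, ns, ys))

def fit(steps=20000, lr=0.01):
    # gradient ascent; the gradient components are
    # dl/da = sum_i (y_i - n_i p(x_i)),  dl/db = sum_i x_i (y_i - n_i p(x_i))
    a = b = 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for x, n, y in zip(xs, ns, ys):
            pi = 1.0 / (1.0 + math.exp(-(a + b * x)))
            ga += y - n * pi
            gb += x * (y - n * pi)
        a += lr * ga
        b += lr * gb
    return a, b

a_hat, b_hat = fit()
print(b_hat > 0)  # → True: success proportions rise with x
```

In practice one would use Newton's method or a statistics library rather than fixed-step gradient ascent, but the sketch shows what "maximum likelihood estimates of a and b" means concretely.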
