Effects of variables on type II error
μ₁ − μ₂ is the difference between the two means. As the value of μ₁ − μ₂ increases, the two curves will separate and the power of the study will increase. As μ₁ − μ₂ gets smaller, the curves will overlap more and the power of the study will decrease.
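To make this concrete, here is a minimal sketch in Python (not part of the original simulation) of how power grows with the difference between the means. It assumes a one-sided z-test; the function name power_one_sided_z and the default values for α, n and σ are illustrative choices, not settings taken from the text.

```python
from scipy.stats import norm

def power_one_sided_z(delta, alpha=0.05, n=25, sigma=10):
    """Power of a one-sided z-test when the true difference
    between the two means is delta (illustrative defaults)."""
    se = sigma / n ** 0.5                 # standard error of the mean
    z_crit = norm.ppf(1 - alpha)          # critical value under the null curve
    return norm.sf(z_crit - delta / se)   # area of the alternative curve beyond it

for delta in (1, 2, 4, 6, 8):
    print(f"difference = {delta}  power = {power_one_sided_z(delta):.3f}")
```

With these illustrative values the power rises from roughly 0.13 to about 0.99 as the difference widens, which is the separation of the two curves described above.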
α is usually set at 5%. Sometimes α is set at 1%, which moves the critical value further from the centre of the distribution, and this will reduce the power of the study. In this simulation α has a maximum value of 10%, to show that this increases the power of the study, but it would not be set at that level in a real situation.
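A short sketch under the same assumptions (one-sided z-test, illustrative values for the difference, n and σ) shows the direction of the effect: tightening α from 10% to 1% pushes the critical value further into the tail and lowers the power.

```python
from scipy.stats import norm

# Illustrative values, not taken from the text: difference = 5, sigma = 10, n = 25.
delta, sigma, n = 5, 10, 25
se = sigma / n ** 0.5
for alpha in (0.10, 0.05, 0.01):
    z_crit = norm.ppf(1 - alpha)           # critical value moves further out as alpha shrinks
    power = norm.sf(z_crit - delta / se)   # area of the alternative curve past the critical value
    print(f"alpha = {alpha:.2f}  power = {power:.3f}")
```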
n is the sample size, the number of observations in each sample. This affects the standard error. When n is small the curves are flatter and overlap more, so, assuming α is held constant, this reduces the power of the study. As n increases the standard error decreases and the curves have narrower distributions, hence they overlap less. So the effect of increasing the sample size is to increase the power of the study, again assuming α is held constant.
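The link between the sample size, the standard error and the power can be sketched in the same way, again assuming a one-sided z-test with illustrative values for the other quantities.

```python
from scipy.stats import norm

# Illustrative values, not taken from the text: difference = 5, sigma = 10, alpha = 0.05.
delta, sigma, alpha = 5, 10, 0.05
z_crit = norm.ppf(1 - alpha)
for n in (10, 25, 50, 100):
    se = sigma / n ** 0.5                  # standard error shrinks as n grows
    power = norm.sf(z_crit - delta / se)   # narrower curves overlap less, so power rises
    print(f"n = {n:3d}  standard error = {se:.2f}  power = {power:.3f}")
```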
σ is the standard deviation of the measurement you are interested in. Generally speaking a researcher has no control over this, but sometimes you can design your study to reduce σ; for example, there are well defined protocols for measuring blood pressure that reduce measurement error. Or you can restrict your study to a more homogeneous population, for example the blood pressure of men aged 40 to 45. When σ is reduced, the effect is to make the distributions narrower and hence the curves overlap less, so this increases the power, assuming you hold α constant. Conversely, as σ increases the distributions get wider, the curves overlap more, and the power of the study is reduced.
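A final sketch under the same assumptions shows power falling as σ grows, with the difference, n and α held fixed.

```python
from scipy.stats import norm

# Illustrative values, not taken from the text: difference = 5, n = 25, alpha = 0.05.
delta, n, alpha = 5, 25, 0.05
z_crit = norm.ppf(1 - alpha)
for sigma in (5, 10, 15, 20):
    se = sigma / n ** 0.5                  # larger sigma widens the sampling distributions
    power = norm.sf(z_crit - delta / se)   # wider curves overlap more, so power falls
    print(f"sigma = {sigma:2d}  power = {power:.3f}")
```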