Within-observer agreement for the sign of visible flexural dermatitis has not been a problem in previous studies13,17. Between-observer agreement is more difficult, however. Clearly this is not a problem if your study involves only one observer. If, on the other hand, one observer is recording the sign in one geographical area or country and another is recording the sign at a different site, then it is important to measure between-observer agreement over a group of test subjects. Agreement should be expressed using the kappa statistic (a chance-corrected measure of agreement), and kappa values of 0.8 or over are attainable17. If your study involves a large number of observers, then testing their repeatability on a specially assembled selection of test cases and controls is likely to be impractical13. In this instance, we suggest that you simply compare your field workers' systematic errors, exploring the consequences in terms of misclassification of cases and whether or not such misclassification alters your conclusions. The beauty of comparing systematic error using a central marking system is that it permits some degree of standardisation in recording this sign throughout the world.
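As a minimal sketch of how between-observer agreement might be quantified, the following Python function computes Cohen's kappa (the chance-corrected agreement measure referred to above) for two observers recording the sign as present or absent in the same group of subjects. The function name, variable names, and the example ratings are purely illustrative and are not taken from the studies cited here.

```python
def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement (Cohen's kappa) between two observers
    recording a binary sign (e.g. visible flexural dermatitis present/absent).
    ratings_a and ratings_b are equal-length lists of 0/1 values, one per subject."""
    assert len(ratings_a) == len(ratings_b) and len(ratings_a) > 0
    n = len(ratings_a)
    # Observed proportion of subjects on whom the two observers agree
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement by chance, from each observer's marginal rate of recording the sign
    rate_a = sum(ratings_a) / n
    rate_b = sum(ratings_b) / n
    p_expected = rate_a * rate_b + (1 - rate_a) * (1 - rate_b)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two observers examine the same 10 subjects (1 = sign present)
observer_1 = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
observer_2 = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
print(round(cohens_kappa(observer_1, observer_2), 2))  # prints 0.78 for these data
```

In practice the calculation would be run over a standardisation exercise in which both observers examine the same test subjects, with a value of 0.8 or above taken as the target level of agreement mentioned above.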