When comparing two methods of measurement, it is of interest not only to estimate the bias and the limits of agreement between the two methods (inter-method agreement), but also to evaluate these characteristics for each method itself. It is quite possible that agreement between two methods is poor simply because one method has wide limits of agreement while the other's are narrow. In that case, the method with narrow limits of agreement would be statistically superior, although practical or other considerations might alter that assessment. In any event, what constitutes narrow or wide limits of agreement, or a large or small bias, is a matter of practical judgment. A stricter variant of interval reliability is exact agreement reliability (Repp, Deitz, Boles, Deitz, & Repp, 1976), in which an agreement is defined as both observers recording the same number of responses in an interval. This method is the most conservative estimate of reliability, since any difference in recorded counts is scored as a complete disagreement. For example, an interval in which one observer records four responses while the other records five is scored as a disagreement. Cohen, J. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 1968, 70, 213-220. The results of Study 2 showed that high (constant) response rates and bursts of responding had little effect on reliability values. When the position of responses within intervals was manipulated so that half of the response intervals contained a response at the end of the interval, only overall reliability remained unaffected.
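The exact and proportional definitions above can be sketched in a few lines of Python. This is a minimal illustration, not code from the study: the function names and the sample counts are hypothetical, and each observer's data is assumed to be a list of per-interval response counts.

```python
def exact_reliability(obs1, obs2):
    """Exact agreement reliability: percentage of intervals in which both
    observers record exactly the same number of responses."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

def proportional_reliability(obs1, obs2):
    """Proportional reliability: mean per-interval ratio of the smaller to
    the larger count; an interval with no responses from either observer
    counts as full agreement."""
    ratios = []
    for a, b in zip(obs1, obs2):
        if a == b == 0:
            ratios.append(1.0)
        else:
            ratios.append(min(a, b) / max(a, b))
    return 100 * sum(ratios) / len(ratios)

# Hypothetical records for four observation intervals.
obs1 = [4, 0, 2, 3]
obs2 = [5, 0, 2, 1]
print(exact_reliability(obs1, obs2))         # 50.0
print(round(proportional_reliability(obs1, obs2), 2))  # 78.33
```

Note how the 4-versus-5 interval from the example in the text is a complete disagreement under the exact index but still earns 0.8 credit under the proportional index, which is why exact reliability is the most conservative of the measures.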
Increasing effects of end-of-interval responding were observed for interval, proportional, and exact reliability. Taylor, D. R. A useful method for calculating Harris and Lahey's weighted agreement formula. The Behavior Therapist, 1980, 3, 3. We examined the impact of several variations in response rate on the calculation of overall, interval, exact, and proportional reliability indices. Trained observers recorded computer-generated data displayed on a computer screen. In Study 1, target responses occurred in separate sessions at low, moderate, and high rates, allowing the reliability results to be compared across the four calculations over a range of values. Overall reliability was consistently high; interval reliability was very high for high-rate responding; proportional reliability was slightly lower at high response rates; and exact reliability was the lowest of the measures, particularly for high-rate responding.
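The divergence between interval and exact reliability at high response rates can be illustrated with a short sketch (the data are invented for illustration, not taken from Study 1; interval reliability is assumed here to score an interval as an agreement when both observers record at least one response, or both record none):

```python
def interval_reliability(obs1, obs2):
    """Interval-by-interval agreement on occurrence: an interval agrees if
    both observers record responding (count > 0) or both record none."""
    agreements = sum((a > 0) == (b > 0) for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)

# Hypothetical high-rate data: every interval contains responding,
# but the two observers' counts often differ slightly.
high1 = [9, 8, 10, 7, 9]
high2 = [10, 8, 9, 7, 8]

print(interval_reliability(high1, high2))  # 100.0
# Exact agreement on the same data: counts match in only 2 of 5 intervals.
exact = 100 * sum(a == b for a, b in zip(high1, high2)) / len(high1)
print(exact)  # 40.0
```

With high-rate responding, nearly every interval contains at least one response for both observers, so interval reliability approaches 100% even when the counts disagree, while exact reliability drops sharply, which is the pattern the study reports.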