Agreement Between Measurements
The kappa statistic (κ) can take values from −1 to 1 and is conventionally interpreted as follows: 0 = agreement equivalent to chance; 0.10–0.20 = slight agreement; 0.21–0.40 = fair agreement; 0.41–0.60 = moderate agreement; 0.61–0.80 = substantial agreement; 0.81–0.99 = almost perfect agreement; and 1.00 = perfect agreement. Negative values indicate that the observed agreement is worse than would be expected by chance. An alternative interpretation is that kappa values below 0.60 indicate a substantial degree of disagreement. Limits of agreement = mean observed difference ± 1.96 × standard deviation of the observed differences. It is important to note that in each of the three situations in Table 1, the pass percentages are the same for both examiners, and if the two examiners were compared using the usual 2 × 2 test for paired data (the McNemar test), no difference between their performances would be found; on the other hand, the inter-observer agreement varies considerably across the three situations. The fundamental point is that kappa quantifies the agreement between the two examiners for each pair of marks, and not the similarity of the overall pass percentages between the examiners. In the extreme case, if we have several pairs of measurements on the same individual, sT2 = 0 (assuming no change over time) and ρ = 0, irrespective of how close the agreement is. As noted above, correlation is not synonymous with agreement. Correlation refers to the presence of a relationship between two different variables, whereas agreement looks at the concordance between two measurements of a single variable. Two sets of observations that are highly correlated may nonetheless agree poorly; however, if two sets of values are identical, they will certainly be highly correlated. For instance, in the hemoglobin example, the correlation coefficient between the values from the two methods is high (r = 0.98), even though the agreement is poor [Figure 2].
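As a rough sketch of the chance-corrected agreement described above (using made-up pass/fail marks, not the data in Table 1), Cohen's kappa can be computed from the observed proportion of agreement and the agreement expected from each examiner's marginal frequencies:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two paired sets of ratings."""
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    # Observed proportion of agreement across the pairs of marks
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[c] * c2[c] for c in categories) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical marks given by two examiners to the same eight candidates
examiner1 = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "fail"]
examiner2 = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "fail"]
kappa = cohens_kappa(examiner1, examiner2)  # 0.5: moderate agreement
```

Note that both examiners pass exactly four of the eight candidates, so their overall pass percentages are identical, yet kappa is only moderate because they disagree on two individual candidates.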
Another way to look at this is that although the individual points lie close to the dotted line (the least-squares line,[2] which indicates good correlation), they are quite far from the black line that represents the line of perfect agreement (Figure 2: black line).
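The distinction between correlation and agreement can be illustrated numerically. In the sketch below (made-up hemoglobin-like values, not the data behind Figure 2), the second method reads a constant amount higher than the first plus a little noise, so the correlation is nearly perfect while every point sits well away from the line of equality; the Bland–Altman limits of agreement make the systematic difference explicit:

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical hemoglobin readings (g/dL) from two methods; method_b reads
# systematically about 1.5 g/dL higher than method_a.
method_a = [10.2, 11.5, 12.8, 9.6, 13.4, 11.0, 12.1, 10.8]
noise = [0.1, -0.2, 0.0, 0.2, -0.1, 0.1, -0.1, 0.0]
method_b = [a + 1.5 + e for a, e in zip(method_a, noise)]

r = pearson(method_a, method_b)  # very high: the methods track each other

# Limits of agreement = mean difference +/- 1.96 x SD of the differences
diffs = [b - a for a, b in zip(method_a, method_b)]
bias = statistics.mean(diffs)
sd_diff = statistics.stdev(diffs)
lower, upper = bias - 1.96 * sd_diff, bias + 1.96 * sd_diff
```

Here r exceeds 0.98, yet the entire Bland–Altman interval lies above zero, showing a consistent bias of about 1.5 g/dL: strong correlation, poor agreement.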
If agreement were good, the points would be expected to fall on or near this (black) line. The paper was a fantastic success, beyond my wildest dreams. In that paper, we set out the details of the limits of agreement approach and used that term for the first time. In line with our original plan for the order of authors, the second paper was published under the names Bland and Altman, and so the method became known as "the Bland–Altman method". I'm sorry, Doug! Kalantri et al. studied the accuracy and reliability of pallor as a tool for detecting anemia. They concluded that "clinical assessment of pallor can rule out severe anemia and modestly rule in anemia." However, the inter-observer agreement for the detection of pallor was very poor (kappa = 0.07 for conjunctival pallor and 0.20 for tongue pallor), implying that pallor is an unreliable sign for diagnosing anemia. I think statisticians have greatly improved the quality of medical research. Of course, we did not do it alone, and it is the partnership between health professionals and statisticians that should get the credit. Sir Richard Doll, who died while I was writing this lecture, and Sir Austin Bradford Hill together revolutionized epidemiological and clinical research. . .