
08/07/2025
Does a laboratory person need to know Statistics……?
Overall Percentage Agreement (OPA/POA)
Shop at “AYER” 🛍️🛒
Read at “AYER” 📖📚🤓
Part (5)
When choosing RDTs,
When you want to know RDT performance,
It's one of the Statistics you should know….
When you come across OPA….
When you look up RDT performance…
If it reminds you of AYER,
I'm happy to have posted this ….
Overall percentage agreement in Rapid Diagnostic Test (RDT) performance is a key metric used to assess how well an RDT's results align with a reference standard, often a more accurate laboratory test like PCR or microscopy.
✅What does it mean?
🤓It's the proportion of all tested samples where the RDT result matches the reference standard result.
🤓It encompasses both true positives (RDT correctly identifies a positive case) and true negatives (RDT correctly identifies a negative case).
✅How is it calculated?
The formula for overall percentage agreement (OPA) is:
OPA = (TP + TN) / Total Number of Samples × 100%
Where:
-True Positives (TP): RDT is positive, and the reference standard is positive.
-True Negatives (TN): RDT is negative, and the reference standard is negative.
-Total Number of Samples: The sum of True Positives, True Negatives, False Positives (RDT positive, reference negative), and False Negatives (RDT negative, reference positive).
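To make the arithmetic concrete, here is a minimal Python sketch of the same calculation. The function name `overall_percent_agreement` and the example counts are illustrative assumptions, not figures from any real evaluation.

```python
def overall_percent_agreement(tp: int, tn: int, fp: int, fn: int) -> float:
    """Overall percentage agreement: (TP + TN) / total samples x 100."""
    total = tp + tn + fp + fn
    return 100.0 * (tp + tn) / total

# Hypothetical counts from comparing an RDT against a reference test (e.g. PCR):
# 45 TP, 140 TN, 5 FP, 10 FN  ->  (45 + 140) / 200 = 92.5%
print(overall_percent_agreement(tp=45, tn=140, fp=5, fn=10))  # 92.5
```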
✅Why it's used?
🤓Simple and intuitive: It's easy to understand and communicate the overall concordance between the RDT and the gold standard.
🤓Initial assessment: It provides a quick snapshot of the RDT's general performance.
✅Limitations and why other metrics are important:
While useful, overall percentage agreement has limitations:
🤓Impact of prevalence: It can be heavily influenced by the prevalence of the disease in the tested population. For example, in a low-prevalence setting, an RDT that frequently gives negative results (even if it misses some true positives) can still have a high overall agreement due to a large number of true negatives. This can be misleading (see the numeric sketch after this list).
🤓Doesn't account for chance agreement: Some agreements between the RDT and the reference standard might occur simply by chance. Overall percentage agreement doesn't adjust for this.
🤓Doesn't differentiate types of errors: It doesn't tell you if the RDT is better at identifying positive cases (sensitivity) or negative cases (specificity).
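To see the prevalence effect in numbers, here is a short sketch with made-up counts: an RDT that misses half of all true positives can still post a near-perfect OPA when positives are rare.

```python
# Hypothetical low-prevalence setting: 1,000 samples, 1% prevalence.
# Of the 10 true cases, the RDT detects only 5 (sensitivity = 50%),
# but it correctly calls all 990 negatives negative.
tp, fn = 5, 5
tn, fp = 990, 0
opa = 100.0 * (tp + tn) / (tp + tn + fp + fn)
print(f"OPA = {opa:.1f}%")  # 99.5% -- looks excellent despite 50% sensitivity
```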
Therefore, overall percentage agreement is almost always reported alongside other crucial performance metrics (a combined calculation sketch follows this list), such as:
🤓Sensitivity: The ability of the RDT to correctly identify positive cases (True Positives / (True Positives + False Negatives)).
🤓Specificity: The ability of the RDT to correctly identify negative cases (True Negatives / (True Negatives + False Positives)).
🤓Positive Predictive Value (PPV): The probability that a positive RDT result actually means the person has the disease (True Positives / (True Positives + False Positives)).
🤓Negative Predictive Value (NPV): The probability that a negative RDT result actually means the person does not have the disease (True Negatives / (True Negatives + False Negatives)).
🤓Cohen's Kappa (or similar statistics): This statistic adjusts for chance agreement and provides a more robust measure of the level of agreement between two diagnostic methods.
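All of the metrics above come from the same 2x2 table, so they are easy to compute together. A minimal sketch, again with hypothetical counts; the chance-agreement term for Cohen's Kappa is the sum of the products of the marginal proportions.

```python
def rdt_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Sensitivity, specificity, PPV, NPV, OPA and Cohen's kappa from a 2x2 table."""
    total = tp + tn + fp + fn
    po = (tp + tn) / total  # observed agreement (OPA as a proportion)
    # Expected chance agreement: products of the marginal rates, summed over classes.
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / total**2
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "opa_percent": 100.0 * po,
        "cohens_kappa": (po - pe) / (1 - pe),
    }

# Same hypothetical counts as above: kappa comes out around 0.81, noticeably
# less flattering than the 92.5% OPA once chance agreement is removed.
for name, value in rdt_metrics(tp=45, tn=140, fp=5, fn=10).items():
    print(f"{name}: {value:.3f}")
```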
In essence, while overall percentage agreement gives a broad idea of RDT performance, a complete understanding requires looking at a suite of diagnostic performance metrics.