How We Rate Supplements
Most supplement websites rate products based on opinions, brand deals, or whatever’s trending on TikTok. We don’t do that. Every rating on this site comes from a structured analysis of published research. Here’s exactly how it works.
Our 6-Step Process
We follow the same steps for every supplement claim we rate. No shortcuts. No exceptions.
Step 1: Search PubMed for published meta-analyses. We start by looking for existing meta-analyses and systematic reviews on a specific supplement and health claim. Both gather results from multiple randomized controlled trials (RCTs); a meta-analysis goes further and statistically pools them. If a good meta-analysis already exists, we don’t need to start from scratch.
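For the curious, here’s a minimal sketch of what that first search can look like using Biopython’s Entrez module. The query, email address, and result cap are placeholders, not our production setup:

```python
from Bio import Entrez

Entrez.email = "you@example.com"  # placeholder; NCBI asks for a contact address

# Hypothetical supplement/outcome pair, restricted to pooled-evidence
# publication types.
query = (
    '(creatine[Title/Abstract]) AND (muscle strength[Title/Abstract]) '
    'AND (meta-analysis[Publication Type] '
    'OR systematic review[Publication Type])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=50)
record = Entrez.read(handle)
handle.close()

print(f"Found {record['Count']} matches")
print(record["IdList"])  # PubMed IDs to fetch and screen by hand
```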
Step 2: Extract study-level data. We pull out the individual study results from each meta-analysis. That means sample sizes, effect sizes, confidence intervals, and study quality scores. We record everything in a structured format so we can verify the math.
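To make “structured format” concrete, here’s an illustrative record layout in Python. The field names are hypothetical, not our actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StudyRecord:
    """One RCT's result, as extracted from a meta-analysis."""
    study_id: str                 # e.g. first author + year
    n_treatment: int              # participants in the supplement arm
    n_control: int                # participants in the placebo arm
    effect_size: float            # Hedges' g, reported or recomputed
    ci_lower: float               # lower bound of the 95% CI
    ci_upper: float               # upper bound of the 95% CI
    quality_score: Optional[int] = None     # e.g. risk-of-bias rating
    industry_funded: Optional[bool] = None  # flagged when disclosed
```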
Step 3: Check for newer RCTs. Meta-analyses go stale. A review from 2020 might miss five important trials published since then. We search for any newer RCTs that weren’t included in the original analysis. If we find them, we add their data to the pool.
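Extending the Step 1 sketch, a date-restricted search for trials published after a review’s cutoff might look like this (the dates and query are again placeholders):

```python
from Bio import Entrez

Entrez.email = "you@example.com"  # placeholder; NCBI asks for a contact address

# Same hypothetical supplement/outcome pair, now restricted to RCTs.
query = (
    '(creatine[Title/Abstract]) AND (muscle strength[Title/Abstract]) '
    'AND (randomized controlled trial[Publication Type])'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",       # filter on publication date
    mindate="2020/01/01",  # the original review's search cutoff
    maxdate="2025/12/31",
    retmax=100,
)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} candidate trials published since the review")
```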
Step 4: Calculate pooled effects. We combine all the study data using random-effects models. This gives us a single effect size estimate with a confidence interval. We also calculate heterogeneity (how much the studies disagree with each other) and prediction intervals (what range of effects you’d expect in a future study).
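For readers who want the math, here’s a simplified sketch of one common random-effects approach, the DerSimonian-Laird estimator, along with I² and the prediction interval. Real analyses add refinements (dedicated tools like R’s metafor package handle the details), so treat this as illustrative:

```python
import numpy as np
from scipy import stats

def random_effects_pool(y, v):
    """DerSimonian-Laird random-effects meta-analysis.

    y: study effect sizes (e.g. Hedges' g)
    v: within-study variances
    Returns the pooled effect, 95% CI, I² (%), and a 95% prediction
    interval (which needs at least 3 studies).
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    # Fixed-effect step: inverse-variance weights and Cochran's Q.
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)
    df = k - 1

    # Between-study variance (tau²), truncated at zero.
    C = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)

    # Random-effects pooled estimate and its confidence interval.
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = (mu - 1.96 * se, mu + 1.96 * se)

    # I²: share of total variability due to between-study differences.
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

    # 95% prediction interval (Higgins-style, t with k-2 df).
    t = stats.t.ppf(0.975, df=k - 2)
    half = t * np.sqrt(tau2 + se**2)
    pi = (mu - half, mu + half)

    return mu, ci, i2, pi

# Made-up example: five small trials.
g = [0.42, 0.31, 0.55, 0.18, 0.47]
v = [0.020, 0.015, 0.030, 0.010, 0.025]
mu, ci, i2, pi = random_effects_pool(g, v)
print(f"pooled g = {mu:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], I² = {i2:.0f}%")
```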
Step 5: Assess publication bias. Small studies with negative results often don’t get published. This makes supplements look more effective than they really are. We run funnel plot analysis and statistical tests (like Egger’s test) to check for this problem. If we find signs of bias, we note it clearly.
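Here’s a bare-bones sketch of Egger’s test. It regresses each study’s standardized effect on its precision; an intercept far from zero hints at small-study effects. Illustrative only, not our full pipeline:

```python
import numpy as np
from scipy import stats

def eggers_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses standardized effect (g / SE) on precision (1 / SE).
    An intercept significantly different from zero suggests
    small-study effects such as publication bias.
    """
    y = np.asarray(effects, float)
    se = np.asarray(std_errors, float)
    res = stats.linregress(1.0 / se, y / se)

    # linregress's built-in p-value tests the slope; Egger's test is
    # about the intercept, so compute that t-test by hand.
    t_stat = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t_stat), df=len(y) - 2)
    return res.intercept, p
```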
Step 6: Write it up in plain English. Numbers don’t help if you can’t understand them. We translate every finding into a clear verdict with a short explanation. You’ll always know what the data says and how confident we are.
Evidence Grades
Not all evidence is created equal. We assign a letter grade based on how much research exists and how consistent it is.
Grade A: 8 or more RCTs with consistent results and low heterogeneity. This is strong evidence. The studies agree with each other, and there’s enough data to be confident.
Grade B: 5 to 7 RCTs with mostly consistent results. Good evidence, but there might be some disagreement between studies or a few quality concerns.
Grade C: 3 to 4 RCTs, or results that are inconsistent across studies. The research exists, but it’s thin or contradictory. Take it with a grain of salt.
Grade D: Fewer than 3 RCTs, or very high heterogeneity. We basically can’t draw conclusions yet. More research is needed before anyone should make claims.
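In code, the rubric boils down to something like the sketch below. The I² cutoffs (25% for “low,” 75% for “very high”) are illustrative stand-ins; our real grading also weighs study quality, which a few lines of code can’t capture:

```python
def evidence_grade(n_rcts: int, i_squared: float, consistent: bool) -> str:
    """Letter grade from trial count, I² (%), and result consistency.

    Simplified sketch of the rubric above; 25%/75% are illustrative
    cutoffs for "low" and "very high" heterogeneity.
    """
    if n_rcts < 3 or i_squared >= 75:
        return "D"  # too little data, or studies wildly disagree
    if n_rcts <= 4 or not consistent:
        return "C"  # thin or contradictory research
    if n_rcts >= 8 and i_squared < 25:
        return "A"  # plenty of data, and the studies agree
    return "B"      # 5-7 trials, or 8+ with moderate heterogeneity
```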
Verdict Criteria
Every supplement claim gets one of four verdicts.
Works: Grade A or B evidence with a meaningful effect size. The research is solid, and the effect is big enough to matter in real life.
Maybe: Grade B or C evidence with a small to moderate effect. There’s something there, but we aren’t confident enough to give a full thumbs up. More research could change this in either direction.
No Evidence: Grade C or D evidence with a negligible effect size. The studies either don’t exist, are too inconsistent, or show effects too tiny to matter.
Dangerous: Evidence of actual harm. This isn’t just “it doesn’t work.” This means studies show it can hurt you.
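The verdict rules, sketched the same way. The effect-size cutoffs (0.2 for “negligible,” 0.5 for “meaningful”) are Cohen’s conventional benchmarks, used here for illustration rather than as our exact thresholds:

```python
def verdict(grade: str, pooled_g: float, evidence_of_harm: bool) -> str:
    """One of four verdicts from grade, pooled effect, and harm signal.

    0.2 and 0.5 are Cohen's conventional small/medium benchmarks,
    standing in for "negligible" and "meaningful." Combinations the
    rubric doesn't cover default to "No Evidence" in this sketch;
    in practice they'd get a manual look.
    """
    if evidence_of_harm:
        return "Dangerous"  # harm overrides everything else
    g = abs(pooled_g)
    if grade in ("A", "B") and g >= 0.5:
        return "Works"
    if grade in ("B", "C") and g >= 0.2:
        return "Maybe"
    return "No Evidence"
```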
What You’ll See on Each Page
We don’t hide anything. Every supplement page shows you the raw data alongside our verdict.
You’ll find effect sizes reported as Hedges’ g (a standardized measure that lets you compare studies even when they measured outcomes on different scales). You’ll see 95% confidence intervals that tell you the range of plausible effects. We show heterogeneity scores (I²) so you know how much the studies agree. Prediction intervals show the range of effects you’d expect in a future study. And we include forest plots and funnel plots so you can visualize the data yourself.
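If you’re curious how Hedges’ g is computed from a trial’s raw numbers, here’s the standard formula in Python (the inputs are whatever the trial reported):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g from each group's mean, SD, and sample size.

    Cohen's d with the small-sample correction J applied; also
    returns g's approximate variance, used as the pooling weight.
    """
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled
    J = 1.0 - 3.0 / (4.0 * df - 1.0)  # small-sample correction
    g = J * d
    var_g = (n1 + n2) / (n1 * n2) + g**2 / (2.0 * (n1 + n2))
    return g, var_g

# Hypothetical trial: treatment n=25 vs. placebo n=24.
g, var_g = hedges_g(m1=5.2, s1=2.1, n1=25, m2=4.1, s2=2.3, n2=24)
print(f"g = {g:.2f}, variance = {var_g:.3f}")
```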
If you aren’t sure what any of those mean, check out our guide to reading research.
What We Don’t Do
We want to be upfront about the limits of this site.
We don’t run our own clinical trials. We’re analysts, not lab scientists.
We don’t test supplement products in a lab for purity or potency. That’s a different job, and organizations like NSF and USP do it well.
And we don’t accept money from supplement companies. Not for reviews, not for ratings, not for anything. Our revenue comes from affiliate links to products that pass our evidence checks.
Limitations
Meta-analysis is powerful, but it isn’t perfect. We’re honest about that.
Most supplement RCTs are small. A trial with 30 participants can’t tell you much on its own. That’s why we pool them together, but even pooled estimates from small trials carry more uncertainty than pooled estimates from large ones.
Industry funding is common in supplement research. Companies that sell the product often fund the studies. This doesn’t automatically mean the results are wrong, but it’s a known source of bias. We flag industry-funded studies when we can.
Publication bias is real. Studies that show a supplement doesn’t work are less likely to get published. Our bias tests catch some of this, but not all of it.
And finally, we can only rate what’s been studied. If no one has run RCTs on a supplement, we can’t tell you anything about it. Absence of evidence isn’t evidence of absence, but it does mean you’re flying blind.
We built this site because we think you deserve better than marketing copy. You deserve actual data. If you have questions about our process, check our FAQ or contact us.