There's a persistent myth in the accessibility community that automated tools are useless because they can only detect 30% of WCAG violations. While the statistic is directionally correct, the conclusion is wrong. Automated testing is essential — it's just not sufficient on its own.
What automated tools catch well
Automated scanners like xsbl (powered by axe-core) excel at detecting structural issues: missing alt text, insufficient color contrast, missing form labels, broken ARIA attributes, heading hierarchy violations, missing landmark regions, duplicate IDs, and keyboard traps. These are objective, measurable violations that a machine can evaluate definitively.
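As a rough sketch of what that looks like in a test suite, the snippet below uses jest-axe (an assumption on my part; xsbl wraps the same axe-core engine but its own API may differ) to scan a rendered fragment and assert that two classic structural failures are reported.

```typescript
import { axe } from "jest-axe";

test("axe flags missing alt text and unlabeled inputs", async () => {
  // Two textbook structural violations: an image with no alt attribute
  // and a form control with no associated label.
  document.body.innerHTML = `
    <img src="/hero.png">
    <input type="email" name="email">
  `;

  // axe-core walks the DOM and evaluates its rule set against each node.
  const results = await axe(document.body);
  const ruleIds = results.violations.map((violation) => violation.id);

  // These failures surface under axe-core's "image-alt" and "label" rules.
  expect(ruleIds).toEqual(expect.arrayContaining(["image-alt", "label"]));
});
```

In a real pipeline you would normally assert the inverse (no violations at all) against accessible markup; the assertion here is flipped just to show what the scanner reports.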
Critically, these structural issues are often the most impactful. A screen reader user who navigates by headings loses that structure entirely when the hierarchy is broken. Missing alt text means images are completely invisible to blind users. These aren't edge cases; they're fundamental barriers.
What requires human judgment
Automated tools struggle with subjective evaluations: Is this alt text actually meaningful, or is it generic? Does the reading order make logical sense? Is the error message clear enough? Does the focus order follow a logical flow? These require human understanding of context and intent.
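To make that blind spot concrete, here is a hypothetical continuation of the jest-axe sketch above: the markup satisfies the automated rules, yet the alt text tells a person nothing.

```typescript
import { axe } from "jest-axe";

test("generic alt text sails past the scanner", async () => {
  // Technically compliant: the image has an alt attribute and the input
  // has a programmatically associated label.
  document.body.innerHTML = `
    <img src="/q3-revenue-chart.png" alt="image">
    <label for="q">Search</label>
    <input id="q" type="text">
  `;

  // axe-core's "image-alt" rule checks that alternate text exists, not
  // that "image" meaningfully describes a quarterly revenue chart.
  const results = await axe(document.body);
  const altTextIssues = results.violations.filter((v) => v.id === "image-alt");

  expect(altTextIssues).toHaveLength(0); // passes; a human reviewer would not
});
```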
Similarly, complex interactions such as custom widgets, dynamic content updates, and single-page application navigation patterns can't be fully evaluated by automated tools; they need hands-on testing.
The optimal workflow
Use automated scanning as your first line of defense. Run it on every PR, catch the easy wins, keep the baseline high. Then layer in manual testing for releases: keyboard-only navigation, screen reader testing with NVDA or VoiceOver, and a cognitive walkthrough of key user flows. Automated tools handle the checks that are objective and repeatable; humans handle the ones that require judgment.
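A minimal sketch of that per-PR gate, assuming a Playwright suite with the @axe-core/playwright builder (the article doesn't prescribe either) and a placeholder local URL:

```typescript
// a11y.spec.ts: runs on every PR as the automated first line of defense.
import { test, expect } from "@playwright/test";
import AxeBuilder from "@axe-core/playwright";

test("home page has no axe-detectable WCAG A/AA violations", async ({ page }) => {
  await page.goto("http://localhost:3000/"); // placeholder URL for the app under test

  // Limit the scan to WCAG 2.0/2.1 A and AA rules so the gate covers the
  // objective, repeatable checks and leaves judgment calls to reviewers.
  const results = await new AxeBuilder({ page })
    .withTags(["wcag2a", "wcag2aa", "wcag21a", "wcag21aa"])
    .analyze();

  // Any regression against the baseline fails the PR check.
  expect(results.violations).toEqual([]);
});
```

The manual passes (keyboard-only runs, NVDA or VoiceOver sessions, cognitive walkthroughs) then slot into the release checklist rather than the PR loop.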