Readability is the degree to which a text can be understood by its intended audience. The term covers both quantitative measures (readability formulas) and qualitative judgments about clarity, organization, and usability.

Rudolf Flesch pioneered quantitative readability research with the Flesch Reading Ease score and (with J. Peter Kincaid) the Flesch-Kincaid Grade Level, which estimate text difficulty from average sentence length and syllables per word [@flesch1949]. These formulas made readability measurable and became the most widely used readability metrics for English text.
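The two formulas are simple linear functions of words per sentence and syllables per word. A minimal sketch follows; the vowel-group syllable counter is a rough heuristic of my own (production tools use pronunciation dictionaries such as CMUdict), so scores will differ slightly from published implementations:

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables by counting vowel groups (heuristic, not exact)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    # Drop a silent trailing 'e' when the word has more than one vowel group.
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for `text`."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    return reading_ease, grade_level
```

Higher Reading Ease means easier text (roughly 60–70 for plain English prose); Grade Level maps the same two inputs onto U.S. school grades. Note that both depend only on length counts, which is exactly the limitation discussed below.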

Flesch himself acknowledged the limits of formulas: they measure surface features (word and sentence length), not semantic clarity. A passage can score well on readability and still be incomprehensible if it’s poorly organized, logically incoherent, or aimed at the wrong audience. Karen Schriver’s research confirmed that readability requires more than surface-level adjustment — it requires understanding how readers actually process documents, which only feedback-based testing can reveal [@schriver1997].

John Sweller’s cognitive load theory provides a complementary framework: working memory is limited, and text that overloads it — through unfamiliar vocabulary, dense syntax, or too many simultaneous concepts — reduces comprehension regardless of its readability score [@sweller1988]. The vault’s plain language specification operationalizes this through rules like “one purpose per section” and “keep sentences straightforward.”

Readability is necessary but not sufficient for good technical writing. A readable document that’s poorly organized, factually wrong, or aimed at the wrong audience still fails its reader.

  • audience — readability is relative to an audience, not absolute
  • revision — the process by which readability is improved
  • plain language writing — the vault’s operational standard, which goes beyond readability scores