Summary
Run docvet against high-profile Python open-source projects (FastAPI, Pydantic, etc.) to demonstrate findings, validate the tool at scale, and potentially open PRs that showcase docvet's value.
Motivation
The adoption playbook from ruff, black, and pytest shows a common pattern: flagship adoption by marquee projects creates social proof. Ruff's adoption by FastAPI, Pandas, and Airflow was a turning point.
For docvet, running checks on popular projects serves multiple purposes:
- Validates the tool — proves docvet works on real, large codebases
- Generates compelling content — "We ran docvet on FastAPI and found X stale docstrings"
- Opens PR opportunities — fix real docstring issues, credit docvet in the PR
- Creates word-of-mouth — maintainers who see value may adopt docvet
Target projects (Google-style docstrings + mkdocs/mkdocstrings)
- FastAPI (mkdocs-material, high visibility)
- Pydantic (mkdocs-material, high visibility)
- typer (same author as FastAPI, mkdocs-material)
- httpx (mkdocs-material)
- Polars (mkdocs-material, growing rapidly)
Approach
- Run `docvet check --all` on each project
- Document findings (counts by rule, interesting examples)
- If findings are genuine and fixable, open small PRs fixing 5-10 issues
- Blog post or docs page: "What we found running docvet on the Python ecosystem"
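The "document findings (counts by rule)" step above could be automated with a small tally script. This is only a sketch under an assumption: docvet's output format isn't specified here, so the `file:line: RULE message` pattern and the `DV###` rule codes below are hypothetical and would need adjusting to the tool's real output.

```python
import re
from collections import Counter

# Assumed finding format: "path/to/file.py:123: DV101 message ..."
# The "DV###" rule-code convention is an assumption, not docvet's documented output.
FINDING_RE = re.compile(r"^\S+:\d+:\s+(?P<rule>[A-Z]+\d+)\b")

def tally_by_rule(lines):
    """Count findings per rule code from docvet-style output lines."""
    counts = Counter()
    for line in lines:
        match = FINDING_RE.match(line)
        if match:
            counts[match.group("rule")] += 1
    return counts

# Example with made-up findings for illustration:
sample = [
    "fastapi/routing.py:88: DV101 docstring summary out of date",
    "fastapi/params.py:12: DV203 parameter 'alias' undocumented",
    "fastapi/routing.py:140: DV101 docstring summary out of date",
]
print(tally_by_rule(sample).most_common())  # → [('DV101', 2), ('DV203', 1)]
```

Running this per project would give the per-rule counts needed for the write-up, and sorting by frequency surfaces the most interesting examples to feature.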
Related