Personal fork of DCsunset/taskwarrior-webui. Upstream has been effectively inactive since late 2024, so this fork is where new features and fixes live. The original stack — Vue.js (Nuxt 2 + Vuetify) frontend, Koa.js backend wrapping the task CLI — is preserved.
This fork is maintained primarily for personal use. Compared to upstream it adds:
- A redesigned UI (minimal, Linear/Todoist-inspired) replacing the Material Design defaults.
- A "Today" tab combining due-today and overdue tasks, with collapsible recurring instances.
- An Inbox view (pending tasks with no project and no parent — strict GTD).
- Multi-profile support: switch between several `.taskrc`/`.task` pairs from the UI.
- Per-user authentication via Cloudflare Access, with per-profile authorization.
- Client-side text search with `/` and `Ctrl+K` shortcuts.
- Mobile-friendly tweaks: denser rows and a quick-add FAB.
- Reusable date/time picker for `Due`/`Until`/`Scheduled`/`Wait` fields.
- Backend write serialization per profile to prevent data corruption under concurrent edits.
See CHANGELOG.md for the detailed feature inventory with commit SHAs.
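The per-profile write serialization can be sketched as a promise chain keyed by profile name. This is an illustrative helper, not the fork's actual backend code:

```javascript
// Sketch of per-profile write serialization (hypothetical helper, not the
// actual backend implementation): each profile keeps the tail of a promise
// chain, and every new write is appended to that tail, so two writes to the
// same profile can never run concurrently.
const tails = new Map();

function serializeWrite(profile, write) {
  const prev = tails.get(profile) ?? Promise.resolve();
  // Run the next write whether or not the previous one failed.
  const next = prev.then(write, write);
  tails.set(profile, next);
  return next;
}
```

Writes to different profiles still run in parallel; only same-profile writes queue behind each other.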
- Responsive layouts and PWA support
- Light and dark themes
- Sync with a Taskserver / Taskchampion server
- Multi-profile sync targets
- Optional Cloudflare Access authentication
- Easy to deploy (single Docker image)
This fork is deployed by building the image locally and streaming it to a VPS over SSH (docker save | ssh ... docker load), then docker compose up -d. There is no public registry in the loop — the upstream's CI workflow publishes to its own Docker Hub namespace and is not used here.
A helper script deploy.sh automates build + transfer + restart. See DEPLOY.md for the full setup, including Cloudflare Access configuration, the per-user users.json mapping, rollback and image hygiene.
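The core of that flow is small; a minimal sketch of what such a script does (illustrative only: the real `deploy.sh` in this repo is the source of truth, and `HOST` and the remote compose path here are placeholders):

```sh
#!/bin/sh
# Illustrative deploy sketch: build locally, stream the image over SSH,
# restart via docker compose. See deploy.sh / DEPLOY.md for the real thing.
set -eu
HOST=you@your-vps

docker build -t taskwarrior-webui .
docker save taskwarrior-webui | ssh "$HOST" docker load
ssh "$HOST" "cd /srv/taskwarrior-webui && docker compose up -d"
```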
If you just want to run it locally without the SSH/VPS flow, the simplest path is:
```sh
docker build -t taskwarrior-webui .
docker run -d -p 8080:80 --name taskwarrior-webui \
  -v $HOME/.taskrc:/.taskrc -v $HOME/.task:/.task \
  taskwarrior-webui
```

Then open http://127.0.0.1:8080.
Taskwarrior v2 and v3 data files are not cross-compatible. The Dockerfile pulls `task3` from Alpine `edge/community`, so the resulting image is v3-only.
If your .taskrc references absolute paths (e.g. /home/you/ca.cert.pem for taskserver certs), mount the files at the same paths inside the container and set TASKRC / TASKDATA accordingly:
```sh
docker run -d -p 8080:80 --name taskwarrior-webui \
  -e TASKRC=$HOME/.taskrc -e TASKDATA=$HOME/.task \
  -v $HOME/.taskrc:$HOME/.taskrc -v $HOME/.task:$HOME/.task \
  taskwarrior-webui
```

Environment variables read by the backend:
| Variable | Default | Purpose |
|---|---|---|
| `TASKRC` | `/.taskrc` | Path to the `.taskrc` file |
| `TASKDATA` | `/.task` | Path to the `.task` data directory |
| `PROFILES_CONFIG` | `/profiles.json` | Optional JSON file declaring multiple profiles |
| `USERS_CONFIG` | `/users.json` | Maps authenticated emails to allowed profiles (Cloudflare Access mode) |
| `AUTH_MODE` | `cloudflare` | `cloudflare` (verify JWT) or `dev` (bypass; local only) |
| `CF_ACCESS_TEAM_DOMAIN` | — | Required when `AUTH_MODE=cloudflare` |
| `CF_ACCESS_AUD` | — | Required when `AUTH_MODE=cloudflare` |
| `DEV_USER_EMAIL` | `dev@localhost` | Email injected as the request user in dev mode |
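For example, `users.json` might map each authenticated email to the profiles it may use. The exact schema is documented in DEPLOY.md; the shape below is an assumption for illustration:

```json
{
  "alice@example.com": ["personal", "work"],
  "bob@example.com": ["work"]
}
```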
When TASKRC or TASKDATA is changed, mount the files at the matching paths inside the container.
Mount a JSON file declaring one entry per profile:
```json
{
  "profiles": [
    { "name": "personal", "taskrc": "/profiles/personal/.taskrc", "taskdata": "/profiles/personal/.task" },
    { "name": "work", "taskrc": "/profiles/work/.taskrc", "taskdata": "/profiles/work/.task" }
  ]
}
```

```sh
docker run -d -p 8080:80 --name taskwarrior-webui \
  -v $PWD/profiles.json:/profiles.json \
  -v $HOME/.taskrc:/profiles/personal/.taskrc \
  -v $HOME/.task:/profiles/personal/.task \
  -v $PWD/work/.taskrc:/profiles/work/.taskrc \
  -v $PWD/work/.task:/profiles/work/.task \
  taskwarrior-webui
```

A profile selector appears in the top bar whenever two or more profiles are defined. If `PROFILES_CONFIG` points to a missing file, the UI falls back to a single default profile using `TASKRC` and `TASKDATA`.
Two modes:
- `AUTH_MODE=cloudflare` (default in production): every request must carry a valid `Cf-Access-Jwt-Assertion` header. Configure a Cloudflare Access self-hosted application in front of the deployment, then provide `CF_ACCESS_TEAM_DOMAIN` and `CF_ACCESS_AUD`. Authenticated emails are mapped to allowed profiles via `users.json`. Full walkthrough in `DEPLOY.md`.
- `AUTH_MODE=dev`: skips JWT verification and treats every request as authenticated as `DEV_USER_EMAIL`. Used by the dev npm scripts (`npm run dev`). Do not use in production.
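The per-user profile authorization boils down to a lookup against the email-to-profiles mapping. A sketch, where the helper names and the `users.json` shape (email mapped to an array of allowed profile names) are assumptions rather than the backend's actual API:

```javascript
// Hypothetical authorization helpers; the users.json shape assumed here is
// { "email": ["profile", ...] }. See DEPLOY.md for the real format.
function allowedProfiles(users, email) {
  return users[email] ?? [];
}

function canAccess(users, email, profile) {
  return allowedProfiles(users, email).includes(profile);
}
```

An email absent from the mapping gets an empty list, so every profile request from an unknown user is denied.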
If you don't want Cloudflare Access, you can replace the auth layer with another reverse proxy (basic-auth via nginx, mTLS, etc.) and run with `AUTH_MODE=dev`. Note, however, that `users.json`-based per-user profile authorization relies on the email claim from Cloudflare Access; without it, all requests run under the dev user.
Frontend:

```sh
cd frontend
npm install
npm run build
npm run export   # generates frontend/dist (static)
```

Backend:

```sh
cd backend
npm install
npm run build
npm start
```

Then serve the frontend with nginx (or another web server) and reverse-proxy `/api/` to the backend (`backend/src/app.ts` listens on port 3000). Reference config: `nginx/server.conf`.
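The reference config in `nginx/server.conf` is authoritative; its shape is roughly the following (the root path and whether the backend expects the `/api` prefix preserved are assumptions here):

```nginx
server {
    listen 80;

    # Static frontend export (frontend/dist)
    root /srv/taskwarrior-webui/dist;

    location /api/ {
        # Backend (backend/src/app.ts) listens on port 3000
        proxy_pass http://127.0.0.1:3000;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }
}
```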
Two shells:

```sh
# Backend — listens on localhost:3000, uses backend/test/.taskrc and backend/test/.task
cd backend && npm install && npm run dev
```

```sh
# Frontend — Nuxt dev server on localhost:8080, proxies /api → localhost:3000
cd frontend && npm install && npm run dev
```

Open http://localhost:8080. The dev `TASKRC` / `TASKDATA` point into `backend/test/`, so local development does not touch your real Taskwarrior data. The backend `npm run dev` script sets `AUTH_MODE=dev`, so requests don't need a JWT during local development.
Node version is pinned at 16.17.1 (.tool-versions). The frontend requires NODE_OPTIONS=--openssl-legacy-provider (already baked into its npm scripts) because of Nuxt 2 / Webpack 4 crypto compatibility on newer Node releases.
Frontend lint:

```sh
cd frontend && npm run lint   # eslint --fix on .ts/.js/.vue
```

This Web UI supports auto-sync by calling `task sync` periodically. To use it, configure both the taskserver and client manually until `task sync` runs successfully on the host, then map the resulting `.taskrc` and `.task` into the container.
GPL-3.0

