Paste a link and Lineate turns it into a reading page designed to help you decide what is worth your attention and get more out of what you do read. Each output starts with concise takeaways and a strong critique, then includes either the full content or a complete summary.
- Turns links into cleaner, easier-to-read pages.
- Pulls the main takeaways to the top so you can triage quickly.
- Adds a devil’s-advocate critique to pressure-test the source's claims.
- Includes either the full content or a complete summary, depending on how you run it.
- Accepts links, multiple links pasted together, pasted text containing links, and pasted markdown.
- Processed URL(s) are opened in your browser by default.
Each output begins with:
- Highlights: the main claims, conclusions, and important implications.
- Not in highlights: what you would miss by reading only the highlights.
- Best rebuttal: the strongest critique of the source's own case.
After that, Lineate includes either the full content or a semantically complete summary.
`PERSONA.txt` still informs the audience framing for the highlights prompt, but there is no longer a separate "skip this" section.
Supported inputs and adapters include:
You can paste any text containing URLs, a block of markdown, or a local file path for `.pdf`, `.mhtml`, `.mht`, `.mp3`, or `.mp4`.
- Some converters only run when you pass `--force-convert-all` or append `##` to a specific URL.
- Sources that can be turned into reading pages include YouTube (`/watch?`, `/live/`, `youtu.be`, `/shorts/`), Twitter/X (`/status/`), Discord (`discord.com`), Telegram (`t.me`), PDFs (`.pdf`, `arxiv.org/pdf/...`), DocSend (`docsend.com/view/...`), GitBook (`gitbook`, `docs.`), Discourse (`/t/...`), Medium articles, Substack, Apple Podcasts, SoundCloud, Streameth, Rumble, Notion, and direct `.mp3`/`.mp4` files.
- URL normalisers include Medium tracking cleanup, Google Docs export, Wikipedia mobile, Reddit (`old.reddit.com`), and GreaterWrong (`lesswrong.com`).
- Warpcast (`warpcast.com`) is passed through unchanged.
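The URL normalisers above can be pictured as simple host rewrites applied before conversion. A minimal sketch, assuming a table-driven design; the function name and rewrite table are illustrative, not Lineate's actual code:

```python
from urllib.parse import urlsplit, urlunsplit

# Illustrative host rewrites; Lineate's real normaliser list differs.
HOST_REWRITES = {
    "www.reddit.com": "old.reddit.com",
    "reddit.com": "old.reddit.com",
    "www.lesswrong.com": "www.greaterwrong.com",
    "en.m.wikipedia.org": "en.wikipedia.org",
}

def normalise_url(url: str) -> str:
    """Rewrite known hosts to their preferred reading-friendly equivalents."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    netloc = HOST_REWRITES.get(netloc, netloc)
    return urlunsplit((scheme, netloc, path, query, fragment))
```

Unknown hosts pass through unchanged, which matches the Warpcast behaviour described above.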
- Install Python deps (managed via `pyproject.toml`): `uv sync`
- Ensure external tools are installed (see below).
- Fill in `.env` with required keys.
- Set the LLM model in `config.json`.
- Run: `uv run --env-file .env -m lineate "https://..."`, or run with no args to use the clipboard.
```shell
uv run --env-file .env -m lineate [text-or-url]
```

Options:
- `--force-convert-all` forces conversion for all URLs.
- `--summarise` uses a semantically complete summary instead of the extracted body.
- `--no-summarise` keeps the full extracted body.
- `--no-open` prevents opening processed URLs in your browser.
- `--force-refresh` forces refresh for all converters (equivalent to appending `###`).
- `--force-no-convert` skips conversion for all URLs.
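The per-URL `##` and `###` markers can be thought of as a small suffix parser that runs before fetching. A hedged sketch; the function name and return shape are mine, not Lineate's:

```python
def parse_url_flags(url: str) -> tuple[str, bool, bool]:
    """Strip trailing markers and report (clean_url, force_convert, force_refresh).

    '###' forces a refresh for that URL; '##' forces conversion only.
    """
    if url.endswith("###"):
        return url[:-3], False, True
    if url.endswith("##"):
        return url[:-2], True, False
    return url, False, False
```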
Put these in this repo’s `.env`:
Required for core functionality:
- `CLOUDFLARE_ACCOUNT_ID`
- `R2_ACCESS_KEY_ID`
- `R2_SECRET_ACCESS_KEY`
Required for LLM summarisation/title generation:
- `LLM_API_KEY` – used for text-generation calls to the configured `llm.api_base_url`.
Required for audio/video transcription:
- `OPENAI_API_KEY` – used for Whisper transcription.
Required for specific sources:
- Discord: `DISCORD_AUTH_TOKEN`
- Telegram: `TELEGRAM_API_ID`, `TELEGRAM_API_HASH`, `TELEGRAM_SESSION_NAME`
- Twitter/X: `TWITTER_BEARER_TOKEN`, `TWITTER_CT0_TOKEN`, `TWITTER_COOKIE`
  - Optional: `TWITTER_USER_AGENT`, `TWITTER_XCLIENTTXID`, `TWITTER_XCLIENTUUID`
Notes:
- Telethon will create a `.session` file named after `TELEGRAM_SESSION_NAME` in the repo.
- Twitter/X credentials need to be refreshed if the session expires.
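Putting the variables above together, a filled-in `.env` might look like the following; every value here is a placeholder:

```shell
CLOUDFLARE_ACCOUNT_ID=your-account-id
R2_ACCESS_KEY_ID=your-r2-key-id
R2_SECRET_ACCESS_KEY=your-r2-secret
LLM_API_KEY=your-llm-key
OPENAI_API_KEY=your-openai-key
DISCORD_AUTH_TOKEN=your-discord-token
TELEGRAM_API_ID=123456
TELEGRAM_API_HASH=your-telegram-hash
TELEGRAM_SESSION_NAME=lineate
TWITTER_BEARER_TOKEN=your-bearer-token
TWITTER_CT0_TOKEN=your-ct0-token
TWITTER_COOKIE=your-cookie-string
```

Omit the source-specific keys for any services you do not use.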
How to get each key (links + minimal steps):
- Cloudflare R2 (`CLOUDFLARE_ACCOUNT_ID`, `R2_ACCESS_KEY_ID`, `R2_SECRET_ACCESS_KEY`): create an R2 API token with access to the configured bucket, then copy the account id and credentials from Cloudflare. Docs: Cloudflare R2 authentication.
- `LLM_API_KEY`: create a key for whichever OpenAI-compatible LLM endpoint you configure. For OpenRouter, see: Authentication.
- `OPENAI_API_KEY`: create/view your key in the OpenAI API keys page; used for Whisper transcription only. Doc: Where do I find my OpenAI API key?
- Telegram (`TELEGRAM_API_ID`, `TELEGRAM_API_HASH`): create an app on Telegram’s developer portal and copy the values from API development tools. Doc: Creating your Telegram Application.
- Discord (`DISCORD_AUTH_TOKEN`), unofficial: extract your user token from the browser’s developer tools (Network → request headers → `Authorization`). Doc: discordpy-self “Authenticating”.
- Twitter/X (`TWITTER_BEARER_TOKEN`, `TWITTER_CT0_TOKEN`, `TWITTER_COOKIE`), unofficial: use browser devtools on x.com to copy the cookies (`auth_token`, `ct0`) and the request’s `Authorization: Bearer ...` header. Docs: Export X cookies (auth_token, ct0); Extract Bearer token from request headers.
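As an illustration of how the Twitter/X values are typically combined into a request, here is a sketch that builds web-client-style headers; the header names follow X's web client convention, and this is an assumption about usage, not Lineate's actual request code:

```python
import os
from typing import Mapping

def twitter_headers(env: Mapping[str, str] = os.environ) -> dict[str, str]:
    """Assemble X web-client-style request headers (illustrative only)."""
    return {
        "Authorization": f"Bearer {env['TWITTER_BEARER_TOKEN']}",
        "x-csrf-token": env["TWITTER_CT0_TOKEN"],  # mirrors the ct0 cookie
        "Cookie": env["TWITTER_COOKIE"],
    }
```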
Set the LLM endpoint and model under `llm`:
```json
{
  "llm": {
    "api_base_url": "https://openrouter.ai/api/v1",
    "reasoning_effort": "high",
    "model": "qwen/qwen3.6-plus-preview:free",
    "pdf_model": "openai/gpt-5.4",
    "pdf_reasoning_effort": "medium"
  }
}
```

`llm.model` and `llm.reasoning_effort` are the defaults for the non-PDF text-generation paths. `llm.pdf_model` and `llm.pdf_reasoning_effort` are used only for the PDF-to-Markdown conversion path.
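The split between the two model settings can be sketched as a small selection helper, assuming PDF conversion and text generation are separate code paths; the function name is illustrative, not Lineate's:

```python
def pick_model(config: dict, for_pdf: bool) -> tuple[str, str]:
    """Return (model, reasoning_effort) for the requested path.

    PDF-to-Markdown conversion uses the pdf_* settings; everything
    else falls back to the default model and effort.
    """
    llm = config["llm"]
    if for_pdf:
        return llm["pdf_model"], llm["pdf_reasoning_effort"]
    return llm["model"], llm["reasoning_effort"]
```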
- `uv` – Python package manager used to sync/install deps.
- `ffmpeg` – required by `pydub` and `yt-dlp` for audio conversion.
- `yt-dlp` – required for some sources (e.g., Rumble).
- Clipboard helper for `pyperclip`:
  - Linux (X11): `xclip` or `xsel` (or `wl-clipboard` on Wayland)
  - macOS: `pbcopy`/`pbpaste` (built-in)
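Before running, a quick way to confirm these tools are on `PATH` is a check like the following; this is a hypothetical helper, not part of Lineate:

```python
import shutil

def missing_tools(tools: list[str]) -> list[str]:
    """Return the subset of `tools` not found on PATH via shutil.which."""
    return [t for t in tools if shutil.which(t) is None]

# Example: missing_tools(["uv", "ffmpeg", "yt-dlp"]) lists whatever
# still needs installing on this machine.
```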
Install (CLI dependencies):
- macOS (Homebrew): `brew install ffmpeg yt-dlp uv`
- Linux (Fedora): `sudo dnf install ffmpeg yt-dlp uv`
Installed via `uv sync` from `pyproject.toml`. Key runtime packages:
- `requests`, `loguru`, `python-dotenv`, `pyperclip`
- `openai` (OpenRouter/OpenAI-compatible client for LLM calls, plus Whisper transcription)
- `pydub` (audio slicing)
- `youtube-transcript-api`, `bs4`
- `telethon` (Telegram)
- `yt-dlp` (Rumble)
- `soundcloud-lib`
- `python-dateutil` (Discord timestamps)