Curated RSS discovery. Federation by fork. No algorithm. No platform.
Authors don't sign up, don't federate, don't install anything. They just keep blogging like it's 2005 and a discover instance picks them up.
Hand-picked playlists of RSS/Atom feeds, organized by vibe. Browse, follow, read inline, export to your reader. Runs on Cloudflare Workers' free tier. Forever.
Anyone can fork this, stand up their own instance, and curate their own playlists to their taste. Your corner of the web for the low, low price of $0.
- Browse — curated playlists with tag filtering, search, random shuffle, and a "new" view of recently-added sources
- Feed — follow playlists or individual sources, read posts inline, export/import OPML, add any RSS URL directly
- Suggest a feed — public submission form on `/about`; server-side validation rejects click-through and invalid feeds before they hit the queue
- Webmentions via RSS — when sources in the directory link to each other, the cited author gets a mention feed at `/api/mentions/{domain}.xml` (e.g. `/api/mentions/gordonmclean.co.uk.xml`). Delivered the same way authors read everything else: their RSS reader
- PWA — installable, dark/light theme, works offline for cached views
- Analytics — privacy-friendly, no third parties. Tracks hits, top paths, countries, and RSS feed subscribers. No cookies, no JS fingerprinting
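Since the mention feed path follows a fixed pattern, a reader that wants to subscribe can build the URL from a bare domain. A minimal sketch — the `buildMentionFeedUrl` helper and its base-URL parameter are illustrative, not part of the project's API:

```javascript
// Build the mention-feed URL for an author domain. The
// /api/mentions/{domain}.xml path comes from the feature list above;
// the helper itself is a hypothetical convenience, not project code.
function buildMentionFeedUrl (domain, base = 'https://discover.brine.dev') {
  // Normalize: strip any scheme, path, and trailing dots so bare
  // hostnames and pasted blog URLs both work.
  const host = domain
    .replace(/^https?:\/\//, '')
    .split('/')[0]
    .replace(/\.+$/, '')
  return `${base}/api/mentions/${host}.xml`
}

console.log(buildMentionFeedUrl('gordonmclean.co.uk'))
// → https://discover.brine.dev/api/mentions/gordonmclean.co.uk.xml
```

Drop the resulting URL into any feed reader to follow mentions of that domain.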
- Node.js
- Cloudflare account (free tier)
- A domain/subdomain (optional but recommended)
```sh
git clone https://github.com/qualityshepherd/discover
cd discover
npm install
cp wrangler.example.toml wrangler.toml
wrangler login
wrangler kv namespace create DISCOVER_KV
```

Paste the KV namespace id into `wrangler.toml`. Create an R2 bucket in the Cloudflare dashboard and paste its name in too. Set your name and `DOMAIN_NAME`, then:
```sh
wrangler deploy
```

Go to `/admin`, enter a passphrase, copy your pubkey, paste it into `wrangler.toml` as `OWNER`, redeploy. Done.
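Putting the configuration steps together, `wrangler.toml` ends up looking roughly like the sketch below. The ids, bucket name, and the R2 binding name are placeholders — check `wrangler.example.toml` for the exact keys the project expects:

```toml
name = "discover"

[[kv_namespaces]]
binding = "DISCOVER_KV"
id = "<your-kv-namespace-id>"     # from `wrangler kv namespace create`

[[r2_buckets]]
binding = "DISCOVER_R2"           # placeholder binding name
bucket_name = "<your-r2-bucket>"  # created in the Cloudflare dashboard

[vars]
DOMAIN_NAME = "discover.example.com"
OWNER = "<pubkey copied from /admin>"
```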
`/admin` — manage playlists, sources, and content moderation.
- Playlists — create and edit themed groups of RSS sources
- Sources — add RSS/Atom feed URLs; the cron fetches on a rolling schedule (8–48h depending on post frequency) and builds the link graph for webmentions
- Curate — review suggested feeds, feed candidates mined from link content, and trending domains; approve, dismiss, or block. Includes a scan button to rebuild candidates from all sources and a rebuild button to regenerate the webmentions graph
- Batch validate — paste multiple URLs, validate in bulk, add directly or queue to pending
- Blocked domains — hostname blocklist; exact match or subdomain (`example.com` blocks `sub.example.com` but not `notexample.com`)
- Analytics — hit counts, top paths, countries
```sh
wrangler deploy            # production → discover.brine.dev
wrangler deploy --env dev  # staging → test.discover.brine.dev
```

Custom domains are set in the Cloudflare dashboard, not `wrangler.toml`.
```sh
npx wrangler dev
```

```sh
npm test           # full suite
npm run test:unit  # unit only
```

AGPL · brine