What enters
First-party evidence your team has authored or curated. The kind of writing your organisation already produces every quarter, sitting in a dozen tools nobody has time to cross-reference:
- Sales-call notes, win/loss summaries, deal-review write-ups.
- Customer-research transcripts, interview notes, usability findings.
- Support tickets and the resolution notes attached to them.
- Decision memos, RFCs, ADRs, retrospective outcomes.
- Competitor scans, market notes, pricing-page captures.
- System exports your team has chosen to feed in (HubSpot reports, Linear extracts, Notion pages).
- Technology shifts your team has chosen to flag: release notes, vendor updates, and library, platform, or hardware launches that the team's reading of the landscape says might matter for an opportunity already on the books. Curated by the team, not scraped from the wild.
Each piece of evidence is something a person on your team wrote, recorded, or chose to bring in. Nothing arrives via a tracking pixel, an inferred profile, or a behavioural signal. If a sentence doesn't have a human author standing behind it, or a human curator who decided it was worth bringing in, it doesn't belong in the system.
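To make that rule concrete, here is a minimal sketch of an evidence record that refuses to exist without a human behind it. The shape and field names are illustrative assumptions, not Winnow's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Evidence:
    """One piece of first-party evidence. Hypothetical shape, not Winnow's schema."""
    kind: str               # e.g. "sales-call-notes", "decision-memo"
    body: str               # the words a person wrote or recorded
    author: Optional[str]   # who wrote it, if authored in-house
    curator: Optional[str]  # who chose to bring it in, if imported

    def __post_init__(self) -> None:
        # The rule above, verbatim: no human author or curator, no entry.
        if not (self.author or self.curator):
            raise ValueError("no human behind this sentence; it doesn't belong")
```

A HubSpot export, for instance, would enter with a curator rather than an author: the person who decided it was worth feeding in.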
What doesn't enter
The boundary is sharp on purpose; a sketch of it as an ingestion gate follows the list.
- No clickstream. No tracking pixels, no behavioural events, no session recordings, no heatmaps. The methodology has no use for them; the system has no place to put them.
- No silent ingestion of customer data. Customer-system exports (HubSpot, Salesforce, etc.) only enter the wiki when your team explicitly chooses to feed them in. There is no automated scraper running in the background.
- No third-party data brokers. Nothing is bought, enriched, or cross-referenced against an external profile.
- No model training on your data. Whatever LLM provider you bring, their terms apply, but the methodology itself never aggregates customer wikis to train a model. There is no shared model built from anyone's evidence.
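One way to picture the boundary is as an allow-list at the door: only named, human-initiated source types exist, so behavioural data has no category to land in. A sketch under that assumption (the source names and the gate are illustrative, not Winnow's real interface):

```python
# Illustrative allow-list mirroring the boundary above; not Winnow's real interface.
ALLOWED_SOURCES = {
    "sales-call-notes", "win-loss-summary", "research-transcript",
    "support-ticket", "decision-memo", "competitor-scan",
    "system-export",     # HubSpot, Salesforce, Linear, Notion: explicit, human-fed
    "technology-flag",   # release notes and launches the team chose to flag
}

def admit(source_type: str, fed_in_by: str | None) -> None:
    # Clickstream, pixels, session recordings: not on the list, nowhere to go.
    if source_type not in ALLOWED_SOURCES:
        raise ValueError(f"{source_type!r} has no place in the system")
    # Exports enter only when a person explicitly chooses to feed them in.
    if fed_in_by is None:
        raise ValueError("ingestion must be initiated by a named person, not a scraper")
```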
What flows out
The wiki the methodology builds — insights, problem statements, opportunities, recommendations — is yours. It compiles from your own evidence, it cites that evidence on every claim, and it lives in a workspace your team controls.
Recommendations are surfaced as proposals with cited evidence, expected impact, risks, and confidence stated up-front. They flow outward into the AI builders where prototypes get made (Claude Code, Cursor, Lovable, etc.), carrying the evidence they came from. Nothing flows outward to us, to a third party, or to a shared training corpus.
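As a rough sketch of what a proposal could look like as data, with evidence, impact, risks, and confidence carried together (the field names are assumptions, not Winnow's export format):

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A proposal as described above. Illustrative shape, not Winnow's format."""
    claim: str                # the recommendation itself
    evidence_ids: list[str]   # every claim cites the evidence it compiles from
    expected_impact: str
    risks: list[str]
    confidence: float         # stated up-front, e.g. 0.0 to 1.0

    def ready_for_builder(self) -> bool:
        # A proposal travels to Claude Code, Cursor, or Lovable only with
        # its evidence attached; an uncited claim doesn't travel at all.
        return bool(self.evidence_ids)
```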
Self-hosted vs. hosted
Self-hosted Winnow runs on your own infrastructure. Your laptop, your server, your VPC. There is no telemetry, no callback, no remote logging. We have no visibility into what you put into it. The only third parties involved are whichever LLM provider you configure (Anthropic, OpenRouter, etc.) — and your relationship with that provider is yours, on your account, governed by their terms.
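Read "no telemetry, no callback, no remote logging" as a claim about configuration: the outbound surface of a self-hosted install reduces to the LLM provider and nothing else. A purely illustrative sketch, not Winnow's actual config:

```python
from dataclasses import dataclass

@dataclass
class SelfHostedOutbound:
    """Illustrative: the entire outbound surface of a self-hosted install."""
    llm_provider: str  # e.g. "anthropic" or "openrouter"; your account, their terms
    llm_api_key: str   # used only to call the provider you configured
    # Conspicuously absent: telemetry endpoint, callback URL, remote log shipper.
```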
Hosted Winnow at usewin.now runs the same software in an isolated per-customer container with an isolated per-customer volume. We don't read your workspace data as part of normal operation. The narrow exceptions are listed in the privacy policy: diagnosing a fault you've reported, restoring from a backup at your request, or a valid legal request we cannot decline. Operator access is logged and auditable.
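"Logged and auditable" could look something like an append-only record written on every operator access, with the reason restricted to the listed exceptions. The fields and reason codes here are assumptions, not the actual log format:

```python
import json
import time

ALLOWED_REASONS = {"reported-fault", "requested-restore", "legal-request"}

def log_operator_access(operator: str, customer: str, reason: str) -> None:
    """Append one auditable record per access. Illustrative; fields assumed."""
    if reason not in ALLOWED_REASONS:
        raise PermissionError("operator access outside the listed exceptions")
    entry = {"ts": time.time(), "operator": operator,
             "customer": customer, "reason": reason}
    with open("operator-audit.log", "a") as f:
        f.write(json.dumps(entry) + "\n")
```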
A note on the names
Cultivate is the methodology: caring for your own growing practice over seasons. Foundations as soil, evidence as water and sunlight, recommendations as the fruit. Cultivate doesn't gather data from outside your own practice; it strengthens what your team has already written down.
Inside Cultivate's vocabulary, “harvest” still names a specific moment: when Winnow gathers refined evidence from raw inputs. The verb's grammatical object is always something your team owns and authored — sales notes, research transcripts, decision memos, the team's curated reading of the technology landscape. Never customer data. Never user behaviour. Never inferred signals.
Winnow separates grain from chaff: recommendations supported by cited evidence from speculation that doesn't yet hold up. Bushel (forthcoming) measures the finished grain out into the artefacts a launch needs. All three names sit inside the same agricultural lineage by design; the metaphor explains why the tools work the way they do, not just what they're called.