Cultivate Method
The methodology

Cultivate, in plain English

Products are grown, not built. Cultivate is the practice that gives growing a structure: foundations as soil, evidence as water, recommendations as the fruit, launches as the bushel ready for delivery. Five baseline files about your product. A folder of raw evidence. An LLM that maintains a structured wiki on top, and an opinionated workflow for getting from “we have a lot of notes” to “here's a recommendation, with its evidence and strategic constraints attached.” This page walks through the principles. For the technical mechanics, see how it works.

Where the data comes from, where it goes

Cultivate works on what your team has already written down — sales notes, research transcripts, decision memos, support tickets, and the team's curated reading of the technology landscape (release notes, vendor updates, library shifts the team chose to flag as relevant). First-party evidence your team authored or curated; nothing from outside your own practice. Self-hosted Winnow runs on your own infrastructure with no telemetry; the hosted version runs in an isolated workspace we don't read. Nothing trains a model.

Where the data comes from, where it goes →

Foundations

Your strategy is a constraint, not a suggestion. Five baseline files — product proposition, vision and strategy, lifecycle model, target markets, and constraints and assumptions — anchor every interpretation the LLM makes downstream. They're authored by you, not synthesised; they shape what counts as a tension, what counts as a valid opportunity, and what gets rejected as misaligned.

Five smaller files instead of one giant strategy doc is deliberate: it gives the LLM specific places to point when something contradicts the baseline (a market signal pulling against the lifecycle stage, say) and gives the team specific places to update when reality shifts. A first draft is enough to start; the foundations refine through the reconcile flow as decisions and delivery play out.
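The five-file split can be sketched as a fixed checklist the pipeline verifies before it runs. The file names below are illustrative, not Cultivate's actual layout:

```python
# Hypothetical sketch: the five baseline files as a fixed checklist, with a
# helper that reports which foundations are still missing. Names are
# illustrative assumptions, not the product's real file names.
FOUNDATION_FILES = [
    "product-proposition.md",
    "vision-and-strategy.md",
    "lifecycle-model.md",
    "target-markets.md",
    "constraints-and-assumptions.md",
]

def missing_foundations(present: set[str]) -> list[str]:
    """Return foundation files not yet authored, in canonical order."""
    return [f for f in FOUNDATION_FILES if f not in present]
```

Small, separately named files give the LLM (and the team) a stable address to point at when a signal contradicts one of them.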

See evidence-in mechanics → for the file layout.

Ingest

Raw evidence is sacred — once it lands, it never gets edited. Files in the raw folder are immutable. The LLM reads them, but the canonical text stays untouched as the source of truth. The synthesis layer writes wiki pages on top; the raw folder is the receipts.

Four input paths feed the same pipeline: drag-and-drop a markdown file, type it into the in-UI Create-evidence form, push it via an authenticated webhook from a workflow tool, or sync from a cloud folder. The synthesis step is identical regardless of how a file got there. Reinforcements and contradictions with prior beliefs get flagged — no silent rewrites; you review the diff before anything sticks.

See ingest mechanics → for the format / debouncer / rate-limit detail.

Discover

Opportunities are not insights, and they're not ideas. An insight is what you've learned (one signal, one source). A problem is a recurring pain. An opportunity is a specific gap between what customers need and what your product covers — backed by multiple independent pieces of evidence and explicitly tied to the baselines. The discipline of staying at the opportunity layer is what stops the team jumping straight from “I heard something” to “we should build X.”

Each opportunity carries cited evidence, a strategic-alignment note, and an explicit status. Without an opportunity, no recommendation is allowed to exist — that's the rule the next section enforces.
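The opportunity rule reads naturally as a predicate: multiple independent pieces of evidence (distinct sources) plus an explicit baseline tie. A hedged sketch, with field names assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    id: str
    source: str  # e.g. a sales call, a research transcript

@dataclass
class Opportunity:
    gap: str                   # the customer-need vs product-coverage gap
    evidence: list[Evidence]
    baseline_links: list[str]  # foundation sections this explicitly ties to
    status: str = "proposed"

def is_valid_opportunity(opp: Opportunity) -> bool:
    """Methodology rule as a predicate: at least two *independent* sources
    of evidence, and an explicit tie back to the baselines."""
    independent_sources = {e.source for e in opp.evidence}
    return len(independent_sources) >= 2 and len(opp.baseline_links) > 0
```

Two quotes from the same call would not pass: independence is counted by source, not by snippet.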

See discover mechanics → for selector / synthesis / propose-apply detail.

Capabilities

Capabilities are the lateral evidence stream. Customer evidence drives what a team should build; technology evidence changes how they'd build it — or whether a constraint that ruled an opportunity out last quarter still holds. The team's curated reading of release notes, vendor updates, and library shifts goes into a parallel raw stream and synthesises into Capability pages: a verb-imperative statement (“extract structured fields from scanned PDFs at <$0.01 per page”), a maturity rating (proposed / early / mature), and the existing opportunities the capability could relax a constraint for.

Two methodology guardrails hold the boundary tight. First, capabilities never generate ideas on their own: the synthesis step is forbidden from creating opportunities or recommendations from a capability, and capability pages can only attach to opportunities that already exist. Second, customer evidence still has to make the case for acting: a recommendation justified solely by “this technology now exists” is invalid by methodology rule. Capabilities widen what's feasible — they don't generate the reason to do anything in the first place.

Mature capabilities flow into recommend automatically as constraint-relaxers (see below); a separate opt-in flow lets a mature capability propose striking through a constraint paragraph it now relaxes (see Reconcile). The key methodology threshold across both consumers is maturity — only capabilities battle-tested in production with a settled cost / performance / reliability profile are eligible. A press release isn't enough.
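Both guardrails and the maturity threshold can be expressed as a few checks. A sketch under assumed names, not the product's data model:

```python
from dataclasses import dataclass, field
from enum import Enum

class Maturity(Enum):
    PROPOSED = "proposed"
    EARLY = "early"
    MATURE = "mature"

@dataclass
class Capability:
    statement: str  # verb-imperative, e.g. "extract structured fields..."
    maturity: Maturity
    attached_opportunities: list[str] = field(default_factory=list)

def can_relax_constraints(cap: Capability) -> bool:
    # Maturity threshold: only battle-tested capabilities reach recommend.
    return cap.maturity is Maturity.MATURE

def attach(cap: Capability, opportunity_id: str, existing_ids: set[str]) -> None:
    # Guardrail: capabilities attach to opportunities that already exist;
    # they never create one.
    if opportunity_id not in existing_ids:
        raise ValueError("capabilities cannot create opportunities")
    cap.attached_opportunities.append(opportunity_id)
```

The asymmetry is the point: `attach` takes the set of existing opportunity ids as an argument and can only select from it, never add to it.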

Recommend

A recommendation that doesn't cite its evidence isn't a recommendation — it's an opinion. Recommendations are generated only from opportunities; each one ties back to the cited evidence the opportunity was built on. Expected impact, risks, unknowns, and confidence are stated up-front. They're proposals, not facts.

Mature capabilities tied to the candidate opportunity flow in as constraint-relaxers — they can change how a recommendation is framed (cost profile, build / buy choice, feasibility envelope) but never serve as its primary justification. Customer evidence still has to make the case for acting; the capability just widens the answer.

The whole evidence-cited context that justified a recommendation can be handed straight to an AI builder via the export-to-prompt flow. A paste-ready brief bundles the recommendation, its parent opportunity, the cited evidence, and the five foundation pages — with foundations labelled as non-negotiable constraints so Claude Code, Cursor, Lovable, or any other AI builder treats them as guardrails rather than suggestions.
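The shape of such a brief can be sketched as a simple assembler that labels the foundations as non-negotiable. Function and section names are assumptions, not the product's actual template:

```python
def build_prompt_brief(recommendation: str,
                       opportunity: str,
                       evidence: list[str],
                       foundations: dict[str, str]) -> str:
    """Assemble a paste-ready brief. Foundations are labelled as
    non-negotiable constraints so an AI builder treats them as guardrails.
    A hypothetical sketch of the shape, not the real export."""
    parts = [
        "# Recommendation\n" + recommendation,
        "## Parent opportunity\n" + opportunity,
        "## Cited evidence\n" + "\n".join(f"- {e}" for e in evidence),
        "## Foundations (NON-NEGOTIABLE CONSTRAINTS)",
    ]
    for name, text in foundations.items():
        parts.append(f"### {name}\n{text}")
    return "\n\n".join(parts)
```

Because the brief carries the opportunity and its citations, the AI builder receives the reason for the work, not just the work item.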

See recommendations-into-AI-builders mechanics → for the export-to-prompt / MCP / gateway integration surfaces.

Reconcile

Baselines change only when reality changes. Exactly two triggers qualify: a decision has been documented (you logged it to the decisions folder), or delivery has changed reality (you logged a shipped outcome). Sales notes, customer feedback, and competitor intel are not sufficient — those flow through ingest and discover. A speculative belief shift never reaches the baseline; observable reality does.

Reconcile proposes the foundation edits as a sidecar — never silently. You review each one and either apply or discard. Every applied update carries a dated changelog entry naming the artifact that triggered it; historical statements are annotated as superseded, not deleted.

An opt-in third lane: a mature capability whose evidence shows a constraint paragraph no longer holds can propose striking the paragraph through. Same propose / apply two-step, same human approval gate. Capability-triggered baseline edits may only target the constraints foundation — vision, lifecycle, target markets, and product proposition still require decisions or delivery as triggers. Tech maturity doesn't get to rewrite strategy.
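The reconcile gate across all three lanes reduces to a small predicate. A sketch, with trigger names and the constraints file name assumed for illustration:

```python
VALID_TRIGGERS = {"decision", "delivery", "capability"}

def may_edit_baseline(trigger: str, target_file: str) -> bool:
    """Reconcile gate as a predicate: decisions and delivery may touch any
    foundation; a mature capability may only target the constraints file.
    Names are illustrative assumptions, not the product's real schema."""
    if trigger not in VALID_TRIGGERS:
        # Sales notes, feedback, competitor intel: not triggers.
        return False
    if trigger == "capability":
        return target_file == "constraints-and-assumptions.md"
    return True
```

Even when the predicate passes, the edit is only a proposal; the human apply / discard step sits after it.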

Why opportunity-first

The discipline of not jumping to solutions. Most product processes jump from “I heard a customer say something” to “we should build X.” Cultivate refuses to. A signal becomes an insight, an insight becomes a problem, a problem becomes an opportunity — and only an opportunity earns a recommendation.

The alternative is shipping features whose success criteria you can't articulate. With an opportunity in the loop, every recommendation has a reason it should exist that traces back to evidence — and the opportunity layer is where humans push back, validating or deferring or discarding before any solution work begins.

Why baseline files are sacred

You don't get to rewrite strategy because you're tired of it. Foundations propagate. If they shifted on the strength of any single new sales note, one weird Sunday-afternoon LLM call could drift the team's strategy without anyone noticing. So they don't — they change only via the reconcile flow above, gated by a documented decision or observable delivery, and only after a human reviews the proposed edit.