Frictionless SaaS, Chapter 6: The Activation Event - The One Metric That Predicts Everything Else
This is the seventh post in the Frictionless SaaS blog series, and the first of Part III — Activation: Delivering First Value Fast. Part II was about onboarding design. Now we turn to the question every founder secretly wants answered: how do I know if my users have actually “gotten it”? The answer lives in a single, precisely-defined event that predicts almost everything else about your product.
The Most Misunderstood Metric in SaaS
Ask five founders to define “activation” and you’ll get five different answers. “They logged in.” “They used a feature.” “They came back the second day.” “They seemed engaged.” Every one of these is close enough to sound reasonable and vague enough to be useless. You can’t optimize a feeling. You can’t A/B test “seemed engaged.” You can’t build a roadmap around “they used a feature.”
Chapter 6 of Frictionless SaaS makes a strong claim, and I think it’s the right one:
Activation is not a moment. It’s a specific, nameable, measurable event — and it’s the single most important metric in your product’s early stage.
Not MAU. Not DAU. Not NPS. Not feature adoption. Not revenue. In the early stage, the percentage of signups who reach their activation event is the metric from which everything else in your business follows. Move it by 10 points and your retention moves, your revenue moves, your word-of-mouth moves. It is the root of the tree. Everything else is branches.
The problem is that most teams don’t know what their activation event actually is. They have a vague idea. The vague idea is worse than no idea, because it gives the team the illusion that they’re optimizing something when they’re not.
The Activation Event Framework
The chapter walks through a disciplined way of finding, defining, and measuring your activation event. Not a single-line definition with a shrug, but a real framework.
Step 1: Find it in the data
Start with two cohorts you probably already have: users who are still active at day 30, and users who churned after day 1. Look at what the retained users did in their first session and first three days. Look at what the churned users didn’t do.
The action that retained users have in common, and churned users don’t, is almost always your activation event. Some typical shapes it takes:
- Team collaboration tool: “Sent a message” or “created a project and invited a teammate.”
- Productivity tool: “Created a task and marked it complete.”
- Analytics tool: “Created a custom report.”
- Design tool: “Created a design and exported it.”
- CRM: “Created a contact and added notes.”
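The cohort comparison in Step 1 can be sketched in a few lines. This is a toy illustration, not the book's method: the action names, the data, and the lift heuristic (share among retained minus share among churned) are my own stand-ins.

```python
from collections import Counter

def candidate_activation_events(retained_actions, churned_actions):
    """Score each early action by how much more common it is among
    retained users than among churned users (a simple lift heuristic).
    Each argument is a list of per-user action lists from the first
    session / first three days."""
    n_ret, n_chu = len(retained_actions), len(churned_actions)
    ret_counts = Counter(a for acts in retained_actions for a in set(acts))
    chu_counts = Counter(a for acts in churned_actions for a in set(acts))
    scores = {}
    for action in set(ret_counts) | set(chu_counts):
        ret_share = ret_counts[action] / n_ret
        chu_share = chu_counts[action] / n_chu
        scores[action] = ret_share - chu_share
    # Highest-scoring actions are your activation-event candidates.
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical first-3-day action logs:
retained = [["login", "create_project", "create_task"],
            ["login", "create_project", "create_task", "invite"]]
churned = [["login"], ["login", "create_project"]]
print(candidate_activation_events(retained, churned))
```

In this toy data, `create_task` surfaces at the top: every retained user did it and no churned user did, which is exactly the "retained users have it in common, churned users don't" shape the chapter describes.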
Notice the pattern: the activation event is almost never “logged in” (too easy — no commitment) and almost never “upgraded to premium” (too late — activation already happened). It’s the action right in the middle. The one that shows the user gets it.
Step 2: Define it with ruthless precision
This is the step most teams botch. Don’t say “created a project.” Say “created a project with a name and at least one task.” Don’t say “used a feature.” Say “created a report with at least two custom fields.”
Precision matters because you’re going to be measuring this constantly, and every member of your team needs to be measuring the same thing. A fuzzy activation event produces fuzzy optimization. A precise activation event produces a roadmap everyone can argue about honestly.
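A precise definition is one you can write down as a predicate. A minimal sketch, using a hypothetical `Project` model and the "named project with at least one task" threshold from the example above:

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    name: str = ""
    tasks: list = field(default_factory=list)

def is_activated(project):
    """Precise definition: 'created a project with a name and at
    least one task' -- not the fuzzy 'created a project'."""
    return bool(project.name.strip()) and len(project.tasks) >= 1

print(is_activated(Project(name="Launch", tasks=["write copy"])))  # True
print(is_activated(Project(name="Launch")))                        # False: no task
print(is_activated(Project(tasks=["orphan task"])))                # False: no name
```

If the definition can't be expressed this mechanically, it isn't precise enough to instrument, and two people on your team will silently measure different things.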
Step 3: Separate correlation from causation
Here’s the trap the chapter is careful about, and it’s one most teams walk into: just because retained users created a task doesn’t mean the task creation caused the retention. Maybe they were already committed users, and the task creation was just evidence of that commitment.
To tell the difference, the book suggests a simple experiment: build an onboarding flow that nudges uncommitted users toward the activation action. If those guided users retain at higher rates than similar users who didn’t get the nudge, the action is causal. If they don’t, it’s just a symptom, and you’re optimizing the wrong thing. This distinction changes your entire strategy, which is why the chapter insists you don’t skip it.
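The readout of that nudge experiment is just a comparison of retention rates between the nudged group and the control group. A minimal sketch (the numbers are invented, and a real readout would add random assignment and a significance test before acting on the lift):

```python
def retention_lift(nudged, control):
    """nudged / control: lists of booleans, True = retained at day 30.
    Returns (nudged_rate, control_rate, absolute_lift)."""
    r_n = sum(nudged) / len(nudged)
    r_c = sum(control) / len(control)
    return r_n, r_c, r_n - r_c

nudged = [True] * 40 + [False] * 60   # 40% retained with the onboarding nudge
control = [True] * 25 + [False] * 75  # 25% retained without it
r_n, r_c, lift = retention_lift(nudged, control)
print(f"nudged {r_n:.0%}, control {r_c:.0%}, lift {lift:+.0%}")
```

A positive, significant lift is the evidence that the action is causal; a flat result means you found a symptom.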
Step 4: Measure the two numbers that matter
Once you’ve defined the event, two metrics become your north stars:
- Activation rate — what percentage of signups reach the event within seven days. Below 10% is an emergency. 30–50% is the target range for early-stage products. Above 50% means you have something special — protect it.
- Time to activation — the median time from signup to reaching the event. Under 5 minutes is the goal. At 30 minutes you have friction. At 2 hours you have a lot of friction and most users never arrive.
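Both numbers fall out of two timestamp tables: when each user signed up, and when (if ever) they fired the activation event. A sketch using Python's standard library, with invented data:

```python
from datetime import datetime, timedelta
from statistics import median

def activation_metrics(signups, activations, window=timedelta(days=7)):
    """signups: {user_id: signup_time}; activations: {user_id: event_time}.
    Returns (activation_rate, median_time_to_activation or None),
    counting only activations inside the window."""
    times = []
    for user, signed_up in signups.items():
        activated_at = activations.get(user)
        if activated_at is not None and activated_at - signed_up <= window:
            times.append(activated_at - signed_up)
    rate = len(times) / len(signups)
    return rate, (median(times) if times else None)

t0 = datetime(2026, 1, 1)
signups = {"a": t0, "b": t0, "c": t0, "d": t0}
activations = {"a": t0 + timedelta(minutes=4),
               "b": t0 + timedelta(hours=2),
               "c": t0 + timedelta(days=9)}  # outside the 7-day window
rate, mtta = activation_metrics(signups, activations)
print(rate, mtta)
```

Note that user "c" activated but too late to count, which is the point of fixing a window: without one, the rate drifts upward forever and stops being comparable across cohorts.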
Measuring these requires instrumentation — a single “activation_event_complete” event fired at the exact moment the user crosses the line, then sliced by signup source, user segment, and cohort. The book goes into the specifics of how to set this up without creating analytics debt, and how to avoid the common mistake of defining it too generously so the numbers look better than they are. That’s the tactical part worth reading in the chapter itself.
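Stripped to its skeleton, the instrumentation amounts to: fire one `activation_event_complete` per user, at the moment they cross the line, carrying the dimensions you will slice by. The `Analytics` stub and the idempotency guard below are my own illustration, not the book's implementation:

```python
class Analytics:
    """Minimal in-memory stand-in for a real analytics client."""
    def __init__(self):
        self.events = []
    def track(self, user_id, event, properties):
        self.events.append({"user_id": user_id, "event": event, **properties})

def fire_activation(analytics, fired_users, user_id, source, segment, cohort):
    """Fire 'activation_event_complete' exactly once per user, tagged
    with the dimensions to slice by: signup source, segment, cohort."""
    if user_id in fired_users:   # idempotency guard: one event per user, ever
        return False
    analytics.track(user_id, "activation_event_complete",
                    {"signup_source": source, "segment": segment,
                     "cohort": cohort})
    fired_users.add(user_id)
    return True

ax, fired = Analytics(), set()
fire_activation(ax, fired, "u1", "organic", "solo", "2026-01")
fire_activation(ax, fired, "u1", "organic", "solo", "2026-01")  # ignored
print(len(ax.events))  # 1
```

The once-per-user guard is one concrete way to avoid the "defined too generously" trap the chapter warns about: re-fires would inflate the numerator.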
A note on segmenting
Not every user has the same activation event. A solo user on a team tool activates by completing a task. A manager on the same tool activates by inviting a teammate and assigning them work. A power user activates by setting up an advanced workflow. If you force one definition onto all three, your numbers get fuzzy and your optimization effort gets diluted. The chapter argues for segmented activation events and personalized onboarding paths for each — more sophisticated, but the payoff in activation rate is large enough to be worth the work.
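Segmented definitions are easiest to keep honest when each one is a named predicate. A sketch with hypothetical segment names and thresholds matching the examples above:

```python
# Hypothetical per-segment activation definitions for one team tool.
ACTIVATION_BY_SEGMENT = {
    "solo":    lambda u: u.get("tasks_completed", 0) >= 1,
    "manager": lambda u: (u.get("teammates_invited", 0) >= 1
                          and u.get("tasks_assigned", 0) >= 1),
    "power":   lambda u: u.get("workflows_created", 0) >= 1,
}

def is_activated_for_segment(user):
    """Apply the activation definition that matches the user's segment."""
    check = ACTIVATION_BY_SEGMENT[user["segment"]]
    return check(user)

print(is_activated_for_segment({"segment": "solo", "tasks_completed": 2}))       # True
print(is_activated_for_segment({"segment": "manager", "teammates_invited": 1}))  # False
```

Keeping the definitions in one table also gives you a single place to argue about thresholds when the numbers look suspicious.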
The Micro-Success Ladder
Here’s the other half of the chapter, and the part I think is most underappreciated in practice.
Your activation event is a single action. But users don’t teleport to that action. They get there through a series of smaller steps, and each of those smaller steps is a place where they can either feel like they’re winning or feel like they’re stuck. The book calls this the Micro-Success Ladder, and its core insight is simple:
The difference between a product with 10% activation and a product with 40% activation is usually not the final step. Both products require the same activation action. The difference is whether the path to that action is paved with micro-successes or paved with friction.
A project management tool with an activation event of “created a project with at least one task” has a ladder that looks something like this: (1) signed up, (2) saw the welcome screen and understood what the tool is, (3) clicked New Project, (4) named the project, (5) saw the empty project, (6) created a task, (7) saw the task appear in the project. Seven rungs. Seven places where the user can take a small win — or stumble.
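One way to make the ladder operational is to compute, for each user, the furthest rung they reached without skipping: where they stopped is where they got stuck. A sketch with hypothetical event names for the seven rungs:

```python
# The seven rungs from the example, in order (hypothetical event names).
LADDER = ["signed_up", "saw_welcome", "clicked_new_project",
          "named_project", "saw_empty_project", "created_task",
          "saw_task_in_project"]

def furthest_rung(user_events):
    """Walk the ladder in order and stop at the first rung the user
    never reached; the return value (0..7) is their progress."""
    progress = 0
    for rung in LADDER:
        if rung not in user_events:
            break
        progress += 1
    return progress

print(furthest_rung({"signed_up", "saw_welcome", "clicked_new_project"}))  # 3
```

Bucketing all users by this number is what turns the ladder into the funnel chart below.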
Designing each rung
The chapter’s principle for designing rungs is that each one should do three things: make forward progress obvious, make the next step obvious, and give the user positive feedback that they just accomplished something. Not confetti and a trophy — just the basic psychological signal that says “yes, you did that, it worked, here’s what changed.”
The opposite — a user clicks something and nothing visible happens — is one of the most common and most lethal mistakes in SaaS first sessions. The user assumes they did it wrong. They try again. Nothing. They leave. The feature worked perfectly on the backend, and you still lost the user.
Visualizing the funnel
The ladder becomes useful the moment you turn it into a funnel chart. A stylized example from the book:
Sign Up (100) → Welcome Screen (95) → Click Create (80) → Name Project (75) → Create Task (60) → Activation (50)
Look at that funnel for thirty seconds. The two biggest drops are welcome screen to first click (95 → 80) and naming the project to creating a task (75 → 60): each one loses 15 points of signups in a single step, roughly 16% and 20% of the users who reached it. Those are your friction points. That's where your next week of design work should go: not a redesigned dashboard, not a new integration, not a marketing tweak. The biggest leaks in your ladder.
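Finding the biggest leak is a short calculation once the funnel counts exist. One subtlety worth encoding: ranking by absolute points lost and ranking by share of users who reached the step can disagree, and the choice changes which leak you fix first.

```python
STEPS = ["Sign Up", "Welcome Screen", "Click Create",
         "Name Project", "Create Task", "Activation"]
COUNTS = [100, 95, 80, 75, 60, 50]   # stylized funnel from the chapter

def leaks(steps, counts):
    """Per-transition leak, both in absolute points and as a share of
    the users who reached the earlier step."""
    out = []
    for i in range(len(counts) - 1):
        dropped = counts[i] - counts[i + 1]
        out.append((f"{steps[i]} -> {steps[i + 1]}",
                    dropped, dropped / counts[i]))
    return out

for name, dropped, share in leaks(STEPS, COUNTS):
    print(f"{name}: -{dropped} ({share:.0%} of those who got there)")
```

In these stylized numbers, Name Project → Create Task loses the largest share of the users who reached it (20%), even though in absolute points it ties with Welcome Screen → Click Create.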
Most teams, when they do this exercise for the first time, discover that their biggest leak is nowhere near where they expected it to be. That discovery is worth more than a month of feature work. The chapter goes into the playbook for how to fix each type of leak (confusion leaks, motivation leaks, technical leaks, interruption leaks), and those playbooks are the part of the chapter worth reading in full.
A One-Hour Exercise for This Week
- Write down, in one sentence with precise thresholds, what you believe your activation event is. (“Created a project with a name and at least one task,” not “used the product.”)
- Pull your last 100 signups. For each one, check whether they completed that event within seven days. Now you have an activation rate.
- List the 5–8 micro-successes a new user has to walk through to reach that event.
- For each rung, estimate (or measure) the completion rate. Circle the biggest drop.
- That’s your top priority. Everything else on the roadmap waits.
In the next post, we’ll continue through Part III with Chapter 7 — the “Aha Moment” engineering problem: how to reliably manufacture the instant where a user stops being skeptical and starts being yours.
📖 Want the Full Activation Framework?
This post gives you the shape of the Activation Event Framework and the Micro-Success Ladder. The book goes further: how to segment activation events by user type, how to instrument them without creating analytics debt, how to run the causal experiment that separates real activation drivers from symptoms, the specific playbooks for fixing each kind of funnel leak, and the full activation architecture in the rest of Part III that builds on this chapter.
If you’re going to bet one metric on your early-stage product’s future, this is the chapter that tells you exactly how to pick it.
— Sho Shimoda
Based on Frictionless SaaS: Designing Products Users Discover, Adopt, and Never Leave (2026).