“Your overall retention rate is 40% - but does that number describe anyone in your product accurately?”
Most metrics lie to you in a very specific way: they blend together users who joined yesterday with users who joined two years ago, producing averages that describe nobody accurately. Your overall retention rate might be 40%, but that number could mean your earliest users retain at 65% while your newest users retain at 15%. Those are two completely different stories with completely different implications, hidden behind a single number.
Cohort analysis solves this problem by grouping users based on when they started or what they did, and then tracking each group independently over time. Instead of asking “what is our retention rate?” you ask “what is the retention rate for users who signed up in January versus February versus March?” The answers to those questions tell you something that aggregated metrics never can: whether your product is actually getting better.
This guide covers the full practice of cohort analysis in KISSmetrics - from the conceptual foundation through practical setup to advanced applications that can transform how your team thinks about growth, retention, and product quality.
What Cohorts Are and Why They Matter
A cohort is a group of users who share a common characteristic, typically defined by when they first did something. The most common cohort definition is acquisition date: all users who signed up in the first week of January form one cohort, all users who signed up in the second week form another, and so on. But cohorts can also be defined by behavior: all users who completed onboarding, all users who made a purchase, or all users who came from a specific marketing campaign.
The Problem with Aggregate Metrics
To understand why cohorts matter, consider a simple example. Imagine your product has 10,000 active users and a monthly retention rate of 45%. That seems concerning, but is it getting better or worse? You cannot tell from the aggregate number alone. If you added 3,000 new users last month and most of them have not yet had time to churn, the 45% might actually represent an improvement. If your growth has slowed and your user base is increasingly composed of long-tenured users, 45% might represent a deterioration.
Aggregate metrics are a weighted average of all your cohorts. When your growth rate changes, the composition of that average shifts, making it unreliable as a measure of underlying product quality. This is sometimes called Simpson’s paradox in statistics: a trend that appears in the aggregate can disappear or even reverse when you look at individual groups.
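The weighted-average effect is easy to see with a toy calculation. The numbers below are illustrative, not from any real product: two cohorts keep exactly the same per-cohort retention, yet the aggregate "improves" purely because the mix of old and new users shifts.

```python
# Illustrative numbers: two cohorts with very different retention,
# blended into one aggregate rate.
old_users, old_retention = 4_000, 0.65   # long-tenured cohort
new_users, new_retention = 6_000, 0.15   # recent cohort

total = old_users + new_users
aggregate = (old_users * old_retention + new_users * new_retention) / total
print(f"Aggregate retention: {aggregate:.0%}")  # 35% - describes neither group

# Same per-cohort retention, but slower growth shifts the composition:
old_users, new_users = 7_000, 3_000
aggregate = (old_users * old_retention + new_users * new_retention) / (old_users + new_users)
print(f"Aggregate after mix shift: {aggregate:.0%}")  # 50% - no product change at all
```

Nothing about the product changed between the two calculations; only the growth rate did. That is exactly the distortion cohort analysis removes.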
Time-Based Cohorts
Time-based cohorts group users by when they first appeared: when they signed up, when they made their first purchase, or when they first performed some other significant action. The power of time-based cohorts is that they let you compare the behavior of users who started at different times, holding the “age” of the user constant. You can ask: how did the users who signed up in March behave in their first month, compared to the users who signed up in April in their first month? This comparison reveals whether changes to your product, onboarding, or marketing are actually affecting user behavior.
Behavioral Cohorts
Behavioral cohorts group users by what they did rather than when they started. All users who completed onboarding form one cohort; all users who did not form another. All users who engaged with feature X versus those who did not. Behavioral cohorts are extremely powerful for identifying which actions drive retention. If users who complete onboarding retain at 70% while users who skip it retain at 20%, you have identified a critical lever for improving your overall retention rate.
Setting Up Cohort Reports
Building a cohort report in KISSmetrics requires three decisions: how to define the cohort, what metric to track, and what time granularity to use. Each decision shapes the story the report tells.
Defining the Cohort
Start by choosing the event that defines when a user enters the cohort. For most products, this is the sign-up event. But it could be the first purchase, the first login after a product change, or the date a user was added to a specific campaign. The defining event should represent the starting point of the experience you want to measure.
In the KISSmetrics reports interface, you select the cohort-defining event and the time period for grouping. Weekly cohorts work well for products with high engagement frequency. Monthly cohorts work better for products with longer usage cycles. Choose the granularity that matches how often your users typically interact with your product.
Choosing the Retention Metric
The retention metric defines what counts as “retained.” For a SaaS product, this is often “logged in” or “performed core action.” For an e-commerce business, it might be “made a purchase.” For a content platform, it could be “consumed content.”
Choose a metric that represents genuine engagement, not passive activity. “Visited site” is too broad - a user might visit once, see nothing relevant, and leave. “Completed a workflow” or “sent a message” or “generated a report” better represents the user actually getting value from your product.
Setting the Time Frame
Decide how far back and how far forward you want your cohort report to look. Looking at cohorts from the past six months with twelve weeks of follow-up data gives you a good balance between recency and completeness. Older cohorts will have more follow-up data, letting you see long-term retention. Newer cohorts show whether recent changes are having an effect, even if you only have a few weeks of data.
Reading Retention Curves
A cohort retention chart is typically presented as a table or a set of curves. Each row represents a cohort (for example, users who signed up in week one, week two, week three). Each column represents a time period after the cohort’s defining event (week zero, week one, week two). The values show the percentage of the cohort that was active during each period.
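If you want to see how such a table is assembled mechanically, here is a sketch in pandas. The event-log column names (`user_id`, `timestamp`) and the tiny dataset are assumptions for illustration, not a KISSmetrics export format.

```python
import pandas as pd

# Toy event log: each row is one user activity event.
events = pd.DataFrame({
    "user_id":   ["a", "a", "a", "b", "b", "c"],
    "timestamp": pd.to_datetime([
        "2024-01-01", "2024-01-08", "2024-01-15",
        "2024-01-03", "2024-01-17", "2024-01-09",
    ]),
})

# Cohort = the week of each user's first event.
first_seen = events.groupby("user_id")["timestamp"].transform("min")
events["cohort"] = first_seen.dt.to_period("W")
# Periods since the cohort-defining event, in whole weeks.
events["week"] = (events["timestamp"] - first_seen).dt.days // 7

# Rows: cohorts. Columns: weeks since first event.
# Values: unique active users, then normalized to a percentage of week zero.
active = events.pivot_table(index="cohort", columns="week",
                            values="user_id", aggfunc="nunique")
retention = active.div(active[0], axis=0)
print(retention)
```

Week zero is always 100% by construction, which is why the first column of a cohort table is usually uninteresting; the story starts in the columns that follow.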
The Shape of the Curve
Healthy retention curves share a common shape. There is always a steep initial drop - many users who sign up will never return. This is normal. The critical question is whether the curve flattens out. A curve that continues to decline steadily over time indicates a product that fails to create habitual usage. A curve that flattens after the initial drop indicates that users who survive the first few periods tend to stick around. The flattening point is sometimes called the “retention plateau,” and reaching one is the single most important milestone for any product.
Example: Week-4 Retention by Monthly Cohort
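A minimal sketch of the comparison this example describes. The January, March, and May figures match the improvement scenario discussed later in this guide; the February and April values are interpolated for illustration.

```python
# Illustrative week-4 retention for successive monthly cohorts.
week4_retention = {
    "January":  0.30,
    "February": 0.32,
    "March":    0.33,
    "April":    0.35,
    "May":      0.37,
}

# Print a simple text chart: one bar per cohort.
for month, rate in week4_retention.items():
    bar = "#" * round(rate * 100 / 2)
    print(f"{month:<9} {rate:.0%}  {bar}")
```

Read down the column: each successive cohort retains better at the same age, which is the signature of a product that is genuinely improving for new users.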
What Good Looks Like
Benchmarks vary dramatically by product type. A social network might aim for 30-day retention of 40% or higher. A SaaS tool might consider 80% monthly retention excellent. An e-commerce store might target a 20% repeat purchase rate within 90 days. The important thing is not the absolute number but the trajectory. A 25% retention rate that is improving by two percentage points per month is more promising than a 40% rate that is declining.
Reading Between the Rows
The most valuable information in a cohort table is not in any single row - it is in the comparison between rows. Look at the same column across multiple rows. For example, compare the “week four” retention for every monthly cohort. If each successive cohort has higher week-four retention, your product is improving for new users. If each successive cohort has lower retention, something is degrading - possibly product quality, possibly traffic quality, possibly both.
Comparing Cohorts Over Time
The power of cohort analysis becomes fully apparent when you compare cohorts across multiple time periods. This comparison answers the most fundamental question any product team can ask: is our product getting better for new users over time?
Improvement Tracking
Suppose your team ships a new onboarding flow in March. To measure its impact, compare the March cohort’s retention curve with February’s and January’s. If March shows higher retention at equivalent time points, the new onboarding is working. If not, it is not. This is a much cleaner signal than looking at aggregate retention, which is contaminated by the behavior of users who signed up before the change.
Regression Detection
Cohort comparison also catches regressions. If your May cohort performs worse than your April cohort at every time point, something changed in May that hurt the new user experience. Maybe a product update introduced a bug. Maybe a new marketing campaign is attracting lower-quality traffic. Maybe a competitor launched a compelling alternative. The cohort data raises the flag; your investigation determines the cause.
Seasonality Identification
Some businesses have predictable seasonal patterns in cohort quality. Users who sign up during the holidays may behave differently than users who sign up in the spring. By tracking cohorts over a full year, you can identify these patterns and adjust your expectations accordingly. This prevents you from misinterpreting seasonal variation as product improvement or degradation.
Identifying Trends in Your Product
Cohort analysis is not just about retention. It is a lens through which you can evaluate the overall trajectory of your product. Here are the key trends to watch.
Is Your Product Getting Better?
The most important trend is whether newer cohorts outperform older ones at equivalent time points. If your January cohort had 30% week-four retention, your March cohort had 33%, and your May cohort had 37%, your product is improving. This is the gold standard metric for product development effectiveness. Every feature you ship, every onboarding change you make, every bug you fix should, in aggregate, push this number up over time.
Is Your Acquisition Quality Stable?
If newer cohorts have worse initial retention but similar long-term retention, the problem is not your product - it is your acquisition. You are attracting users who are less likely to be interested in the first place. This is common when companies scale paid advertising: the initial audiences are well-targeted, but as you broaden targeting to increase volume, the quality of incoming users decreases. Understanding this distinction is critical for teams working on conversion rate optimization.
Are Your Engagement Features Working?
Compare the retention curves of users who engage with specific features versus those who do not. If users who use your reporting feature retain at 75% while users who do not retain at 35%, the reporting feature is a strong driver of retention. This insight should inform your onboarding (push users toward reporting early), your product roadmap (invest in making reporting better), and your marketing (highlight reporting in acquisition campaigns).
Using Cohorts for Churn Prediction
One of the most powerful applications of cohort analysis is predicting which users are likely to churn before they actually do. By studying the behavior of users who have already churned, you can identify patterns that predict future churn - and intervene to prevent it.
Identifying At-Risk Patterns
Look at the cohort data for users who eventually churned. What did their engagement look like in the weeks before they left? In most products, churned users show a characteristic pattern: declining login frequency, reduced feature usage, and eventually a final session followed by silence. The specific pattern varies by product, but there is almost always a detectable decline that precedes the final departure.
By quantifying this pattern - for example, “users whose weekly logins decline by more than 50% for two consecutive weeks have a 70% probability of churning within the next month” - you can build early warning systems that flag at-risk users before they are gone.
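The quoted rule translates directly into code. This is a sketch under stated assumptions: the input is a list of weekly login counts (oldest first), and "at risk" means two consecutive weeks each showing a greater-than-50% decline.

```python
def at_risk(weekly_logins):
    """Flag a user whose weekly logins drop by more than 50%
    for two consecutive weeks (the early-warning rule above)."""
    consecutive_drops = 0
    for prev, curr in zip(weekly_logins, weekly_logins[1:]):
        if prev > 0 and curr < prev * 0.5:
            consecutive_drops += 1
            if consecutive_drops >= 2:
                return True
        else:
            consecutive_drops = 0   # a stable week resets the streak
    return False

print(at_risk([10, 4, 1, 0]))   # True: two consecutive >50% drops
print(at_risk([10, 9, 8, 7]))   # False: gradual decline only
```

In practice you would tune the threshold and window against your own churned-user cohorts rather than taking 50% and two weeks as given.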
Intervention Strategies
Once you can identify at-risk users, you can intervene. Common interventions include personalized email campaigns highlighting features the user has not tried, in-app messages offering help or guidance, proactive customer success outreach, and special offers or incentives. The key is matching the intervention to the probable cause of disengagement. A user who is struggling with a feature needs help. A user who has outgrown your basic plan needs an upgrade path. A user who has found an alternative needs a compelling reason to stay.
KISSmetrics lets you build dynamic population segments based on behavioral criteria, so you can create an “at-risk” segment that automatically updates as users match the churn-prediction pattern. Pairing this with automated campaigns creates a system that works continuously without manual intervention. For a deeper dive on the metrics behind this, see our guide on SaaS product analytics.
Measuring Intervention Effectiveness
Use cohort analysis to measure whether your interventions work. Create cohorts of at-risk users who received the intervention and compare their retention to a control group of at-risk users who did not (or to historical at-risk users before the intervention existed). If the intervention improves retention by even a few percentage points, it can be worth enormous amounts of revenue over time.
Advanced Cohort Strategies
Once you have mastered the basics of cohort analysis, several advanced strategies can extract even deeper insights from your data.
Multi-Dimensional Cohorts
Combine time-based and behavioral cohort definitions. For example, look at “users who signed up in January AND completed onboarding” versus “users who signed up in January AND did not complete onboarding.” This multi-dimensional view reveals how different behaviors affect retention within the same time cohort, controlling for any external factors that might differ between time periods.
Revenue Cohorts
Instead of tracking retention as the cohort metric, track revenue. How much revenue does each cohort generate over time? Revenue cohorts reveal whether your monetization is improving alongside your retention. A cohort might retain well but generate declining revenue per user, indicating a pricing or upselling problem. Conversely, a cohort might have modest retention but excellent revenue expansion, indicating that your surviving users are increasingly valuable.
Cohort-Based Forecasting
Because cohort curves tend to follow predictable patterns, you can use them for forecasting. If your last five monthly cohorts all retained at roughly 35% after three months, you can reasonably predict that next month’s cohort will behave similarly. This makes revenue forecasting, capacity planning, and growth projections much more reliable than models based on aggregate metrics.
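A simple way to operationalize this: apply the observed retention curve to each future cohort and sum the survivors. The curve values and signup volume below are illustrative (the three-month figure echoes the ~35% example above), and cohorts older than the measured curve are assumed to hold at the plateau rate.

```python
# Illustrative retention curve: months 0-3 after signup.
retention_curve = [1.00, 0.60, 0.45, 0.35]

def forecast_active(new_signups_per_month, months):
    """Forecast total active users per future month, assuming every new
    cohort follows retention_curve and older cohorts stay at the plateau."""
    active = []
    for m in range(months):
        total = 0.0
        for cohort_age in range(m + 1):
            # Beyond the measured curve, assume the plateau holds.
            rate = retention_curve[min(cohort_age, len(retention_curve) - 1)]
            total += new_signups_per_month * rate
        active.append(round(total))
    return active

print(forecast_active(1_000, 6))
```

The model is only as good as the stability of the curve, which is why the forecast should be rebuilt whenever a new cohort deviates meaningfully from its predecessors.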
Cohorts for Feature Launches
When you launch a new feature, create a cohort of users who adopt it in the first week and compare their subsequent behavior to non-adopters and to users from before the feature existed. This gives you a clean read on whether the feature is driving the behavior changes you intended, separate from any confounding factors.
Key Takeaways
Cohort analysis is not an advanced technique reserved for data scientists. It is a fundamental practice that every product and growth team should adopt. Without it, you are navigating by aggregate metrics that can mislead you about the true health of your business. With it, you gain a clear, unbiased view of whether your product is improving, which users are at risk, and where your biggest opportunities lie.
The best product teams in the world obsess over their cohort curves. They celebrate when the curves shift upward and investigate urgently when they shift down. If you are not doing cohort analysis today, it is the single highest-leverage change you can make to your analytics practice. Start with a simple retention cohort, read it carefully, and let the data guide your next move.
Continue Reading
Funnel Reports: The Complete Guide to Building and Analyzing Conversion Funnels
Funnel reports show you exactly where customers drop off in your conversion process. This guide covers how to build effective funnels, interpret the data, and take action on what you find.
Revenue Attribution Reports: Connect Every Dollar to Its Source
Revenue attribution answers the most important marketing question: which activities actually generate revenue? This guide shows you how to set up and interpret revenue reports.
A/B Test Reports: Measure Experiment Impact Beyond the Landing Page
Most A/B test reports stop at conversion rate. KISSmetrics A/B test reports track the downstream impact on revenue and retention, showing whether your winner actually wins where it counts.