A founder was spending Monday mornings in spreadsheet hell
Every Monday at 7am, the CEO of a Series A SaaS company (roughly 40 employees, $4M ARR) would open three browser tabs: Stripe for revenue, HubSpot for pipeline, and Google Sheets for a manual tracker that tied it all together.
Three hours later, she would paste a summary into Slack. By then, the leadership team had already started their week without aligned numbers. Worse, the manual process introduced inconsistencies. Revenue figures sometimes included refunds, sometimes did not. Churn numbers were calculated differently week to week depending on which Stripe export filter she remembered to apply.
This post walks through exactly what I built, how long it took, and what the founder got for $5K.
What the founder actually needed
Before writing any code, I spent two hours on a call with the CEO and her VP of Sales. The goal was not to build a dashboard or a data warehouse. The goal was to replace a specific manual workflow with something reliable.
We identified five KPIs that mattered for the Monday brief:
- Weekly recurring revenue (WRR) - net new MRR for the week, with expansion and contraction separated
- Pipeline coverage - weighted pipeline value divided by quarterly target remaining
- Trial-to-paid conversion rate - 14-day rolling window
- Weekly active users (WAU) - unique users performing a core action (pulled from the team's existing Mixpanel export in Google Sheets)
- Net revenue retention - trailing 12-month cohort calculation
The brief needed to show each metric's current value, the prior week's value, and the trend (up, down, flat). It needed to land in the #leadership Slack channel every Monday at 8am Eastern, before the 9am leadership standup.
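The "up, down, flat" indicator needs a tolerance band, or normal noise gets reported as movement. Here is a minimal sketch of that logic; the 0.5% flat band is an illustrative threshold, not the one used in the project:

```python
def trend(current: float, prior: float, flat_band: float = 0.005) -> str:
    """Classify week-over-week movement as "up", "down", or "flat".

    flat_band is the relative change below which a metric counts as flat.
    """
    if prior == 0:
        return "flat" if current == 0 else "up"
    change = (current - prior) / abs(prior)
    if change > flat_band:
        return "up"
    if change < -flat_band:
        return "down"
    return "flat"
```

The same function serves all five KPIs, which keeps the brief's trend arrows consistent across metrics.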
The technical build: five days, start to finish
Day 1: Source connections and staging models
I set up connections to two primary data sources using existing API credentials the team already had:
- Stripe - subscriptions, invoices, and refunds via the Stripe API, landed into a `raw_stripe` schema
- HubSpot - deals and pipeline stages via HubSpot's v3 API, landed into `raw_hubspot`
Product usage data (WAU) came from an existing Mixpanel CSV export the team was already dropping into Google Sheets weekly. Rather than building a Mixpanel API integration (which would have added 1-2 days), I pulled from the sheet via a simple Python script. Good enough for weekly granularity.
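The WAU aggregation itself is simple once the CSV is in hand. A sketch of that step, using Python's standard `csv` module; the column names (`user_id`, `iso_week`) are assumptions about the export format, not the team's actual schema:

```python
import csv
import io

def weekly_active_users(csv_text: str, week: str) -> int:
    """Count distinct users in a given ISO week from a Mixpanel-style
    CSV export. Assumes user_id and iso_week columns (hypothetical)."""
    users = set()
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["iso_week"] == week:
            users.add(row["user_id"])
    return len(users)

# Duplicate rows for the same user collapse to one, so repeat events
# don't inflate the count.
sample = "user_id,iso_week\nu1,2026-W15\nu2,2026-W15\nu1,2026-W15\nu3,2026-W14\n"
```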
Each source got dbt staging models that handled the messy transformation work: timezone normalization, deduplication, and type casting. For example, Stripe timestamps come in UTC epoch seconds, while HubSpot sends ISO 8601 strings with timezone offsets. The staging layer normalized everything to UTC timestamps in PostgreSQL.
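The timezone normalization in the staging layer was done in SQL, but the two conversions look like this when sketched in Python:

```python
from datetime import datetime, timezone

def from_stripe_epoch(epoch_seconds: int) -> datetime:
    """Stripe timestamps arrive as UTC epoch seconds."""
    return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)

def from_hubspot_iso(iso_string: str) -> datetime:
    """HubSpot sends ISO 8601 strings with an offset; convert to UTC."""
    return datetime.fromisoformat(iso_string).astimezone(timezone.utc)
```

Normalizing at the staging layer means every downstream model can assume UTC and never touch timezone logic again.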
Day 2: Mart models and metric calculations
With clean staging data in place, I built the models that calculated the actual KPIs:
- `int_revenue__weekly_summary` - calculated WRR, expansion, contraction, and churn from Stripe subscription events
- `int_pipeline__coverage` - joined HubSpot deals with quarterly targets stored in a dbt seed file
- `mart_kpi__weekly_brief` - the final model pulling from all sources, producing one row per week with all five KPIs, their prior-week values, and trend indicators
I used dbt's incremental materialization for the Stripe models since the subscription event history was large enough that full refreshes would have been slow on their PostgreSQL instance.
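The core idea of an incremental model is to process only rows newer than what has already been loaded. Sketched in Python (dbt expresses the same thing in SQL, typically as `where event_at > (select max(event_at) from {{ this }})`):

```python
def incremental_merge(existing: list[dict], source: list[dict]) -> list[dict]:
    """Append only source rows newer than the latest timestamp already
    loaded, instead of rebuilding the whole table each run."""
    # High-water mark: the newest event already in the target table.
    high_water = max((r["event_at"] for r in existing), default=0)
    return existing + [r for r in source if r["event_at"] > high_water]
```

On a multi-year subscription event history, this turns a full-table rebuild into a scan of one week's new events.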
Day 3: Testing and data quality
This is where most quick-and-dirty reporting projects fall apart, and where I spent a full day to make sure it would not.
I added dbt tests at every layer:
- Source freshness tests - `dbt source freshness` configured to warn if any source was more than 6 hours stale and error at 12 hours
- Schema tests - `not_null`, `unique`, and `accepted_values` on all primary keys and categorical columns
- Custom data tests - a test asserting that weekly revenue never deviates more than 30% from the prior week (catches extraction bugs), and a test confirming that the sum of expansion + contraction + new equals the net change (catches calculation bugs)
In total, 18 tests ran on every pipeline execution. If any test failed, the pipeline would halt before producing the brief.
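The two custom tests are SQL assertions run by dbt, but the logic is easy to show in pure Python. The field names and the 30% threshold below mirror the description above; everything else is illustrative:

```python
def check_revenue_sanity(this_week: dict, last_week: dict) -> list[str]:
    """Return failure messages for the two custom checks; an empty
    list means both pass (mirroring dbt's pass/fail semantics)."""
    failures = []
    # Extraction-bug guard: weekly revenue should not swing more than 30%.
    if last_week["wrr"] > 0:
        deviation = abs(this_week["wrr"] - last_week["wrr"]) / last_week["wrr"]
        if deviation > 0.30:
            failures.append(f"WRR deviated {deviation:.0%} week over week")
    # Calculation-bug guard: components must sum to the net change.
    components = this_week["new"] + this_week["expansion"] + this_week["contraction"]
    if abs(components - this_week["net_change"]) > 0.01:
        failures.append("expansion + contraction + new != net change")
    return failures
```

Halting on failure is the point: a brief that does not arrive prompts a question, while a brief with wrong numbers gets acted on.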
Day 4: Orchestration and delivery
I configured a cron-triggered Python script (the team did not have Airflow or any orchestration tool, and standing one up was out of scope) that ran every Monday at 7:30am Eastern on their existing EC2 instance:
- Extract - pull from Stripe and HubSpot APIs, load the Google Sheets CSV
- Transform - run `dbt build --select tag:weekly_brief`, which executes staging, intermediate, and mart models plus all associated tests
- Deliver - query `mart_kpi__weekly_brief` for the latest week, format it into a Slack message, and post via the Slack webhook
The Slack message was designed to be scannable in under 30 seconds. Each KPI showed the metric name, current value, prior week value, and a simple indicator (up, down, flat). At the bottom, a link to a shared Google Sheet for anyone who wanted to dig deeper.
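The delivery step reduces to building a text payload and POSTing it to Slack's incoming-webhook endpoint, which accepts a JSON body of the form `{"text": ...}`. A minimal sketch using only the standard library; the exact layout of the real message differs:

```python
import json
import urllib.request

def format_brief(kpis: list[tuple[str, str, str]]) -> str:
    """Build the scannable message body: one line per KPI with
    name, current value, and week-over-week change."""
    lines = ["*Weekly KPI Brief*"]
    for name, value, change in kpis:
        lines.append(f"{name}: {value} ({change})")
    return "\n".join(lines)

def post_to_slack(webhook_url: str, text: str) -> None:
    """POST the brief to a Slack incoming-webhook URL."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

Keeping the formatter separate from the HTTP call makes the message layout testable without hitting the webhook.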
Weekly KPI Brief - Week of April 7, 2026
Revenue
WRR: $82.3K (+4.2% from last week)
Net Revenue Retention: 112% (flat)
Pipeline
Coverage: 3.2x (target: 3.0x)
Product
WAU: 1,847 (+6.1%)
Trial Conversion: 18.2% (+1.4%)
Day 5: Documentation and handoff
The final day covered everything that makes the difference between a project that works for a month and one that works for a year:
- Documentation - a README in the repo covering the pipeline architecture, how to add a new KPI, how to change the delivery schedule, and troubleshooting steps for common failures
- Runbook - step-by-step instructions for the most likely failure scenarios (API credential expiration, source schema changes, server restart)
- Walkthrough - a 45-minute screen-share with the CEO and her head of engineering, walking through the codebase, the cron job, and the Slack integration
What the founder actually got
The deliverables for $5K:
- 8 dbt models (3 staging, 2 intermediate, 1 mart, 2 seed/utility)
- 18 automated data quality tests
- A cron-based pipeline with extract, transform, and deliver steps
- Slack integration with a formatted weekly message
- Documentation and runbook in the project repo
- 30 days of post-delivery support (I fixed one minor issue during this period: a Stripe API pagination bug that appeared when invoice volume crossed a threshold)
The time savings were immediate: 3+ hours per week returned to the founder. Over a year, that is roughly 150 hours of CEO time freed up.
But the less obvious benefit was consistency. Before the pipeline, different people in the company quoted different numbers for the same metric. The VP of Sales had one churn number, the CEO had another, and the board deck had a third. After the pipeline shipped, everyone referenced the same Monday brief.
Why this matters for founders weighing build vs. wait
I talk to founders regularly who know they have a reporting problem but assume the fix requires hiring a data engineer, buying an expensive BI tool, or waiting until they are "big enough" for a real data stack.
The reality is that a focused engagement - one specific workflow, one clear deliverable, one fixed price - can solve the immediate pain in days rather than months. You do not need a full data team to stop wasting your Monday mornings.
If this sounds like a problem you are dealing with, book a call and I can scope a sprint like this one for your company in 30 minutes.
FAQ
How long does it take to build an automated KPI pipeline?
For a focused project with 3-5 KPIs and 2-3 data sources, the typical timeline is 5-7 business days. The main variables are data source complexity (clean APIs vs. messy exports), the number of calculated metrics that require business logic validation, and whether the team has existing infrastructure. The project described in this post took 5 days because the team had well-documented API credentials and a server to run the pipeline on.
What does a dbt-based KPI pipeline cost for a startup?
A fixed-scope KPI automation project typically runs $5K-$8K depending on the number of data sources and the complexity of the metric calculations. This includes extraction setup, dbt models, automated testing, orchestration, Slack or email delivery, documentation, and 30 days of support.
Can my existing engineering team maintain a dbt pipeline?
Yes, and that is a core design principle. dbt models are SQL files with Jinja templating, which means any engineer or analyst comfortable with SQL can read, modify, and extend them. During the handoff, I walk the team through the codebase and leave a runbook covering common maintenance tasks: adding a new metric, updating a source connection, and troubleshooting test failures. Most teams handle ongoing changes independently after the 30-day support period.
What happens if a data source API changes or breaks?
This is why the testing layer matters. Every pipeline I build includes source freshness checks and schema tests that catch upstream changes before they corrupt downstream metrics. If Stripe changes their API response format or HubSpot deprecates an endpoint, the pipeline fails loudly at the extraction or staging layer rather than silently producing wrong numbers. The runbook covers the most common failure scenarios, and during the 30-day support window, I handle any issues that come up.