Period to date analysis

Hi
I am interested in building ‘period to date’ analyses using the “trend” visualization.

A “trend” object might contain this information, for example:

  • Month to date orders
  • variance vs previous period
  • variance vs previous year

The same logic would then apply to year to date, quarter to date, week to date, etc.
I see there is some old discussion on this topic (e.g. Trends should use "Year-to-date" or "period to date" comparisons · Issue #11978 · metabase/metabase · GitHub)

I understand there is no off-the-shelf solution, but I am pretty sure this can be solved with custom SQL.
What is the best way to build this?

For a ‘month to date’ comparison between this month and last, I came up with the query below. You can apply a Trend viz to that query and have it compare to Previous Month and it will compare the two months to date (as of the end of yesterday/start of today).

‘X to date’ type queries can be a bit complicated, as the precise specification differs depending on the range. For months you can use days of the month, but for years you can either use day-of-year (watch out for leap years) or a date bracketed between date_trunc('year', created_at) and date_trunc('day', created_at). You can also explicitly check date parts with EXTRACT().
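To make the leap-year caveat concrete, here is a small illustration in plain Python (not SQL): after February, the same calendar date lands on a different day-of-year in a leap year, so day-of-year comparisons drift, while bracketing by actual dates stays aligned.

```python
from datetime import date

# Day-of-year for March 1 differs between a leap year and a common year.
doy_2024 = date(2024, 3, 1).timetuple().tm_yday  # 2024 is a leap year
doy_2023 = date(2023, 3, 1).timetuple().tm_yday
print(doy_2024, doy_2023)  # 61 60 — the "same date" maps to different days-of-year

# Bracketing by date (the date_trunc approach) avoids the drift:
# a row belongs to "year to date" when year_start <= d < today,
# comparing real dates rather than ordinal positions.
today = date(2024, 3, 1)
year_start = today.replace(month=1, day=1)
in_ytd = year_start <= date(2024, 2, 29) < today
print(in_ytd)  # True
```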

In some cases you can use window functions with a RANGE qualifier (this is what the MBQL offset function does).

The trick is to output the desired dates as timestamps; the Trend viz can do the math after that. It never actually displays the date, so you can fool it into picking the intervals you want.

The query (using the Sample Database; PostgreSQL syntax is similar, but the interval spec differs slightly):

select
  date_trunc('month', created_at) as the_month,
  sum(total) as month_to_date
from orders
where
  -- This keeps it from calculating MTD for every month in the DB
  created_at > date_trunc('month', current_timestamp) - interval '1' MONTH
  and created_at < current_timestamp
  -- This limits to day-of-month-to-date
  and extract(day from created_at) < extract(day from current_timestamp)
group by date_trunc('month', created_at)
order by date_trunc('month', created_at)

This outputs 2 rows like:

Nov 1, 2025, 12:00 AM 46,300.96
Dec 1, 2025, 12:00 AM 45,733.35

The resultant viz:

Thanks! This is clever.

Yesterday I was thinking about this, and I think I will implement something like the following:

A date dimension table (dim_date) with one row per day and helper columns such as:

  • month_start, month_end

  • prev_month_same_dom (same day-of-month in previous month, clamped)

  • prev_year_same_dom (same day-of-month previous year, clamped)
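As a sketch of how those helper columns could be computed (the function names below are hypothetical; only the column names come from the post), here is the clamping logic in Python:

```python
import calendar
from datetime import date

def same_dom(d: date, year: int, month: int) -> date:
    """Same day-of-month in (year, month), clamped to that month's length."""
    last_day = calendar.monthrange(year, month)[1]
    return date(year, month, min(d.day, last_day))

def prev_month_same_dom(d: date) -> date:
    # Step back one month, wrapping December <- January across the year boundary.
    year, month = (d.year - 1, 12) if d.month == 1 else (d.year, d.month - 1)
    return same_dom(d, year, month)

def prev_year_same_dom(d: date) -> date:
    return same_dom(d, d.year - 1, d.month)

# Dec 31 clamps to Nov 30; Feb 29 in a leap year clamps to Feb 28 the year before.
print(prev_month_same_dom(date(2025, 12, 31)))  # 2025-11-30
print(prev_year_same_dom(date(2024, 2, 29)))    # 2023-02-28
```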

Then the query to extract a given period to date would work as follows (example: Month to Date):

  1. Select all dates from period start to as_of
  2. Join metrics on:

- date_day for current

- prev_month_same_dom for previous period

- prev_year_same_dom for prior year

  3. Aggregate to a single value per comparison
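The steps above can be sketched end to end with an in-memory SQLite database (table and column names here are illustrative, not a real schema): select the dim_date rows from period start to as_of, join the metric once on date_day and once on prev_month_same_dom, and aggregate.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date (date_day TEXT PRIMARY KEY, prev_month_same_dom TEXT);
    CREATE TABLE orders (order_date TEXT, total REAL);
""")
# A few December days plus their (clamped) November counterparts.
conn.executemany("INSERT INTO dim_date VALUES (?, ?)", [
    ("2025-12-01", "2025-11-01"),
    ("2025-12-02", "2025-11-02"),
    ("2025-12-03", "2025-11-03"),
])
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    ("2025-12-01", 100.0), ("2025-12-02", 150.0),
    ("2025-11-01", 80.0), ("2025-11-02", 90.0), ("2025-11-03", 70.0),
])
# MTD as of 2025-12-02: one join per comparison, then aggregate.
row = conn.execute("""
    SELECT SUM(cur.total) AS mtd,
           SUM(prev.total) AS prev_mtd
    FROM dim_date d
    LEFT JOIN orders cur  ON cur.order_date  = d.date_day
    LEFT JOIN orders prev ON prev.order_date = d.prev_month_same_dom
    WHERE d.date_day BETWEEN '2025-12-01' AND '2025-12-02'
""").fetchone()
print(row)  # (250.0, 170.0) — current MTD vs previous-month MTD over the same span
```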

I think this method is scalable and easy to replicate across all the variants (e.g. week to date, quarter to date). It also handles edge cases cleanly (e.g. MTD comparing Dec (31 days) vs Nov (30 days) on the 31st).

What do you think?

That’s another way to do it. Calendar tables like the one you describe are common in data warehouses.