How to implement execute-reducible-query

I'm trying to write a driver for Redis and I am reading through the docs. The docs mention a sample driver, but it is outdated: as of v0.35.0 there is a new way to implement drivers, and a driver should now implement metabase.driver/execute-reducible-query instead of metabase.driver/execute-query, which the docs do not cover. I found a pull request for this change which states:

execute-reducible-query should call return-results with initial results metadata and an object that can be reduced to get resulting rows. Because the driver effectively decorates the reduction process by managing the context in which return-results is called, it can hold handles to resources (e.g. connections or java.sql.ResultSets) open for the duration of the reduction and close them afterward.

Now I don't understand what should be happening here. In that pull request there is sample code:

(defmethod driver/execute-reducible-query :my-driver
  [_ query chans return-results]
  ;; keep the result handle open for the whole duration of the reduction
  (with-open [results (run-query! query)]
    (return-results
     ;; initial results metadata
     {:cols [{:name "my_col"}]}
     ;; an object that can be reduced to get the resulting rows
     (qp.util.reducible/reducible-rows (get-row results) chans))))

Can someone explain it to me more clearly? To be more exact, what is

An object that can be reduced to get resulting rows.

  • What does qp.util.reducible/reducible-rows do, and how can I implement it myself if I need to?

I've tried looking at other implemented drivers like Mongo and Google Analytics, but I'm still confused.

I should mention that I am totally new to Clojure and am basically learning through this project, so I would really appreciate your input.

I worked on v0.35 compatibility recently for a custom driver based on the postgres one, so for me the reference implementation was more the sql-jdbc one. If you’re confused about the term “reducible” (like I was), my reading of it was in the sense of the programming term “reduce” (as in map/filter/reduce). Another terminology gem you might see, under sql-jdbc.exec/reducible-rows, is “thunk”, which is kind of like a callback.
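Since you're new to Clojure, here is a tiny, self-contained illustration (mine, not Metabase code) of what “thunk” and “reducible” mean in practice:

;; A "thunk" is just a zero-argument function you call later to get a value:
(def row-thunk (fn [] [1 "hello"]))
(row-thunk) ;; => [1 "hello"]

;; A "reducible" is anything clojure.core/reduce can consume. Vectors and seqs
;; qualify, but so does anything implementing clojure.lang.IReduceInit:
(def rows
  (reify clojure.lang.IReduceInit
    (reduce [_ rf init]
      ;; hand each row to the reducing function; rows are hard-coded here purely
      ;; for illustration
      (-> init (rf [1 "a"]) (rf [2 "b"])))))

(reduce conj [] rows) ;; => [[1 "a"] [2 "b"]]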

Anyway, in this context, say Metabase has a query result that is eventually going to become JSON or CSV or XLSX. Instead of the old behaviour, where it would wait for the query to complete and then gulp down all the rows at once (bad for memory usage etc.), the reducible-rows approach uses (the equivalent of) a server-side cursor to chomp through batches of rows at a time.
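To give you a feel for what that can look like, here is a rough sketch (again mine, not the Metabase implementation) of a reducible that drains rows batch by batch; fetch-batch! stands in for whatever cursor-style fetch your data warehouse offers:

;; Rough sketch only. `fetch-batch!` is a hypothetical no-argument function
;; that returns the next vector of rows, or nil when the result set is exhausted.
(defn batched-reducible [fetch-batch!]
  (reify clojure.lang.IReduceInit
    (reduce [_ rf init]
      (loop [acc init]
        (if-let [batch (seq (fetch-batch!))]
          ;; fold this batch row by row, honouring early termination (reduced)
          (let [acc (loop [acc acc, rows batch]
                      (if (or (reduced? acc) (empty? rows))
                        acc
                        (recur (rf acc (first rows)) (rest rows))))]
            (if (reduced? acc)
              @acc
              (recur acc)))
          acc)))))

;; Toy usage with a fake "cursor" backed by an atom holding two batches:
(def fake-cursor (atom [[[1 "a"] [2 "b"]] [[3 "c"]]]))
(defn fetch-batch! [] (let [[batch] @fake-cursor] (swap! fake-cursor rest) batch))
(reduce conj [] (batched-reducible fetch-batch!)) ;; => [[1 "a"] [2 "b"] [3 "c"]]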

To make that play nicely with cancellation, I think the query processor needs to be more involved than before, so that a user cancelling the query can signal to the driver and the data warehouse that the query is terminated and no more rows will be consumed. I suppose that if Redis doesn’t have a cursor concept, you’d need to just return all rows at once (there’d only be one round of reduction).
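In that case the implementation can stay very simple, along the lines of the PR sample above. This is only a sketch; run-redis-query! and redis-columns are hypothetical helpers you would write yourself:

(defmethod driver/execute-reducible-query :redis
  [_ query chans return-results]
  ;; hypothetical helper: run the query and return every row eagerly,
  ;; e.g. a vector of row vectors
  (let [rows (run-redis-query! query)]
    (return-results
     ;; hypothetical helper: column metadata in the {:cols [{:name ...}]} shape
     {:cols (redis-columns query)}
     rows)))

Since a Clojure vector is already reducible, returning all the rows in one vector satisfies the “object that can be reduced to get resulting rows” part of the contract.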