From the Redshift documentation: “By default, the JDBC driver collects all the results for a query at one time. As a result, when you attempt to retrieve a large result set over a JDBC connection, you might encounter a client-side out-of-memory error. To enable your client to retrieve result sets in batches instead of in a single all-or-nothing fetch, set the JDBC fetch size parameter in your client application.”
I took a heap dump and noticed 7 com.amazon.redshift.client.PGMessagingContext objects occupying approximately 800 MB in total (individual instances ranging from 40 MB to 150 MB). We had a couple of reports running that each download a CSV file of about 70 MB (70,000 rows of roughly 1 KB each), in addition to other smaller queries. Possibly Metabase is holding the entire result set in memory instead of fetching a small number of rows at a time and converting them to CSV as it goes.
Do we know if we can set the fetch size for the result set, and whether that would help here?
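For reference, a minimal JDBC sketch of what batched fetching could look like. The connection URL, credentials, table name, and fetch size of 500 are all placeholders, not Metabase's actual code; note that drivers speaking the PostgreSQL protocol typically honor the fetch size only when autocommit is off:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class FetchSizeSketch {
    public static void main(String[] args) throws SQLException {
        // Hypothetical endpoint and credentials for illustration only.
        String url = "jdbc:redshift://example-cluster:5439/dev";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            // PostgreSQL-protocol drivers generally require autocommit to be
            // disabled before a cursor-based (batched) fetch takes effect.
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                // Retrieve rows in batches of 500 per round trip instead of
                // buffering the entire result set in client memory.
                stmt.setFetchSize(500);
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                        // Stream each row to the CSV writer here, so only one
                        // batch of rows is held in memory at a time.
                    }
                }
            }
        }
    }
}
```

If something like this works for the Redshift driver, the per-query memory footprint should be bounded by the batch size rather than the full result set.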