Database sync error: "clojure.lang.LazySeq cannot be cast to clojure.lang.Associative" with version (0.36.6)

I have a connection to BigQuery. I recently upgraded to Metabase v0.36.6 and since then, I haven’t been able to connect to one of our databases. Every time I re-authenticate with this DB in BigQuery, it works for a few seconds and then I get this error.

Any help would be appreciated.

Log snippet:

context :ad-hoc,
:error "clojure.lang.LazySeq cannot be cast to clojure.lang.Associative",
:row_count 0,
:running_time 0,
:preprocessed
{:database 66,
:query
{:source-table 1729,
:fields

Diagnostic Info:
{
  "browser-info": {
    "language": "en-GB",
    "platform": "MacIntel",
    "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Safari/537.36 Edg/84.0.522.40",
    "vendor": "Google Inc."
  },
  "system-info": {
    "file.encoding": "UTF-8",
    "java.runtime.name": "OpenJDK Runtime Environment",
    "java.runtime.version": "1.8.0_262-heroku-b10",
    "java.vendor": "Oracle Corporation",
    "java.vendor.url": "http://java.oracle.com/",
    "java.version": "1.8.0_262-heroku",
    "java.vm.name": "OpenJDK 64-Bit Server VM",
    "java.vm.version": "25.262-b10",
    "os.name": "Linux",
    "os.version": "4.4.0-1078-aws",
    "user.language": "en",
    "user.timezone": "Etc/UTC"
  },
  "metabase-info": {
    "databases": [
      "mysql",
      "postgres",
      "bigquery",
      "googleanalytics"
    ],
    "hosting-env": "heroku",
    "application-database": "postgres",
    "application-database-details": {
      "database": {
        "name": "PostgreSQL",
        "version": "10.13 (Ubuntu 10.13-1.pgdg16.04+1)"
      },
      "jdbc-driver": {
        "name": "PostgreSQL JDBC Driver",
        "version": "42.2.8"
      }
    },
    "run-mode": "prod",
    "version": {
      "tag": "v0.36.6",
      "date": "2020-09-15",
      "branch": "release-0.36.x",
      "hash": "cb258fb"
    },
    "settings": {
      "report-timezone": "Europe/Berlin"
    }
  }
}

Hi @gidi9
I have never seen that error before. Not sure what is causing it.
Do you see any other errors in the log?
Can you try to re-sync the database and check the log?
Which version did you upgrade from? I would recommend that you revert to your backup to downgrade.

Hi @flamber

thank you for your suggestion.

We noticed that the database is still connected: when we run a normal SQL query, the results appear. The error occurs only when we browse the database through the Metabase interface. Here are some new logs. It also seems like the 2000-row limit isn’t being enforced.

[cb431e67-a13b-4a57-958e-ac8a005739ee] 2020-10-03T00:47:21+02:00 ERROR metabase.query-processor.middleware.catch-exceptions Error processing query: null
{:database_id 66,
:started_at #t "2020-10-02T22:47:19.659Z[Etc/UTC]",
:via
[{:status :failed,
:class clojure.lang.ExceptionInfo,
:error "Error reducing result rows",
:stacktrace
["--> query_processor.context.default$default_reducef$fn__37988.invoke(default.clj:61)"
"query_processor.context.default$default_reducef.invokeStatic(default.clj:58)"
"query_processor.context.default$default_reducef.invoke(default.clj:49)"
"query_processor.context$reducef.invokeStatic(context.clj:69)"
"query_processor.context$reducef.invoke(context.clj:62)"
"query_processor.context.default$default_runf$respond_STAR___37992.invoke(default.clj:70)"
"driver.bigquery$post_process_native$fn__1427.invoke(bigquery.clj:201)"
"driver.bigquery$do_with_finished_response.invokeStatic(bigquery.clj:156)"
"driver.bigquery$do_with_finished_response.invoke(bigquery.clj:147)"
"driver.bigquery$post_process_native.invokeStatic(bigquery.clj:184)"
"driver.bigquery$post_process_native.invoke(bigquery.clj:178)"
"driver.bigquery$process_native_STAR_$thunk__1521.invoke(bigquery.clj:231)"
"driver.bigquery$process_native_STAR_.invokeStatic(bigquery.clj:233)"
"driver.bigquery$process_native_STAR_.invoke(bigquery.clj:226)"
"driver.bigquery$eval1525$fn__1527.invoke(bigquery.clj:250)"
"query_processor.context$executef.invokeStatic(context.clj:59)"
"query_processor.context$executef.invoke(context.clj:48)"
"query_processor.context.default$default_runf.invokeStatic(default.clj:69)"
"query_processor.context.default$default_runf.invoke(default.clj:67)"
"query_processor.context$runf.invokeStatic(context.clj:45)"
"query_processor.context$runf.invoke(context.clj:39)"
"query_processor.reducible$pivot.invokeStatic(reducible.clj:34)"
"query_processor.reducible$pivot.invoke(reducible.clj:31)"
"query_processor.middleware.mbql_to_native$mbql__GT_native$fn__45635.invoke(mbql_to_native.clj:26)"
"query_processor.middleware.check_features$check_features$fn__44911.invoke(check_features.clj:42)"
"query_processor.middleware.optimize_datetime_filters$optimize_datetime_filters$fn__45800.invoke(optimize_datetime_filters.clj:133)"
"query_processor.middleware.wrap_value_literals$wrap_value_literals$fn__47328.invoke(wrap_value_literals.clj:137)"
"query_processor.middleware.annotate$add_column_info$fn__43532.invoke(annotate.clj:574)"
"query_processor.middleware.permissions$check_query_permissions$fn__44786.invoke(permissions.clj:64)"
"query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__46318.invoke(pre_alias_aggregations.clj:40)"
"query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__44984.invoke(cumulative_aggregations.clj:61)"
"query_processor.middleware.resolve_joins$resolve_joins$fn__46850.invoke(resolve_joins.clj:183)"
"query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__39262.invoke(add_implicit_joins.clj:245)"
"query_processor.middleware.large_int_id$convert_id_to_string$fn__45596.invoke(large_int_id.clj:44)"
"query_processor.middleware.limit$limit$fn__45621.invoke(limit.clj:38)"
"query_processor.middleware.format_rows$format_rows$fn__45576.invoke(format_rows.clj:81)"
"query_processor.middleware.desugar$desugar$fn__45050.invoke(desugar.clj:22)"
"query_processor.middleware.binning$update_binning_strategy$fn__44076.invoke(binning.clj:229)"
"query_processor.middleware.resolve_fields$resolve_fields$fn__44592.invoke(resolve_fields.clj:24)"
"query_processor.middleware.add_dimension_projections$add_remapping$fn__38811.invoke(add_dimension_projections.clj:318)"
"query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__39018.invoke(add_implicit_clauses.clj:141)"
"query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__39411.invoke(add_source_metadata.clj:105)"
"query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__46515.invoke(reconcile_breakout_and_order_by_bucketing.clj:98)"
"query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__43717.invoke(auto_bucket_datetimes.clj:125)"
"query_processor.middleware.resolve_source_table$resolve_source_tables$fn__44639.invoke(resolve_source_table.clj:46)"
"query_processor.middleware.parameters$substitute_parameters$fn__46300.invoke(parameters.clj:114)"
"query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__44691.invoke(resolve_referenced.clj:80)"
"query_processor.middleware.expand_macros$expand_macros$fn__45306.invoke(expand_macros.clj:158)"
"query_processor.middleware.add_timezone_info$add_timezone_info$fn__39442.invoke(add_timezone_info.clj:15)"
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__47212.invoke(splice_params_in_response.clj:32)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46526$fn__46530.invoke(resolve_database_and_driver.clj:33)"
"driver$do_with_driver.invokeStatic(driver.clj:61)"
"driver$do_with_driver.invoke(driver.clj:57)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__46526.invoke(resolve_database_and_driver.clj:27)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__45524.invoke(fetch_source_query.clj:267)"
"query_processor.middleware.store$initialize_store$fn__47221$fn__47222.invoke(store.clj:11)"
"query_processor.store$do_with_store.invokeStatic(store.clj:46)"
"query_processor.store$do_with_store.invoke(store.clj:40)"
"query_processor.middleware.store$initialize_store$fn__47221.invoke(store.clj:10)"
"query_processor.middleware.cache$maybe_return_cached_results$fn__44568.invoke(cache.clj:209)"
"query_processor.middleware.validate$validate_query$fn__47230.invoke(validate.clj:10)"
"query_processor.middleware.normalize_query$normalize$fn__45648.invoke(normalize_query.clj:22)"
"query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__39280.invoke(add_rows_truncated.clj:36)"
"query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__47197.invoke(results_metadata.clj:147)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__44927.invoke(constraints.clj:42)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__46389.invoke(process_userland_query.clj:136)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__44870.invoke(catch_exceptions.clj:174)"
"query_processor.reducible$async_qp$qp_STAR___38074$thunk__38075.invoke(reducible.clj:101)"
"query_processor.reducible$async_qp$qp_STAR___38074.invoke(reducible.clj:107)"
"query_processor.reducible$sync_qp$qp_STAR___38083$fn__38086.invoke(reducible.clj:133)"
"query_processor.reducible$sync_qp$qp_STAR___38083.invoke(reducible.clj:132)"
"query_processor$process_userland_query.invokeStatic(query_processor.clj:215)"
"query_processor$process_userland_query.doInvoke(query_processor.clj:211)"
"query_processor$fn__47372$process_query_and_save_execution_BANG___47381$fn__47384.invoke(query_processor.clj:227)"
"query_processor$fn__47372$process_query_and_save_execution_BANG___47381.invoke(query_processor.clj:219)"
"query_processor$fn__47416$process_query_and_save_with_max_results_constraints_BANG___47425$fn__47428.invoke(query_processor.clj:239)"
"query_processor$fn__47416$process_query_and_save_with_max_results_constraints_BANG___47425.invoke(query_processor.clj:232)"
"api.dataset$fn__50707$fn__50710.invoke(dataset.clj:55)"
"query_processor.streaming$streaming_response_STAR_$fn__35496$fn__35497.invoke(streaming.clj:73)"
"query_processor.streaming$streaming_response_STAR_$fn__35496.invoke(streaming.clj:72)"
"async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:66)"
"async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:64)"
"async.streaming_response$do_f_async$fn__23282.invoke(streaming_response.clj:85)"],
:error_type :qp,
:ex-data {:type :qp}}],
:error_type :qp,

@gidi9

  1. Which version did you upgrade from?
  2. Do you have multiple BQ databases, and this is only a problem on one of them?
  3. Where do you see an indication that row limit isn’t being set?
  4. Try doing a manual sync+scan in Admin > Databases > (your-db), and check the log.

@flamber

  1. We upgraded from v0.36.4 on Sept 16

  2. We have over 10 BQ databases and the problem is strangely only with this database. I have disconnected and re-connected this particular DB a few times, but the issue still persists.

  3. I attached a screenshot from the log above: “Error reducing result rows”

  4. I tried it and it successfully synced but got this “fingerprinting” error below.

    [f6ea4c57-abdd-469a-85d8-199c28d05ab1] 2020-10-05T10:03:57+02:00 INFO metabase.sync.util FINISHED: Analyze data for bigquery Database 66 'raw_sw' (265.8 ms)
    [f6ea4c57-abdd-469a-85d8-199c28d05ab1] 2020-10-05T10:04:42+02:00 ERROR metabase.models.field-values Error fetching field values
    [f6ea4c57-abdd-469a-85d8-199c28d05ab1] 2020-10-05T10:04:42+02:00 ERROR metabase.sync.util Error fingerprinting Table 1,075

Thank you @flamber for looking into this!

@gidi9

  1. Downgrade to 0.36.4 as a workaround
  2. That’s strange. Are you using Service Accounts or OAuth for that database?
  3. That’s not related to query limits
  4. You need to enable more debug logging, so we can see exactly where it’s failing - example:
    java -Dlog4j.configuration="https://log4j.us/templates/metabase?trace=metabase.sync" -jar metabase.jar
    And then you need to check the server console, since you’re also seeing this issue, which means some errors will not show the stacktrace in the UI:
    https://github.com/metabase/metabase/issues/12851
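
    Since your diagnostic info shows a Heroku deployment, the same flag can also be supplied through the `JAVA_OPTS` config var rather than editing the start command. This is just a sketch; `your-metabase-app` is a placeholder for your actual app name:

    ```shell
    # Running the JAR directly: point log4j at a template that enables
    # TRACE-level logging for the metabase.sync namespaces.
    java -Dlog4j.configuration="https://log4j.us/templates/metabase?trace=metabase.sync" -jar metabase.jar

    # On Heroku, pass the same JVM flag via the JAVA_OPTS config var
    # (the dyno restarts and picks it up on the next boot):
    heroku config:set \
      JAVA_OPTS='-Dlog4j.configuration=https://log4j.us/templates/metabase?trace=metabase.sync' \
      --app your-metabase-app
    ```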

Hi @flamber
I looked through the server logs and found this error:

java.sql.SQLException: Connections could not be acquired from the underlying database!

I then found this possible fix on the Metabase website.

However, this feature has been removed. We can’t include OAuth details manually anymore, I can only use Service Accounts. So it seems like I can’t implement this fix.

  1. I will consider this
  2. We use Service Accounts just for this DB. The other DBs use OAuth
  3. Error above

@gidi9 Okay, there’s an issue open for allowing switching back to OAuth:
https://github.com/metabase/metabase/issues/13181 - upvote by clicking :+1: on the first post

Okay, thank you very much

Having the same issue after upgrading from 0.36.4 to 0.36.7.

Only happens on some of our tables - not specific to a particular database.
However, all our databases are BigQuery - we are still using OAuth.

Re-sync went fine.

Same entry in the log:

class clojure.lang.LazySeq cannot be cast to class clojure.lang.Associative (clojure.lang.LazySeq and clojure.lang.Associative are in unnamed module of loader 'app')

Error reducing result rows

Diagnostic data

{
  "browser-info": {
    "language": "en-GB",
    "platform": "MacIntel",
    "userAgent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.121 Safari/537.36",
    "vendor": "Google Inc."
  },
  "system-info": {
    "file.encoding": "UTF-8",
    "java.runtime.name": "OpenJDK Runtime Environment",
    "java.runtime.version": "11.0.7+10",
    "java.vendor": "AdoptOpenJDK",
    "java.vendor.url": "https://adoptopenjdk.net/",
    "java.version": "11.0.7",
    "java.vm.name": "OpenJDK 64-Bit Server VM",
    "java.vm.version": "11.0.7+10",
    "os.name": "Linux",
    "os.version": "4.14.138+",
    "user.language": "en",
    "user.timezone": "GMT"
  },
  "metabase-info": {
    "databases": [
      "googleanalytics",
      "bigquery"
    ],
    "hosting-env": "unknown",
    "application-database": "postgres",
    "application-database-details": {
      "database": {
        "name": "PostgreSQL",
        "version": "9.6.18"
      },
      "jdbc-driver": {
        "name": "PostgreSQL JDBC Driver",
        "version": "42.2.8"
      }
    },
    "run-mode": "prod",
    "version": {
      "date": "2020-10-09",
      "tag": "v0.36.7",
      "branch": "release-0.36.x-with-new-build-scripts",
      "hash": "ec751f0"
    },
    "settings": {
      "report-timezone": null
    }
  }
}

@velocity Interesting. While I don’t think it’s related to OAuth vs Service Accounts, I’m generally unsure what the problem could be. Do you see anything in the BigQuery log?

What if you downgrade to 0.36.4 again?

There are no changes to how BigQuery or general connections function between those two versions, so that seems to indicate the problem is something specific to your BigQuery setup (or your tables).

@flamber Reverting the version solved the issue for now.

The BigQuery logs suggest that the query executed successfully and that results were returned. This matches what we see in the UI: we wait several seconds on “Doing Science…” and then, just as I would expect the results to be rendered, the error appears. So I’m not sure it’s related to BigQuery at all.

It seemed to affect our largest tables.

@velocity Can you try enabling debug logging to see if we can figure out more information? See my previous comment: Database sync error: "clojure.lang.LazySeq cannot be cast to clojure.lang.Associative" with version (0.36.6)

@gidi9 @velocity I’m also seeing this error on one instance. Will try to figure out how to reproduce and return with more information.

@gidi9 @velocity And here’s an issue for it:
https://github.com/metabase/metabase/issues/13475 - upvote by clicking :+1: on the first post