How is the Data Model (Schema) determined? (Schema incorrectly identified for BigQuery)

I have loaded a BigQuery database into Metabase. The data can be queried correctly; however, the data model doesn’t seem to mirror the BigQuery schema. There are a lot of ‘float’ fields in the schema, but these are identified with a ‘Category’ type instead of a ‘Number’ type. This is quite annoying, as it blocks features like ‘Autobinning’ from being used on those fields. How can I improve the way the schema is read?

One important thing to note is that there is very little data in the schema, and the values of the float fields can be NULL. I am guessing that Metabase tries to infer the data model from the entries rather than from the schema definition of the database?

E.g. in the example below, both float1 and float2 will be identified as ‘Category’ fields instead of ‘Number’ fields.

 name   | float1 | float2 | ...
 "test" | 1.1    | NULL   | ...
 "more" | NULL   | 2.5    | ...
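To illustrate my guess, here is a small Python sketch (this is NOT Metabase’s actual fingerprinting code, just a hypothetical value-based classifier) showing why sampling sparse, NULL-heavy columns could misclassify them:

```python
# Hypothetical illustration: a classifier that samples column values and
# falls back to a generic "Category" type when it sees too few non-NULL
# numeric values. The threshold `min_samples` is made up for the example.

def guess_type(values, min_samples=2):
    """Return "Number" only if at least `min_samples` non-NULL numeric
    values are observed in the sample; otherwise fall back to "Category"."""
    non_null = [v for v in values if v is not None]
    if len(non_null) >= min_samples and all(
        isinstance(v, (int, float)) for v in non_null
    ):
        return "Number"
    return "Category"

rows = [
    {"name": "test", "float1": 1.1, "float2": None},
    {"name": "more", "float1": None, "float2": 2.5},
]

# Each float column has only one non-NULL value in this tiny sample,
# so both fall back to "Category" even though the schema says FLOAT.
print(guess_type([r["float1"] for r in rows]))  # Category
print(guess_type([r["float2"] for r in rows]))  # Category
print(guess_type([1.1, 2.5]))                   # Number
```

With more non-NULL rows the same column would be classified as ‘Number’, which matches the behaviour I’m seeing on my nearly empty tables.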

EDIT:
I’ve found the source code where fingerprinting happens (https://github.com/metabase/metabase/blob/5688c060de14b88481fc2f90df6b6f3baf35feb3/src/metabase/sync/analyze/classifiers/name.clj). It seems that the name of the column is used to determine the type.

EDIT2:
It seems the name-based classification from EDIT only applies to some types, like Latitude and Longitude.