This is the full sequence of events that can get us from having a collection or collection group in Firestore to having a table in Supabase with no downtime.
All of this is assumed to be happening on both prod and dev. You would be wise to test your changes on dev first.
1. Create the table in Supabase and add it to `get_document_table_spec` (used by the replicator trigger to dispatch incoming writes). A sketch of a typical table is shown after this list.
2. Add the collection to `collectionTables` or `subcollectionTables` in `common/src/supabase/utils.ts`.
3. Add the collection to `logCollections` or `logSubcollections`.
4. Deploy the Firestore triggers to pick up your change in step 3.
5. Import the existing documents by running `scripts/supabase-import.ts`.
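For reference, here is a minimal sketch of what such a replica table might look like, assuming the usual shape of a document id, the raw JSON blob, and the Firestore update timestamp (the table name `foo` is a placeholder, matching the trigger example further down):

```sql
-- Sketch of a replica table: one row per Firestore document.
-- Column names follow the ones referenced elsewhere in this doc;
-- adjust names and types to your actual setup.
create table if not exists foo (
  id text primary key,       -- Firestore document id
  data jsonb,                -- raw document contents
  fs_updated_time timestamp  -- when Firestore last wrote the document
);
```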
The Supabase table is now useful as a read replica of the JSON blobs from Firestore. To migrate completely over to 100% Supabase, continue reading.
1. Alter the Supabase table to add appropriate columns and column types. Do not add constraints yet (e.g. do not make them `not null`). If you aren’t planning to migrate writes right away, you should probably consider only adding columns that you are pretty sure people are not going to dick around with in Firestore land, and add a comment to the type definition in `common` so that people changing the Firestore schema will know about this dependency.
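For example, a sketch of this step, using the same placeholder table `foo` and column `whatever` as the trigger example below:

```sql
-- Add a native column for a field you plan to read directly.
-- No constraints yet: existing rows will hold nulls until the
-- backfill in step 4 runs the trigger over them.
alter table foo add column whatever text;
```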
2. Add a trigger to the Supabase table on before insert or update that looks like this:
```sql
-- Copies fields out of the JSON blob into the native columns on every write.
create or replace function foo_populate_cols()
returns trigger
language plpgsql
as $$ begin
  if new.data is not null then
    new.whatever := (new.data)->>'whatever';
    -- etc. for the other columns
  end if;
  return new;
end $$;

create trigger foo_populate before insert or update on foo
for each row execute function foo_populate_cols();
```
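If you want a quick sanity check on dev that the trigger works, something like this should do (assuming the table has no other required columns; the id and values here are throwaway):

```sql
-- Insert a throwaway row, confirm the trigger copied the JSON field
-- into the native column, then clean up.
insert into foo (id, data) values ('trigger-test', '{"whatever": "hello"}');
select whatever from foo where id = 'trigger-test';  -- expect 'hello'
delete from foo where id = 'trigger-test';
```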
3. If your Supabase table isn’t small (i.e. if it has more than ~100K rows), drop all non-absolutely-critical indices on it. This is required for step 4 to not take a million years.
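A sketch of how to do that; `pg_indexes` shows what actually exists, and the index name in the drop is hypothetical:

```sql
-- See which indices currently exist on the table.
select indexname, indexdef from pg_indexes where tablename = 'foo';

-- Drop the ones you can live without during the backfill.
drop index if exists foo_whatever_idx;
```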
4. Run `update foo set fs_updated_time = fs_updated_time` in order to execute the trigger on all existing rows. This may take a while if your table has >1M rows; perhaps a few minutes per million. If your table is substantially large (e.g. >10M rows) you may wish to update chunks of rows at a time (e.g. `update foo set fs_updated_time = fs_updated_time where id >= 'A' and id <= 'M'`).
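If you do chunk it, make sure the ranges cover the entire id space. A sketch, assuming ids that sort lexicographically; run the statements one at a time so each chunk commits on its own:

```sql
update foo set fs_updated_time = fs_updated_time where id < 'A';
update foo set fs_updated_time = fs_updated_time where id >= 'A' and id < 'N';
update foo set fs_updated_time = fs_updated_time where id >= 'N';
```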
5. If you aren’t going to migrate all the reads to the new native columns, re-add any indices you need that you dropped in step 3. If you aren’t going to migrate all the writes to the new columns, do a repack now (see below).
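For example, a sketch of re-creating a dropped index on the JSON blob without blocking writes; the index definition here is hypothetical, so re-create whatever you actually dropped in step 3:

```sql
-- 'concurrently' builds the index without locking out writes,
-- but it cannot run inside a transaction block.
create index concurrently if not exists foo_whatever_idx
  on foo ((data->>'whatever'));
```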