This is the full sequence of steps that gets us from having a collection or collection group in Firestore to having a table in Supabase with no downtime.

All of this needs to happen on both prod and dev. You would be wise to test your changes on dev first.

Set up Supabase read replica

  1. Create an appropriate replica table in Supabase (a sketch follows this list).
  2. Add a case for that table in get_document_table_spec (used by the replicator trigger to dispatch incoming writes).
  3. Add a case for the table in collectionTables or subcollectionTables in common/src/supabase/utils.ts.
  4. Redeploy the logCollections or logSubcollections Firestore triggers to pick up your change in step 3.
  5. New writes should now be arriving in the Supabase table.
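
For reference, a minimal replica table for step 1 might look something like the sketch below. The column names and types are assumptions inferred from the fields referenced elsewhere in this doc (the document id, the raw JSON blob, and the Firestore update timestamp); adjust as appropriate.

    create table if not exists foo (
      id text primary key,         -- Firestore document id
      data jsonb,                  -- raw Firestore document contents
      fs_updated_time timestamptz  -- last update time recorded from Firestore
    );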

Import existing data

  1. Add a case for your table in scripts/supabase-import.ts.
  2. Run the import.
  3. The Supabase table should now be up to date with Firestore. As a sanity check, you may wish to compare the count of documents in each (a query for the Supabase side is below).
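
For example, to get the Supabase-side count (the Firestore count can come from the console or a quick script):

    -- exact count; can be slow on a very large table
    select count(*) from foo;

    -- cheap approximate count from Postgres planner statistics
    select reltuples::bigint as approx_count from pg_class where relname = 'foo';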

The Supabase table is now useful as a read replica of the JSON blobs from Firestore. To migrate all the way over to Supabase, continue reading.

Create native Supabase columns

  1. Alter the Supabase table to add appropriate columns and column types (a sketch is below). Do not add constraints yet (e.g. do not make them not null). If you aren’t planning to migrate writes right away, consider only adding columns that you are pretty sure people are not going to dick around with in Firestore land, and add a comment to the type definition in common so that people changing the Firestore schema will know about this dependency.
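
     As a sketch, using the same hypothetical column as the trigger example in step 2 (your real column names and types will differ); note the absence of any not null constraint:

    alter table foo add column whatever text;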

  2. Add a before insert or update trigger to the Supabase table that looks like this:

    create or replace function foo_populate_cols()
      returns trigger
      language plpgsql
    as $$ begin
      if new.data is not null then
        new.whatever := (new.data)->>'whatever';
        -- etc.
      end if;
      return new;
    end $$;
    
    create trigger foo_populate before insert or update on foo
    for each row execute function foo_populate_cols();
    
  3. If your Supabase table isn’t small (i.e. if it has more than ~100K rows), drop all but the absolutely critical indices on it, as shown below. This is necessary so that step 4 doesn’t take a million years.
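
     For example (the index name here is a placeholder):

    -- see what indices currently exist on the table
    select indexname, indexdef from pg_indexes where tablename = 'foo';

    -- drop the ones that aren't absolutely critical
    drop index if exists foo_whatever_idx;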

  4. Run update foo set fs_updated_time = fs_updated_time to execute the trigger on all existing rows. This may take a while if your table has >1M rows; perhaps a few minutes per million. If your table is very large (e.g. >10M rows) you may wish to update chunks of rows at a time (e.g. update foo set fs_updated_time = fs_updated_time where id >= 'A' and id <= 'M').
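
     If you do chunk it, make sure the ranges together cover the entire id space, e.g. something like the following (the chunk boundaries are arbitrary placeholders):

    update foo set fs_updated_time = fs_updated_time where id < 'A';
    update foo set fs_updated_time = fs_updated_time where id >= 'A' and id < 'N';
    update foo set fs_updated_time = fs_updated_time where id >= 'N';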

  5. If you aren’t going to migrate all the reads to the new native columns, re-add any indices you need that you dropped in step 3 (an example follows). If you aren’t going to migrate all the writes to the new columns, do a repack now (see below).
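
     For example, re-adding a hypothetical index (the name and column are placeholders); building it concurrently avoids blocking the replicated writes that are still arriving:

    -- note: create index concurrently can't run inside a transaction block
    create index concurrently if not exists foo_whatever_idx on foo (whatever);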