
DynamoDB Schema Migrations: The Guide Nobody Wrote

In the r/aws thread on the SaaS Multi-Tenant pattern, u/gottcha- asked: “How do you handle schema changes?”

It’s the question every DynamoDB developer eventually hits. SQL developers have ALTER TABLE. DynamoDB doesn’t. There’s no official migration tool, no standard pattern, and almost no documentation from AWS on how to handle this in production.

The reason this guide doesn’t exist is that DynamoDB schema changes aren’t a single thing - they’re several different operations with very different difficulty levels. Most of them are easier than people expect. One of them (key structure changes) is genuinely painful - and if you’re hitting key structure changes frequently, it may signal that your access patterns weren’t stable enough when you designed the schema, which is one of the core reasons not to use single-table design in the first place.

DynamoDB’s schema model

Before the migration patterns, it helps to understand what DynamoDB actually enforces.

DynamoDB enforces exactly three things at the schema level:

  1. Every item must have the table’s partition key attribute
  2. Every item must have the sort key attribute (if the table has one)
  3. GSI key attributes must be present on items that should appear in that GSI (items without the GSI key are simply excluded from the index - this is sparse indexes, and it’s a feature)

Everything else - attribute names, types, which attributes exist - is entirely up to the application. DynamoDB has no concept of columns. Two items in the same table can have completely different attributes. This is both the reason schema migrations are different and the reason most of them are straightforward.
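A quick illustration (the item shapes here are invented for the example, not from any particular table):

```typescript
// Two items in the same table. DynamoDB validates only the key
// attributes (pk and sk here); everything else is up to the application.
const userItem = {
  pk: "TENANT#t1",
  sk: "USER#u1",
  email: "ada@example.com",
};

const projectItem = {
  pk: "TENANT#t1",
  sk: "PROJECT#p1",
  budget: 5000,
  tags: ["beta"],
};

// No shared "columns" beyond the keys - both items are valid as-is.
```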

Type 1: Attribute-level changes (easy)

Adding, renaming, or removing attributes that aren’t part of any key.

Adding a new attribute

Just start writing it. There’s no migration needed.

// Before: UserEntity has { userId, email, name }
// After: UserEntity adds { avatarUrl }

// Old items don't have avatarUrl. New items do.
// ElectroDB returns undefined for missing optional attributes.
// If you need a default for old items, add it on read:

const user = await UserEntity.get({ userId }).go();
const avatarUrl = user.data?.avatarUrl ?? 'https://singletable.dev/default-avatar.png';

No backfill required unless you need to query by the new attribute (in which case you need a GSI, covered below).

Removing an attribute

Stop writing it in new items. Old items still have it. DynamoDB doesn’t care. If you need to clean up:

// Lazy deletion: remove on next write
await UserEntity.update({ userId }).remove(['legacyField']).go();

// Or backfill via scan if you need it gone everywhere

Renaming an attribute

DynamoDB has no rename. The pattern: write both old and new names simultaneously for a transition period, backfill old items with the new name, then stop writing the old name.

// Step 1: Write both
await UserEntity.update({ userId }).set({
  displayName: newValue,  // new name
  name: newValue,         // old name, keep writing during transition
}).go();

// Step 2: Backfill all items (see backfill pattern below)

// Step 3: Stop writing old name, remove it
await UserEntity.update({ userId }).remove(['name']).go();

Type 2: Adding a GSI (medium)

Adding a new GSI to support a new access pattern. This is the most common “schema change” in DynamoDB.

The good news: you can add a GSI to an existing table and DynamoDB will backfill it automatically. Existing items that have the GSI key attributes will appear in the index. Existing items without those attributes are excluded (sparse index behavior).

aws dynamodb update-table \
  --table-name MainTable \
  --attribute-definitions \
    AttributeName=gsi2pk,AttributeType=S \
    AttributeName=gsi2sk,AttributeType=S \
  --global-secondary-index-updates \
    "[{\"Create\":{
      \"IndexName\": \"GSI2\",
      \"KeySchema\": [{\"AttributeName\":\"gsi2pk\",\"KeyType\":\"HASH\"},
                     {\"AttributeName\":\"gsi2sk\",\"KeyType\":\"RANGE\"}],
      \"Projection\":{\"ProjectionType\":\"ALL\"}
    }}]"

DynamoDB will show the GSI as CREATING and backfill it from the base table. Backfill time depends on table size - for large tables this can take hours. The table remains fully available during backfill.

The catch: if your existing items don’t already have gsi2pk and gsi2sk attributes, they won’t appear in the new GSI. You need to backfill those attributes onto existing items before (or while) the GSI builds.

ElectroDB: adding a new index

In ElectroDB, adding an index to the entity definition is enough for new writes. For existing items:

// Update existing items to include the new GSI key attributes,
// paging through the table with the scan cursor
let cursor: string | undefined;

do {
  const { data: items, cursor: next } = await UserEntity.scan.go({ cursor, limit: 100 });
  cursor = next ?? undefined;

  await Promise.all(
    items.map(item =>
      UserEntity.update({ userId: item.userId })
        .set({
          // ElectroDB recomputes gsi2pk/gsi2sk from these composite values
          createdBy: item.createdBy,
        })
        .go()
    )
  );
} while (cursor);

Type 3: Key structure changes (hard)

Changing the PK or SK format of existing items. This is the genuinely difficult migration.

Why it’s hard: DynamoDB keys are immutable. You can’t rename a partition key or change PROJECT#<createdAt>#<projectId> to PROJECT#<ulid>. The only way to change a key is to write a new item with the new key and delete the old one.

This is exactly the migration I described in the ULIDs vs UUIDs post - the PROJECT#<createdAt>#<projectId> to PROJECT#<ulid> fix.

The migration pattern

Step 1: Dual-write. Write every new item with both the old and new key format. For existing items, do nothing yet.

Step 2: Backfill. Scan existing items with old key format, write new items with new key format. Old items remain live.

Step 3: Read from new. Update read paths to use the new key format. Keep old items in place as fallback.

Step 4: Verify. Confirm no reads are hitting old-format keys.

Step 5: Delete old items. Remove the old-format items once you’re confident.

Here’s what this looks like for the project key migration:

// Step 1 & 2: Backfill Lambda
import { ulid, decodeTime } from 'ulid';

async function migrateProjectKeys() {
  let cursor: string | undefined;

  do {
    // Scan project items. Raw key attributes like sk aren't exposed to
    // ElectroDB's where clause, so identify old-format items in code -
    // here assuming old projectIds are UUIDs (contain hyphens) and new
    // ones are ULIDs (no hyphens)
    const { data, cursor: nextCursor } = await ProjectEntity.scan
      .go({ cursor, limit: 25 });

    cursor = nextCursor ?? undefined;
    const items = data.filter(item => item.projectId.includes('-'));

    for (const item of items) {
      // Generate a ULID that preserves the original creation time
      const originalTimestamp = new Date(item.createdAt).getTime();
      const newProjectId = ulid(originalTimestamp);

      // Write new item
      await ProjectEntity.put({
        ...item,
        projectId: newProjectId,
        // New SK will be PROJECT#<ulid> - computed by ElectroDB
      }).go();

      // Delete the old item. The old SK embeds createdAt, so the delete
      // must run against the old key schema - here via a hypothetical
      // LegacyProjectEntity kept around with the original definition
      // (a raw DeleteItem with the old key also works)
      await LegacyProjectEntity.delete({
        tenantId: item.tenantId,
        createdAt: item.createdAt,
        projectId: item.projectId,  // old projectId
      }).go();
    }
  } while (cursor);
}

Key principle: generate the ULID from the original createdAt timestamp using ulid(timestamp). This preserves chronological ordering in the new key format - items that were created earlier get earlier ULIDs - so your sort order is maintained after migration.
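To see why this works, here's a minimal sketch of the ULID time component (a hand-rolled encoder for illustration, not the `ulid` library's internals verbatim). Per the ULID spec, the first 10 characters encode the millisecond timestamp in Crockford base32, so lexicographic comparison tracks creation time:

```typescript
// The first 10 characters of a ULID are the millisecond timestamp
// encoded in Crockford base32, so lexicographic order follows time order
const CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";

function encodeTimePrefix(ms: number): string {
  let out = "";
  for (let i = 0; i < 10; i++) {
    out = CROCKFORD[ms % 32] + out;
    ms = Math.floor(ms / 32);
  }
  return out;
}

const earlier = encodeTimePrefix(new Date("2023-06-01").getTime());
const later = encodeTimePrefix(new Date("2024-06-01").getTime());
console.log(earlier < later); // true - earlier items keep earlier keys
```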

Making the transition safe

Key structure migrations are where you can lose data or have a bad deployment window if you’re not careful. A few practices that help:

Never delete before verifying. Complete the backfill in full, run your test suite against the new key format, run production traffic against it for at least one deployment cycle, then delete old items.

Use point-in-time recovery. Enable PITR before starting any key structure migration. It’s free insurance - if something goes wrong, you can restore to a known state.

aws dynamodb update-continuous-backups \
  --table-name MainTable \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true

Batch with exponential backoff. Scanning and rewriting large tables will consume capacity. Use small batch sizes and add backoff:

async function sleep(ms: number) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// Between batches
await sleep(100); // Start slow, increase if no throttling
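A fuller sketch of the backoff loop (the `withBackoff` helper, retry count, and delays are illustrative, not from any SDK):

```typescript
// Retry an operation with exponential backoff, doubling the wait after
// each failure (e.g. a throttling error from DynamoDB)
async function withBackoff<T>(
  operation: () => Promise<T>,
  maxRetries = 5,
  initialDelayMs = 100,
): Promise<T> {
  let delay = initialDelayMs;
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      await new Promise(resolve => setTimeout(resolve, delay));
      delay *= 2; // 100ms, 200ms, 400ms, ...
    }
  }
}
```

In the migration loop above, each batch write would be wrapped in `withBackoff(() => ...)` so throttled batches retry instead of failing the whole run.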

Type 4: ElectroDB entity versioning

ElectroDB has built-in entity versioning for managing schema evolution on items within a single entity type.

export const UserEntity = new Entity({
  model: {
    entity: "user",
    version: "2",  // bump this when the schema changes
    service: "saas",
  },
  // ...
});

ElectroDB stores the version on each item. When you read an item with an older version, you can detect it and handle the migration:

// Identifier attributes are stripped from responses by default; the
// "includeKeys" execution option asks ElectroDB to return them
const result = await UserEntity.get({ userId }).go({ data: "includeKeys" });

// ElectroDB exposes __edb_e__ (entity name) and __edb_v__ (version)
if (result.data?.__edb_v__ === '1') {
  // Old item - apply migration logic
  const migrated = migrateUserV1ToV2(result.data);
  await UserEntity.put(migrated).go();
  return migrated;
}

return result.data;

This is lazy migration - items are migrated on their first read after the version bump. It works well for small tables or low-traffic entities. For large tables where you need all items migrated before a cutover, use the scan-and-backfill approach instead.
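The migration function itself is plain data transformation. A minimal sketch, assuming (hypothetically) that v1 stored `name` and v2 renamed it to `displayName`, per the rename pattern from Type 1:

```typescript
interface UserV1 {
  userId: string;
  email: string;
  name: string;
}

interface UserV2 {
  userId: string;
  email: string;
  displayName: string;
}

// Pure function: easy to unit test independently of DynamoDB
function migrateUserV1ToV2(old: UserV1): UserV2 {
  const { name, ...rest } = old;
  return { ...rest, displayName: name };
}
```

Keeping the migration a pure function means the same logic can back both the lazy on-read path and a scan-and-backfill job.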


The migration checklist

For any schema change in production:

  1. Enable PITR before starting
  2. Dual-write new format alongside old for one deployment cycle
  3. Backfill existing items in a background Lambda, small batches, with backoff
  4. Verify read paths work with new format in staging
  5. Cut over reads to new format
  6. Monitor for errors for at least 24 hours
  7. Delete old-format items only after you’re confident

The most common mistake is skipping step 6 and deleting old items immediately after the backfill. Give yourself a window to catch anything you missed.


Schema migrations are the gap in DynamoDB tooling that I find most frustrating - and it’s one of the things I’m building singletable.dev to help with. Seeing two schema versions side by side and understanding what migration is required to move between them is a feature I want to exist. If that sounds useful, join the list.