Single-Table vs Table-Per-Tenant in DynamoDB: The Real Tradeoffs

When I posted the SaaS Multi-Tenant schema pattern, the first question in the thread was the most obvious one: why not just give each tenant their own table?

It’s a reasonable instinct. Separate tables mean no cross-tenant data in one place, simpler queries, and a mental model that matches how most developers think about isolation. If Tenant A is causing problems, their table is clearly identifiable.

So why doesn’t everyone do it? Here’s the actual answer - not “single-table is always right” but a real cost comparison and a decision framework.

The case for table-per-tenant

Let’s steelman it properly.

Isolation is obvious. Each tenant’s data physically lives in a separate table. Querying the wrong tenant’s data requires constructing the wrong table name, which is harder to do accidentally than constructing the wrong partition key. From a blast-radius perspective, a bug that exposes data touches one tenant’s table, not everyone’s.

IAM is clean. You can attach per-tenant IAM policies at the table level, not at the row level. dynamodb:* on arn:aws:dynamodb:...:table/tenant-abc-* is a straightforward, auditable policy.
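As a sketch, that table-scoped policy can be generated per tenant. The tenantTablePolicy helper and the tenant-&lt;id&gt;-* table naming convention are illustrative assumptions, not part of the original pattern:

```typescript
// A minimal sketch of a per-tenant, table-scoped IAM policy, built as a
// plain object. The tenant-<id>-* table naming convention is an assumption.
function tenantTablePolicy(tenantId: string, region: string, accountId: string) {
  return {
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Action: "dynamodb:*",
        // Scopes access to this tenant's tables only: a bug would have to
        // construct the wrong table ARN, not just the wrong partition key.
        Resource: `arn:aws:dynamodb:${region}:${accountId}:table/tenant-${tenantId}-*`,
      },
    ],
  };
}

const policy = tenantTablePolicy("abc", "us-east-1", "123456789012");
console.log(policy.Statement[0].Resource);
// arn:aws:dynamodb:us-east-1:123456789012:table/tenant-abc-*
```

The equivalent row-level control in single-table design is the dynamodb:LeadingKeys condition key, which is harder to audit at a glance.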

CloudWatch is clean. Metrics per table = metrics per tenant. Seeing which tenant is hammering your capacity is trivial when each has their own table with its own CloudWatch metrics.

Schema flexibility per tenant. Large enterprise tenants sometimes need custom attributes or different access patterns. Table-per-tenant lets you evolve schemas per tenant without affecting anyone else.

These are real advantages. They’re also largely available through other means, and they come with costs that compound over time.

The case against: the costs that bite you later

AWS limits

DynamoDB has a default limit of 2,500 tables per account per region. You can request increases, but the limit exists and AWS doesn’t grant unlimited increases freely.

For most SaaS applications, 2,500 tenants is success, not a distant edge case. If you’re aiming at SMB customers, you could hit this in your first year. The mitigation - spreading tenants across multiple AWS accounts - is a significant operational burden.

Write cost at scale

Each DynamoDB table carries baseline cost and overhead: provisioned tables bill for capacity even at zero traffic, and on-demand tables, while they have no capacity minimum, still add per-table operational overhead. More critically, each table has its own partition space. A single-table design lets DynamoDB distribute load across partitions naturally. Table-per-tenant means a tenant who spikes doesn’t affect others - but it also means you can’t pool capacity.

In practice: a customer on your free plan with 10 requests/day gets a full table allocation. In single-table design they’re just a few items in your existing partition space. At 1,000 free-tier tenants, this adds up to meaningful waste.

Cross-tenant queries become impossible

Some queries are legitimately cross-tenant: admin dashboards, usage reporting, billing aggregation, churn analysis. In single-table design, these are GSI queries or Scan operations on one table. In table-per-tenant, they require iterating over every tenant’s table and aggregating results in application code - or maintaining a separate aggregation table that you update manually.

The ops and billing queries you need to run your business become engineering problems.
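In a single-table design, the aggregation half of those queries is plain application code over one GSI query’s results. A hedged sketch - the item shape and amountCents attribute are assumptions for illustration:

```typescript
// Sketch: aggregating billing per tenant from items returned by a single
// GSI query in a single-table design. The item shape is hypothetical.
type Item = { pk: string; sk: string; amountCents: number };

function billingByTenant(items: Item[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const item of items) {
    // PK convention from the pattern: TENANT#<tenantId>
    const tenantId = item.pk.replace(/^TENANT#/, "");
    totals.set(tenantId, (totals.get(tenantId) ?? 0) + item.amountCents);
  }
  return totals;
}

const rows = billingByTenant([
  { pk: "TENANT#a", sk: "ORDER#1", amountCents: 500 },
  { pk: "TENANT#a", sk: "ORDER#2", amountCents: 250 },
  { pk: "TENANT#b", sk: "ORDER#9", amountCents: 100 },
]);
console.log(rows.get("a")); // 750
```

In table-per-tenant, the same report means one Query per tenant table, fanned out and merged before this loop can even start.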

Operational overhead multiplies with tenant count

Consider what happens when you need to:

  • Add a new GSI to support a new access pattern
  • Update a Lambda’s IAM permissions
  • Enable point-in-time recovery
  • Change TTL settings
  • Run a data migration

In single-table: one operation.

In table-per-tenant: one operation per tenant. At 500 tenants, you’re scripting everything or you’re doing it wrong. The scripts are possible, but they’re a category of work that simply doesn’t exist in single-table design. The schema migrations guide covers what a single GSI addition or key structure change looks like - imagine running each of those steps 500 times in parallel.
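To make the multiplication concrete, here’s a sketch of one such fan-out - enabling point-in-time recovery on every tenant table. The DynamoAdmin interface and the tenant- name prefix are assumptions; in a real script, listTableNames would wrap the paginated ListTables API and enablePitr would wrap UpdateContinuousBackups from the AWS SDK:

```typescript
// Sketch of a per-tenant fan-out. The client interface mirrors a small
// slice of the DynamoDB admin API and is injected so the loop is testable;
// the tenant- table prefix is an assumed naming convention.
interface DynamoAdmin {
  listTableNames(): Promise<string[]>;
  enablePitr(tableName: string): Promise<void>;
}

async function enablePitrForAllTenants(client: DynamoAdmin): Promise<string[]> {
  const tenantTables = (await client.listTableNames())
    .filter((name) => name.startsWith("tenant-"));
  // One API call per tenant table: the work that would be a single call
  // in a single-table design.
  for (const table of tenantTables) {
    await client.enablePitr(table);
  }
  return tenantTables;
}
```

Every operation in the list above needs its own version of this loop, plus retry, throttling, and progress tracking once the tenant count is in the hundreds.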

DynamoDB Streams processing

Streams are per-table. If you process events (audit logs, search indexing, analytics pipelines) via DynamoDB Streams, you need a separate stream consumer per tenant table. Lambda event source mappings, monitoring, error handling - all of it multiplies by tenant count.

The actual cost comparison

Here’s a concrete scenario: 200 tenants, median 500 items each, all on-demand pricing.

Single-table: One table, 100K items total. You pay for the reads and writes you actually do, plus a trivial storage cost (roughly $0.02/month for 100K items at 1KB each, at DynamoDB’s ~$0.25/GB-month storage rate).

Table-per-tenant: 200 tables. On-demand has no provisioned minimum, but you’re managing 200 stream consumers, 200 sets of per-table CloudWatch metrics, 200 backup configurations. Each migration script runs 200 times. Each IAM change touches 200 resources.

The direct cost difference is small at this scale. The operational cost difference is meaningful from day one and grows with every tenant.
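The storage side of that comparison is easy to sanity-check. A back-of-envelope sketch, assuming DynamoDB’s standard ~$0.25 per GB-month rate (rates vary by region and table class):

```typescript
// Back-of-envelope monthly storage cost, assuming ~$0.25 per GB-month.
function monthlyStorageUsd(
  items: number,
  avgItemKb: number,
  usdPerGbMonth = 0.25
): number {
  const gb = (items * avgItemKb) / (1024 * 1024); // KB -> GB
  return gb * usdPerGbMonth;
}

// 200 tenants x 500 items = 100K items at ~1KB each:
console.log(monthlyStorageUsd(100_000, 1).toFixed(3)); // "0.024"
```

Storage is noise either way; the real line items are the 200x multiplier on streams, backups, and migrations.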

When table-per-tenant is actually right

Despite all of the above, there are scenarios where it’s the right call.

When you have very few, very large tenants. If you’re building vertical SaaS for enterprise customers - 10 tenants, each paying $50K/year - table-per-tenant isolation is worth the operational overhead. Each tenant is large enough to care deeply about data isolation guarantees, and 10 tables is trivial to manage.

When regulatory requirements mandate physical data separation. Some compliance frameworks (particularly in healthcare and government) treat logical isolation (partition key scoping) and physical isolation (separate tables) differently. If your customer’s compliance team requires physical separation, that requirement overrides architecture preferences.

When tenants need meaningfully different schemas. If enterprise tenants genuinely need different attributes, indexes, or even different data models - not just more data, but different structure - table-per-tenant gives you that flexibility. This is rare in practice; most “custom” requirements can be handled with optional attributes and sparse indexes.

When your tenant count is small and bounded. Building an internal SaaS tool for your company’s 5 business units? Table-per-tenant is simple and maintainable. Building a product for an unbounded market? Single-table scales, table-per-tenant creates a ceiling.

The subcategory approach (best of both)

u/finitepie raised a variant worth discussing: instead of pure single-table or pure table-per-tenant, use key subcategories within a single-table design.

Instead of:

PK: TENANT#<tenantId>

Use:

PK: TENANT#<tenantId>#USER
PK: TENANT#<tenantId>#ORDER

This gives each entity type a logical namespace within a tenant’s partition, making it easier to reason about which data belongs where and enabling more granular dynamodb:LeadingKeys policies. It’s not physical isolation, but it’s tighter logical isolation than generic partition key prefixes.

The tradeoff: it breaks “fetch all tenant data with one Query” - you now need one Query per entity namespace. For most use cases that’s fine; you rarely want to fetch all entity types simultaneously.
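The convention reduces to a pair of small key builders. A sketch - the entity names are examples, not a fixed schema:

```typescript
// Sketch of the subcategory key convention. Entity names are examples.
type EntityType = "USER" | "ORDER";

function tenantPk(tenantId: string, entity: EntityType): string {
  return `TENANT#${tenantId}#${entity}`;
}

// "Fetch all tenant data" now means one Query per entity namespace:
function tenantPks(tenantId: string, entities: EntityType[]): string[] {
  return entities.map((e) => tenantPk(tenantId, e));
}

console.log(tenantPk("abc", "USER"));             // TENANT#abc#USER
console.log(tenantPks("abc", ["USER", "ORDER"])); // one PK per Query
```

Centralizing key construction like this also keeps the dynamodb:LeadingKeys prefixes consistent, since every query path goes through the same builders.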

Decision framework

Condition                                  Single-Table    Table-Per-Tenant
Tenant count > 100                         ✓
Tenant count < 20                          Either          Either
Cross-tenant admin queries needed          ✓
Regulatory physical isolation required                     ✓
Per-tenant schema differences                              ✓
Streams processing per entity type         ✓
Strong per-tenant CloudWatch metrics                       ✓

The pattern I use for rasika.life and recommend for most SaaS applications: single-table with TENANT#<tenantId> partition keys, application-layer scoping enforced in tRPC middleware, and a GSI for the handful of cross-tenant queries the admin dashboard needs. Table-per-tenant gets revisited only if a compliance requirement makes it non-negotiable. If neither architecture fits cleanly, the broader question of when single-table design is the wrong choice is worth revisiting too.


The SaaS Multi-Tenant pattern shows the full single-table design with 10 access patterns and GSI overloading. The tenant isolation post covers the security enforcement side - application-layer vs IAM LeadingKeys. I’m building singletable.dev to make multi-tenant schema decisions visual before they’re in production.