When DynamoDB Single-Table Design Is the Wrong Choice

Last week I published a SaaS multi-tenant schema pattern on r/aws. It hit #1 for the day. But the comments were more interesting than the upvotes.

Three separate developers described single-table design as a disaster at their companies. “God awful idea if complicated.” “People at my work have been doing this and it’s a disaster.” “Over complicated trash.”

These aren’t trolls. These are engineers who tried single-table design and got burned. And they’re right - in their situations, they probably should never have used it.

So here’s the honest version: when NOT to use DynamoDB single-table design, from someone who uses it in production and still thinks it’s the right call for the right use case.

Single-table design is a tool, not a religion

The DynamoDB community has a tendency to present single-table design as the One True Way. Rick Houlihan’s re:Invent talks are legendary. Alex DeBrie’s book is excellent. But somewhere along the way, “single-table design works well for high-scale serverless applications with well-defined access patterns” became “always put everything in one table.”

That’s wrong, and it causes real damage.

Single-table design optimizes for one thing: minimizing round trips to DynamoDB at the cost of schema flexibility. That tradeoff is worth it under specific conditions. When those conditions aren’t met, you pay the cost without getting the benefit.

Here’s when those conditions aren’t met.

When single-table design is the wrong choice

1. Your access patterns aren’t stable yet

Single-table design requires you to know your access patterns upfront and bake them into your key structure. This is a front-loaded investment that pays off when those patterns don’t change.

Early-stage products don’t have stable access patterns. You’re still figuring out what the product is. Every pivot means your key structure is wrong. In a relational database, you add a column and write a migration. In a single-table DynamoDB schema, you might need to backfill millions of records with a new key format.
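To make the backfill cost concrete, here is a minimal sketch with hypothetical entity and attribute names - the point is that the access pattern is baked into the key itself:

```typescript
// Hypothetical key builders for an early-stage product. The access
// pattern "list a user's orders by date" is encoded directly in PK/SK.
type ItemKeys = { PK: string; SK: string };

function orderKeysV1(userId: string, orderId: string, createdAt: string): ItemKeys {
  return { PK: `USER#${userId}`, SK: `ORDER#${createdAt}#${orderId}` };
}

// After a pivot, orders must be listed per store instead of per user.
// The partition key itself changes, so every existing item must be
// rewritten with new keys (a backfill) -- there is no ALTER TABLE.
function orderKeysV2(storeId: string, orderId: string, createdAt: string): ItemKeys {
  return { PK: `STORE#${storeId}`, SK: `ORDER#${createdAt}#${orderId}` };
}
```

In a relational schema the same pivot is a new column and an index; here it is a rewrite of every order item.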

If you’re pre-product-market-fit, use a relational database or multi-table DynamoDB. You can migrate to single-table design later when you actually know what you’re building.

2. Your team doesn’t have a DynamoDB champion

Single-table design has a steep learning curve and requires ongoing discipline. Someone on your team needs to truly understand it - not “I read the blog post” understand it, but “I can explain in a PR review why we’re using this key structure for this entity type” understand it.

Without that person: new engineers add entities without understanding the existing structure, access patterns get added that require table scans, and nobody can confidently review schema changes.

I’ve seen teams where the single-table design worked perfectly until the one engineer who built it left. The team that inherited it couldn’t maintain it and migrated away at enormous cost.

If nobody on your team can own the DynamoDB schema with confidence, use something they can all reason about.

3. You have serious analytical or reporting requirements

DynamoDB is an OLTP database. It’s optimized for known-pattern, low-latency access to individual records or small sets of records. It is genuinely bad at ad-hoc queries, aggregations, and reporting.

Single-table design makes this worse. When all your entity types share a table with generic key names like GSI1PK and GSI1SK, writing even simple reports becomes painful. There’s no “select all orders where total > $100” - you need a GSI designed specifically for that query, with DynamoDB-specific query logic.
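Here is a sketch of what serving that one report query actually takes - index name, key layout, and cent amounts are all hypothetical:

```typescript
// Hypothetical GSI overload serving exactly one query: "orders with
// total above a threshold". GSI1PK puts all orders in one partition;
// GSI1SK is the zero-padded total in cents so strings sort numerically.
function orderGsi1Keys(totalCents: number): { GSI1PK: string; GSI1SK: string } {
  return {
    GSI1PK: "ORDER",
    GSI1SK: `TOTAL#${String(totalCents).padStart(12, "0")}`,
  };
}

// The "report" is then a DynamoDB-specific Query against that GSI --
// nothing like SQL's `WHERE total > 100`:
const queryOrdersOver100 = {
  IndexName: "GSI1",
  KeyConditionExpression: "GSI1PK = :pk AND GSI1SK > :min",
  ExpressionAttributeValues: {
    ":pk": "ORDER",
    ":min": `TOTAL#${String(10_000).padStart(12, "0")}`, // $100.00 in cents
  },
};
```

Note the cost: a GSI designed for one query, plus a single-partition index that becomes a hot-partition risk at scale - all for something a SQL database does for free.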

If you have significant reporting requirements, pair DynamoDB with a proper analytical store - Redshift, Athena over S3, or even Postgres for read replicas. DynamoDB for OLTP, something else for analytics.

4. You have fewer than 6 access patterns

The efficiency of single-table design comes from co-locating related data to serve multiple access patterns from a single query. If you only have a handful of access patterns, the overhead isn’t justified.

Say you’re building an internal tool with four entity types and five access patterns. You could design an elegant single-table schema - or you could use four DynamoDB tables with straightforward key structures that any engineer can understand in five minutes.
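For contrast, the multi-table version of a tool like that can be sketched as plain key schemas - the table and attribute names here are hypothetical:

```typescript
// Hypothetical multi-table layout for a small internal tool: one table
// per entity, self-describing keys, no prefixes or index overloading.
const tables = {
  Users: { partitionKey: "userId" },
  Projects: { partitionKey: "projectId" },
  Tasks: { partitionKey: "projectId", sortKey: "taskId" }, // "list tasks in a project"
  Comments: { partitionKey: "taskId", sortKey: "createdAt" }, // "list comments on a task"
} as const;

// Each access pattern reads straight off one table -- any engineer can
// see which table serves which query without learning a key scheme.
```
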

Simple use cases deserve simple solutions. Multi-table DynamoDB isn’t a failure. It’s often exactly right.

5. Your team is coming from relational databases and won’t invest in learning

This one is uncomfortable but true. Single-table design requires a genuine mindset shift, not just learning new syntax. If your team is going to mentally map DynamoDB onto a relational model - fighting the urge to add columns for every attribute, reaching for SQL-style joins - you’ll get the worst of both worlds.

DynamoDB rewards teams who embrace its constraints. It punishes teams who resist them. If there’s no organizational commitment to learning the model properly, don’t use single-table design.

6. Different teams or services own different entity types

Single-table design in a microservices architecture creates organizational coupling disguised as a technical decision. If the Orders service and the Users service share a table, they now have to coordinate on GSI allocation, key naming conventions, and every schema change. A developer on the Payments team needs to understand the full table design before adding a new entity - and one wrong GSI choice affects everyone.

The problem isn’t technical, it’s organizational. Separate tables give each service ownership of its own schema. Changes don’t require cross-team coordination. IAM roles can be scoped to individual tables without complex condition policies.
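The IAM difference is easy to see side by side. A sketch, with hypothetical account IDs, table names, and key prefixes - `dynamodb:LeadingKeys` is the real condition key used for partition-key scoping:

```typescript
// Shared table: scoping the Orders service to "only order items" needs
// a leading-key condition on every statement it touches.
const sharedTablePolicy = {
  Effect: "Allow",
  Action: ["dynamodb:Query", "dynamodb:GetItem"],
  Resource: "arn:aws:dynamodb:us-east-1:123456789012:table/app",
  Condition: {
    "ForAllValues:StringLike": {
      "dynamodb:LeadingKeys": ["ORDER#*"], // hypothetical key prefix
    },
  },
};

// Table per service: plain resource scoping is enough.
const perTablePolicy = {
  Effect: "Allow",
  Action: ["dynamodb:Query", "dynamodb:GetItem"],
  Resource: "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
};
```
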

If multiple teams or bounded contexts share the same DynamoDB deployment, give each service its own table. The operational independence is worth more than the co-location benefit.

7. You need granular DynamoDB Streams processing

DynamoDB Streams are per-table. If you want a Lambda that reacts only to Order status changes, the stream still carries every change to every entity type. Lambda event source filtering can drop some records before your function runs, but the filter patterns themselves now encode knowledge of the shared schema - and anything they can’t express, you filter in code.

This isn’t a dealbreaker for simple patterns, but it compounds. Your stream processor becomes a router that has to understand the full schema. Add a new entity type, update the stream handler. That handler now has implicit coupling to everything in the table.
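The routing a shared-table stream handler ends up doing looks roughly like this - record shape simplified from the real DynamoDB Streams event, key prefixes hypothetical:

```typescript
// Minimal sketch of a shared-table stream processor acting as a router.
type StreamRecord = {
  eventName: "INSERT" | "MODIFY" | "REMOVE";
  keys: { PK: string; SK: string };
};

// One handler sees every entity type and must dispatch on key prefixes
// -- implicit coupling to the whole table's schema.
function route(record: StreamRecord): string {
  if (record.keys.SK.startsWith("ORDER#")) return "order-workflow";
  if (record.keys.SK.startsWith("USER#")) return "user-workflow";
  return "ignored"; // every new entity type means touching this function
}
```
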

If you have multiple distinct event-driven workflows triggered by different entity types, separate tables give you clean stream isolation.

What actually went wrong in “disaster” stories

When I read comments from engineers who had bad experiences, a pattern emerges. It’s almost never “single-table design was inherently wrong for our use case.” It’s usually one of these:

They started without defining access patterns first. This is the most common mistake. Single-table design is access-pattern-driven - you design the key structure to serve your access patterns, not the other way around. Teams that jumped into key design before listing every access pattern end up with a schema that can’t serve half their queries without table scans.
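What "access patterns first" looks like in practice is an inventory written down before any key design - entity names and patterns here are hypothetical:

```typescript
// Hypothetical access-pattern inventory, maintained before and during
// key design. Every pattern maps to the index and keys that serve it.
const accessPatterns = [
  { name: "Get user by id", index: "main", pk: "USER#<userId>", sk: "PROFILE" },
  { name: "List user's orders, newest first", index: "main", pk: "USER#<userId>", sk: "begins_with ORDER#" },
  { name: "Get order by id", index: "GSI1", pk: "ORDER#<orderId>", sk: "ORDER#<orderId>" },
] as const;

// The sanity check the "disaster" teams skipped: is this query actually
// on the list, or are we about to reach for a Scan?
function canServe(patternName: string): boolean {
  return accessPatterns.some((p) => p.name === patternName);
}
```

If `canServe` comes back false, the answer is to redesign keys (or add a deliberate GSI) - not to Scan and filter.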

They mixed single-table with relational thinking. When you start adding GSIs because “we need to filter by this field,” you’ve lost the plot. GSIs in single-table design serve specific, pre-defined access patterns - not ad-hoc filtering.

They put the wrong data in the table. Not every entity belongs in your single DynamoDB table. Your audit log, your reporting data, your config tables - these might belong somewhere else.

A decision framework

Before committing to single-table DynamoDB design:

| Condition | Single-table works | Use something else |
|---|---|---|
| Access patterns | Well-defined and stable | Still evolving |
| Team DynamoDB expertise | Has a champion | Learning from scratch |
| Query patterns | Known upfront | Ad-hoc / analytical |
| Scale | High throughput required | Moderate traffic |
| Access pattern count | 6+ patterns benefiting from co-location | Fewer than 6 |
| Reporting needs | Minimal, or a separate analytical store | Significant |
| Team/service ownership | Single team or monolith | Multiple teams or services |
| Streams processing | None, or a unified stream handler | Per-entity event-driven workflows |

If you’re checking more boxes on the right than the left, single-table design isn’t your answer right now.

What I actually use

For rasika.life - a cultural events platform built on SST with tRPC - single-table design is the right call. High-throughput reads, access patterns defined before I wrote a line of code, and I’m the DynamoDB champion on my own project.

If I were building a prototype with uncertain requirements and a team of engineers who haven’t used DynamoDB before, I’d use Postgres and not think twice.

The goal is shipping great software, not proving you can use the most sophisticated design pattern.


Want to see what single-table design looks like when it IS the right choice? The SaaS Multi-Tenant pattern covers a real production schema with 10 access patterns and a GSI overloading strategy. The E-Commerce Orders pattern shows a simpler case - 3 entity types, 1 GSI, 8 access patterns. And I’m building singletable.dev - a visual schema designer to make these tradeoffs easier to see before you commit to a key structure.