DynamoDB vs Postgres for E-Commerce: An Honest Comparison
A while ago I posted the DynamoDB e-commerce orders pattern on r/webdev. The post got removed by mods, but not before a thread developed - and the core objection was consistent: “Why not just use Postgres?”
It’s a fair question. Postgres is excellent. For a lot of e-commerce use cases it’s the right call. The honest answer isn’t “DynamoDB is better” - it’s “it depends on which problems you have.”
Here’s the actual comparison, from someone who has built e-commerce backends on both.
What we’re comparing
A typical e-commerce backend needs to handle data in two modes - online transaction processing (OLTP) and online analytical processing (OLAP):
- Order placement and status updates (OLTP - high throughput, low latency)
- Customer order history (OLTP - per-customer queries)
- Inventory management (OLTP + some OLAP)
- Admin dashboards and reporting (OLAP - aggregations, filters, sorting)
- Search (usually delegated to Elasticsearch/Algolia regardless of DB choice)
- Returns and refunds processing
These requirements don’t all point to the same database. Understanding which ones dominate your workload drives the right choice.
Where DynamoDB wins
High-throughput order processing
DynamoDB’s core strength is predictable, low-latency reads and writes at any scale. If you’re processing thousands of orders per minute, DynamoDB handles it without the connection pool management, query planner overhead, or vertical scaling concerns that come with Postgres.
The e-commerce orders pattern handles the core OLTP operations cleanly:
- `GetItem` by order ID: single-digit milliseconds, always
- Customer order history: one `Query` on the customer partition, sorted by ULID
- Status update: a single `UpdateItem`, propagated to the GSI automatically
None of these involve joins. None require a query planner. DynamoDB just fetches the data and returns it.
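Concretely, each of these operations maps to a fixed request shape. Here's a minimal Python sketch, assuming a generic `PK`/`SK` key schema with `ORDER#`/`CUST#` prefixes - illustrative names, not necessarily the pattern's published schema:

```python
# Request shapes for the three core OLTP operations. These build plain dicts;
# with boto3 you would pass them to table.get_item / table.query /
# table.update_item. Key names and prefixes here are assumptions.

def get_order_key(order_id: str) -> dict:
    # GetItem: one order by ID -> an exact primary key, no query planner.
    return {"PK": f"ORDER#{order_id}", "SK": f"ORDER#{order_id}"}

def order_history_query(customer_id: str, limit: int = 25) -> dict:
    # Query: all orders in one customer's partition. ULID sort keys are
    # lexicographically time-ordered, so ScanIndexForward=False returns
    # newest-first without any ORDER BY.
    return {
        "KeyConditionExpression": "PK = :pk AND begins_with(SK, :sk)",
        "ExpressionAttributeValues": {
            ":pk": f"CUST#{customer_id}",
            ":sk": "ORDER#",
        },
        "ScanIndexForward": False,
        "Limit": limit,
    }

def status_update(order_id: str, status: str) -> dict:
    # UpdateItem: a single-attribute write; any GSI keyed on the status
    # attribute is maintained by DynamoDB automatically.
    return {
        "Key": get_order_key(order_id),
        "UpdateExpression": "SET #s = :s",
        "ExpressionAttributeNames": {"#s": "status"},
        "ExpressionAttributeValues": {":s": status},
    }
```

With boto3 this becomes `table.get_item(Key=get_order_key(oid))`, `table.query(**order_history_query(cid))`, and `table.update_item(**status_update(oid, "SHIPPED"))` - three calls, zero joins.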
At Shopify’s scale (millions of orders per day), DynamoDB-style key-value access is the only model that works without heroic infrastructure effort. If you’re building toward that scale, starting with DynamoDB avoids a painful migration later.
Serverless architectures
DynamoDB is the natural database for Lambda-based backends. No connection pooling to manage - DynamoDB uses HTTP, so every Lambda invocation opens and closes cleanly. Postgres requires persistent connections, and Lambda’s ephemeral execution model doesn’t compose well with connection limits. RDS Proxy helps, but it adds cost and latency.
If your stack is SST + Lambda + API Gateway, DynamoDB is the path of least resistance. Postgres on RDS is workable but you’re fighting the architecture.
Cost at scale
DynamoDB on-demand pricing means you pay for what you use with no idle cost. A small e-commerce store processing 1,000 orders/month pays almost nothing. A store doing 1M orders/month pays proportionally - and the per-request cost compares favorably to the compute + storage + I/O cost of a comparably-scaled RDS instance.
The inflection point varies, but for most mid-sized e-commerce workloads, DynamoDB is cheaper than RDS at equivalent scale once you factor in operational overhead.
Where Postgres wins
Reporting and analytics
This is the biggest gap in DynamoDB for e-commerce. The moment your ops team asks “show me all orders over $200 that shipped in the last 30 days, grouped by product category” - you’re in trouble with DynamoDB.
That query in SQL:
```sql
SELECT
  p.category,
  COUNT(*) AS order_count,
  SUM(oi.total) AS revenue
FROM orders o
JOIN order_items oi ON o.id = oi.order_id
JOIN products p ON oi.product_id = p.id
WHERE o.shipped_at BETWEEN '2026-02-01' AND '2026-03-01'
  AND o.total > 200
GROUP BY p.category
ORDER BY revenue DESC;
```
In DynamoDB, that query is impossible without either a full table scan (expensive and slow) or pre-computing the aggregation and storing it separately (operationally complex). There’s no JOIN, no GROUP BY, no ad-hoc filtering.
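To make the pre-computation route concrete: one common approach is to maintain counter items that order writers increment atomically. A hedged sketch, assuming aggregate items keyed like `AGG#CATEGORY#<category>` / `DAY#<date>` (hypothetical names):

```python
# Pre-computed aggregation: instead of GROUP BY at read time, every order
# write also increments a per-category, per-day counter item. The ADD
# action is atomic, so concurrent writers never lose increments.
# Key layout below is an illustrative assumption.

def revenue_counter_update(category: str, day: str,
                           amount_cents: int, orders: int = 1) -> dict:
    return {
        "Key": {"PK": f"AGG#CATEGORY#{category}", "SK": f"DAY#{day}"},
        "UpdateExpression": "ADD revenue :r, order_count :c",
        "ExpressionAttributeValues": {":r": amount_cents, ":c": orders},
    }
```

This answers exactly one pre-decided question ("revenue by category by day") - which is the point: every new report shape means another counter item to design, write, and backfill.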
If reporting is a core part of your e-commerce operation - and for any business beyond a small store, it is - you need either Postgres or a separate analytical store fed from DynamoDB (Redshift, Athena over S3, or even a read replica Postgres via DynamoDB Streams).
Flexible access patterns
E-commerce schemas tend to evolve. Early on you might not know whether you’ll need “show orders by promotion code used” or “show orders that included a backordered item.” Postgres lets you add a column and write a query. DynamoDB requires a new GSI and potentially a schema migration.
If you’re early-stage and still discovering your access patterns, Postgres’s flexibility is genuinely valuable. Single-table DynamoDB rewards knowing your access patterns upfront - an investment that’s hard to make when the product is still finding its shape. (Unstable access patterns are the first failure condition in when not to use single-table design.)
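What "a new GSI plus a migration" looks like in practice: every existing item that should appear in the new index needs the index key attributes written onto it. A sketch for a hypothetical "orders by promotion code" pattern, using assumed `GSI2PK`/`GSI2SK` attribute names:

```python
# Adding a new access pattern ("orders by promo code") via GSI key
# overloading: items only appear in a GSI if they carry its key
# attributes, so existing orders must be rewritten with them - the
# backfill that Postgres's ALTER TABLE + ad-hoc query sidesteps.
# Attribute names here are illustrative assumptions.

def with_promo_index(order_item: dict) -> dict:
    promo = order_item.get("promo_code")
    if not promo:
        return order_item  # no promo -> item stays out of the sparse GSI
    return {
        **order_item,
        "GSI2PK": f"PROMO#{promo}",
        "GSI2SK": order_item["SK"],  # reuse the ULID sort key for time order
    }
```

Note the sparse-index trick: items without a promo code simply never get the attributes, so the GSI only contains the orders that matter.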
Team familiarity
This one is underrated. A team that knows SQL and Postgres can build and debug an e-commerce backend confidently. A team that doesn’t deeply understand single-table DynamoDB will write an application-layer mess: inconsistent key patterns, GSIs that don’t match access patterns, and queries that accidentally scan the table.
The DynamoDB skill gap is real and not trivial to close. If your team isn’t going to invest in learning it properly, Postgres is the right call - not because it’s better, but because it’s better for your team.
Complex transactions
DynamoDB has transactions (TransactWriteItems), but they’re limited to 100 items and have strict conditions. Complex order operations - reserving inventory, creating the order, charging the customer, updating loyalty points - often require coordinating multiple records in ways that push against DynamoDB’s transaction model.
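A sketch of what does fit the model: a two-item `TransactWriteItems` body that reserves stock and creates the order atomically, in the low-level client's typed-attribute format (table and key names are assumptions). Payment and loyalty updates typically live outside the transaction, since it can't span non-DynamoDB systems:

```python
# TransactWriteItems request body: decrement stock only if enough remains,
# and create the order record, atomically. If the ConditionExpression on
# stock fails, the whole transaction is rejected and no order is written.
# Key shapes and attribute names are illustrative assumptions.

def place_order_transaction(table: str, order_id: str,
                            customer_id: str, sku: str, qty: int) -> dict:
    return {
        "TransactItems": [
            {
                "Update": {
                    "TableName": table,
                    "Key": {"PK": {"S": f"SKU#{sku}"}, "SK": {"S": f"SKU#{sku}"}},
                    "UpdateExpression": "SET stock = stock - :q",
                    "ConditionExpression": "stock >= :q",
                    "ExpressionAttributeValues": {":q": {"N": str(qty)}},
                }
            },
            {
                "Put": {
                    "TableName": table,
                    "Item": {
                        "PK": {"S": f"ORDER#{order_id}"},
                        "SK": {"S": f"ORDER#{order_id}"},
                        "customer_id": {"S": customer_id},
                        "status": {"S": "PLACED"},
                    },
                    "ConditionExpression": "attribute_not_exists(PK)",
                }
            },
        ]
    }
```

With boto3 this is `client.transact_write_items(**place_order_transaction(...))`. The moment the workflow also needs to charge a card or call a loyalty service, you're stitching together a saga - which is where Postgres's single `BEGIN ... COMMIT` starts looking attractive.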
Postgres’s full ACID transactions handle this naturally. Serializable isolation level, savepoints, deferred constraints - the full toolkit for complex multi-step operations.
The honest scorecard
| Concern | DynamoDB | Postgres |
|---|---|---|
| Order placement throughput | ✅ | ✅ (up to ~10K/s) |
| Predictable read latency | ✅ | ⚠️ (query planner variance) |
| Customer order history | ✅ | ✅ |
| Reporting and analytics | ❌ | ✅ |
| Flexible/ad-hoc queries | ❌ | ✅ |
| Complex transactions | ⚠️ | ✅ |
| Serverless / Lambda | ✅ | ⚠️ (connection limits) |
| Schema flexibility | ❌ | ✅ |
| Cost at very high scale | ✅ | ⚠️ |
| Team learning curve | Steep | Gentle |
| Operational simplicity | ✅ | ⚠️ (connection mgmt) |
Neither wins overall. They’re different tools.
What I’d actually recommend
Use DynamoDB if:
- You’re on a serverless/Lambda stack already
- Order volume is high and growing (or you’re building toward it)
- Your core access patterns are stable: get order by ID, customer history, orders by status
- You have DynamoDB expertise on the team
- Reporting can be handled separately (stream to Redshift/Athena, or use a BI tool)
Use Postgres if:
- Your team knows SQL and doesn’t want to invest in DynamoDB
- Reporting is a first-class requirement (not an afterthought)
- You’re pre-PMF and access patterns are still evolving
- You need complex transactions across many record types
- You’re not on serverless and connection management isn’t a concern
The pragmatic hybrid (what many mature e-commerce backends do): DynamoDB for OLTP (orders, inventory, customer data), feeding into a data warehouse (Redshift, BigQuery, Athena) for analytics. DynamoDB Streams trigger a Lambda that transforms and loads records into the warehouse in near-real-time. You get DynamoDB’s throughput and cost profile for operational traffic, and SQL’s flexibility for reporting.
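The transform step of that pipeline is usually a small pure function inside the stream-triggered Lambda: flatten a DynamoDB Streams record into a warehouse row and drop everything else. A minimal sketch, assuming order items carry a numeric `total` attribute (names here are illustrative):

```python
# Streams -> warehouse transform: turn one DynamoDB Streams record into a
# flat row for the analytical store, or None if it should be skipped.
# Stream records carry low-level typed attribute values ({"S": ...},
# {"N": ...}); attribute names are assumptions about the order schema.

def stream_record_to_row(record: dict):
    if record.get("eventName") not in ("INSERT", "MODIFY"):
        return None  # deletes handled separately (or ignored)
    new_image = record["dynamodb"]["NewImage"]
    pk = new_image.get("PK", {}).get("S", "")
    if not pk.startswith("ORDER#"):
        return None  # skip non-order items sharing the single table
    return {
        "order_id": pk.removeprefix("ORDER#"),
        "status": new_image["status"]["S"],
        "total_cents": int(new_image["total"]["N"]),
        "updated_at": record["dynamodb"]["ApproximateCreationDateTime"],
    }
```

The Lambda batches these rows and loads them into Redshift/Athena-backed S3; the reporting query from earlier then runs against the warehouse, not the operational table.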
This is more infrastructure to manage, but it’s the right architecture if your traffic justifies DynamoDB and your reporting needs justify a real analytical store.
The question nobody asks
Before choosing between DynamoDB and Postgres, answer this: what queries does your schema explicitly not support?
Every database design is a set of access patterns you’ve committed to. The ones you’ve left out are the ones that will hurt you later. Write them down. If the unsupported queries are primarily analytical (reporting, dashboards), you can handle them in a separate store. If they’re operational (features that users interact with directly), you either need to add them to your DynamoDB design or reconsider the database choice.
The r/webdev thread that generated this post had a comment that stuck with me: “The backend of our e-commerce app has grown into something nobody fully understands.” That’s not a DynamoDB problem or a Postgres problem - it’s a schema problem. The tool matters less than understanding what queries you’re optimizing for.
The DynamoDB e-commerce orders pattern shows the full single-table schema with 8 access patterns and the tradeoffs made. If you’re building on DynamoDB, singletable.dev is a visual schema designer to make these access pattern decisions explicit before they’re in production.