Django’s @atomic Decorator Didn’t Prevent My Race Condition

The insane reality where wrapping everything in @transaction.atomic still allows two users to book the same seat, and “all or nothing” doesn’t mean “one at a time.”

Look at this code:

@transaction.atomic
def book_seat(user, seat_id):
    seat = Seat.objects.get(id=seat_id)
    
    if seat.is_available:
        seat.is_available = False
        seat.booked_by = user
        seat.save()
        return True
    return False

This is wrapped in @transaction.atomic. The docs say it’s “all or nothing.” I thought that meant it was safe. Then this happened:

  • Two users clicked “Book Seat” at the exact same millisecond.
  • Both got confirmation emails. Both got charged. Both showed up to the concert.
  • One seat. Two tickets. Zero errors. Total chaos.

The transaction was atomic. It rolled back perfectly when things failed. But it didn’t prevent two transactions from running simultaneously and both succeeding.

I thought @atomic meant "thread-safe."

Django’s docs never said it wasn’t.

The Code That Looked Perfect

Here’s what I wrote. Straight from Django’s transaction documentation:

from django.db import transaction

@transaction.atomic
def purchase_limited_item(user, item_id):
    """Buy a limited edition item. Only 100 available."""
    
    item = Product.objects.get(id=item_id)
    
    # Check if still available
    if item.stock > 0:
        # Create order
        order = Order.objects.create(
            user=user,
            product=item,
            price=item.price
        )
        
        # Decrement stock
        item.stock -= 1
        item.save()
        
        # Charge payment
        charge_user(user, item.price)
        
        return order
    
    raise OutOfStock("Sorry, sold out!")

What I THOUGHT @atomic guaranteed:

  • ✅ All operations succeed or all fail (atomicity)
  • ✅ No partial updates (consistency)
  • ✅ No other transaction sees intermediate state (isolation)
  • ✅ Changes are permanent after commit (durability)
  • ✅ Thread-safe, prevents race conditions ← ❌ WRONG

What @atomic ACTUALLY guarantees:

  • ✅ All operations succeed or all fail
  • ✅ No partial updates
  • ✅ No other transaction sees intermediate state
  • ✅ Changes are permanent after commit
  • ❌ Does NOT prevent concurrent access

Here’s what happened in production:

Time    Transaction A                   Transaction B
----    -------------                   -------------
0ms     BEGIN                           BEGIN
1ms     SELECT stock FROM product (100)
2ms                                     SELECT stock FROM product (100)
3ms     stock = 100, check passes ✅
4ms                                     stock = 100, check passes ✅
5ms     UPDATE stock = 99
6ms                                     UPDATE stock = 99  (NOT 98!)
7ms     COMMIT                          
8ms                                     COMMIT

Both transactions read stock = 100. Both decreased it to 99. Both succeeded.

We sold 2 items but only decremented stock by 1.

After 100 “successful” sales, stock showed 50. We oversold by 50 units. We didn’t have the inventory. We couldn’t fulfill orders.

Black Friday turned into refund Monday.

What the Hell is @atomic Actually Doing?

Let’s break down what @transaction.atomic DOES and DOESN’T do:

What @atomic DOES

@transaction.atomic
def transfer_money(from_account, to_account, amount):
    from_account.balance -= amount
    from_account.save()
    
    # Simulate error
    if amount > 1000:
        raise ValueError("Amount too large")
    
    to_account.balance += amount
    to_account.save()

# If this fails, BOTH saves are rolled back
# from_account keeps its money
# Database stays consistent ✅

This is atomicity — all or nothing. Beautiful.
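That guarantee isn’t Django magic; it comes from the database. Here is the same rollback behavior sketched with nothing but Python’s stdlib sqlite3 (table name and amounts are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('alice', 500), ('bob', 0)")
conn.commit()

def transfer(amount):
    try:
        # "with conn" opens a transaction: commit on success, rollback on error
        with conn:
            conn.execute(
                "UPDATE account SET balance = balance - ? WHERE name = 'alice'",
                (amount,),
            )
            if amount > 1000:
                raise ValueError("Amount too large")
            conn.execute(
                "UPDATE account SET balance = balance + ? WHERE name = 'bob'",
                (amount,),
            )
    except ValueError:
        pass  # the first UPDATE was rolled back along with everything else

transfer(2000)  # fails partway through: nothing changes
balances = dict(conn.execute("SELECT name, balance FROM account"))
print(balances)  # {'alice': 500, 'bob': 0}
```

Alice keeps her money even though the first UPDATE had already run: that is atomicity doing its job.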

What @atomic DOESN’T DO

@transaction.atomic
def decrement_counter(counter_id):
    counter = Counter.objects.get(id=counter_id)
    counter.value -= 1
    counter.save()

# Thread A: Reads value=10, sets to 9
# Thread B: Reads value=10, sets to 9 (SAME TIME)
# Result: value=9 (should be 8)
# Both transactions committed successfully ❌

This is a race condition — concurrent access isn’t prevented. Terrible.
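You don’t even need threads to see the lost update. Here’s a deterministic reproduction with stdlib sqlite3: two connections to one shared in-memory database, interleaved by hand exactly the way two requests interleave in production (table name invented for illustration):

```python
import sqlite3

# Two connections to the same shared in-memory database, in autocommit mode
uri = "file:racedemo?mode=memory&cache=shared"
a = sqlite3.connect(uri, uri=True, isolation_level=None)
b = sqlite3.connect(uri, uri=True, isolation_level=None)

a.execute("CREATE TABLE counter (id INTEGER PRIMARY KEY, value INTEGER)")
a.execute("INSERT INTO counter VALUES (1, 10)")

val_a = a.execute("SELECT value FROM counter WHERE id = 1").fetchone()[0]  # reads 10
val_b = b.execute("SELECT value FROM counter WHERE id = 1").fetchone()[0]  # reads 10

a.execute("UPDATE counter SET value = ? WHERE id = 1", (val_a - 1,))  # writes 9
b.execute("UPDATE counter SET value = ? WHERE id = 1", (val_b - 1,))  # writes 9 again

final = a.execute("SELECT value FROM counter WHERE id = 1").fetchone()[0]
print(final)  # 9: one decrement was silently lost; it should be 8
```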

The Proof: Let’s Break It

Still don’t believe concurrent transactions can interfere? Let’s prove it:

import threading
import time

from django.db import transaction
from myapp.models import Counter

# Reset counter
Counter.objects.all().delete()
Counter.objects.create(id=1, value=100)

@transaction.atomic
def decrement():
    # Each thread gets its own database connection in Django
    counter = Counter.objects.get(id=1)
    current = counter.value
    # Simulate some processing time
    time.sleep(0.01)
    counter.value = current - 1
    counter.save()
    print(f"Thread {threading.current_thread().name}: {current} -> {current - 1}")

# Spawn 10 threads
threads = []
for i in range(10):
    t = threading.Thread(target=decrement, name=f"T{i}")
    threads.append(t)
    t.start()

for t in threads:
    t.join()

# Check final value
counter = Counter.objects.get(id=1)
print(f"Expected: 90, Actual: {counter.value}")
# Output: Expected: 90, Actual: 94 (or 95, or 93...)
  • Expected: 90 (100 − 10)
  • Actual: 94 (or 95, or 93, or 92…)
  • Each thread decremented. Each transaction was atomic. But the final result is WRONG.
  • Why? Because they all read the same starting value before any of them wrote.
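For contrast, here is the same ten-thread experiment with the read-modify-write serialized. A plain threading.Lock stands in for the row lock the database takes under SELECT ... FOR UPDATE; this is an in-process sketch of the idea, not how you’d fix the Django code:

```python
import threading
import time

value = 100
lock = threading.Lock()

def decrement():
    global value
    with lock:            # only one thread can be in the read-modify-write at a time
        current = value
        time.sleep(0.01)  # same simulated processing time as before
        value = current - 1

threads = [threading.Thread(target=decrement) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"Expected: 90, Actual: {value}")  # Actual: 90, every single run
```

Serialize the critical section and the lost updates disappear. That is exactly what select_for_update() buys you, except the database enforces it across processes and machines, not just threads.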

When This Destroys Production

This isn’t academic. This is the bug that makes headlines.

Scenario 1: The Concert Ticket Oversell

@transaction.atomic
def book_ticket(user, event_id, seat_number):
    # Get the seat
    seat = Seat.objects.get(event_id=event_id, number=seat_number)
    
    # Check availability
    if not seat.is_booked:
        # Book it
        seat.is_booked = True
        seat.booked_by = user
        seat.save()
        
        # Create ticket
        ticket = Ticket.objects.create(
            user=user,
            seat=seat,
            price=seat.price
        )
        
        # Charge user
        charge_payment(user, seat.price)
        send_confirmation_email(user, ticket)
        
        return ticket
    
    raise SeatTaken("This seat is already booked")

Taylor Swift concert. 50,000 people hitting refresh.

  • User A clicks "Buy" → Checks seat 42A → Available → Books it
  • User B clicks "Buy" → Checks seat 42A → Available (A hasn't committed yet) → Books it
  • Both get charged. Both get tickets. Both show up. Security nightmare.
Result:

  • 10,000 seats available
  • 12,000 tickets sold
  • 2,000 angry fans with no seats
  • Venue at capacity
  • Refund 2,000 tickets manually
  • PR disaster

Scenario 2: The Bank Account That Went Negative

@transaction.atomic
def withdraw_money(account_id, amount):
    account = Account.objects.get(id=account_id)
    
    # Check sufficient balance
    if account.balance >= amount:
        # Withdraw
        account.balance -= amount
        account.save()
        
        # Record transaction
        Transaction.objects.create(
            account=account,
            type='withdrawal',
            amount=amount
        )
        
        return True
    
    raise InsufficientFunds("Not enough money")

User has $100. Tries to withdraw $80 twice simultaneously.

  • ATM 1: Check balance → $100 → Withdraw $80 → Balance = $20
  • ATM 2: Check balance → $100 → Withdraw $80 → Balance = $20
  • Final balance: $20 (should be $0 or reject one)
  • User withdrew $160 from $100 account
Result:

  • User got $160
  • Account shows $20
  • Bank lost $60
  • Multiply by thousands of users
  • Regulatory investigation
  • Someone gets fired

Scenario 3: The Promo Code Used 1,000 Times

@transaction.atomic
def apply_promo_code(user, code):
    promo = PromoCode.objects.get(code=code)
    
    # Check if still valid
    if promo.uses_remaining > 0:
        # Apply discount
        discount = Order.objects.create(
            user=user,
            promo=promo,
            discount_amount=promo.discount
        )
        
        # Decrement uses
        promo.uses_remaining -= 1
        promo.save()
        
        return discount
    
    raise PromoExpired("This code has been fully used")

Black Friday. Promo code “SAVE50” limited to 100 uses.

  • 1000 users apply it simultaneously
  • All check → 100 uses remaining → All pass
  • All decrement → 99, 99, 99... (NOT 99, 98, 97...)
  • Final count: 99 (should be 0)
  • 900 people got discount who shouldn't have
  • You lose $45,000 in revenue

Why select_for_update() Saves You

@transaction.atomic
def book_seat_correctly(user, seat_id):
    # LOCK the row while we work with it
    seat = Seat.objects.select_for_update().get(id=seat_id)
    
    if seat.is_available:
        seat.is_available = False
        seat.booked_by = user
        seat.save()
        return True
    return False

Now watch what happens:

Time    Transaction A                   Transaction B
----    -------------                   -------------
0ms     BEGIN                           BEGIN
1ms     SELECT FOR UPDATE (LOCKS row)
2ms                                     SELECT FOR UPDATE (WAITS...)
3ms     Check available → Yes
4ms     UPDATE available=False
5ms     COMMIT (UNLOCKS row)
6ms                                     (Lock acquired)
7ms                                     Check available → No
8ms                                     Return False
9ms                                     COMMIT

Transaction B waits until A is done. No race condition. No double-booking.

The Different Database Locking Strategies

Django gives you options for locking:

Option 1: select_for_update() (Pessimistic Locking)

@transaction.atomic
def decrement_stock(product_id):
    # Lock this row until transaction ends
    product = Product.objects.select_for_update().get(id=product_id)
    product.stock -= 1
    product.save()

Pros:

  • Prevents all race conditions
  • Simple to understand
  • Database handles synchronization

Cons:

  • Slower (other transactions wait)
  • Can cause deadlocks
  • Locks the entire row

Option 2: select_for_update(nowait=True)

from django.db import DatabaseError, transaction

@transaction.atomic
def try_to_book_seat(user, seat_id):
    try:
        # Try to lock, fail immediately if locked
        seat = Seat.objects.select_for_update(nowait=True).get(id=seat_id)
        seat.book(user)
        return True
    except DatabaseError:
        # Someone else is booking it, fail fast
        return False

Pros:

  • Doesn’t wait
  • User gets immediate feedback
  • Better UX than a timeout

Cons:

  • Need to handle the exception
  • User might retry repeatedly

Option 3: select_for_update(skip_locked=True)

@transaction.atomic
def process_pending_jobs():
    # Get jobs that aren't locked by other workers
    jobs = Job.objects.filter(
        status='pending'
    ).select_for_update(skip_locked=True)[:10]
    
    for job in jobs:
        job.process()

Pros:

  • Multiple workers can process different jobs
  • No waiting
  • Great for job queues

Cons:

  • Doesn’t guarantee order
  • Can skip items temporarily

Option 4: F() Expressions (Optimistic)

from django.db.models import F

# Instead of:
product = Product.objects.get(id=product_id)
product.stock -= 1
product.save()

# Use F() to do it atomically at database level:
Product.objects.filter(id=product_id).update(
    stock=F('stock') - 1
)

Pros:

  • No locking needed
  • Very fast
  • Works at the SQL level

Cons:

  • Can’t check conditions before the update
  • Need a separate query to verify the result
  • No Python-level access to the old value
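The last two cons have a common workaround: move the condition into the WHERE clause and check how many rows the UPDATE touched. In Django, filter(...).update(...) returns that row count, so filter(id=product_id, stock__gt=0).update(stock=F('stock') - 1) returning 0 means sold out. Here is the underlying SQL pattern sketched with stdlib sqlite3 (table name invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, stock INTEGER)")
conn.execute("INSERT INTO product VALUES (1, 2)")  # only 2 in stock
conn.commit()

def purchase(product_id):
    # The availability check and the decrement are ONE statement,
    # so no other transaction can sneak in between them
    cur = conn.execute(
        "UPDATE product SET stock = stock - 1 WHERE id = ? AND stock > 0",
        (product_id,),
    )
    conn.commit()
    return cur.rowcount == 1  # True only if we actually claimed one

results = [purchase(1) for _ in range(3)]
print(results)  # [True, True, False]: the third buyer is correctly refused
```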

Option 5: Database Constraints (Let DB Enforce)

class Product(models.Model):
    stock = models.IntegerField()
    
    class Meta:
        constraints = [
            models.CheckConstraint(
                check=models.Q(stock__gte=0),
                name='stock_non_negative'
            )
        ]

# Let the database reject invalid updates
from django.db import IntegrityError, transaction

@transaction.atomic
def purchase_item(product_id):
    product = Product.objects.get(id=product_id)
    product.stock -= 1
    try:
        product.save()  # Fails if stock would be negative
        return True
    except IntegrityError:
        return False  # Out of stock

Pros:

  • Database guarantees correctness
  • Can’t accidentally bypass it
  • Works even with raw SQL

Cons:

  • Error handling needed
  • Less flexible than Python checks
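Here is that safety net sketched at the SQL level with stdlib sqlite3: a CHECK constraint makes the database itself refuse the invalid write, no matter what the application code does (table name invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE product (id INTEGER PRIMARY KEY, stock INTEGER CHECK (stock >= 0))"
)
conn.execute("INSERT INTO product VALUES (1, 1)")  # one item left
conn.commit()

def purchase(product_id):
    try:
        with conn:  # commit on success, rollback on error
            conn.execute(
                "UPDATE product SET stock = stock - 1 WHERE id = ?", (product_id,)
            )
        return True
    except sqlite3.IntegrityError:  # CHECK constraint rejected the update
        return False

print(purchase(1))  # True:  stock goes 1 -> 0
print(purchase(1))  # False: 0 -> -1 violates the constraint, rolled back
stock = conn.execute("SELECT stock FROM product WHERE id = 1").fetchone()[0]
print(stock)  # 0
```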

The Isolation Level Trap

Here’s something Django doesn’t tell you: isolation level matters.

# PostgreSQL default: READ COMMITTED
# Can see changes from other committed transactions

@transaction.atomic
def bad_counter():
    counter = Counter.objects.get(id=1)  # Reads: 10
    # Another transaction commits, counter = 11
    counter.value -= 1  # Sets to 9 (not 10!)
    counter.save()

You can’t pass an isolation level to @transaction.atomic directly — Django doesn’t support that. You set it per connection in your database settings. On PostgreSQL with psycopg2, for example:

import psycopg2.extensions

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        # ... NAME, USER, etc.
        'OPTIONS': {
            'isolation_level': psycopg2.extensions.ISOLATION_LEVEL_SERIALIZABLE,
        },
    }
}

Different databases support different levels:

  • SQLite: effectively SERIALIZABLE only
  • PostgreSQL: READ COMMITTED (default), REPEATABLE READ, SERIALIZABLE (READ UNCOMMITTED is accepted but behaves like READ COMMITTED)
  • MySQL/InnoDB: all four, with REPEATABLE READ as the default

Django uses the connection’s default level. On PostgreSQL that’s READ COMMITTED, which allows exactly the race conditions above.

The Right Way to Think About Transactions

What @atomic IS:

  • Error handling (rollback on exception)
  • Consistency within one transaction
  • “All or nothing” for YOUR operations

What @atomic ISN’T:

  • Concurrency control
  • Race condition prevention
  • Mutual exclusion

Better mental model:

# @atomic means:
try:
    # Do database operations
    # If any fail, undo all of them
except Exception:
    # Rollback everything
    raise

# @atomic does NOT mean:
with database_lock:  # ← This doesn't exist
    # Do database operations
    # Block everyone else

The Checklist

Before you ship code with @transaction.atomic, ask:

1 - Can this run concurrently?

  • Multiple users clicking the same button?
  • Background workers processing the same data?
  • Webhooks being retried?

2 - Am I reading, then writing based on that read?

  • Check stock → decrement stock? ❌ Race condition
  • Check balance → withdraw money? ❌ Race condition
  • Check availability → book item? ❌ Race condition

3 - Do I need to prevent concurrent access?

  • Yes → use select_for_update()
  • No → @atomic alone is fine

4 - Can I use F() expressions instead?

  • Incrementing/decrementing counters? Use F()
  • Mathematical operations? Use F()
  • Need to check conditions first? Can’t use F() alone

5 - Do I have database constraints?

  • Unique constraints can prevent duplicates
  • Check constraints can prevent invalid states
  • Foreign keys can prevent orphans

The Takeaway

  • @transaction.atomic provides atomicity, NOT concurrency control
  • Race conditions happen when you read-then-write without locking
  • select_for_update() locks rows and prevents concurrent modifications
  • F() expressions do atomic updates at SQL level
  • Database constraints provide the ultimate safety net
  • Different isolation levels affect what transactions can see
  • Always test concurrent scenarios, not just single-user flows

Finally

  • This is peak Django confusion.
  • A decorator called “atomic” that doesn’t make things atomic in the way you expect.
  • A feature that prevents partial failures but not concurrent conflicts.
  • A guarantee of consistency that only applies within one transaction, not across them.
  • It’s not a bug. Databases work this way by design. But Django’s docs could be clearer.
Next time you write @transaction.atomic, ask yourself:
“Can two of these run at the same time?”
“Am I reading a value and then updating based on it?”
“Do I need select_for_update()?”

If you’re handling money, inventory, seats, or anything limited, the answer is YES.

Because @atomic prevents your transaction from being inconsistent.

But it won’t prevent two consistent transactions from conflicting with each other.

And remember: In Django, “atomic” doesn’t mean “exclusive access.”
