Transaction isolation determines what data a transaction can see while other transactions are running concurrently. The choice of isolation level affects both correctness and performance. Higher isolation prevents more anomalies but reduces concurrency. Lower isolation improves performance but allows reads of uncommitted or inconsistent data.
Understanding isolation levels helps you choose appropriately for each use case. A financial transfer needs different guarantees than a dashboard query. Using maximum isolation everywhere degrades performance unnecessarily; using minimum isolation everywhere causes data anomalies.
Isolation Anomalies
Transaction isolation exists to prevent anomalies: situations where concurrent transactions interact incorrectly. Each isolation level prevents certain anomalies while allowing others.
Dirty reads occur when a transaction reads uncommitted data from another transaction. If that transaction rolls back, the first transaction has read data that never existed. Most applications consider dirty reads unacceptable.
The following example shows how a dirty read can lead to incorrect application behavior. Transaction B sees a value that Transaction A never actually committed.
-- Transaction A
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- balance is now 900 (uncommitted)
-- Transaction B (dirty read)
SELECT balance FROM accounts WHERE id = 1;
-- Sees 900, even though A hasn't committed
-- Transaction A
ROLLBACK;
-- balance is back to 1000, but B saw 900
Non-repeatable reads occur when a transaction reads the same row twice and gets different values because another transaction modified the row and committed in between. The data changed out from under the first transaction.
-- Transaction A
BEGIN;
SELECT balance FROM accounts WHERE id = 1;
-- Returns 1000
-- Transaction B
UPDATE accounts SET balance = 500 WHERE id = 1;
COMMIT;
-- Transaction A (non-repeatable read)
SELECT balance FROM accounts WHERE id = 1;
-- Now returns 500, different from first read
This can cause subtle bugs in code that makes decisions based on reading the same data multiple times.
Phantom reads occur when a transaction executes the same query twice and gets different rows because another transaction inserted or deleted matching rows. The result set changed between queries.
-- Transaction A
BEGIN;
SELECT COUNT(*) FROM orders WHERE status = 'pending';
-- Returns 5
-- Transaction B
INSERT INTO orders (status) VALUES ('pending');
COMMIT;
-- Transaction A (phantom read)
SELECT COUNT(*) FROM orders WHERE status = 'pending';
-- Now returns 6, new row appeared
Phantoms are particularly problematic for aggregate queries and reporting where you expect consistent totals.
Standard Isolation Levels
The SQL standard defines four isolation levels with increasing guarantees.
Read Uncommitted is the lowest level, allowing dirty reads. A transaction sees uncommitted changes from other transactions. This level is rarely used because dirty reads cause obvious correctness problems.
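If you do need to observe this level, the syntax below is a minimal sketch (table and values assumed). Note that PostgreSQL accepts the syntax but does not actually implement dirty reads.

```sql
-- MySQL: allow dirty reads for the next transaction (rarely advisable)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN;
SELECT balance FROM accounts WHERE id = 1;  -- may see uncommitted data
COMMIT;

-- PostgreSQL accepts READ UNCOMMITTED but silently behaves as READ COMMITTED
```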
Read Committed prevents dirty reads. Transactions only see committed data. However, non-repeatable reads are possible; data can change between reads within a transaction. This is the default in many databases including PostgreSQL.
Repeatable Read prevents dirty reads and non-repeatable reads. Once a transaction reads a row, subsequent reads return the same data even if other transactions modify it. Phantom reads may still occur.
Serializable is the highest level, preventing all anomalies. Transactions execute as if they ran one at a time, in some serial order. This provides complete isolation but significantly reduces concurrency.
In Laravel, you can set isolation levels explicitly when needed. The default typically works for most operations, but critical sections may require higher isolation.
// Setting isolation level in Laravel
DB::transaction(function () {
// Using default isolation (usually READ COMMITTED)
$balance = Account::find(1)->balance;
// ...
});
// Explicit isolation level (PostgreSQL: SET TRANSACTION must come
// before the first query of the transaction)
DB::beginTransaction();
DB::statement('SET TRANSACTION ISOLATION LEVEL SERIALIZABLE');
// Queries now run with serializable isolation
DB::commit();
// On MySQL, set the level before starting the transaction;
// it applies only to the next transaction in the session
DB::statement('SET TRANSACTION ISOLATION LEVEL REPEATABLE READ');
DB::beginTransaction();
// ...
DB::commit();
// With specific connection and automatic retries
DB::connection('mysql')->transaction(function () {
    // ...
}, attempts: 3);
The attempts parameter makes Laravel retry the closure when the transaction fails due to a deadlock or serialization conflict.
Implementation Variations
Different databases implement isolation levels differently. The standard defines minimum guarantees; implementations may be stronger.
PostgreSQL's Repeatable Read actually prevents phantoms, providing what some databases call snapshot isolation. Its Serializable level uses serializable snapshot isolation (SSI), which detects conflicts and aborts transactions that would violate serializability.
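A minimal sketch of how SSI surfaces a conflict (table and values assumed): two serializable transactions each read an overlapping set of rows and then write, creating a dangerous read/write dependency.

```sql
-- Session 1
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT SUM(balance) FROM accounts;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- Session 2 (concurrently)
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT SUM(balance) FROM accounts;
UPDATE accounts SET balance = balance - 100 WHERE id = 2;

-- When both commit, PostgreSQL aborts one with SQLSTATE 40001:
-- ERROR: could not serialize access due to read/write dependencies
--        among transactions
```

Applications running at this level must be prepared to retry aborted transactions.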
MySQL's InnoDB implements Repeatable Read with gap locking, which prevents phantoms for certain operations. Its behavior differs from PostgreSQL's snapshot-based approach.
These examples show how the same isolation level can behave differently across databases.
-- PostgreSQL's MVCC approach
-- Transaction A sees a snapshot taken at its first query
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM accounts WHERE balance > 500;
-- Returns rows as of that snapshot, regardless of concurrent changes
-- MySQL's locking approach
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM accounts WHERE balance > 500 FOR UPDATE;
-- Locks matching rows plus gaps, preventing inserts by other transactions
PostgreSQL's snapshot approach never blocks readers; MySQL's locking approach provides stronger guarantees for some use cases but causes contention.
Understanding your specific database's implementation is crucial. Reading the documentation for PostgreSQL versus MySQL versus SQL Server reveals significant behavioral differences.
Choosing Isolation Levels
Choose based on your consistency requirements and performance needs.
Read Committed works for most read operations where seeing committed data is sufficient. Dashboard queries, reports, and analytics typically don't need stronger isolation. The performance is good because transactions don't block each other for reads.
Repeatable Read suits operations that read data multiple times and need consistency within the transaction. Complex calculations, multi-step validations, and reports requiring consistent snapshots benefit from this level.
Serializable is necessary when you need correctness guarantees that lower levels don't provide. Financial transactions, inventory management, and any operation where concurrent modifications could cause business logic violations may need serializable isolation.
This transfer example shows why isolation matters for financial operations. Without proper isolation or locking, two concurrent withdrawals could both succeed even if the balance is insufficient.
class TransferService
{
    public function transfer(int $fromId, int $toId, Money $amount): void
    {
        // Row locks prevent race conditions in the balance check
        DB::transaction(function () use ($fromId, $toId, $amount) {
            $from = Account::lockForUpdate()->find($fromId);
            $to = Account::lockForUpdate()->find($toId);

            if ($from->balance < $amount->cents()) {
                throw new InsufficientFundsException();
            }

            $from->decrement('balance', $amount->cents());
            $to->increment('balance', $amount->cents());
        });
    }
}
The lockForUpdate() method acquires exclusive locks on both accounts, preventing concurrent modifications until the transaction completes.
Locking Strategies
Beyond isolation levels, explicit locking provides additional control. SELECT FOR UPDATE locks selected rows, preventing other transactions from modifying them.
The following examples demonstrate different locking approaches. Choose pessimistic locking when conflicts are common, optimistic locking when they're rare.
// Pessimistic locking - lock rows when reading
$account = Account::lockForUpdate()->find($id);
// Other transactions wait to modify this row

// Shared lock - allow reads, block writes
$account = Account::sharedLock()->find($id);
// Others can read but not write

// Optimistic locking - check version on update
$account = Account::find($id);
// Later...
$affected = Account::where('id', $id)
    ->where('version', $account->version)
    ->update([
        'balance' => $newBalance,
        'version' => DB::raw('version + 1'),
    ]);

if ($affected === 0) {
    throw new ConcurrentModificationException();
}
Optimistic locking is useful when reads far outnumber writes and conflicts are rare. You read without locking, then check at update time whether anyone else modified the record.
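The version check requires a column to compare against. A hypothetical Laravel migration adding one might look like this (column name and default are assumptions, not a framework convention):

```php
// Hypothetical migration: add a version column for optimistic locking
Schema::table('accounts', function (Blueprint $table) {
    $table->unsignedInteger('version')->default(0);
});
```

Every code path that updates the row must then include the version check and increment, or the protection silently disappears.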
Pessimistic locking prevents conflicts by blocking concurrent access. Optimistic locking allows concurrent access but detects conflicts at commit time. Choose based on conflict frequency: pessimistic for high contention, optimistic for low contention.
Performance Implications
Higher isolation levels reduce concurrency. Serializable transactions may wait for or abort due to conflicts. This affects throughput under load.
Monitor for lock contention and deadlocks. Deadlocks occur when transactions wait for each other's locks in a cycle. Databases detect and resolve deadlocks by aborting one transaction, but frequent deadlocks indicate design problems.
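Both major databases expose lock diagnostics; a sketch of where to look:

```sql
-- PostgreSQL: list lock requests currently waiting
SELECT pid, locktype, mode, granted
FROM pg_locks
WHERE NOT granted;

-- MySQL/InnoDB: the LATEST DETECTED DEADLOCK section of
SHOW ENGINE INNODB STATUS;
```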
This example shows how to handle deadlocks with retry logic and lock ordering to prevent them in the first place.
// Handling deadlocks with retry
public function transferWithRetry(int $fromId, int $toId, Money $amount): void
{
    $attempts = 0;
    $maxAttempts = 3;

    while ($attempts < $maxAttempts) {
        try {
            DB::transaction(function () use ($fromId, $toId, $amount) {
                // Consistent lock ordering prevents some deadlocks
                $ids = [$fromId, $toId];
                sort($ids);

                $accounts = Account::lockForUpdate()
                    ->whereIn('id', $ids)
                    ->get()
                    ->keyBy('id');

                // ... transfer logic
            });

            return;
        } catch (QueryException $e) {
            // Laravel surfaces deadlocks as QueryException;
            // check the driver's message before retrying
            if (! str_contains($e->getMessage(), 'Deadlock')) {
                throw $e;
            }

            $attempts++;

            if ($attempts >= $maxAttempts) {
                throw $e;
            }

            usleep(random_int(10000, 50000)); // Random backoff
        }
    }
}
Notice the lock ordering: by sorting IDs before acquiring locks, you ensure all transactions acquire locks in the same order. This prevents the circular wait condition that causes deadlocks.
Keep transactions short. Long transactions hold locks longer, increasing contention. Do expensive operations outside transactions when possible.
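A sketch of this principle (helper names are hypothetical): do the slow work first, then open the transaction only for the writes.

```php
// Do slow work (API calls, file generation) before the transaction
$invoicePdf = $this->generatePdf($order); // hypothetical slow helper

DB::transaction(function () use ($order, $invoicePdf) {
    // Only fast writes hold locks
    $order->update(['status' => 'invoiced']);
    Invoice::create([
        'order_id' => $order->id,
        'pdf_path' => $invoicePdf,
    ]);
});
```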
Conclusion
Transaction isolation levels balance correctness against performance. Read Committed provides reasonable guarantees for most operations. Repeatable Read ensures consistency within transactions. Serializable guarantees correctness but limits concurrency.
Understand your database's specific implementation. PostgreSQL, MySQL, and SQL Server implement isolation differently despite using the same terminology. Test concurrent behavior explicitly; assumptions based on documentation may not match reality.
Choose the minimum isolation level that provides necessary guarantees. Use explicit locking strategically for critical sections. Monitor for contention and deadlocks. With appropriate isolation levels, concurrent transactions maintain correctness while achieving good performance.