Why Your Java Counter Prints Less Than 200000: A Deep Dive into Race Conditions
Modern backend systems process millions of concurrent operations every second.
Yet, one of the simplest Java programs can expose one of the most important concepts in concurrent programming: Race Conditions.
Consider this code:
```java
class Counter {
    int count = 0;

    void increment() {
        count++;
    }
}
```
```java
public class Test {
    public static void main(String[] args) throws Exception {
        Counter counter = new Counter();

        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 100000; i++) {
                counter.increment();
            }
        });

        Thread t2 = new Thread(() -> {
            for (int i = 0; i < 100000; i++) {
                counter.increment();
            }
        });

        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(counter.count);
    }
}
```

At first glance, the answer should obviously be:

200000

Two threads. Each increments 100000 times.

So why does the output often look like:

173421
189302
196774

and almost never exactly 200000?
Let’s break it down.
The Real Problem: count++ Is NOT Atomic
Most developers assume:
```java
count++;
```

is a single operation.
It is not.
Under the hood, it becomes three separate CPU operations:

1. Read count from memory
2. Increment the value locally
3. Write the updated value back

Equivalent pseudo-operations:

```
temp = count;
temp = temp + 1;
count = temp;
```

Now imagine two threads executing these steps simultaneously.
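The three steps are visible in the compiled bytecode. Running javap -c on a class whose increment() method does this.count++ typically prints something like the following (constant-pool indices and the exact listing vary by compiler version):

```
aload_0          // push 'this'
dup
getfield count   // 1. read count from memory
iconst_1
iadd             // 2. increment locally
putfield count   // 3. write the updated value back
return
```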
The Race Condition
Suppose:
count = 5

Now both threads execute count++.
Thread Interleaving
| Step | Thread 1       | Thread 2       | count |
|------|----------------|----------------|-------|
| 1    | Read 5         |                | 5     |
| 2    |                | Read 5         | 5     |
| 3    | Increment to 6 |                | 5     |
| 4    |                | Increment to 6 | 5     |
| 5    | Write 6        |                | 6     |
| 6    |                | Write 6        | 6     |
Expected result: 7

Actual result: 6

One increment is lost.
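The interleaving above can be forced deterministically in a single thread by separating the read and write steps by hand. This is a contrived sketch (the class name LostUpdateDemo is made up for illustration), but it produces exactly the lost update from the table:

```java
class LostUpdateDemo {
    static int count = 5;

    public static void main(String[] args) {
        // Both "threads" read the current value first...
        int read1 = count; // Thread 1 reads 5
        int read2 = count; // Thread 2 reads 5

        // ...then each writes back its own locally incremented copy.
        count = read1 + 1; // Thread 1 writes 6
        count = read2 + 1; // Thread 2 also writes 6; the first update is lost

        System.out.println(count); // prints 6, not 7
    }
}
```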
This is called a:
⚠️ Lost Update Problem
And because this happens thousands of times across both threads, the final result becomes unpredictable.
Why Does This Happen?
Because:
Threads share the same heap memory
CPU scheduling is nondeterministic
Context switching can occur anytime
count++ is not thread-safe
This entire category of bugs falls under:
⚠️ Race Conditions
A race condition occurs when:
Multiple threads access and modify shared state concurrently without proper synchronization.
How to Fix It
There are multiple ways.
Each has different performance and scalability tradeoffs.
1. Using synchronized
```java
class Counter {
    int count = 0;

    synchronized void increment() {
        count++;
    }
}
```

Now only one thread can execute increment() at a time.
How It Works
Java uses an intrinsic monitor lock.
Before entering increment(), a thread must acquire the monitor. Any other thread that calls increment() blocks until the lock is released.
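With the monitor in place, the two-thread experiment from the start of the article becomes deterministic. A runnable sketch (the names SyncCounter and SyncTest are made up for this illustration):

```java
class SyncCounter {
    private int count = 0;

    synchronized void increment() {
        count++;
    }

    synchronized int get() {
        return count;
    }
}

class SyncTest {
    // Runs two threads that each increment the shared counter perThread times.
    static int runTwoThreads(int perThread) throws InterruptedException {
        SyncCounter counter = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTwoThreads(100000)); // always prints 200000
    }
}
```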
Pros
✔ Easy to understand
✔ Guarantees correctness
✔ Built into JVM
Cons
❌ Blocking
❌ Context switching overhead
❌ Poor scalability under heavy contention
2. Using ReentrantLock
```java
import java.util.concurrent.locks.ReentrantLock;

class Counter {
    int count = 0;
    final ReentrantLock lock = new ReentrantLock();

    void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();
        }
    }
}
```

Why Use It?
More flexible than synchronized.
Supports:
Fair locking
Try lock
Interruptible lock acquisition
Timed waits
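As a sketch of that extra flexibility, tryLock with a timeout lets a thread give up instead of blocking indefinitely. The class name TryLockDemo and the 100 ms timeout are illustrative choices:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // fair mode: longest-waiting thread acquires first
    private int count = 0;

    // Attempts the increment, giving up after 100 ms instead of blocking forever.
    boolean tryIncrement() throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                count++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // lock not acquired within the timeout
    }

    int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}
```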
Pros
✔ More control
✔ Better advanced concurrency handling
Cons
❌ Manual unlock required
❌ Still blocking
3. Using AtomicInteger (Best for Counters)
```java
import java.util.concurrent.atomic.AtomicInteger;

class Counter {
    final AtomicInteger count = new AtomicInteger();

    void increment() {
        count.incrementAndGet();
    }
}
```

Why This Is Better
AtomicInteger uses:
⚡ Compare-And-Swap (CAS)
instead of locking.
What CAS Does
The CPU provides a single instruction that atomically performs:

```
if current_value == expected_value
    update
else
    fail (and the caller retries)
```

No thread blocking.
No monitor locking.
Extremely efficient.
Internally
```
while (true) {
    int existing = value;
    int next = existing + 1;
    if (CAS(existing, next))
        break;
}
```

Why Modern Systems Prefer Atomic Operations
High-performance systems like:
Kafka
Netty
Aerospike
Cassandra
Redis internals
heavily rely on:
Lock-free algorithms
CAS operations
Atomic primitives
because locks become bottlenecks at scale.
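The CAS retry loop shown in the previous section maps almost directly onto AtomicInteger's public API. Here is a hand-rolled version, for illustration only, since incrementAndGet() already does this for you (CasCounter is a made-up name):

```java
import java.util.concurrent.atomic.AtomicInteger;

class CasCounter {
    private final AtomicInteger count = new AtomicInteger();

    int increment() {
        while (true) {
            int existing = count.get();  // read the current value
            int next = existing + 1;     // compute the update locally
            if (count.compareAndSet(existing, next)) {
                return next;             // CAS succeeded: no other thread interfered
            }
            // CAS failed: another thread updated count first; loop and retry
        }
    }
}
```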
Performance Comparison
| Approach      | Thread Safe | Blocking | Scalable |
|---------------|-------------|----------|----------|
| Plain int     | ❌          | ❌       | ❌       |
| synchronized  | ✅          | ✅       | Medium   |
| ReentrantLock | ✅          | ✅       | Medium   |
| AtomicInteger | ✅          | ❌       | High     |
But AtomicInteger Is Not Always Enough
For extremely high-contention systems:

100+ threads
millions of increments/sec

even CAS retries become expensive.
That’s why Java introduced:
⚡ LongAdder
Used internally in:
ConcurrentHashMap
Metrics systems
High throughput counters
It reduces contention using striped counters.
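A LongAdder drop-in for the counter might look like the following sketch. Under contention, each thread updates its own internal cell, and sum() folds the cells together only when the total is actually needed (AdderDemo is an illustrative name):

```java
import java.util.concurrent.atomic.LongAdder;

class AdderDemo {
    // Runs two threads that each increment a shared LongAdder perThread times.
    static long countWithTwoThreads(int perThread) throws InterruptedException {
        LongAdder counter = new LongAdder();
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                counter.increment(); // contended updates are spread across internal cells
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return counter.sum(); // folds all cells into the final total
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWithTwoThreads(100000)); // always prints 200000
    }
}
```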
Real-World Backend Engineering Insight
This tiny example explains why distributed systems are hard.
Now imagine:
Millions of concurrent users
Multiple JVMs
Distributed databases
Replication
Network retries
The exact same “lost update” problem appears everywhere:
Inventory systems
Banking transactions
Payment systems
Distributed counters
Leaderboards
Concurrency bugs scale from a single integer → to entire distributed architectures.
Final Takeaway
The biggest lesson is:
Concurrency problems are rarely visible in code syntax.
The bug is hidden inside:

```java
count++;
```

What looks like one operation is actually multiple CPU-level operations racing against each other.
Understanding this deeply is foundational for:
Multithreading
JVM internals
High-performance backend engineering
Distributed systems design
Because at scale:
Correctness is harder than performance.
#Java #Concurrency #Multithreading #BackendEngineering #DistributedSystems #JVM #SystemDesign

