Running an MMORPG on the JVM??
Opening
It has been quite a while since I last wrote a blog post. I have been getting lazy lately, and I need to get my head back in the game. :(
Back in How We Make Games, I wrote that MMORPG would be the final challenge. Since then, the most common reaction has probably been this: Java? You are really going to build and run a real-time MMORPG on the JVM? Is that not something that only works at a very small scale?
This post has one core claim. The biggest reason high-Hz server ticks have become a realistic option on the JVM is progress in GC. Netty matters. Kotlin matters. MMO techniques such as zone partitioning and AOI matter. But before any of that, you have to look at pause time.
In this genre, the real question is not average performance but GC
If you remember older Java, the first concern that comes to mind for a real-time networked game is usually GC. More precisely, it is not throughput but tail latency. Even if the average looks fine, one long stop-the-world pause can push a tick off schedule and immediately break input responsiveness and combat feel.
This is especially painful on a game server.
- A 20Hz server has a tick budget of 50ms.
- A 60Hz server has a tick budget of 16.6ms.
- A 128Hz server has a tick budget of 7.8ms.
If GC pauses spike into the tens of milliseconds, the average TPS can still look healthy while players immediately feel something is wrong. Rubber banding, input delay, combat desync, and delayed packet flushes often start from exactly this kind of tail problem. That is why the question of whether real-time games are viable on the JVM is really closer to this: how predictably short has GC become?
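To make the budget concrete, here is a minimal fixed-timestep loop sketch in Java. The names (`TickLoop`, `simulateTick`) and the 20Hz rate are illustrative, not taken from any real server: each tick measures how much of its budget was consumed, and a GC pause landing mid-tick shows up as an overrun even when the average TPS looks fine.

```java
public class TickLoop {
    static final int HZ = 20;
    static final long BUDGET_NANOS = 1_000_000_000L / HZ; // 50ms per tick at 20Hz

    public static void main(String[] args) throws InterruptedException {
        for (int tick = 0; tick < 5; tick++) {
            long start = System.nanoTime();
            simulateTick(); // world update, AI, packet flushing would run here
            long elapsed = System.nanoTime() - start;
            if (elapsed > BUDGET_NANOS) {
                // A stop-the-world pause inside simulateTick() surfaces here,
                // even if the long-run average stays healthy.
                System.out.printf("tick %d over budget: %.2fms%n", tick, elapsed / 1e6);
            }
            long remaining = BUDGET_NANOS - elapsed;
            if (remaining > 0) {
                Thread.sleep(remaining / 1_000_000, (int) (remaining % 1_000_000));
            }
        }
    }

    static void simulateTick() {
        // placeholder for actual game systems
    }
}
```

The point of the sketch is only that the budget is a hard wall: a single 30ms pause at 60Hz eats almost two full ticks, no matter how good the average is.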
Modern JVM GC is not the old JVM GC
This is where the situation has changed a lot. Even the OpenJDK roadmap makes the shift obvious.
- ZGC became a production feature in JDK 15.
- Generational ZGC arrived in JDK 21.
- In JDK 23, Generational ZGC became the default mode.
- In JDK 24, non-generational ZGC was removed.
- That means Java 25 can be seen as the first LTS in the generational-only ZGC era.
- Shenandoah was already a production GC, and in Java 25 its generational mode was promoted to a product feature.
This matters for more than just another JVM option. According to OpenJDK, Generational ZGC was designed to collect young objects more frequently while lowering the risks of allocation stalls, the required heap overhead, and the GC CPU overhead. The pause profile stays the key point. JEP 439 explains that application pauses are typically kept below 1ms.
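For reference, selecting these collectors is a matter of JVM flags. A sketch, assuming a server packaged as `server.jar` and an 8GB heap (both placeholders; flag availability depends on the JDK version as noted):

```shell
# JDK 21: Generational ZGC is opt-in
java -XX:+UseZGC -XX:+ZGenerational -Xmx8g -jar server.jar

# JDK 23 and later: -XX:+UseZGC alone selects the generational mode
java -XX:+UseZGC -Xmx8g -jar server.jar

# Shenandoah's generational mode (a product feature as of JDK 25)
java -XX:+UseShenandoahGC -XX:ShenandoahGCMode=generational -Xmx8g -jar server.jar

# Unified GC logging, to verify the pause profile on your own workload
# rather than trusting the JEP's numbers
java -XX:+UseZGC -Xlog:gc,gc+phases -jar server.jar
```

The last line is the important habit: sub-millisecond pauses are a design goal, and the only way to know what your allocation pattern actually produces is to log it.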
That means the old assumption that made many engineers dismiss Java game servers, namely that GC occasionally stops the world in a way that wrecks latency, is no longer the default premise. GC has not disappeared. But GC itself is no longer the decisive reason to reject the JVM outright.
Real-world production cases point in the same direction
One of the most interesting examples is Netflix. Netflix switched its default GC posture toward Generational ZGC on JDK 21 and says that more than half of its critical streaming video services now run on JDK 21 with Generational ZGC.
More importantly, the results were better than expected. Netflix explains that it originally expected ZGC to be a tradeoff where it would buy lower latency at the cost of some throughput. In practice, average latency and P99 latency both improved, while CPU utilization was equal or better. In one of the worst cases they tested, non-generational ZGC used 36% more CPU than G1 for the same workload, but Generational ZGC turned that into nearly a 10% CPU improvement.
Of course, Netflix is not a game company. But the point here is not the product category. It is the question of how well a system controls pauses and tail latency. A real-time game server is solving the same problem under an even tighter budget.
So has the answer changed for real-time games on the JVM?
At this point, it is no longer useful to ask whether real-time games are possible on the JVM in one vague sentence. You need to separate the problem space.
The realistic territory is already quite broad.
- Tick-based MMORPGs
- Sandbox world servers
- Lobbies, matchmaking, and state synchronization backends
- Multiplayer games that need something like 20~60Hz server ticks
A mainstream example is the Minecraft Java Edition server. That fact alone makes it difficult to keep insisting that Java cannot run a real-time multiplayer world.
At the same time, there are still areas where you need to be more careful.
- Competitive FPS games at 128Hz, where the tick budget is extremely tight
- Workloads that need extreme deterministic latency
- Environments with very little memory headroom
- Systems where GC threads and application threads are already competing on a saturated CPU
So the point is not that the JVM can do everything. The point is that the GC pause problem, which used to be the biggest reason to reject the JVM for most game-server scenarios, has been reduced dramatically.
Only after that do MMO-specific design techniques become meaningful
The order matters. I have repeatedly said that zone tick rates, per-system cadence, and AOI-driven synchronization are important. The mmorpg-service I am building follows that structure in practice.
- The default loop runs at 10Hz
- World and zone settings can be adjusted in the 1~15Hz range
- Monster AI uses different cadence values depending on state
- Drop despawn checks and auto-recovery retries run at slower intervals
- Netty I/O threads only receive packets, while zone loops apply the actual game rules
- AOIManager changes synchronization frequency based on distance and events
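The per-system cadence idea above can be sketched in a few lines of Java. The class and system names and the divisors here are illustrative, not the actual mmorpg-service code: a single base loop (say, at 10Hz) drives every system, and each system fires only on ticks that are multiples of its divisor.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative sketch: one zone loop driving systems at different cadences. */
public class ZoneLoop {
    /** A system that runs on every Nth tick of the base loop. */
    record CadencedSystem(String name, int everyNthTick) {
        boolean isDue(long tick) { return tick % everyNthTick == 0; }
    }

    private final List<CadencedSystem> systems = List.of(
        new CadencedSystem("movement", 1),      // 10Hz: every base tick
        new CadencedSystem("monster-ai", 2),    // 5Hz: relaxed AI cadence
        new CadencedSystem("drop-despawn", 50)  // 0.2Hz: slow cleanup pass
    );

    /** Returns the names of the systems that should fire on the given base tick. */
    public List<String> systemsDueAt(long tick) {
        List<String> due = new ArrayList<>();
        for (CadencedSystem s : systems) {
            if (s.isDue(tick)) due.add(s.name());
        }
        return due;
    }
}
```

Because everything hangs off one loop, the whole structure inherits the loop's pause behavior: a long GC pause delays every cadence at once, which is exactly why this layer cannot fix the foundation on its own.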
But all of that is still the second layer. If pause times are still jumping by tens of milliseconds, splitting zone ticks and tuning AOI on top of that does not fix the foundation. The biggest reason higher Hz has become practical is GC progress. MMO design is the layer you build on top of that.
In the end, this post comes down to one sentence
The biggest reason higher-Hz processing has become practical on the JVM is progress in GC.
In the past, the association of Java with dangerous stop-the-world pauses was simply too strong. Today, low-pause collectors are far more mature, and with Generational ZGC plus the latest Shenandoah direction, the JVM is no longer a platform that gets naturally excluded from real-time server discussions.
Once you place Kotlin and Netty on top of that, then add zone loops, cadence, and AOI as MMO-specific design tools, the story changes completely. The question should no longer be Does the JVM make this impossible? It should be How do we design GC and tick behavior within our latency budget?
That is exactly why I think operating an MMORPG on the JVM has become a very realistic option.