The core problem
When a player closes Manu Idle and comes back 14 hours later, they expect their character to have trained, gathered resources, and made progress. The naive solution is to run the game simulation for those 14 hours when they reopen the app. That’s obviously not feasible — you can’t make someone wait minutes while their phone crunches numbers.
So you need to produce the same result (or close enough) in a fraction of a second. That’s the fundamental math problem of idle games, and it’s more interesting than it sounds.
Why you can’t just multiply
The tempting shortcut is to take your per-second rates and multiply by elapsed seconds. If a player earns 10 gold per second, and they were away for 50,400 seconds, give them 504,000 gold. Done.
Except nothing in an RPG works that way. Rates change as skills level up. Leveling up changes how fast you earn XP. Resources unlock new activities that produce different resources. There are feedback loops everywhere — the output of one system is the input of another.
Multiplying by time only works for flat, linear systems. The moment you have compounding growth, interdependent skills, or any kind of branching logic, simple multiplication gives you wrong answers.
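To make the failure concrete, here's a toy comparison in Python. The numbers are invented for illustration (a skill that levels every 1,000 XP and boosts gold income by 10% per level), not Manu Idle's actual rates:

```python
def naive(seconds, gold_rate=10.0):
    """Flat multiplication: ignores level-ups entirely."""
    return gold_rate * seconds

def stepwise(seconds, gold_rate=10.0, xp_rate=1.0, xp_per_level=1000):
    """Tick once per second, raising the gold rate at each level-up."""
    gold, xp = 0.0, 0.0
    for _ in range(seconds):
        gold += gold_rate
        xp += xp_rate
        if xp >= xp_per_level:
            xp -= xp_per_level
            gold_rate *= 1.10  # hypothetical: each level boosts income 10%
    return gold

print(naive(50_400))     # 504,000 gold, as in the example above
print(stepwise(50_400))  # far more, because the rate compounds
```

With 50 level-ups over those 14 hours, the compounding rate leaves the flat estimate behind by orders of magnitude — which is exactly why the shortcut breaks.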
The simulation approach
What I do instead is run a compressed simulation. Rather than ticking every second, I break the offline period into intervals — larger chunks when rates are stable, smaller chunks around level-up boundaries where rates change. For each interval, I calculate the expected outcomes using the rates that were active during that window.
The key insight is that between level-ups, most systems in Manu Idle behave predictably. A character mining copper at level 12 with a steel pickaxe has a calculable expected yield per hour. I don’t need to simulate every swing — I need to know when they’ll hit level 13, recalculate, and continue.
This turns an O(n) problem (simulate every tick) into something closer to O(k) where k is the number of state transitions during the offline window. For a 24-hour session, that might be 20-30 transitions instead of 86,400 ticks.
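In code, the idea looks roughly like this. The `xp_rate` and `xp_to_next` formulas below are placeholders I made up for the sketch; the real curves live in the game's data. The point is the loop shape: one iteration per state transition, not per tick.

```python
def xp_rate(level):
    """Hypothetical: XP/second grows slowly with level."""
    return 1.0 + 0.05 * level

def xp_to_next(level):
    """Hypothetical XP curve: XP needed to leave this level."""
    return 100 * level

def simulate_offline(level, xp, seconds):
    """Jump from level-up boundary to level-up boundary."""
    transitions = 0
    while seconds > 0:
        rate = xp_rate(level)             # constant within this interval
        needed = xp_to_next(level) - xp   # XP until the next boundary
        dt = needed / rate                # time until the next level-up
        if dt > seconds:                  # no more boundaries: finish flat
            xp += rate * seconds
            break
        xp = 0.0                          # cross the boundary...
        level += 1                        # ...recalculate rates, continue
        seconds -= dt
        transitions += 1
    return level, xp, transitions

# e.g. simulate_offline(12, 0.0, 86_400) resolves a 24-hour window
# in a few dozen loop iterations rather than 86,400 ticks.
```

Because the rate is constant between boundaries, the time to the next level-up can be solved directly — that's the whole trick.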
Where it gets tricky
The hard part isn’t the math — it’s the edge cases.
Cascading level-ups. Sometimes gaining a level in one skill immediately makes another skill train faster, which causes that skill to level up too, which unlocks a new recipe, which changes resource flow. You have to detect and resolve these cascades within the simulation without letting them compound into absurd results.
Randomness. Active play has random elements — rare drops, critical successes, event encounters. You can’t roll dice for 24 hours of gameplay, but you also can’t just use expected values, or else offline play feels flat and deterministic. My compromise: I use expected values for the bulk of the simulation but sprinkle in a small number of random outcomes weighted by probability. This gives offline sessions a bit of narrative (“you found a rare ore!”) without wild variance.
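One way to implement that compromise (the drop table and probabilities below are invented): compute each rare item's expected count for the window, then stochastically round it to a small integer instead of rolling every attempt individually.

```python
import random

def offline_loot(seconds, trials_per_sec=1.0):
    """Expected values for common yields, sampled counts for rares."""
    attempts = seconds * trials_per_sec
    loot = {"copper ore": round(attempts * 0.80)}  # common: pure expectation
    rare_table = {"mithril ore": 0.0002, "star gem": 0.00005}  # hypothetical
    for item, p in rare_table.items():
        expected = attempts * p
        # Stochastic rounding: expected 4.3 yields 4 most of the time, 5
        # sometimes, so long-run averages match the true expectation.
        count = int(expected) + (1 if random.random() < expected % 1 else 0)
        if count:
            loot[item] = count
    return loot
```

The variance is tiny compared to rolling dice per attempt, but the counts still wiggle from session to session, which is what gives the offline summary its "you found a rare ore!" moments.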
Time boundaries. A player might go offline at level 9 with 90% XP, come back, and expect to be well into level 10. But level 10 might unlock a new skill that changes everything about the simulation from that point forward. You have to handle these boundaries cleanly, and they can stack.
What I chose not to simulate
Some systems just don’t translate to offline play, and I think that’s fine. Event encounters, active decision points, and certain interactive mechanics are active-play-only. Rather than producing a pale imitation of these systems offline, I leave them out and make the offline summary honest about what happened.
Players appreciate this more than you’d expect. A summary that says “trained Smithing from 14 to 17, smelted 340 iron bars, found 2 mithril ore” tells a clear story. A summary that tries to fake interactive decisions feels hollow.
The validation problem
How do you know your simulation is accurate? I built a debug mode early on that runs both the full tick-by-tick simulation and the compressed version side by side, then compares the results. During alpha testing, any discrepancy above 2% triggers a flag. Most of the time the compressed simulation lands within a fraction of a percent.
The 2% tolerance exists because of the randomness layer — the exact rare drops will differ between the two approaches, and that’s expected. What matters is that the aggregate progression feels right.
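The comparison itself is simple. A minimal version of the check (the stat names and dict shapes are placeholders, not the game's actual data model):

```python
TOLERANCE = 0.02  # 2% relative error, per the alpha-testing threshold

def validate(tick_result, compressed_result):
    """Compare aggregate stats from the tick-by-tick and compressed
    simulations; return a list of stats that drifted past tolerance."""
    flags = []
    for stat, expected in tick_result.items():
        got = compressed_result.get(stat, 0.0)
        if expected == 0:
            continue  # skip stats with nothing to compare against
        error = abs(got - expected) / expected
        if error > TOLERANCE:
            flags.append((stat, expected, got, error))
    return flags  # empty list: compressed sim is within tolerance
```

Running this on every alpha session makes regressions obvious: a new mechanic that breaks an interval-boundary assumption shows up as a flagged stat immediately.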
Why this matters for design
The math isn’t just an engineering problem — it shapes game design. Every mechanic I add has to answer the question: “How does this behave over a 24-hour offline window?” If the answer is “poorly” or “it doesn’t,” I either redesign the mechanic or make it active-only.
This constraint has actually made the game better. It forces every system to have clear, predictable behavior. No black boxes. No mystery formulas. If I can’t simulate it cleanly, it probably isn’t well-designed enough for the player to understand either.