Tuesday, June 16, 2020

Nothing can stop a cheater?

A recent discussion on the Capitalism Lab forums led to the issue of optimising vs exploiting vs cheating in single player games. Below is a response to this quote:

While I think there is room for exploitation, in the end this comes down to the desire of the player for fair game - nothing can stop a cheater from cheating. - JasonLJ on BankingDLC


As a starting point, the simplest way to define cheating is operating outside the rules. In a computer game the rules are set by the designer and embedded in the simulation. In general this also means that cheating can be interpreted as "operating outside the simulation", with the provision that if there are unintended consequences (bugs, etc.) the designer can rectify them to clarify or enforce the rules.

On top of the simulation rules there can also be additional "house rules", where players collectively or individually agree to abide by different rules to achieve a different goal to that which the designer intended. This can be anything from custom mods, scenarios, and competitions, to the obligatory "one city challenge" in Civ, to someone choosing to only buy and sell purple items. In this case it is the responsibility of the agreeing parties to enforce the rules, and if this cannot be done absolutely through the simulation (e.g. turning off all imports at a port for a tougher CapLab challenge), then a level of trust that all players are "playing fair" starts to exist. Note that most sports and even collaborative play operate in this space.

Saying "nothing can stop a cheater" isn't quite right. The simulation space can be hardened against attacks from outside the simulation, and within the simulation cheating can be almost completely prevented by encoding the rules into the simulation itself.
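To make "encoding the rules into the simulation" concrete, here is a minimal sketch. This is not Capitalism Lab code; the class, names, and the 20% threshold are all invented for illustration. The point is that a rule expressed as a hard constraint in the action handler cannot be broken by any sequence of player moves:

```python
# Hypothetical sketch: a rule encoded inside the simulation cannot be
# broken by any player action, so no trust is required to enforce it.
# All names and numbers are invented for illustration.

DEPOSIT_CAP = 0.20  # no depositor may hold more than 20% of bank capital


class Bank:
    def __init__(self, capital):
        self.capital = capital
        self.deposits = {}  # depositor name -> balance

    def deposit(self, who, amount):
        """Accept a deposit only if it stays within the encoded rule."""
        if amount <= 0:
            raise ValueError("deposit must be positive")
        new_balance = self.deposits.get(who, 0.0) + amount
        if new_balance > DEPOSIT_CAP * self.capital:
            # The rule is a hard constraint, not an honour system:
            # the simulation simply refuses the action.
            raise ValueError("deposit would exceed the cap")
        self.deposits[who] = new_balance


bank = Bank(capital=1_000_000)
bank.deposit("player", 150_000)      # allowed: under the 200,000 cap
try:
    bank.deposit("player", 100_000)  # rejected: would total 250,000
except ValueError as err:
    print(err)
```

Because the check lives inside the simulation's own action handler, there is nothing for a would-be cheater to "decide": the out-of-bounds move simply does not exist as an option.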

Now from the other side:
Within the simulation, the aim is to optimise play to maximise whatever the winning criterion is defined to be. In a competitive sense this is the heart and soul of the challenge and the main motivating factor. Achieving greater and greater optimisation requires a deeper and deeper understanding of the rules (and therefore the simulation) and the skills needed to execute optimal play.

This still holds true for "house rules" games, although if there are rules that need to be enforced from outside the simulation, the line between "breaking the rules" and "optimal play" is not as clear-cut, which opens a window for debate about what is or isn't "fair play".

With JasonLJ's comment:
Cons
- Could be exploited by some players to bankrupt AI competitor banks. However, this is a matter of "willingness to cheat" not "capability to cheat" - even if deposit cap was 20%, players who wanted to use the same trick to cause a liquidity crisis with AI firms could still do so.

There are two different angles from which to view the question of cheating (I'm not sure which one Jason is using, but both are worth discussing anyway):
- If Jason is referring to "tricking" the AI because it doesn't know how to handle the situation, then he is expressing the fact that no human player would fall for the trick, or, more formally, that the AI is not playing as a real company would in the real world. Jason is expressing a desire to play with a house rule of "only do what would work in the real world", under which taking advantage of a dumb AI would be cheating. Since CapLab is actually trying to simulate the real world, there is some legitimacy in thinking that the game you are playing operates under the same rules as the real world. However this isn't the case; the game's rules are enforced by the simulation. That said, the designer should, as much as possible, encapsulate the desirable rules from the real world and embed them into the simulation, enhancing its relevance to players with this particular house rule. In the scenario above, making the AI aware of its liquidity risk exposure would remove this element of cheating for those expecting the "real world" house rule, regardless of any changes to the deposit cap.

- But what if the AI knew about its liquidity risk exposure and still got itself into a vulnerable position? Would it then be OK to punish it? What if the player got into a vulnerable position; would, or should, the AI bankrupt them? If Jason was thinking that the choice of intentionally bankrupting someone is available in the game but ethically classified as cheating, then he is putting a new house rule into effect: "Don't bankrupt opponents". Once again the designer COULD choose whether this rule is enforced in the simulation through some sort of anti-competitive behaviour legislation and enforcement (as would likely happen in the real world).
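As an aside, "making the AI aware of its liquidity risk" could be as simple as a guard in its decision loop. The function, the 10% reserve ratio, and the numbers below are all hypothetical, just to show the shape of such a check:

```python
# Hypothetical sketch of a liquidity-awareness check for an AI bank.
# The reserve ratio and all names are invented for illustration.

RESERVE_RATIO = 0.10  # keep at least 10% of deposits as liquid reserves


def safe_to_lend(cash_on_hand, total_deposits, proposed_loan):
    """Return True only if the loan leaves the required reserve buffer."""
    reserves_after = cash_on_hand - proposed_loan
    return reserves_after >= RESERVE_RATIO * total_deposits


print(safe_to_lend(100_000, 500_000, 40_000))  # True: 60,000 >= 50,000
print(safe_to_lend(100_000, 500_000, 60_000))  # False: 40,000 < 50,000
```

An AI that consults a guard like this before every loan can no longer be walked into a liquidity crisis by the trick described above, which is exactly the kind of real-world rule being folded back into the simulation.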

Although it sounds like I'm advocating a wild-west, anything-goes-inside-the-simulation approach, it's more the opposite: the designer has the LUXURY of controlling exactly what is or is not simulated, what rules are or are not enforced, what ethical choices do or do not appear, and what lessons the player can take away from the game to apply to the real world. Ideally all players adopt the rules as intended, obviating the need for house rules and potentially divergent views on how to play the game.


Onto the game-breaking strategy:
As mentioned, I generally play single-player games until I "break" them by discovering a strategy that guarantees a win, because at that point all that is left is to execute the strategy over and over again. What I REALLY enjoy are complex games where the balance allows a variety of strategies packed full of interesting choices along the way. CapLab is one of those games, and I've been playing on and off since the original Capitalism. When I started using the land pre-purchase strategy it felt wonky, but somewhat believable, since it seemed to simulate the land-developer mindset. After investigating exactly what was going on and finding the edge cases (especially infinite money generators), it seems FAR more likely that this behaviour is unintended and would be classified as a bug, or at least as needing clarification / enforcement of the intended rules. If so, I look forward to a new version with many more optimising challenges ahead.