All Systems Will Be Gamed – paper by Brian Arthur

ashmoran

Member
I'm currently reading the book Complexity Economics, which is a recent book (2014) that collects together a number of papers Brian Arthur has authored or co-authored.

Brian Arthur was one of the original members of the Santa Fe Institute, which has been looking for better ways to understand economic problems than the failing neoclassical mathematical approach. Specifically, they have been applying ideas from complexity to understand economic systems. Complexity is (to paraphrase Brian Arthur from another chapter) the study of how structures emerge from the interaction of simpler elements. So a solid iron bar emerges from the interaction of iron atoms; a marketplace emerges from the interactions of merchants and shoppers.

I'm halfway through the book and I've already come across a number of ideas that are relevant to Dash, but the earlier papers are quite mathematical and I will need a lot more time to digest them – if I ever manage that :) This one, however, is quite simple to understand.

The paper's full title (and location online) is All Systems Will Be Gamed: Exploitive Behavior in Economic and Social Systems

All Systems Will Be Gamed

Arthur sets the premise of the paper along these lines:

Given any governmental system, any legal system, regulatory system, financial system, election system, set of policies, set of organizational rules, set of international agreements, people will find unexpected ways to manipulate it to their advantage. “Show me a 50-foot wall,” said Arizona’s governor Janet Napolitano, speaking in 2005 of illegal immigration at the US-Mexico border, “and I’ll show you a 51-foot ladder.”
He has four questions:
  1. What are the causes of exploitive behavior and how does it typically arise?
  2. Given a particular economic system or proposed policy, how might we anticipate where it might fail, and what can we learn from disciplines such as structural engineering that try to foresee potential failure modes, and could help us in this?
  3. How can we construct models of systems being gamed or exploited, and of agents in these models “discovering” ways to exploit such systems?
  4. What are the future prospects for constructing artificially intelligent methods that could automatically anticipate how economic and social systems might be exploited?
Arthur describes exploitation not as something unusual or exceptional, but as something built into every system:

Our first observation is that exploitive behavior is not rare. This is not because of some inherent human tendency toward selfish behavior; it is because all policy systems—all social policies—pose incentives that are reacted to by groups of agents acting in their own interest, and often these reactions are unexpected and act counter to the policy’s intentions.
His claim is that economics has been too focused on equilibrium systems, in which actors are assumed to have no incentive to alter the system, ignoring the ways they can exploit it. He thinks economics would benefit from "failure mode analysis" in the same way engineering does. He doesn't claim to have a full solution to this, but he offers four broad categories of gaming that have appeared in the past:
  1. Use of asymmetric information
  2. Tailoring behavior to conform to performance criteria
  3. Taking partial control of a system
  4. Using system elements in a way not intended by policy designers
I hope these category names are largely self-explanatory, but there are plenty of examples of each in the paper to illuminate them. What I find interesting is that Bitcoin has fallen prey to at least two of them.

Under point 3, I find the situation in Bitcoin not entirely unlike this scenario Arthur describes:

Within the insurance giant AIG, some years before the 2008 crash, a small group of people (the Financial Products Unit) managed to take effective control of much of the company’s assets and risk bearing, and began to invest heavily in credit default swaps. The group profited greatly through their own personal compensation—they were paid a third of the profits they generated—but the investments collapsed, and that in turn sank AIG (Zuill, 2009).

Likewise, under point 4, I am again reminded of Bitcoin:

An example would be using a website’s rating possibilities to manipulate others’ ratings. Often too, players find a rule they can use as a loophole to justify behavior the designers of the system did not intend. Usually this forces a flow of money or energy through the rule, to the detriment of the system at large.

Dash is immune to, or at least heavily protected against, a number of attacks along these lines due to design improvements it has made over Bitcoin, but it may still have points that can be exploited or gamed to the detriment of the whole system. For example, under the category of asymmetric information, someone recently commented on Reddit:

Even more insidious, if it is possible that master nodes can log data from their Private send mixes, logs could be sold and no one would necessarily be the wiser. Such sales would just be an extra source of income for master node owners.

This is not to say that this attack is feasible, but the fact that it fits so neatly into one of the failure mode categories demonstrates some of the value of this paper. (The author does point out that these are not exhaustive, and that new failure modes may be discovered.)
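Out of curiosity, here is a back-of-envelope sketch of why such logging would be so hard to notice. It assumes masternodes for each mixing round are chosen uniformly at random and uses purely illustrative numbers – nothing here reflects actual Dash parameters:

```python
# Back-of-envelope sketch: if some fraction of masternodes secretly logged
# the mixing sessions they facilitate, how likely is it that a transaction
# with several mixing rounds passes through at least one logger?
# All numbers are illustrative assumptions, not actual Dash parameters.

def p_at_least_one_logger(logging_fraction: float, rounds: int) -> float:
    """Probability that at least one of `rounds` independently and uniformly
    chosen masternodes is a logger."""
    return 1.0 - (1.0 - logging_fraction) ** rounds

if __name__ == "__main__":
    for fraction in (0.01, 0.05, 0.10):
        for rounds in (2, 4, 8):
            p = p_at_least_one_logger(fraction, rounds)
            print(f"{fraction:.0%} of nodes logging, {rounds} rounds -> "
                  f"P(at least one logger sees the mix) = {p:.1%}")
```

Even with only 5% of nodes logging, eight rounds gives roughly a one-in-three chance of passing through a logger, and – as the Reddit comment notes – there would be no visible signal that it had happened.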

The next step in the paper is to ask: how might we be able to anticipate failure modes? I'll summarise a number of ideas (again, explicitly not exhaustive, but they are each described in more detail in the paper):
  1. An obvious first step is to have at hand knowledge of how similar systems have failed in the past
  2. Second, we can observe that in general the breakdown of a structure starts at a more micro level than that of its overall design
  3. Third, and again by analogy, we can look for places of high “stress” in the proposed system and concentrate our attentions there
  4. All this would suggest that if we have a design for a social system and an analytical model of it, we can “stress test” it by first identifying where actual incentives would yield strong inducements for agents to engage in behavior different from the assumed behavior
  5. Next we construct the agents’ possibilities from our sense of the detailed incentives and information the agents have at this location. That is, we construct detailed strategic options for the agents
  6. Once we have identified where and how exploitation might take place, we can break open the overall economic model of the policy system at this location, and insert a module that “injects” the behavior we have in mind (there's a toy sketch of this step just below)
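To make steps 4–6 concrete, here is a minimal toy sketch in Python. It is my own construction, not from the paper: a trivially simple "policy" that pays a subsidy per unit of reported output, a baseline population of honest agents, and an injected exploitive-behaviour module that over-reports unless audited. All names and numbers are invented for illustration:

```python
# Toy "stress test" of a policy model: run it with the assumed behaviour,
# then inject an exploitive-behaviour module at the point of strongest
# incentive and compare outcomes. Everything here is illustrative.
import random

SUBSIDY_PER_UNIT = 1.0   # payout per reported unit (illustrative)
AUDIT_PROBABILITY = 0.1  # chance an inflated report is caught and zeroed

def honest_report(output: float) -> float:
    """Baseline assumed behaviour: report true output."""
    return output

def exploitive_report(output: float) -> float:
    """Injected module: over-report unless the claim is audited."""
    if random.random() < AUDIT_PROBABILITY:
        return 0.0          # caught, claim rejected
    return output * 3       # otherwise inflate threefold

def policy_cost(outputs, strategies):
    """Total subsidy paid for a population, given each agent's strategy."""
    return sum(SUBSIDY_PER_UNIT * strategy(out)
               for out, strategy in zip(outputs, strategies))

if __name__ == "__main__":
    random.seed(42)
    outputs = [random.uniform(5, 15) for _ in range(1000)]

    # Step 4/5: the assumed behaviour everywhere.
    baseline = [honest_report] * 1000
    # Step 6: "break open" the model at the high-incentive location and
    # inject the exploitive module into 5% of the population.
    gamed = [exploitive_report] * 50 + [honest_report] * 950

    print(f"cost, everyone honest: {policy_cost(outputs, baseline):,.0f}")
    print(f"cost, 5% exploiting:   {policy_cost(outputs, gamed):,.0f}")
```

Comparing the two totals shows how much the injected strategy distorts the policy's cost, which is essentially what Arthur means by breaking the model open at a high-stress location and re-running it.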
The paper concludes with, among other remarks, the following:

It is no longer enough to design a policy system and analyze it and even carefully simulate its outcome. We need to see social and economic systems not as a set of behaviors that have no motivation to change, but as a web of incentives that always induce further behavior, always invite further strategies, always cause the system to change.
I have attempted to pick out just enough from this paper to convey the key points that I feel make it relevant to cryptocurrency development. The paper is not long though, and I recommend reading it for its numerous historical examples.

My questions are: How relevant are these ideas to the design of Dash? Does this framework help identify any potential failures now? Could it be used to avoid failures different in nature from, but no less severe than, the ones Bitcoin is suffering now?

Any other feedback welcome. If this is interesting I will post more summaries of chapters as I read them.

(Please excuse any mistakes, I wrote this on my phone, and the editor is a bit fiddly on a mobile device.)
 
He has four questions:
  1. What are the causes of exploitive behavior and how does it typically arise?

We humans are like that. It starts in early childhood and develops from there. You can try to cover the most obvious exploits in your design, but you can never make it 100% foolproof.
To quote Douglas Adams, "A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools".

(Please excuse any mistakes, I wrote this on my phone, and the editor is a bit fiddly on a mobile device.)


Don't worry, if I had to write this on my phone I'd still be at line 2...
 
"A common mistake that people make when trying to design something completely foolproof is to underestimate the ingenuity of complete fools."

I agree, making something 100% resilient is impossible. If I understand it – which I don't :) – that's one of the messages of the Incompleteness Theorem: there will always be something that undoes a system, no matter how clever. Aiming for perfection is not productive; you can improve pretty much anything indefinitely, but at some point it becomes more worthwhile to do something else instead.

Don't worry, if I had to write this on my phone I'd still be at line 2...

Looking back, it would have been much quicker to make a round trip home to pick up my laptop! Lesson learnt…
 