In response to my post about the case for working for singleton futures, Proper Dave made a point that had occurred to me in passing but which I have never properly thought through.
I actually believe the “singleton” scenario to be very, very improbable, even more so after reading your definition: ”a single decision-making agency … capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy”
To exert “effective control” it obviously has to delegate responsibility (not a singleton anymore) or move about its domain to do that… And then there is the speed-of-light problem; I actually quite confidently predict this to be, well, impossible…
Again, no one else can be allowed to do anything, even in small domains: the singleton has to do everything from mundane stuff like the plumbing up to high tasks like “permanently preventing both internal and external threats to its supremacy”. How is it going to do this? There will have to be some parallelism, and so the possibility of an “other” emerging.
This concept is just self-contradictory and illogical, really.
I do believe there may be some way to set up a strict monopoly over a domain with free individuals, but it will be difficult to guarantee that it is “permanent”.
I think Dave is too pessimistic. A singleton is possible with delegated responsibility as long as the central decision maker can rein in its delegates if they attempt to deviate from its goals. This is clearly very difficult when a delegate is light years away: it would take too long to find out about and intervene in any conspiracy to deviate from the singleton’s plans before the conspiracy could prepare and defend itself. Anticipating this, a singleton would have to design any space colonisers it released such that they would never want to deviate from its original plans. For an AI this might be possible if it made an exact copy of itself and designed the copy in such a way that its goals could not change in random ways. Ensuring that its utility function could never change might require making the AI less flexible or less able to grow and evolve – that is to say, making it stupider. But it may not be an insurmountable problem.
I am less clear whether this is possible for uploads or other creatures that have evolved rather than been designed from scratch, and so whose inner workings are not fully understood. Has anyone investigated this properly?
Israel recently forged Australian passports to carry out the assassination of a Hamas leader in Dubai, and Australia has expelled an Israeli embassy official in protest. If Australia thought assassinating that person was a bad thing to do, we obviously ought to punish Israel, both for harming us like this and for carrying out an assassination we disapprove of. But if, like most Australians, you are broadly supportive of Israel compared with Hamas or Hezbollah, a case can be made that the assassination was the right thing to do. Should Australia then punish Israel for forging our passports even if we think the assassination itself was justified? What if we care about Israel fully as much as we care about ourselves?
By punishing them regardless, we preserve the value of our passports and appear less slavishly supportive of Israel than we otherwise would. Any harm we dish out to Israel in response probably proportionally reduces the harm we suffer from our passports losing credibility, by discouraging other countries from forging them. It gives Israel good reason to ensure that any use of Australian passports does not become public (which is the only time it harms Australia), and it means they will be less inclined to use the passports frivolously, but rather only when truly necessary. In fact, even if we cared as much about Israel’s interests as about our own, it would be optimal to transfer all the costs we incur from their abuse of our passports onto the Israelis, so long as punishment were free. Then they would only use the passports when the total benefit outweighed the total cost; to put it another way, they would only exploit our passports in ways we would approve of if they asked us first.
If transferring the harm to Israel is costly to us (that is, the punishment is not offset by reduced harm to ourselves), the optimal amount of punishment is less than the harm we incur. This is simply because when the price of something (in this case, giving people the right incentives) goes up, you should buy less of it.
If Israel already cares about Australia’s welfare and the punishment is costly, then punishing them with the full amount of harm that we suffer would result in a suboptimal exploitation of Australian passports from our perspective. This is because Israel would ‘double count’ the harm: once when we suffer it, and again when we suffer the costs of imposing the punishment. The more they already care about us, the lower is the optimal amount of costly punishment. Finally, if they care about us as much as about themselves and punishment is free, it doesn’t matter what we do.
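The incentive-alignment logic above can be checked with a toy model. This is entirely my own construction, with made-up numbers: h is the harm Australia suffers per passport abuse, b is Israel’s benefit from that use, p is a punishment that acts as a pure transfer (Israel loses p, Australia recovers p), and w is the weight Israel places on Australia’s welfare.

```python
# Toy model of the passport-punishment argument (illustrative only):
# h = harm to Australia per use, b = Israel's benefit from a use,
# p = punishment as a pure transfer, w = Israel's weight on Australia.

H = 1.0  # normalise the harm per use to 1

def israel_uses(b, p, w, h=H):
    """Israel uses the passports when its own payoff is positive:
    its benefit, minus the punishment, plus w times Australia's
    net position (recovered punishment minus harm)."""
    return b - p + w * (p - h) > 0

def use_is_efficient(b, h=H):
    """A use is efficient when the total benefit outweighs the total
    cost; the transfer p cancels out of the total."""
    return b - h > 0

benefits = [0.2, 0.6, 0.99, 1.01, 1.5, 3.0]

# With a selfish Israel (w=0), setting the punishment equal to the harm
# (p=h) makes Israel's choice match the efficient one at every benefit level.
aligned = all(israel_uses(b, p=H, w=0.0) == use_is_efficient(b)
              for b in benefits)
print("p=h, w=0 aligns incentives:", aligned)

# If Israel already weighs Australia fully (w=1) and punishment is a free
# transfer, its payoff is b - h no matter what p is, so p is irrelevant.
same_choice = all(israel_uses(b, p=0.0, w=1.0) == israel_uses(b, p=5.0, w=1.0)
                  for b in benefits)
print("w=1: punishment level irrelevant:", same_choice)
```

The model reproduces the two limiting cases in the argument: full transfer of the harm disciplines a purely selfish Israel, and punishment stops mattering once Israel fully internalises our welfare.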
By a similar analysis, punishing BP for the oil spill in the Gulf of Mexico can be a good idea, even if the spill was an accident.
Last weekend there were upsets in pre-selection battles over the two lower house seats of the Australian Capital Territory. The three largest local factions in the ALP did a preference deal with one another which was expected to deliver the right’s preferred candidate in the seat of Canberra (using the left’s preferences) and the left’s preferred candidate in the seat of Fraser (with the right’s preferences). It didn’t work out that way. Because the factions were unable to control the majority of their members, both seats fell to independents. This sort of thing is uncommon, but it shouldn’t be at all surprising.
The factions typically keep their members in line by threatening them with expulsion for two years if they vote against the group’s instructions. In this preselection vote the power-brokers insisted on seeing their members’ ballots to confirm they were obeying orders. For everyday decisions this sort of checking, backed by the threat of punishment, is enough to keep everyone in line: it’s not worth being thrown out of an influential group to alter a trivial decision. But as every student of game theory knows, there are two situations in which cooperation in a repeated game is especially difficult to maintain: final rounds and very important rounds. In the final round of a series of games there is little reason to cooperate, because there will be no more opportunities for cooperation and the other side has no opportunity to punish you for defecting. Similarly, if one round is much more important than all the others, the temptation to defect (act selfishly) is large, because the costs of punishment or non-cooperation in future rounds are small compared with the gains possible from defection in this round.
Decisions about preselection are far and away the most important ones made by party members and they occur very infrequently. Unsurprisingly then, for most faction members the threat of expulsion from the faction for two years was insufficient to deter them from voting for the candidate they preferred rather than the one their faction told them to. No decisions of comparable importance would be made in the next two years anyway. They may have also anticipated others doing the same calculation and realised the faction would be unwilling to expel half of their members for defecting. The trade-off was only in favour of unconditional cooperation for the careerists who benefit a great deal from showing absolute loyalty to their political tribe.
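The faction member’s calculation can be sketched numerically. The numbers below are purely illustrative (my own, not from the post): defection in a given round yields a one-off gain proportional to that round’s stakes, while expulsion costs the routine value of faction cooperation over the punished rounds, discounted.

```python
# Back-of-the-envelope check of the "very important round" logic.
# Illustrative parameters: each routine round of cooperation is worth 1,
# expulsion costs eight future rounds of cooperation, discounted at 0.95.

def defection_pays(stake, routine_value=1.0, rounds_lost=8, discount=0.95):
    """Defect iff the one-round gain (proportional to the round's stake)
    exceeds the discounted value of cooperation over the punished rounds."""
    gain_now = stake
    future_loss = sum(routine_value * discount**t
                      for t in range(1, rounds_lost + 1))
    return gain_now > future_loss

# A routine decision (stake comparable to an ordinary round): obedience wins.
print(defection_pays(stake=1.0))   # False: not worth two years in the cold
# A preselection (stake worth many ordinary rounds): defection wins.
print(defection_pays(stake=20.0))  # True: the punishment is small by comparison
```

The same threshold explains why careerists behave differently: for them the routine value of faction membership is much higher, so even a preselection-sized stake does not tip the balance.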
Given that ALP factions in lower house seats are primarily set up to influence these preselection battles, it’s somewhat ironic that these are precisely the decisions the factions have the most trouble controlling. They could perhaps keep their members in line with a stronger punishment (a longer period of expulsion, exclusion from a whole social group, surrender of a bond, etc.), but then it’s not clear why anyone except the careerists would want to join in the first place. In reality the factions are much less influential than their member numbers alone would make them seem.
In unrelated news, congratulations to blogger and social scientist Andrew Leigh for winning preselection in the ACT seat of Fraser. I hope you are as productive a politician as you were an academic!
Most traits we signal are continuous variables: attractiveness, diligence, intelligence, loyalty and so on. However, the signals onlookers receive about our traits are often binary, as are the rewards: did we get the job or scholarship, did we meet the deadline or arrive on time, did the person we flirted with reject our advances? When there is a threshold like this, it is especially desirable to fall on the good side. Let’s say you get a job if you look 5/10 or better to the selectors. The difference in effort required to move from 4.9 to 5.1 is small, but the difference in how you look to distant others can be large. If an onlooker knows nothing else about you and believes quality is spread evenly from 0 to 10, then if you don’t get the job you look on average like a 2.5, while if you do get it you look on average like a 7.5.
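The 2.5 and 7.5 figures can be verified by simulation. The setup is an assumption I am making explicit: the onlooker’s prior over your quality is uniform on [0, 10], and all they observe is which side of the 5/10 threshold you fell on, so their posterior mean is the conditional expectation on each side.

```python
# Monte Carlo check of the conditional expectations behind the 2.5 / 7.5
# example: quality uniform on [0, 10], observed only as above/below 5.
import random

random.seed(0)
quality = [random.uniform(0, 10) for _ in range(200_000)]

rejected = [q for q in quality if q < 5]
accepted = [q for q in quality if q >= 5]

mean_rejected = sum(rejected) / len(rejected)  # ~2.5
mean_accepted = sum(accepted) / len(accepted)  # ~7.5
print(round(mean_rejected, 1), round(mean_accepted, 1))
```

With a different prior the gap would change, but any prior with mass on both sides of the cutoff produces the same qualitative jump: a small move across the threshold causes a large move in the onlooker’s posterior.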
If the process is competitive, as most job selections are, everyone tries very hard to signal effectively and the threshold for apparent competence rises. This is an arms race, so overall nobody benefits except the employer, who gets a more credible signal of dedication and interest in the job. When a process is not competitive, as when people work to meet a deadline in order to seem competent, lots of people will work hard to finish something just before the deadline to avoid looking like they couldn’t. If deadlines make it easier for a group to coordinate, or you need to overcome personal time inconsistency, this is useful.
This helps explain our caution in telling others about the things we apply for: if we fall on the bad side of the threshold, we don’t want them to know we even tried. If we are risk averse with our reputations, it also helps explain our reluctance to apply for jobs we might not get or to flirt with people who might reject us, even when the attempt costs only a little time and effort. It probably contributes to the distress of a divorce or breakup: a marriage just short of divorce can look OK from the outside, but if your wife leaves you then, for all a distant onlooker knows, you were a terrible husband. A threshold is most threatening when we especially care about the signal we are about to send and extra effort can do little to substitute for underlying quality.
These effects should matter more with people who don’t know us well than with those who do, though in some cases our certification of competence will also be important to our close friends.
Any other consequences of these thresholds you can think of?