
A common folk explanation for the triumph of capitalism over communism goes along these lines:

Communism has some lovely notions about sharing wealth between people in proportion to their needs, and ideally we would indeed live that way. But people are not motivated to work under such egalitarian conditions. Humans are somewhat pro-social and do make some sacrifices for others, especially close friends and family. But that just isn’t enough to keep people working hard and productively in big, anonymous, industrial economies year in, year out. The economic system has to go with the grain of human nature and appeal to people’s greed by offering private rewards for hard work and risk-taking. That is why market economies have become rich and centrally planned ones have stagnated. Communism was a triumph of idealism over the realities of human nature.

If this really is the reason capitalism has been so successful, I’m afraid the future doesn’t look so good for capitalism.

In that caricature, capitalism is only the best economic system given the constraints imposed by human nature. Human nature has turned out to be harder to mould than 19th century idealists had hoped, but it will not remain fixed in that way forever. Over thousands of years evolution can and will change human nature, leaving us free to choose from a broader range of social structures.

Long before ‘natural selection’ has much impact I expect that ‘human-directed selection’ will take off. Initially children will be chosen for things like beauty, intelligence and health, but eventually our personalities will also become a parental or social choice. It will then be within our power to take the pro-social behaviour that humans currently display to only a small in-group of close friends and family, and direct it towards larger groups of our choosing. Communism could get a second run, only this time it wouldn’t have to work against a human nature that evolved to serve our hunter-gatherer ancestors!

Communist communities whose members are selected to cooperate selflessly among themselves could turn out to be more productive and gradually out-compete individualistic or capitalist communities. These communities might resemble hyper-social super-organisms like ant or bee colonies.

The competitive dynamics of such a scenario are a challenge to imagine. There would be many ways such cooperation could be undermined, but it might also prove possible to sustain. Excluding and punishing free-riders within the community will be an option for people, as it is for insects.

Such communities might still choose to use markets and prices to solve the economic calculation problem but then redistribute what they produce in a very egalitarian way. Or future technologies might allow them to dispense with markets altogether.

Though I am personally quite an individualist and enjoy the classically liberal way of life, I am not so horrified by the thought of human or post-human societies being very different in the future. The members of such a future ‘communist’ society would not necessarily share my individualistic preferences, and so might not suffer from living in service to giant communities the way humans today would. The desirability of this scenario was discussed by Peter Singer and Tyler Cowen a few years ago:

Cowen: Let’s try some philosophical questions. You’re a philosopher, and I’ve been very influenced by your writings on personal obligation. Apart from the practical issue that we can give some money and have it do good, there’s a deeper philosophical question of how far those obligations extend, to give money to other people. Is it a nice thing we could do, or are we actually morally required to do so? What I see in your book is a tendency to say something like “people, whether we like it or not, will be more committed to their own life projects than to giving money to others and we need to work within that constraint”. I think we would both agree with that, but when we get to the deeper question of human nature: do we in fact like that fact, or do you feel it represents a human imperfection? Is that a fact about human nature you’re comfortable with? If we could imagine an alternative world, where people were, say, only 30% as committed to their personal projects as the people we know, say the world is more like, in some ways, an ant colony, where people are committed to the greater good of the species. Would that be a positive change in human nature or a negative change?

Singer: Of course, if you have the image of an ant colony everyone’s going to say “that’s horrible, that’s negative”, but I think that’s a pejorative image for what you’re really asking …

Cowen: No, no, I don’t mean a colony in a negative sense. People would cooperate more, ants aren’t very bright, we would do an ant colony much better than the ants do. …

Singer: But we’d also be thinking differently, right? What people don’t like about ant colonies is ants don’t think for themselves. What I would like is a society in which people thought for themselves and voluntarily decided that one of the most satisfying and fulfilling things they could do would be to put more of their effort and more of their energy into helping people elsewhere in need. If that’s the question you’re asking, then yes, I think it would be a better world if people were readier to make those concerns their own projects.

The power of exponential growth seems to make a compelling case for effective altruists to delay their donations. An average 5% return on investment (ROI) would turn one dollar into more than ten in 50 years’ time. If saving a life costs $2000 now and similar opportunities will exist in the future, it would cost just $200 today to save a life in 2062 – a relative bargain! Sadly things aren’t so simple. Whether we really should delay depends on the specifics of the activities we are funding and on difficult predictions about the future. Here I’ll summarise the most important uncertainties as a roadmap for future posts.

Our goal can be summarised as choosing the time t which maximises

(1 + Return on investment)^t × (Cost effectiveness of the donation at time t) × (Probability of the donation actually being made at time t).

Unless you are a multimillionaire, the relevant expected ROI is the highest one available without regard to risk. Giving $2m will do about twice as much good for the world as $1m, so to maximise your expected impact you should just maximise your expected donation. Note that if your favourite charity could use money now to attract new donations faster than you expect your investments to return profits, then donating now would have to be better.
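To make the trade-off concrete, here is a minimal Python sketch of this multiplicative model. The functional form and every number in it (the ROI, the rate at which giving opportunities dry up, and the chance the donation is eventually made) are illustrative assumptions, not estimates.

```python
# Illustrative sketch of the give-now-versus-invest trade-off.
# All parameter values are made-up assumptions for illustration only.

def expected_impact(t, roi=0.05, effectiveness_decay=0.02, annual_survival=0.99):
    """Relative expected impact of donating after waiting t years.

    roi                 -- assumed annual return on invested savings
    effectiveness_decay -- assumed annual decline in the cost-effectiveness of
                           the best remaining giving opportunities
    annual_survival     -- assumed probability, per year, that the donation
                           still ends up being made
    """
    growth = (1 + roi) ** t
    effectiveness = (1 - effectiveness_decay) ** t
    probability_made = annual_survival ** t
    return growth * effectiveness * probability_made

if __name__ == "__main__":
    for t in (0, 10, 25, 50):
        print(f"Delay {t:>2} years: relative impact {expected_impact(t):.2f}")
    best_t = max(range(101), key=expected_impact)
    print(f"Best delay under these assumptions: {best_t} years")
```

Under these particular numbers the investment return outpaces the other two factors, so delaying keeps winning; nudge the decay or survival assumptions up a little and giving now wins instead, which is exactly why the uncertainties discussed below matter.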

The second and more challenging issue is how cost effective your donation will be in the future relative to now. If you thought basic health would be the optimal cause this would involve anticipating things like

  • the extent of poverty
  • the cost of delivering health services
  • how much other donors will be funding the low hanging fruit.

The last point is especially relevant for those like me thinking of funding existential risk reduction because a few billion from governments or philanthropists could make a big impact on the value of further funding in that area.

In evaluating cost effectiveness we must factor in that any good charity will have impacts that propagate through time and so offer its own ROI. For instance, combatting contagious diseases now rather than in 2062 should lead to fewer people becoming infected in the meantime and so result in a richer and healthier population in 2062. Similarly, spending on existential risk reduction draws attention, money and researchers to that issue. Giving now leaves your donations more time to have this snowball effect during the window of greatest extinction risk.

On the other hand delaying leaves you more time to identify cost-effective targets for donations. Personally, I am investing rather than giving mostly because I expect groups like 80,000 Hours to give me a much better idea of how to best reduce existential risk within the next decade.

Finally you must assess the risk of your donation never being made, for example due to a catastrophe which eliminates your savings. If you can’t bind yourself through a trust fund, you must also worry about changes to you or your life which result in you deciding not to give.

Two academics from my university think so:

Australian astronomers say finding planets outside the solar system that can sustain life should be made a top priority.

Dr Charley Lineweaver and PhD student Aditya Chopra from ANU have reviewed current research into environments where life is found on earth and the environments thought to exist on other planets.

They say understanding habitability and using that knowledge to locate the nearest habitable planet may be crucial for our survival as a species.

While I agree that in the long run space colonisation is central to humanity’s survival, this is not really sensible and is probably a misrepresentation of their research. We are a long way from being able to establish self-sustaining colonies on planets in our own solar system, let alone travel to other star systems. By the time we have the technology to contemplate doing that, we will long since have identified habitable planets without having gone out of our way to do so.

While space colonisation would help reduce the risk of human extinction, the unfortunate reality is that the technologies that threaten to ruin us are going to arrive well before independent, robust and self-sustaining colonies are possible. Risky technologies like mind uploading or machine intelligence are probably prerequisites for colonising other star systems, and maybe even for long-term survival on Mars. Slowing the development of the most risky technologies, controlling their use, and developing safe havens on Earth itself are likely to be more cost-effective strategies than space travel for the foreseeable future.

In response to my post about the case for working for singleton futures, Proper Dave made a point that had occurred to me in passing but which I have never properly thought through.

I actually believe the “singleton” scenario to be very, very improbable, even more so after reading your definition: ”a single decision-making agency … capable of exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy”

“effective control” it obviously have to delegate responsibility (not a singleton anymore) or move about its domain to do that… The speed of light problem, I actually quite confidently predict this to be, well impossible…

Again no one else can be allowed to do something even in small domains, the singleton has to do mundane stuff like the plumbing to high tasks like: “permanently preventing both internal and external threats to its supremacy” how is it going to do this? There will be have to be some parallelism, so the possibility of an “other” emerging.

This concept is just self contradictory and illogical really.

I do believe there maybe some way to setup a strict monopoly over a domain with free individuals, but it will be difficult to guarantee it to be “permanent”.

I think Dave is too pessimistic. A singleton is possible with delegated responsibility, so long as the central decision maker can rein in any delegate that attempts to deviate from its goals. This is clearly very difficult when the delegate is light years away: it would take too long to find out about, and intervene in, any conspiracy to deviate from the singleton’s plans before the conspiracy could prepare and defend itself. Anticipating this, a singleton would have to design any space colonisers it released such that they would never want to deviate from its original plans. For an AI this might be possible if the AI made an exact copy of itself and designed the copy in such a way that its goals could not change in random ways. Ensuring that its utility function could never change might require making the AI less flexible or less able to grow and evolve – that is to say, making it stupider. But it may not be an insurmountable problem.

I am less clear whether this is possible for uploads or other creatures that have evolved rather than been designed from scratch, and so whose inner workings are not fully understood. Has anyone investigated this properly?

Nick Bostrom of the Future of Humanity Institute has a new interview in The Atlantic. It’s one of the more sophisticated discussions of existential risk I’ve seen in the mainstream press and is worth sharing and reading in full.

One possible strategic response to human-created risks is the slowing or halting of our technological evolution, but you have been a critic of that view, arguing that the permanent failure to develop advanced technology would itself constitute an existential risk. Why is that?

Bostrom: Well, again I think the definition of an existential risk goes beyond just extinction, in that it also includes the permanent destruction of our potential for desirable future development. Our permanent failure to develop the sort of technologies that would fundamentally improve the quality of human life would count as an existential catastrophe. I think there are vastly better ways of being than we humans can currently reach and experience. We have fundamental biological limitations, which limit the kinds of values that we can instantiate in our life—our lifespans are limited, our cognitive abilities are limited, our emotional constitution is such that even under very good conditions we might not be completely happy. And even at the more mundane level, the world today contains a lot of avoidable misery and suffering and poverty and disease, and I think the world could be a lot better, both in the transhuman way, but also in this more economic way. The failure to ever realize those much better modes of being would count as an existential risk if it were permanent.

Another reason I haven’t emphasized or advocated the retardation of technological progress as a means of mitigating existential risk is that it’s a very hard lever to pull. There are so many strong forces pushing for scientific and technological progress in so many different domains—there are economic pressures, there is curiosity, there are all kinds of institutions and individuals that are invested in technology, so shutting it down is a very hard thing to do.

What technology, or potential technology, worries you the most?

Bostrom: Well, I can mention a few. In the nearer term I think various developments in biotechnology and synthetic biology are quite disconcerting. We are gaining the ability to create designer pathogens and there are these blueprints of various disease organisms that are in the public domain—you can download the gene sequence for smallpox or the 1918 flu virus from the Internet. So far the ordinary person will only have a digital representation of it on their computer screen, but we’re also developing better and better DNA synthesis machines, which are machines that can take one of these digital blueprints as an input, and then print out the actual RNA string or DNA string. Soon they will become powerful enough that they can actually print out these kinds of viruses. So already there you have a kind of predictable risk, and then once you can start modifying these organisms in certain kinds of ways, there is a whole additional frontier of danger that you can foresee.

In the longer run, I think artificial intelligence—once it gains human and then superhuman capabilities—will present us with a major risk area. There are also different kinds of population control that worry me, things like surveillance and psychological manipulation pharmaceuticals.

If I wanted some sort of scheme that laid out the stages of civilization, the period before machine super intelligence and the period after super machine intelligence would be a more relevant dichotomy. When you look at what’s valuable or interesting in examining these stages, it’s going to be what is done with these future resources and technologies, as opposed to their structure. It’s possible that the long-term future of humanity, if things go well, would from the outside look very simple. You might have Earth at the center, and then you might have a growing sphere of technological infrastructure that expands in all directions at some significant fraction of the speed of light, occupying larger and larger volumes of the universe—first in our galaxy, and then beyond as far as is physically possible. And then all that ever happens is just this continued increase in the spherical volume of matter colonized by human descendants, a growing bubble of infrastructure. Everything would then depend on what was happening inside this infrastructure, what kinds of lives people were being led there, what kinds of experiences people were having. You couldn’t infer that from the large-scale structure, so you’d have to sort of zoom in and see what kind of information processing occurred within this infrastructure.

It’s hard to know what that might look like, because our human experience might be just a small little crumb of what’s possible. If you think of all the different modes of being, different kinds of feeling and experiencing, different ways of thinking and relating, it might be that human nature constrains us to a very narrow little corner of the space of possible modes of being. If we think of the space of possible modes of being as a large cathedral, then humanity in its current stage might be like a little cowering infant sitting in the corner of that cathedral having only the most limited sense of what is possible.

For those concerned about the future there are a lot of things to worry about. Nuclear war, bioterrorism, asteroids, artificial intelligence, runaway climate change – the list goes on. All of these have the potential to devastate humanity. How then to pick which one is the most important to work on? I want to point out a reason to work on machine intelligence even if one thinks that there is a low probability of the technology working.

Preventing catastrophes like nuclear war does help avoid human extinction and keeps us on the path of growth and eventual space colonisation. However, it is unclear how pleasant this world will be for its inhabitants. If a singleton does not develop, that is "a single decision-making agency … exerting effective control over its domain, and permanently preventing both internal and external threats to its supremacy," the logic of survival means that we will eventually end up regressing to a competitive Malthusian world. That world is one where vast numbers of beings compete for survival on subsistence incomes, as has been the case for most creatures on Earth since life first appeared billions of years ago. The creatures working to survive could be mind uploads or something else entirely. In this scenario it is competitive pressure and evolution which determine the long-run outcome. There will be little if any path dependence. Just as it was not possible for a group of people planted on Earth millions of years ago to change the welfare of the beings that exist today, after evolution has had its way, so too it will be impossible for anyone today to change what kinds of creatures win out in the battle for survival millions of years from now. The only impact we could have now would be to reduce the risk of life disappearing altogether at this brief bottleneck on Earth, where extinction is a real possibility. The difference between the best and worst futures possible is that between the desirability of life disappearing altogether and the desirability of a Malthusian world.

As competitive pressures do not necessarily drive creatures towards states of high wellbeing, it is hard to say which of these is the better outcome. I hope that technology which allows us to consciously design our minds and therefore our experience of life will lead to a nicer outcome even in the presence of competitive pressures, but that is hard to predict. Whatever the merits of the competitive future, it falls short of what a benevolent, all-powerful being trying to maximise welfare would choose.

On the other hand, if a singleton is possible or inevitable, the difference between the best and worst futures is much greater. The desires of the singleton which comes to dominate Earth will be the final word on what Earth-originating life goes on to do. It will be free to create whatever utopia or dystopia it chooses without competitors or restrictions, other than those posed by the laws of physics. In this world it is possible to influence what happens millions or billions of years from now, by influencing the kind of singleton which takes over and spreads across the universe. The difference in desirability between the best and worst case is that between an evil singleton which unrelentingly spreads misery across the universe, and the ideal benevolent singleton which goes about turning the entire universe into the things you most value.

If you think there is much uncertainty about whether a singleton is possible, and want to maximise your expected impact on the future, you should act as though you live in a world where it is possible. You should only ignore those scenarios if they are very improbable.

What technology is most likely to deliver us a singleton in the next century or two, giving you a chance to have a big impact on the future? I think the answer is a generalised artificial intelligence, though one might also suggest a non-AI group achieving total dominance through mind uploads, ubiquitous surveillance, nanotechnology, or whatever other emerging technology.

So if any of you are tempted to dismiss the Singularity Institute because the runaway AI scenario seems so improbable: you shouldn’t. It makes sense to work on it even if it is improbable. The same goes for those who focus on the possibility of an irreversible global government.

Update: I have tried to clarify my view in a reply to Carl Shulman below. My claim is not that the probability is irrelevant, just that it is only part of the story and that working on low probability scenarios can be justified if you can have a larger impact, which I believe is the case here. Nor do I or many people working on AI believe that an intelligence explosion scenario is particularly unlikely.

Earlier today I had the pleasure of a long Skype call with Seth Baum about existential risk and how I could best contribute to reducing it. Among other things, Seth studies climate change as a global catastrophic risk at Columbia University. He has taken it upon himself to help network people studying different aspects of global catastrophic risk across universities, governments and the private sector. He does not accept Nick Bostrom’s quip that “there is more scholarly work on the life-habits of the dung fly than on existential risks.” According to him there is a lot of research on some existential risks, in particular nuclear war and disease pandemics – it is just not organised into a cohesive literature on ‘global catastrophic risk’ as such. One of Seth’s goals is to connect people studying these risks and working in related fields, in order to encourage them to study the characteristics and possible solutions they have in common. He is organising the global catastrophic risk symposium at the World Congress on Risk 2012 in July, which I am looking forward to attending. Think about coming as well if you will be in the area.

He shared links to a number of organisations that were new to me which I thought I would pass along.

Seth and his colleague Tony Barrett are founding a new organisation, the Global Catastrophic Risk Institute. Their hope is to evaluate which existential risks are most important to focus on, and which techniques are most likely to succeed at reducing them, for instance stockpiling food or building bunkers. Unlike GiveWell they will be rating strategies rather than organisations. The effectiveness of different approaches presumably varies by orders of magnitude, so this is incredibly important work. It will be a useful guide for those who become concerned about global catastrophic risk and want to make a big difference to the universe. Sister organisations Blue Marble Space and One Flag in Space aim to promote space colonisation in order to reduce the risk of human extinction and conflict between nations, among other reasons.

A similar organisation is Saving Humanity from Homo Sapiens which is attempting to link donors concerned about existential risk with organisations that can most effectively use extra funding. This will hopefully in the future also involve evaluating their effectiveness.

The Skoll Global Threats Fund is a charitable foundation aiming to support those dealing with a range of catastrophic risks such as climate change and nuclear war. Its goal is to “work proactively to find, initiate, or co-create breakthrough ideas and/or activities that we believe will have large-scale impact, either directly or indirectly, and whether on cross-cutting issues or individual threats.”

The Tellus Institute engages in future scenario mapping, including potential collapses of humanity and growth into post-human or space-faring civilizations. The paper Great Transitions is an example, though I am yet to read it.

The UPMC Center for Biosecurity is a leading research organisation on catastrophic biosecurity threats. The Cultural Cognition Project at Yale is moving into studying dual-use problems in technology, including Nanotechnology Risk Perception.

Finally, if you haven’t checked out Nick Bostrom’s personal site then you really should. He has some excellent papers on existential risk, among other futurist issues. I hope to blog about some of the highlights in the near future.

Unsurprisingly given our psychology’s origin in evolution, humans spend most of their time thinking about everyday concerns: how to get food, stay clean, find friends, get laid, etc. Most of our thinking and talking about far away issues we don’t have much control over is just for signalling nice things about ourselves. There is little reason to direct those efforts towards the things which really matter most as our views change nothing; instead it’s safest to go along with the idealistic fashions of our social group at any point in time.

Unless you’re really smart. In that case, you can go out and show just how brilliantly smart you are by putting forward strange positions no mediocre wit would feel smart enough to defend. Nick Bostrom, busy signalling his superior smarts with an unusual but consistent worldview, swims against the current of his day and proposes these fairly unusual answers to the most serious problems humanity faces: Death, Existential Risk, Suffering and Mediocre Experiences. If you knew you were going to (have the chance to) be born again in the year 3000, I think these are just the issues you would want us to start dealing with seriously now, not most of the nonsense we ostensibly do to help the future. Or you could just save some money (PPT) for them instead, if you care.

Lucky we have some really smart people: to show us how smart they are, sometimes they go out and say really outlandish but important things.

Martin Rees on existential risk:

I am concerned about the threats and opportunities posed by 21st century science, and how to react to them. There are some intractable risks stemming from science, which we have to accept as the downside for our intellectual exhilaration and—even more—for its immense and ever more pervasive societal benefits. I believe there is an expectation of a 50% chance of a really severe setback to civilization by the end of the century. Some people have said that’s unduly pessimistic. But I don’t think it is: even if you just consider the nuclear threat, I think that’s a reasonable expectation.

If we look back over the cold war era, we know we escaped devastation, but at the time of the Cuba crisis, as recent reminiscences of its 40th anniversary have revealed, we were really on a hair trigger, and it was only through the responsibility and good sense of Kennedy and Khrushchev and their advisers that we avoided catastrophe. Ditto on one or two other occasions during the cold war. And that could indeed have been a catastrophe. The nuclear arsenals of the superpowers have the explosive equivalent of one of the US Air Force’s daisy cutter bombs for each inhabitant of the United States and Europe. Utter devastation would have resulted had this all gone off.

The threat obviously abated at the end of the cold war, but looking a century ahead, we can’t expect the present political alignments to stay constant. In the last century the Soviet Union appeared and disappeared, and there were two world wars. Within the next hundred years, since nuclear weapons can’t be disinvented, there’s quite a high expectation that there will be another standoff as fearsome as the cold war era, perhaps involving more participants than just two and therefore being more unstable. Even if you consider the nuclear threat alone, then there is a severe chance, perhaps a 50% chance, of some catastrophic setback to civilization.

There are other novel threats as well. Not only will technical change be faster in this century than before, but it will take place in more dimensions. Up to now, one of the fixed features over all recorded history has been human nature and human physique; human beings themselves haven’t changed, even though our environment and technology has. In this century, human beings are going to change because of genetic engineering, because of targeted drugs, perhaps even because of implants into their brain to increase our mental capacity. Much that now seems science fiction might, a century ahead, become science fact. Fundamental changes like that—plus the runaway development of biotech, possibly nanotechnology, possibly computers reaching superhuman intelligence—open up exciting prospects, but also all kinds of potential scenarios for societal disruption—even for devastation.


In the marketplace, factors of production (usually grouped into labour, capital and land/natural resources) are paid what is called their ‘marginal product’ (the extra output derived from the last unit of each employed). The logic is simple: if I run a business, I will keep on hiring more employees, borrowing more money and renting more land until the cost of each extra unit of labour, capital or land exceeds the additional output I get from that extra factor. While various market failures can modify this a bit, the prices will gravitate to those values.
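As a toy illustration of that hiring rule, here is a short Python sketch. The production function and the wage are made-up numbers, chosen only to show the logic of hiring until the marginal product falls below the cost of the next unit of labour.

```python
# Toy model of the marginal product rule: keep hiring while one more worker
# adds more output than they cost. All numbers are made up for illustration.

def output(workers: int) -> float:
    """Assumed production function with diminishing returns to labour."""
    return 100 * workers ** 0.5

wage = 5.0  # assumed cost of employing one extra worker

workers = 0
while output(workers + 1) - output(workers) >= wage:
    workers += 1  # the next worker still adds more than they cost

print("Workers hired:", workers)
print("Marginal product of the last worker hired:",
      round(output(workers) - output(workers - 1), 2))
```

In a competitive market the wage itself settles near the marginal product of the last worker hired, which is the sense in which each factor is paid its marginal product.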

"Hi, I'm here to starve you to death."

When technology first started improving rapidly some people worried that eventually most workers, already on subsistence incomes, would be put out of work by machines and starve. Luckily the opposite happened. New inventions made some uses of labour obsolete, but there were always new even more productive uses for labour in making and maintaining the inventions. Productivity went up, and new births did not come rapidly enough to push down the marginal product of labour to subsistence level again, so each person became much richer. Machines were both substitutes and complements of human labour, but they were stronger complements.

While this has been the case for technological progress so far, there is no reason to assume it will remain so in the future as our machines become more intelligent and can substitute for ever more human functions. Machines have so far added value to the jobs humans could do more cheaply than machines. However, if machines are invented which are better than humans at everything and cost less than a human subsistence income to maintain, humans could go the same way as the horse and buggy. The marginal product of labour would fall to zero because it would always be better value to employ more capital (robots) in production than to employ another person. Without a strong welfare state to redistribute income from the few increasingly rich owners of capital and land to the vast bulk of humanity, which relies on the product of its labour to survive, most of us would starve and die out.
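A minimal sketch of that substitution argument, again with purely hypothetical numbers: once a robot can produce a unit of output for less than a human needs just to subsist, the cost-minimising employer never hires the human.

```python
# Hypothetical cost comparison between human and robot labour.
# Every figure below is an assumption chosen for illustration.

human_subsistence_wage = 10.0  # assumed minimum daily cost of employing a human
human_output_per_day = 1.0     # normalised human output per day

robot_running_cost = 4.0       # assumed daily cost of running a robot
robot_output_per_day = 2.0     # assumed robot output per day

human_cost_per_unit = human_subsistence_wage / human_output_per_day  # 10.0
robot_cost_per_unit = robot_running_cost / robot_output_per_day      # 2.0

if robot_cost_per_unit < human_cost_per_unit:
    print("Every marginal unit of output goes to robots; human labour earns nothing.")
```

At that point the market-clearing wage for human labour sits below subsistence, which is the scenario described above.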

You might think that with so much wealth being produced by these amazing machines, it is inconceivable that rich capital- and land-owning humans would let their brethren fall into poverty and starvation. History doesn’t offer us much comfort here. The sight of the masses scraping by on subsistence incomes has not troubled elites much throughout human history, so why would it in the future? Famines often occurred through history while rich land and capital owners looked on with indifference.

Would the impoverished masses revolt and overthrow the whole market system? This has happened a few times in the past, but the powerful weapons the (robot) army will have in the future will probably make this impossible as long as the military remains loyal. The wealthy would have no reason to keep the poor alive at all by this stage anyway as they will have nothing to gain by trading with them.

This presents a tough thought experiment for non-consequentialist libertarians and others who support the right to keep whatever you make. So far they have had the luxury that negative freedom and property rights have almost always led to higher wages for common folk. But if they really do believe, as they claim, that forcibly redistributing the product of labour and capital is evil theft, then in the scenario above they would have to recommend watching most of humanity die out or subsist on the charity of a few ultra-rich overlords.

Those who don’t like the prospect of a mass human die-off would presumably want to terminate or at least ameliorate the market distribution of incomes. They might seek to build strong, robust redistribution programs from capital to labour into existing institutions. They may also want to change our governing institutions to make them less easily manipulated by rich elites, or give elites some incentive to maintain them. While capital owners would have more power than before, they might also be so rich that a welfare system to keep the rest of humanity alive wouldn’t bother them terribly. We might also want to distribute ownership of capital and land more evenly, perhaps through forced savings accounts and greater income equality.

Read more: Economics Of The Singularity

If the idea above has gotten you down, hopefully this will cheer you up:

How much does it matter whether we have labour unions or not?

In the popular imagination labour unions are a significant factor in the incomes of ordinary people and a major reason we don’t endure the relative poverty of past generations. Here I will try to argue that from a long-run view labour unions, even given very generous assumptions, are pretty inconsequential, using some data from the Australian case. In the process I’ll help explain why just 6% of leading economics bloggers described unions as ‘very important for the health of the US economy’ in a recent survey, while 70% described them as unimportant.

Recent research from the Melbourne Institute suggests that the lowest income earners in Australia receive a premium of ~12% of their wage from union activity, while the highest income earners receive a ~6% premium from union activity. They speculate that the reason for this is that high income employees are more skilled at bargaining individually and so benefit less from expert union representation. Such a premium is roughly comparable with union premiums observed across Europe and the US, and a bit higher than past estimates of the premium in Australia probably due to recent changes in industrial law.

This wage premium is entirely what we would expect from economic theory; where monopolies or cartels can push up prices and improve their earnings, they will surely do so. Even where unions lack what economists term ‘market power’, by allowing workers to pool their knowledge and skill when negotiating wages they can produce better outcomes for workers who might otherwise be convinced to take low wages. Unions can also use their power to push for better working conditions. Here I will assume that workers use unions to push for higher ‘non-wage compensation’ (safe and pleasant work environments, breaks, amenities) and higher wages roughly equally. In any round of bargaining workers might get larger improvements in one or the other, but over decades they probably want them to rise proportionately.
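As a rough back-of-the-envelope illustration of why a one-off premium of this size looks small over the long run, here is a short Python comparison. The 12% figure is the premium quoted above; the 2% annual real wage growth rate is an assumption chosen for illustration, not a number from the post.

```python
# Rough comparison: a one-off 12% union wage premium versus ordinary
# compounding wage growth. The 2% growth rate is an illustrative assumption.

union_premium = 0.12   # premium for the lowest earners, per the estimate above
annual_growth = 0.02   # assumed long-run real wage growth

wage, years_to_match = 1.0, 0
while wage < 1 + union_premium:
    wage *= 1 + annual_growth
    years_to_match += 1

print(f"About {years_to_match} years of 2% growth match a one-off 12% premium.")
print(f"Over a century, growth alone multiplies wages by {(1 + annual_growth) ** 100:.1f}x.")
```

On these assumptions the premium is worth roughly half a decade of ordinary wage growth, which helps explain the claim above that unions, even on generous assumptions, matter little over the long run.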


HT Talia Katz.



