Yearly Archives: 2014
Today the Rockefeller Foundation announced its latest group in the 100 Resilient Cities program. I was surprised to find Sydney as one of the cities that are now part of the program.
Except it’s not Sydney; it’s the City of Sydney, one of the 40 or so local government areas (LGAs) that make up the real Sydney metro area. Depending on how you want to measure resilience, it’s probably one of the more resilient, if not the most resilient, LGAs in the Sydney metro area. And with the possible exception of terrorism, it would be on the lower side in terms of hazard profile too.
It contains 4% of the population and just 0.2% of the land area of the metropolitan region. It does, however, account for 28% of the Gross Regional Product, 25% of the jobs and 13% of the businesses of the Sydney metro area. Yet this economic contribution is heavily dependent on the rest of the metro area. Extreme interconnectedness is one of the things that reduces resilience, and improved collaboration is one of the best ways to build it.
If City of Sydney is really serious about being a resilient city it will need to engage heavily with a much broader set of partners including the State Government and the other 40-odd local governments in the metro area. It will also need to ensure that the benefits it gains from being part of the Rockefeller collaboration are shared amongst these partners.
There is currently no city-wide emergency management or resilience forum in Sydney. Perhaps this grant will be the impetus that establishes one.
Highly fragmented local government is not unusual according to data from the OECD, with the average across 275 metro areas being 74 local governments (and that’s not even counting other levels of government or single-purpose local bodies). A number of other cities in the 100 Resilient Cities campaign have similar problems of fragmentation, and it’s not clear whether it’s something that’s even being considered.
It’s going to be interesting to see how the City of Sydney handles this issue moving forward with the program (and for that matter who gets the Chief Resilience Officer job). I’ll be staying tuned.
A Black Swan is by definition something you did not expect to occur. It’s something you’re not prepared for (specifically), something that took you by surprise. Once you’ve forecast an event (or at least entertained its possibility) it ceases to be a Black Swan.
But are Black Swans really a complete surprise?
Someone, somewhere, may well have thought of the possibility of almost any event. When you think about it, all events are really Gray Swans – or Gray Geese (I like the alliteration better)*. Indeed, the events usually trotted out as Black Swans (9/11, the GFC, the Tohoku earthquake and Fukushima disaster) had been entertained as possibilities by smart people working in relevant fields, and proposals had been made for various actions that would have prevented or substantially mitigated these disasters. But the decision makers, the people who really count, did nothing. They either weren’t told about the risk or chose to do nothing.
However, just because these events can be foreseen doesn’t mean we need to be expending enormous effort in searching for them.
One of the problems with the ‘hunting black swans’ mentality is that there are a near infinite number of possible events, of which only a much smaller subset will actually occur in a reasonable planning horizon. Trying to address every Gray Goose you can think of will quickly get you nowhere. What’s needed are actions that will enable organisations and communities to adapt to any event – this is what much of the resilience discussion is getting at.
Yet some of the risks we face are worth addressing on their own, either because they’re very likely or easy to mitigate – the ‘low hanging fruit’ of the risk management world. We shouldn’t ignore these in our quest for resilience, yet these ‘easy’ risks are sometimes forgotten about in the resilience discussion.
The risk landscape is not static either. Some risks that were Gray Geese may become easier to mitigate or likely enough that they warrant individual attention. Others may diminish in likelihood or become harder to mitigate individually and so are better off treated as part of broader resilience processes.
Keeping track of that shifting landscape might be harder than running workshops on hunting black swans, but it will be much more useful.
*I’m sure it probably irritates N.N. Taleb even more.
By now you will probably have seen the reports about new research apparently showing that female-named Atlantic Hurricanes are deadlier than male-named storms because female names are ‘less scary’ than male names. The main investigation consisted of an analysis of 92 Hurricanes that made landfall in the USA between 1950 and 2012 (Katrina and Audrey, the two deadliest storms, were excluded), examining fatalities, normalised damage, category, minimum pressure, gender of the storm name and year of occurrence.
There has been a fair amount of criticism of the paper from around the web, including GRRLScientist in the Guardian, Future Tense in Slate, Not Exactly Rocket Science in National Geographic and on Mashable. The authors have responded to some of the criticism in these pieces.
The principal criticism has been of the approach used to show that the finding is ‘significant’, which involves a generalised linear model with a negative binomial distribution and two-way interaction terms. Less fancy methods (simple correlation and a multi-linear regression) found no significant effect.
Is there a difference in the number of people who die in Hurricanes based on the storm name’s gender?
You can’t just compare the average rates – Hurricane fatalities are a fat-tail phenomenon, where a small number of big storms contribute most of the fatalities. Let’s forget modelling, regression algorithms and the fancier statistics. Let’s just look at the data. Below I’ve plotted the cumulative distributions of fatalities from male-named storms and female-named storms from the entire dataset.
You can see that for the high frequency, low fatality events the distributions are basically the same, but there appears to be some divergence at the more severe end, with a higher fatality rate in female-named storms. Even so, the difference isn’t that large – and there’s a data issue: storms between 1953 and 1979 only received female names, so there are far more of them than male-named storms. So visually there’s not a huge difference, and we can quantify this using a statistical measure called the two-sample Kolmogorov-Smirnov (KS) test. This test compares two samples of data and helps you decide between one of two hypotheses:
H0 – They’re from the same distribution.
H1 – They’re from different distributions.
For these data the KS test gives D = 0.1065, p-value = 0.976. This means we can’t reject H0 – i.e. there’s no detectable difference in the underlying pattern of fatalities based on storm name gender.
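To make the test concrete, here’s a minimal sketch in Python (using scipy, with made-up toy samples rather than the actual storm fatality data) of how the two-sample KS test behaves at its two extremes:

```python
from scipy.stats import ks_2samp

# Toy data only -- not the actual hurricane fatality counts.
# Two identical samples: the KS statistic D is 0, the p-value is 1,
# and we have no grounds to reject H0.
same = ks_2samp([1, 2, 3, 4, 5], [1, 2, 3, 4, 5])
print(same.statistic, same.pvalue)  # D = 0.0, p = 1.0

# Two samples that don't overlap at all: D hits its maximum of 1.0
# and the p-value is small, so we'd reject H0 in favour of H1.
different = ks_2samp([1, 2, 3, 4, 5], [10, 20, 30, 40, 50])
print(different.statistic)  # D = 1.0
```

The D = 0.1065 for the storm data sits much closer to the first case than the second, which is what the p-value of 0.976 is telling us.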
We can also look at only the storms during the time in which both male and female names were used:
Here we can see that what little difference we saw in the complete dataset disappears completely, further suggesting that the effect found by Jung et al. is quite probably a statistical fluke. If you look at enough variables you’ll eventually find a statistically significant correlation. In fact, if you look hard enough you can find all sorts of strange things that correlate with each other.
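That point is easy to demonstrate with a quick simulation (a Python sketch with numpy/scipy; the sample sizes, variable counts and seed are all arbitrary): correlate pure noise against pure noise enough times and ‘significant’ results appear at roughly the 5% rate you’d expect by chance alone.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
target = rng.normal(size=30)  # stand-in outcome: pure random noise

# Test 200 equally meaningless candidate predictors against it.
significant = 0
for _ in range(200):
    predictor = rng.normal(size=30)
    _, p = pearsonr(target, predictor)
    if p < 0.05:
        significant += 1

# By construction nothing is related to anything, yet roughly 5% of
# the tests (around 10 of 200) come back 'statistically significant'.
print(significant)
```

This is the multiple-comparisons problem in miniature: test enough variables and something will always cross the significance threshold.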
To explain the effect, the authors turn to additional experiments in which they measured a difference, based on storm name gender, in perceived threat and willingness to evacuate. Implicit sexism plays a big, big role in our society (I highly recommend Cordelia Fine’s book Delusions of Gender, which covers this in detail), but it’s hardly the only thing at play – many factors cause people to downplay threats from, and delay responses to, Hurricanes and other disasters.
The scenarios presented (you can see examples here) were not anything like real Hurricane warnings and media broadcasts, which tend to be much more alarming and action oriented, especially for more severe Hurricanes which is where the authors claim there is an effect. A better experiment would be to mock up a TV news broadcast of Hurricane Alexander/Alexandria and show it to people in hurricane prone areas and see what the results are. I would be very surprised if there was still a detectable storm name effect once a heavy dose of reality is injected.
As for policy recommendations, I think it’s definitely too soon to consider changing the naming system, but whilst we’re talking about communication the NHC could really overhaul its woeful public advisory messages.
Alternatively there’s always this idea:
Well, the Abbott Government’s first budget is here and it ain’t pretty. Here’s a roundup of all the new measures and cuts related to disasters. Although there are a few new measures (such as the back-to-the-future National Bushfire Mitigation Programme), most of the ‘new’ spending is really the continuation of programs from previous governments.
New and continued spending
- Stronger biosecurity and quarantine arrangements – $20 million over 4 years
- Up to $320 million in assistance for drought affected farmers. However most of this is in the form of concessional loans or is contingent on state co-payments. Much of this spending is largely consistent with that of previous governments.
- National Bushfire Mitigation Programme – $15 million over 3 years
- Expanding research at the Australian Institute of Tropical Health and Medicine – $42 million over 4 years, though it’s unclear whether this will be offset by cuts elsewhere in the Australian Research Council’s budget.
- $9 million over three years for the continuation of the National Climate Change Adaptation Research Facility (though this is a drop in the amount of funding compared to previous governments).
- Addressing insurance costs in North Queensland. $12.5 million in grants to bodies corporate to undertake engineering assessments of strata properties to understand natural disaster risks and identify mitigation measures. This initiative will also establish an insurance comparison website for home building and contents and strata insurance for North Queensland.
- Continue the Natural Disaster Resilience Program and the National Partnership on pest and disease preparedness and response programmes (although with reduced funding)
Cuts and savings
- Closure of the Australian Emergency Management Institute. Some of its programs will be transitioned to a ‘virtual’ institute.
- Cuts of $7.6 billion to foreign aid over 5 years
- Cut the Exotic Disease Preparedness Program
- Merger of a number of biosecurity committees and working groups
- Cuts of $10 million over 4 years to the Bureau of Meteorology
- Cuts of $21.7 million over 4 years by merging the National Environmental Research Programme and the Australian Climate Change Science Programme
- Cease the National Insurance Affordability Initiative which had a number of activities related to flood risk management at a saving of $72.2 million
I’ve sourced these all from the budget papers. Let me know in the comments if I’ve missed anything.
Between 2000 and 2009 natural disasters cost the federal government about $1.5 billion in total. Then over the following 3 years they cost $7.7 billion, $1.6 billion and $2.1 billion. That’s right: in each of the last 3 years, disasters have cost more than in the previous ten years combined.
This huge increase has not gone unnoticed: the federal government will initiate a Productivity Commission inquiry into national disaster funding arrangements later this year.
The inquiry will undoubtedly consider where governments are spending on disasters, but will it look at where that money is coming from?
Although state and local governments do insure some of their assets the predominant approach to funding disaster losses in Australia has been to rely on the federal government’s ability to borrow money at rock-bottom rates. This is clearly not sustainable in the long term.
There are a range of ways governments can deal with disaster costs and their variability, from public disaster funds to catastrophe linked securities. These methods can also make the cost of disasters something that’s up-front and thus give governments strong incentive to invest in mitigation.
Up-front spending requires knowing how much the government will need to pay in the long term. All existing estimates of annual disaster losses in Australia are based on statistics of past events. Even leaving aside the future influence of climate change and demographic growth, these figures are heavily flawed: simple approaches based on historical statistics just don’t work, because disaster losses follow a power law and statistical predictions built on the historical record will always underestimate the probabilities of large losses.
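A toy illustration of that underestimation (a Python sketch with invented numbers, not real loss data): draw ‘losses’ from a heavy-tailed Pareto distribution, then compare the true chance of a very large loss against what a naive thin-tailed fit to the same historical record would predict.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Invented 'losses' from a Pareto distribution (shape 2.5, minimum 1),
# standing in for a heavy-tailed disaster-loss process.
shape = 2.5
losses = rng.pareto(shape, size=10_000) + 1.0

# Naive approach: fit a normal distribution to the historical record
# and ask how likely a loss of 20+ is.
normal_tail = norm(losses.mean(), losses.std()).sf(20.0)

# True exceedance probability under the Pareto model: P(X > x) = x**-shape
true_tail = 20.0 ** -shape

print(true_tail, normal_tail)
# The thin-tailed fit puts essentially zero probability on a loss of 20+,
# while the true chance is on the order of 1 in 1800 -- exactly the kind
# of large, rare loss a government budget most needs to plan for.
```

The gap between those two numbers is the gap between a budget that survives a bad disaster season and one that doesn’t.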
Fortunately there is another way – a comprehensive, bottom-up National Disaster Risk Assessment.
This process would bring together the massive amounts of existing data and modelling expertise on disaster risk in Australia, identify and address gaps and refine tools to improve risk assessment. The results would enable the estimation of not only the annual costs of disasters, but also the cost of the worst disaster seasons.
Risk Assessment is more than just modelling. To get the best outcomes requires collaboration among stakeholders to share knowledge, experience and ideas for reducing disaster risk. Governments, NGOs, academia, businesses and communities all have unique abilities to reduce disaster risks and the risk modelling activities should meet their needs.
It’s in actually contributing to risk reduction that a National Disaster Risk Assessment could really see gains. Since the 2002 COAG inquiry into Natural Disasters in Australia there have been more than 160 government inquiries into disasters, producing a wish-list of close to 4000 recommendations. Though the National Strategy for Disaster Resilience has brought together key strategic priorities, the level of its implementation is unclear.
Coming out of the twin strands of data-driven risk modelling and stakeholder-driven risk assessment a more focussed approach to resilience could be taken: A 3-year National Plan with a small number of concrete, achievable priorities and clear deadlines for implementation. As these priorities are completed new ones can be added through the risk assessment process, ensuring that the National Disaster Risk Assessment is an ongoing project rather than something done once and then shelved.
A National Disaster Risk Assessment would need a custodian to ensure this continuity and to ensure national risk assessment becomes a long-term activity of government. The Productivity Commission, with its modelling and consultative expertise and long history of influence on national policy, could be one option. Or perhaps the creation of a new agency, say a National Disaster Risk Commission, could better meet this task.
Regardless, making decisions about funding future disaster losses without even really knowing what they could be is a risky game.
We’re always bombarded with news about natural disasters, acts of god and stories of the wrath of mother nature. It’s a recurring meme in the public discussion about disasters and even amongst disaster management experts.
But we know that so-called “natural disasters” aren’t really natural – disasters are a social phenomenon; a hazard needs something to impact on before it becomes a disaster. Even talking about natural and anthropogenic hazards doesn’t really work very well. Human practices around land-clearing and vegetation management have a significant influence on floods and bushfires. Landslides that impact on human development are very often due to modifications made to slopes. And before we started building fragile structures, earthquakes would have been a curiosity like solar eclipses. Then there’s climate change – we’re beginning to drive change in the natural processes that govern many hazards.
Is there a better way for talking about hazards and disasters that goes beyond this false dichotomy?
One concept that I’ve been introduced to recently is the idea of policy domains and policy communities. A policy community is the group of all the participants in the policy making and implementation process for a particular subject. Some players are only concerned with one policy area (such as engineering seismologists) whilst others are concerned with many policy areas (such as meteorologists or disaster recovery experts).
To see how this concept could be used in categorising hazards, I used Gephi to build a map of various hazards and how their policy communities are connected. For example, blizzards and heatwaves are connected because they both relate to meteorology and climate – there’s overlap in their policy communities. An engineer may work in the earthquake, dam failure and structural collapse policy communities, and so on. This is a very subjective process; without data on how experts in various fields are actually connected (using, say, LinkedIn), it’s really just built on what I think. But let’s see if we can pull some groups of hazards out of this map:
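The same exercise can be sketched in code – here’s an illustrative Python version using networkx instead of Gephi, with a made-up edge list standing in for my subjective judgements about which policy communities overlap:

```python
import networkx as nx
from networkx.algorithms import community

# Hypothetical overlaps between hazard policy communities -- e.g.
# blizzard and heatwave share meteorologists, earthquake and
# structural collapse share structural engineers.
edges = [
    ("blizzard", "heatwave"), ("heatwave", "bushfire"),
    ("flood", "blizzard"), ("flood", "dam failure"),
    ("earthquake", "dam failure"), ("earthquake", "structural collapse"),
    ("pandemic", "animal disease"), ("pandemic", "terrorism"),
    ("terrorism", "industrial accident"),
    ("industrial accident", "transport accident"),
    ("transport accident", "utilities failure"),
]
G = nx.Graph(edges)

# Let a community-detection algorithm pull broad policy domains out of
# the overlap structure, analogous to the groupings discussed below.
domains = community.greedy_modularity_communities(G)
for d in domains:
    print(sorted(d))
```

With real connection data in place of my guesses, the detected communities would give a less subjective version of the groupings that follow.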
Based on the connections between the different policy communities I’ve pulled out three separate broad policy domains:
Settlements – this comprises most of the traditional natural hazards policy domain, but adds in a few other engineering related hazards such as structural collapse (which has huge overlaps with policy domains like earthquake) and dam failure (which could almost be considered a sub-speciality of the flood policy domain). The drivers and mitigation options for these hazards relate to where and how we build our houses, neighbourhoods and cities.
Society – this comprises human health, human security and agricultural hazards. This is a pretty diverse set of hazards (as evidenced by their sparse connections) but they mostly relate to people and societies. It could be broken up a bit further, but for simplicity I’ve grouped them.
Economy – this group has industrial/technological accidents, transport accidents and utilities failures; there’s a pretty big overlap with terrorism (which I placed in the societal group) and with a number of the Settlements policy domains. Most of these hazards relate to economic activity in the modern age and comprise most of the traditional anthropogenic hazards policy domain. I think labelling this group Economy is instructive, as it reminds us that so-called ‘human-caused’ disasters aren’t caused by people per se, but by the productive activities we do and the materials and technologies used in them. It’s this group of hazards that has evolved the fastest and continues to evolve rapidly.
So let’s forget natural hazards and anthropogenic hazards – how about Settlement hazards, Societal hazards and Economic hazards?