Reviews

Ever since I was taught about "error analysis" with experiments, I've had a guarded view of mathematical models. And I've continued to be surprised at the way mathematical models of great complexity are employed with no regard to error analysis. A brief digression: if you measure something, say a temperature, or weigh something, your measurement is only as good as the instrument you are using, so for your last digit it's +/- 0.5; and if you are adding or subtracting these measurements in your model, then the total error involved is the sum of these errors. But if you multiply or divide the measurements, then you need to add the percentage errors. Suffice to say that if you are calculating the strength of a bridge, a dam or the wing of an aircraft... then, with the formulas involved, it's pretty easy to arrive at a figure of +/- 1000%. And often the basic data (such as infections in a population) are only good to the nearest 10%. So even if your theoretical model is perfect, the output from a number of variables has a huge potential error component built into it. Yes, sometimes these errors might cancel out, but sometimes they will be cumulative.
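To make those two rules concrete, here's a minimal Python sketch of my own (not from the book; the readings and their +/- 0.5 uncertainties are invented for illustration):

```python
# A sketch of the two propagation rules described above (mine, not the book's):
# absolute errors add under addition/subtraction; relative (percentage)
# errors add under multiplication/division. Example values are invented.

def add_measurements(a, da, b, db):
    """Sum of two measurements; absolute uncertainties add."""
    return a + b, da + db

def multiply_measurements(a, da, b, db):
    """Product of two measurements; relative uncertainties add."""
    value = a * b
    relative_error = da / abs(a) + db / abs(b)
    return value, abs(value) * relative_error

# Two readings, each good to +/- 0.5 in the last digit:
total, err = add_measurements(21.3, 0.5, 19.8, 0.5)
print(f"sum     = {total:.1f} +/- {err:.1f}")       # 41.1 +/- 1.0

product, perr = multiply_measurements(21.3, 0.5, 19.8, 0.5)
print(f"product = {product:.1f} +/- {perr:.1f}")    # ~2.3% + ~2.5% ≈ 4.9% error
```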
Erica Thompson has put together a really good piece of work here in drawing attention to the ubiquitous use of models today and also emphasising that the model is not reality. The other thing she does extremely well is draw attention to the fact that even if models give us incorrect answers they can provide us with new insights and suggest further avenues for research. But, most illuminating of all, she draws attention to the fact that the output from models has to be interpreted by society, and that involves certain value judgements.

I really enjoyed the book and learned a lot from it. Here are a number of quotes from the book that made an impact on me or that summarise some of the lines of her argument:
Climate tipping points are absolutely on the radar of mainstream scientific research. It's not that we think these kinds of events can't happen, it's that we haven't developed an effective way of dealing with or formalising our understanding that they could happen. One premise of this book is that unquantifiable uncertainties are important, are ubiquitous, are potentially accessible to us and should figure in our decision-making.

You cannot avoid Model Land by working 'only with data'. Data, that is, measured quantities, do not speak for themselves: they are given meaning only through the context and framing provided by models....... Though Model Land is easy to enter, it is not so easy to leave... Having constructed a beautiful, internally consistent model and a set of analysis methods that describe the model in detail, it can be emotionally difficult to acknowledge that the initial assumptions on which the whole thing is built are not literally true.

Phillips would not have argued that his hydraulic model of the British economy was 'true' or 'false', only that it provided a helpful scaffold for thinking, pursuing the consequences of assumptions and seeing the relations of different parts of the economy from a new perspective.

Box's aphorism has a second part: 'All models are wrong, but some are useful.' Even if we take away any philosophical or mathematical justification, we can of course still observe that many models make useful predictions, which can be used to inform actions in the real world with positive outcomes. Rather than claiming, however, that this gives them some truth value, it may be more appropriate to make the lesser claim that a model has been consistent with observation or adequate for a given purpose.

Depending on the type of model, we may have to ask questions like:
1. What kinds of behaviour could lead to another financial crisis, and under what circumstances might they happen?
2. Will the representations of sea ice behaviour in our climate models still be effective representations in a 2°C-warmer world?
3. What spontaneous changes to social behaviour will occur in the wake of a pandemic?
These are questions that cannot be answered either solely in Model Land or solely by observation (until after the fact): they require a judgement about the relation of a model with the real world, in a situation that has not yet come to pass.

We want to be able to give a narrative of how the model arrived at its outcomes. That might be an explanation that the tank detector is looking for edges, or a certain pattern of sky, or a gun turret. It might be an explanation that a criminal-sentencing algorithm looks at previous similar cases and takes a statistical average. If we cannot explain, then we don't know whether we are getting a right answer for 'the right reasons' or whether we are actually detecting sunny days instead of tanks.

The need for algorithmic explainability and the relation with fairness and accountability, described by Cathy O'Neil in Weapons of Math Destruction, is now acknowledged as being of critical importance for any decision-making structures..... I want to extend this thought to more complex models like climate and economic models, and show that, in these contexts, the value of explainability is not nearly so clear cut.

Most real-world objects...are not close to being mathematical idealisations..... In these cases, we resort to statistics of things that can be observed to infer the properties of the one that cannot be observed or that has not happened yet:
• Which people are sufficiently like me to give a good estimate of my risk of death if I contract influenza?
• Which people are sufficiently like me to give a good estimate of my risk of death if I contract Covid-19?
• Which bicycles are sufficiently like my bicycle to give a good estimate of how many more miles it will go before it needs a new chain?

If I can't make a reasonable model without requiring that π = 4 or without violating conservation of mass, then there must be something seriously wrong with my other assumptions. In effect, we are encoding a very strong assumption (or Bayesian prior) that π really should be 3.14159 and mass really should be conserved - and we would be willing to trade off almost anything else to make it so....... Our cultural frame for mathematical modelling tends to mean that we start with the mathematics and work towards a representation of the world, but it could be the other way around.
After all, who said there were any real laws in the first place? Even the most concrete formulations of natural order are only observationally determined and only statements of our best knowledge at the current time...... In this sense 'real' laws are only model laws themselves. Nancy Cartwright has written in detail about how scientific laws, when they apply, apply only with all other things being equal. No wind resistance, no measurement biases, no confounding interactions... all predictions are conditional predictions, or what some sciences prefer to call projections. Conditional predictions are only predictions if a certain set of conditions are true.

Complex models, however, have numerous outputs, so to make an ordered ranking we have to find some way to collapse all of this complex output to a single number representing 'how good it is'..... This will give a single value for each model which can then be compared with the other models...... But... which variables are important? If more than one, are they equally important?... How much does being slightly wrong matter? Should there be a big penalty for getting the prediction wrong, or a small one?... etc.
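As an illustration of how arbitrary that collapse can be, here is a hypothetical sketch (my own construction; the variable names and weights are invented, not from the book) of a weighted score, where changing the weights changes how good the same model looks:

```python
# A hypothetical illustration of the "collapse to a single number" step.
# Choosing the weights is itself exactly the value judgement the passage
# asks about; nothing in the model tells you what they should be.
import math

def model_score(predicted, observed, weights):
    """Weighted root-mean-square error across named output variables."""
    total = sum(w * (predicted[k] - observed[k]) ** 2 for k, w in weights.items())
    return math.sqrt(total / sum(weights.values()))

observed  = {"temperature": 1.1, "rainfall": 0.9, "sea_level": 0.3}
predicted = {"temperature": 1.3, "rainfall": 0.7, "sea_level": 0.2}

# The same model gets a different score once temperature is weighted double:
print(model_score(predicted, observed, {"temperature": 1, "rainfall": 1, "sea_level": 1}))
print(model_score(predicted, observed, {"temperature": 2, "rainfall": 1, "sea_level": 1}))
```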

Large and complex models may have many parameters that can be varied, and the complexity of doing so increases combinatorially. Taking a basic approach and varying each parameter with just a 'high', 'low' and 'central' value, with one parameter we need to do three model runs, with two parameters nine, with three parameters twenty-seven; by the time you have twenty parameters you would need 3,486,784,401 runs.
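A quick sketch (mine, not the book's) confirming that arithmetic, with a tiny grid enumeration for the three-parameter case:

```python
# The run counts quoted above: sweeping each of k parameters over
# {'low', 'central', 'high'} requires 3**k model runs.
from itertools import product

for k in (1, 2, 3, 20):
    print(f"{k:2d} parameters -> {3 ** k:,} runs")  # 3, 9, 27, 3,486,784,401

# The full grid can only be enumerated for small k:
grid = list(product(["low", "central", "high"], repeat=3))
assert len(grid) == 27
```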

For some time the guidance offered by the Bank of England's probability forecasts of economic growth noted a conditionality on Greece remaining a member of the eurozone, so by implication they would be uninformative if Greece were to have exited.
What this means is that attempting to provide a full Bayesian analysis of uncertainty in a 'climate-like' situation is a waste of time if you do not also and at the same time issue guidance about the possible limitations.

The question is whether the models (books, movies, assumptions, scientific processes) we are exposed to are sufficiently varied to achieve that, or whether they have an opposite effect of making us see only through the eyes of one group of people. As David Davies notes, 'films that present geopolitical events as clashes between forces of good and forces of evil do not furnish us with cognitively useful resources if we are to understand and negotiate the nuanced nature of geopolitical realities'.
Simplified historical and political models are manipulative in similar ways, embedding sweeping value judgements that not only reflect the prejudices of their creators, but also serve to reinforce the social consensus of those prejudices.

Other shared models (or metaphors) include the household budget model of a national economy, which suggests prudent spending and saving but does not reflect the money-creating abilities of national governments..... Sociologist Donald MacKenzie described the Black-Scholes model as 'an engine, not a camera' for the way that it was used not just to describe prices but directly to construct them. This is a strong form of performativity, more like a self-fulfilling prophecy, where the use of the model directly shapes the real-world outcome in its own image..... Counter-performative models might, for example, forecast some kind of bad outcome in a 'business-as-usual' or no-intervention scenario, with the aim of motivating change that will avoid that outcome. Examples include forecasts of the unrestricted spread of Covid-19 in the spring of 2020, which motivated lockdown and social-distancing policies that reduced the spread....... If a central bank were to predict a financial crisis, under any conditions we can be pretty certain that one would immediately occur...... The forecast is a part of a narrative, and is part of the policy and intervention itself rather than being a disinterested observer. An engine, not a camera; a co-creator of truth, not a predictor of truth.

My point is that, regardless of any justification or truth value, the framework by itself can be a positive influence on the actions and outcomes. Use of such a framework can convey complex information in a simple and memorable format, systematise potential actions so that they can be confidently undertaken even given uncertainty about the future.

Who makes models? Most of the time, experts make models.....Hopefully, they are experts with genuine expertise in the relevant domain: a volcanologist making a model of Mount Pinatubo; a marketer making a model of how people respond to different kinds of advertisements; a paediatrician making a model of drug interactions in child patients..... The first-guess model is very rarely the stopping point. Although it is a direct product of their expertise, it might in some ways think quite differently from the expert..... The expert shaped the model, and now the model is beginning to shape the expert. Interacting with the model starts to influence how the expert thinks about the real system...... If nine out of ten models do a particular thing, does that mean they are 90% certain to be correct? To make an inference like that, when models are so interconnected, would be, as Ludwig Wittgenstein put it, 'as if someone were to buy several copies of the morning newspaper to assure himself that what it said was true'. A more useful comparison would be to take it as though nine out of ten experts had agreed on that thing...... Why might nine out of ten experts agree on something? It may indeed be because they are all independently accessing truth, and this will give us more confidence in the outcome. Or it might be that they are all paid by the same funder, who has exerted some kind of (nefarious or incidental) influence over the results. Or it may be that they all come from the same kind of background and training.

If someone says that climate change is not happening or that Covid-19 does not exist, they are contradicting observation. If they say that action to prevent climate change or stop the spread of disease is not warranted, they are only contradicting my value judgements...... As such, most of these are social disagreements, not scientific disagreements, although they may be couched in the language of science and framed (incorrectly) as a dispute about Truth...... we need to construct a system that promotes more widespread feeling that experts are trustworthy, including addressing the possibility of conflict of interest directly and ensuring that experts do not all come from the same political and social fold.

If economic models fail to encompass even the possibility of a financial crisis, is nobody responsible for it? Who will put their name to modelled projections?
In my view, institutions such as the IPCC should be able to bridge this accountability gap by offering an expert bird's-eye perspective from outside Model Land.

As we are talking about decision-making, I want again to distinguish models from algorithms, such as those described in Cathy O'Neil's great book Weapons of Math Destruction. Algorithms make decisions and as such they directly imply value judgements.

The models for climate policy which assume that individuals are financial maximisers, and cannot be expected to do anything for others or for the future that is not in their own narrow short-term self-interest, are self-fulfilling prophecies. They limit the kinds of decisions we are even able to consider as possibilities, let alone model in detail.

Mathematical modelling is a hobby pursued most enthusiastically by the Western, Educated, Industrialised, Rich, Democratic nations: WEIRD for short...... The kinds of modelling methods that are most used are also those that are easiest to find funding for, and those that are easiest to get published in a prestigious journal. In this way, formal and informal scientific gatekeeping enforces WEIRD values onto anyone who wants to do science:

One of the traps of Model Land is assuming that the data we have are relevant for the future we expect and that the model can therefore predict. In October 1987, the stock market crashed and market outcomes were not at all the future that was expected, nor were they consistent with any widely used model or prediction based on previously observed data. In particular, it became clear that levels of volatility in stocks are not constant...

The 99th percentile is a useful boundary for what might happen on the worst of the good days, but if a bad day happens, you're on your own. David Einhorn, manager of another hedge fund, described this as 'like an air bag that works all the time, except when you have a car accident'.
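A toy simulation of my own (not from the book; the loss distribution is invented) showing Einhorn's point that the 99th percentile bounds only the ordinary days:

```python
# A 99th-percentile loss bound describes the worst of the ordinary days
# and says nothing about the rare catastrophic days beyond it.
import random

random.seed(0)
losses = [random.gauss(0, 1) for _ in range(10_000)]   # ordinary trading days
losses += [random.gauss(15, 5) for _ in range(20)]     # rare "car accident" days

losses.sort()
p99 = losses[int(0.99 * len(losses))]
print(f"99th-percentile loss: {p99:.1f}")        # bounds the good days only
print(f"worst observed loss:  {losses[-1]:.1f}") # far beyond the 99th percentile
```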

'If you want to trade but a risk manager says there's too much risk, that you can't - well, there goes your fee.' So the incentives are not clearly aligned in the direction of assessing risk correctly.

Which company came off better? Alana's company priced risk 'correctly' and quietly went out of business. Beth's company, underpricing the risk, went from strength to strength. After the catastrophic event occurred, the government deemed that it was unreasonable for policyholders to suffer for the company's failure and bailed out the fund........ Self-interest in principle should include longer-term sustainability of the system, but in practice those market participants who do not price in longer-term sustainability can be more competitive and put the rest out of business.

An increasing number of economists think of themselves as modellers, 'simplifying' reality through models and invoking the necessary assumptions regarding equilibrium, representative agents, and optimisation..... The attachment to a certain way of working means that failures, instead of prompting a rethink of the model, result in a move towards more complexity and elaboration...... Where the modeller endows their model with their own values, priorities and blind spots, the model then reflects those values, priorities and blind spots back....... In this fast-moving field [of economic and financial situations] models are useful for a time - sometimes extremely useful - and then fail dramatically.

Winsberg ......notes the possibility of abrupt and disruptive changes that are not modelled adequately and that could happen over a very short period of time. In failing to model 'known unknowns' like methane release or ecosystem collapse, climate modellers are indeed writing a zero-order polynomial when we know for sure that the truth is something more complex.

If the target of climate policy remains couched in the terms of global average temperature, then stratospheric aerosol geoengineering seems to me to be now an almost unavoidable consequence and its inclusion in Integrated Assessment Models will happen in parallel with the political shift to acceptability.

If we take the Nordhaus model at face value (and, to be clear, I do not think we should), what it implies is that the Paris Agreement is founded on a political belief that less climate change is better even if it costs more. Personally, even if I were to accept DICE, I would still take the stance that the distribution of GDP matters as well as the total amount..... [The implication of such] models is that the first-order economic impact of losing coral reefs, mountain glaciers, sea turtles and people in poor countries is zero compared to the financial benefits of burning fossil fuels in rich countries..... If willingness to pay reflected value, we would find that oxygen is worth much more to an American than to an Ethiopian..... As Mike Hulme has described, concepts of 'optimal' climatic conditions have varied over time. They are invariably produced by dominant groups who cast their own original climate as being 'optimal' for human development, on the grounds that it produced such a wonderful civilisation as their own...... A more modern application of a similar approach is the statistical regression of economic productivity, measured by GDP per capita, against regional climatic variations. Needless to say, this shows that the temperate climates of Europe and North America are the most conducive to economic prosperity...... The major systemic risks they [Chatham House experts] identify as indirect consequences of climatic change include multiple crop failures, food crises, migration and displacement, state failure and armed conflict, none of which was mentioned even by the second academic study I described earlier, in which poorer nations lost up to 80% of their GDP.

The most recent report, published in April 2022, .....concludes that the global economic benefits of limiting warming to 2 degrees C do indeed outweigh the costs of mitigation.

If we want the future to look different from the present, and not just a continuation of all of today's trends, then we have to construct models that are able to imagine something more.

Dame Deirdre Hine wrote a retrospective official review of the UK's response to the 2009 swine flu pandemic, in which she notes the importance and influence of modelling in informing action by forecasting possibilities, developing planning assumptions for operational decisions and suggesting reasonable worst-case scenarios...... The possibility that something very bad could have happened and in the event be shown to have been avoidable was worse than the prospect of being perceived to have overreacted..... there are compounding impacts which mean that a doubly large event (twice as much climatic change or twice as many people affected by a pandemic) incurs more than double the costs..... A further problem is the reflexivity already mentioned: if models are used to change behaviours, they change the outcomes.

As SAGE scientist Neil Ferguson was quoted as saying in 2020, 'we're building simplified representations of reality. Models are not crystal balls.'
Models in public health refer clearly to benefits and harms of different courses of action, but no model can 'decide' what to do until the relative weightings of different kinds of benefits and harms are specified..... But in order to come to a decision about action, we must decide on some set of values, whether that is by default, diktat or democracy...... These political processes lie outside Model Land, but they are at least as important as the mathematics; generally much more important.

Real people live in the real world, not in Model Land...... With the Marshmallow Test..... In the real world, perhaps their mother will turn up and declare it to be time to leave, or the experiment will turn out to be a trick, or the experimenter will never return. So it might be entirely rational to take the marshmallow.

Sometimes, staying out of Model Land might mean actively choosing the decisions that are less sensitive to future outcomes..... The common theme in these cases is taking action to reduce risk which is not optimised to a perfect model prediction of the future..... Working without computers, humans can often successfully reason their way through problems of deep uncertainty..... Artificial intelligences are not good at robust decision-making without a human in the loop.

The point is not to throw away the insight that has been gained by modelling so far, but to reinforce the foundations of the model as an edifice rather than to continue to balance more and more blocks on top of its shaky spires.

Here are five principles for responsible modelling:
1. Define the purpose... what kinds of questions can the model answer?
2. Don't say 'I don't know'... if the output is wrong, what else can the model show us?
3. Make value judgements... who might disagree?
4. Write about the real world... in what ways is the model inadequate or misinformative?
5. Use many models... insights from a diverse range of perspectives

Although all models are wrong, many are useful..... One way to escape from Model Land is through the quantitative exit, but this can only be applied in a very limited set of circumstances. Almost all of the time, our escape from Model Land must use the qualitative exit: expert judgement...... And if models are very much engines of scientific knowledge and social decision-making rather than simple prediction tools, we also have to consider how they interact with politics and the ways in which we delegate some people to make decisions on behalf of others. The future is unknowable, but it is not ungraspable.
Easily worth five stars from me.
booktsunami | 2 other reviews | Mar 4, 2024 |
This short book takes several hours to read. It canvasses the uses of models - particularly mathematical models of physical, biological, medical, epidemiological, atmospheric, ecological, financial, business and social science processes - and the way models are used to support decisions or mobilize support for decisions. The title and tone are consistent with an attempt to explain modelling to people who don't understand how models are made and used. It falters between being simple and being accurate and current. It does not actually define a mathematical model, although it provides a few examples of simple models. It provides explanations of how weather forecasts, climate change models, epidemiological models and market models are made, and notes the assumptions and biases built into most models. It discusses the failure of financial models to assess the risks that caused the Crash of 2008. It addresses the way models are used to create and sell financial products that charge interest to vulnerable countries but duck risk: catastrophe bonds for hurricane cleanup, and the bonds that underlie World Bank Pandemic Emergency Financing. It is a warning that technocrats and politicians are using models as if models can predict how the world will behave, without discussion of assumptions, bias, and alternatives. The author cautions, referring to climate change, that the abuse of models delays measures to stop carbon emissions by underpricing the cost of geoengineering measures that science fiction writers (near-future climate fiction by Kim Stanley Robinson and Neal Stephenson) and climate change deniers propose to avert climate disasters.
 
BraveKelso | 2 other reviews | Feb 23, 2023 |
