Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
What a load of crap.
If the government are considering locking us down because of the virus then they need to know what's likely to happen with the virus. If the models say that the NHS isn't likely to be overwhelmed but those models are disregarded in favour of those that say it is, then that's operating with false information.
Your viewpoint is slightly coloured by the fact you don't want a lockdown.
I don't know what to think anymore, I don't know who I can trust
I'm going to be honest here and maybe disappoint a few people - I'm actually still open to the idea of a lockdown if there is data or evidence that can be presented to support it. The issue is there is a lack of evidence to support one. If the data changes to support one - specifically the non-incidental COVID hospital admissions, the 1-week non-incidental in-hospital rate and the 1-week total bed occupancy rate - then I'll reconsider.
What we have at the moment is data models with slightly odd inputs on average infection severity and vaccine efficacy. That's not evidence. The same models said we'd have up to 7k hospitalisations per day if we went ahead with step 4 unlockdown and we know that didn't happen.
Monday update so 3 days worth of hospital data and the answer is... no surge!
Hmm, not so sure about that. The key one to watch is London: 193, 220, 210 against 142, 162, 166 last week. That's not as bad as it might be, but it's still a significant increase. What matters is whether it's just the beginning of a larger uptick which is going to get a lot worse, or part of a manageable increase.
IF the hospitalisation rate remains constant, then we should see something like 300-330 for the 19th, 440-470 for the 20th, and 600+ for the 21st and 22nd.
And we're now looking at the period where Omicron should be the biggest contributor, so that's a useful predictive check.
Low 300s reported tomorrow for London admissions from the 19th, mid 400s reported on Wednesday for the 20th, and 600+ reported on Thursday for the 21st and we should get concerned that severity is NOT lower.
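For anyone who wants the logic spelled out, here's a rough back-of-the-envelope in Python. The lagged case-growth ratios and last week's admissions for the 19th-21st are my illustrative assumptions, not real figures:

```python
# Sketch of the projection above. Assumption: if the hospitalisation rate
# per infection stays constant, admissions scale with cases, with a lag.
# Both dictionaries below are illustrative stand-ins, not real data.
last_week_admissions = {"19th": 150, "20th": 155, "21st": 160}
case_growth_ratio = {"19th": 2.1, "20th": 2.9, "21st": 3.8}  # assumed week-on-week case growth

for day, base in last_week_admissions.items():
    projected = base * case_growth_ratio[day]
    print(f"{day}: ~{projected:.0f} London admissions if severity is unchanged")
```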
We're told that the more likely scenario - as disclosed by The Times at the weekend - is a two-week circuit breaker after Christmas
The 28th has been pencilled in by officials as the starting point for the new curbs - again taking into account the 48 hours needed for recall
What an utter shitshow
I think we can expect widespread disobedience this time around. Maybe enough to make the efforts entirely futile.
There won't be, though. My parents will follow it. So I won't be able to see them. And there will be the issue from last time of 'does family x feel the same way, or will they be horrified if we suggest the children see their friends'? Some people will be able to break it with impunity, but most will just sit inside and seethe.
I think that's overly optimistic/pessimistic. Talking to people in my own group (I am of course aware that they may not be representative, but I don't have much else to go on):
- My parents are in their 60s, one of them quite vulnerable; they've been pretty obedient, now saying they won't follow it.
- Friend in his late 50s, diabetic, has been off out almost every night for the last week. He was very careful pre-vaccination and has said he won't lock himself down again post-vaccination.
- Friends more my age (late 20s and 30s), pretty much regardless of where they sat on the compliance spectrum for past restrictions, have said not a chance in hell this time.
All triple vaxxed or about to be for what it's worth.
FWIW the people I know are mostly planning to see family at Xmas, though a few with very elderly relatives have cancelled. Everyone has cancelled any events this week, and everyone is postponing decisions until after Xmas to see what the disease is doing. The general feeling is that the rules are too lax, so they need to err on the side of caution. Like Cookie, though, I think a post-Xmas lockdown will be seen by most as an acceptable compromise.
The contrasting experiences show we can't really generalise. But I agree that a ban on Christmas meetings would have had real resistance.
Nick please engage your brain on this. It’s not an acceptable compromise. It’s a total fucking travesty.
We are drowning in too much poor quality information with insufficient context. Imagine if in normal times South African doctors reported that a new cold virus was circulating there. It was interesting because it seemed to have quite fast transmission rates. But not to worry, it doesn’t seem to really be causing much in the way of serious illness or impacting hospitals much.
Then a bunch of scientists in other countries all said, “huh yeah we have that too. Oh well. Doesn’t seem to be doing much here either”. And the head of the CDC in the US said “yeah we’ve looked at that, don’t worry about that”.
It wouldn’t have made the news. And interestingly if you have been reading the South African news the last couple of weeks, you’ll have noticed that it’s not in the news now! Try the US. Where even Biden is downplaying the need for any major reaction.
And then see here. We as a nation have gone completely fucking gaga that you think criminalising normal social and economic activity is an appropriate measure against this. And sensible people nod along.
I don't personally think it's an acceptable compromise - I'd have locked down 10 days ago and awaited hard evidence, rather than relying on what South African newspapers say. And like others I feel that hanging about while it spreads and then locking down is probably the worst of both worlds.
What I said, though, is that most people would probably see it as an acceptable compromise. I try to be realistic even when I don't like it. You can see the mixed reaction on this thread alone, and PB tends to be more anti-restrictions than the general public.
If people see locking down 3 weeks late as an acceptable compromise we need to redesign our education system.
There is no acceptable compromise. As Yoda said: Do or Do Not.
Though waiting three weeks for accurate data, as opposed to making a rushed decision out of fear and a lack of knowledge, is entirely appropriate. That's not late, that's acting when you've got the relevant data. Better still would be never to act at all, though.
But 3 weeks later, with an R0 of 2 or more and an incubation period of 3 days, you have a starting point of omicron infections that is 128 times bigger than it would have been if you had locked down 3 weeks early.
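The arithmetic, for anyone checking it (a minimal sketch; the 3-day generation interval and R0 of 2 are the assumptions stated above):

```python
# 3 weeks = 21 days; one generation every 3 days; each generation at least
# doubles infections when R0 >= 2. So: 21 / 3 = 7 doublings, and 2**7 = 128.
generation_days = 3
delay_days = 21
doublings = delay_days // generation_days  # 7
print(2 ** doublings)  # 128x bigger starting point of infections
```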
and if omicron is as infectious as it appears to be it will have burnt itself out with no viable people left to infect within 4-6 weeks max.
Far, far better to have 128 times bigger than to act when acting isn't necessary.
Monday update so 3 days worth of hospital data and the answer is... no surge!
Hmm, not so sure about that. The key one to watch is London: 193, 220, 210 against 142, 162, 166 last week. That's not as bad as it might be, but it's still a significant increase. What matters is whether it's just the beginning of a larger uptick which is going to get a lot worse, or part of a manageable increase.
IF the hospitalisation rate remains constant, then we should see something like 300-330 for the 19th, 440-470 for the 20th, and 600+ for the 21st and 22nd.
And we're now looking at the period where Omicron should be the biggest contributor, so that's a useful predictive check.
Low 300s reported tomorrow for London admissions from the 19th, mid 400s reported on Wednesday for the 20th, and 600+ reported on Thursday for the 21st and we should get concerned that severity is NOT lower.
Below that - good news.
It depends on how much of that is incidental: if 1 in 10 Londoners have COVID, then around 1 in 10 of all admissions, whatever the cause, get recorded in the statistic.
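A one-liner to illustrate that incidental effect (the all-cause admissions figure is made up for the example):

```python
# If 1 in 10 Londoners are infected, roughly 1 in 10 of ALL admissions will
# test positive regardless of why they came in. 3,000/day is illustrative.
prevalence = 0.10
daily_admissions_all_causes = 3000
print(daily_admissions_all_causes * prevalence)  # ~300/day counted "with COVID"
```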
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
No, because what the decision makers need to know more than what the worst-case scenario looks like is how likely the worst case scenario is. If an asteroid was heading towards us, that's what we would want to know - what are the chances it'll hit us? Whether it wipes out humans completely or just reduces us to hunter-gatherer societies seems a secondary question.
But that's not what Nelson was complaining about, was it? He was complaining about the lack of a model for the scenario in which the government would have to do nothing. And we don't know enough to make that judgement at the moment - although thankfully it's looking like the 'best' scenarios might be the ones coming to pass.
But the politicians do need to know how bad it *could* be. Say the asteroid might hit the sea, in which case coastal areas might be affected - they can act on that. Say the asteroid hits on land: they can start mobilising to deal with the affected area. (And the two outcomes in your last line are not the only scenarios in this - they'd be interested in the ones they can actually do something about.)
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
No, because what the decision makers need to know more than what the worst-case scenario looks like is how likely the worst case scenario is. If an asteroid was heading towards us, that's what we would want to know - what are the chances it'll hit us? Whether it wipes out humans completely or just reduces us to hunter-gatherer societies seems a secondary question.
Although TBH I don't think this is a great example. Given the almost zero options we would have to actually do anything about an asteroid heading for us I don't see what difference it would actually make whether we knew in advance if it would hit us or not.
If we knew early enough, there might well be a chance of diverting it. Hence the current test being conducted. And if you look far enough ahead, there is a degree of uncertainty about various asteroid orbits, so modelling does come in to it. The lead times are, of course, a great deal longer, and the parameters rather better understood.
Monday update so 3 days worth of hospital data and the answer is... no surge!
Hmm, not so sure about that. The key one to watch is London: 193, 220, 210 against 142, 162, 166 last week. That's not as bad as it might be, but it's still a significant increase. What matters is whether it's just the beginning of a larger uptick which is going to get a lot worse, or part of a manageable increase.
IF the hospitalisation rate remains constant, then we should see something like 300-330 for the 19th, 440-470 for the 20th, and 600+ for the 21st and 22nd.
And we're now looking at the period where Omicron should be the biggest contributor, so that's a useful predictive check.
Low 300s reported tomorrow for London admissions from the 19th, mid 400s reported on Wednesday for the 20th, and 600+ reported on Thursday for the 21st and we should get concerned that severity is NOT lower.
Below that - good news.
And then on Thursday we get to find out how many of them are really covid at all.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
What a load of crap.
If the government are considering locking us down because of the virus then they need to know what's likely to happen with the virus. If the models say that the NHS isn't likely to be overwhelmed but those models are disregarded in favour of those that say it is, then that's operating with false information.
Your viewpoint is slightly coloured by the fact you don't want a lockdown.
I don't know what to think anymore, I don't know who I can trust
I'm going to be honest here and maybe disappoint a few people - I'm actually still open to the idea of a lockdown if there is data or evidence that can be presented to support it. The issue is there is a lack of evidence to support one. If the data changes to support one - specifically the non-incidental COVID hospital admissions, the 1-week non-incidental in-hospital rate and the 1-week total bed occupancy rate - then I'll reconsider.
What we have at the moment is data models with slightly odd inputs on average infection severity and vaccine efficacy. That's not evidence. The same models said we'd have up to 7k hospitalisations per day if we went ahead with step 4 unlockdown and we know that didn't happen.
Oh I'm happy for a lockdown if there is a valid reason to have one.
But there is no valid reason for one given what we currently know, the implementation delay and the fact that cases are already being reported at record levels.
The number of people on ventilators in London has fallen from 208 to 206 over the weekend. I presume this is still the treatment of choice for the most serious cases?
For Omicron it might not be as necessary as it is for Delta so we could see an odd scenario of the headline numbers rising very rapidly because of incidental admissions (that is someone who comes in with a broken leg but also has COVID) but the actual severe cases that need mechanical ventilation falling.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
The chair of the Sage modelling programme isn't good at explaining things to non-experts*, which is a weakness, but he does seem to know what he's doing (as you would hope). And Fraser Nelson isn't as clever as he thinks he is. His questions are good ones, but if you are not trying to understand the responses of the expert but are instead looking for gotchas, you are likely to go astray, as he does. He fundamentally misunderstands the purpose of the models, when he could instead have helped readers get a better understanding of them, including their limitations, assumptions and any flaws.
* Chris Whitty is very good at explaining things to non-experts. He treats them as intelligent people who want to find out about the subject and doesn't patronise them.
To be fair, some of the concepts are hard to explain within the time limit of the modern sound bite. Hats off to those who are good at it. Science really needs more of them.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
The decision makers don't need to know what the relative chance of it hitting or missing is?
In the case we're in, we need to know, but we don't, and decisions may still need to be made. Hence the difficulty of the decisions. That's why I put the second line in.
Only up 8000 since Friday. Has the exponential increase stopped?
I've always been suspicious that the doubling time would remain constant. Surely as more are infected there's fewer left to infect.
Surely the 91000 is a combination of delta + omicron?
Total number of cases according to Sky is 45,145, which is 8,000 more than yesterday. On two days' figures only, this represents doubling every 3.5 days. I'm sure that if we were to use a week's worth of figures we'd get a more accurate doubling rate.
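The calculation, for what it's worth (same two figures as above; a week of data would obviously be better):

```python
import math

# Doubling time implied by two days' figures: 45,145 today vs 37,145 yesterday.
today, yesterday = 45145, 45145 - 8000
daily_growth = today / yesterday
doubling_time = math.log(2) / math.log(daily_growth)
print(f"{doubling_time:.1f} days")  # ~3.5-3.6 days
```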
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
What a load of crap.
If the government are considering locking us down because of the virus then they need to know what's likely to happen with the virus. If the models say that the NHS isn't likely to be overwhelmed but those models are disregarded in favour of those that say it is, then that's operating with false information.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
No, because what the decision makers need to know more than what the worst-case scenario looks like is how likely the worst case scenario is. If an asteroid was heading towards us, that's what we would want to know - what are the chances it'll hit us? Whether it wipes out humans completely or just reduces us to hunter-gatherer societies seems a secondary question.
But that's not what Nelson was complaining about, was it? He was complaining about the lack of a model for the scenario in which the government would have to do nothing. And we don't know enough to make that judgement at the moment - although thankfully it's looking like the 'best' scenarios might be the ones coming to pass.
But the politicians do need to know how bad it *could* be. Say the asteroid might hit the sea, in which case coastal areas might be affected - they can act on that. Say the asteroid hits on land: they can start mobilising to deal with the affected area. (And the two outcomes in your last line are not the only scenarios in this - they'd be interested in the ones they can actually do something about.)
I think what Nelson was complaining about was that we were modelling what a worst-case scenario looked like without modelling how likely that was, or what the most likely case looked like.
It's as well to know what the worst is. But you also need to know how likely it is. What if there is a tidal wave in the Atlantic, for example? Cardiff might be destroyed. So let's move Cardiff to Ebbw Vale. But in reality the chances of the worst-case scenario coming to pass are so slim that we leave Cardiff where it is. [Interesting fact: Cardiff is the city in the UK most at risk from a tsunami].
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
What a load of crap.
If the government are considering locking us down because of the virus then they need to know what's likely to happen with the virus. If the models say that the NHS isn't likely to be overwhelmed but those models are disregarded in favour of those that say it is, then that's operating with false information.
Your viewpoint is slightly coloured by the fact you don't want a lockdown.
I don't know what to think anymore, I don't know who I can trust
I'm going to be honest here and maybe disappoint a few people - I'm actually still open to the idea of a lockdown if there is data or evidence that can be presented to support it. The issue is there is a lack of evidence to support one. If the data changes to support one - specifically the non-incidental COVID hospital admissions, the 1-week non-incidental in-hospital rate and the 1-week total bed occupancy rate - then I'll reconsider.
What we have at the moment is data models with slightly odd inputs on average infection severity and vaccine efficacy. That's not evidence. The same models said we'd have up to 7k hospitalisations per day if we went ahead with step 4 unlockdown and we know that didn't happen.
Oh I'm happy for a lockdown if there is a valid reason to have one.
But there is no valid reason for one given what we currently know, the implementation delay and the fact that cases are already being reported at record levels.
And yet you'd be happy to have one, even when it isn't necessary, based on incomplete data?
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
The decision makers don't need to know what the relative chance of it hitting or missing is?
In the case we're in, we need to know, but we don't, and decisions may still need to be made. Hence the difficulty of the decisions. That's why I put the second line in.
Using incorrect/out-of-date data in one model would suggest it is an unlikely outcome, don't you think? In your analogy, it'd be like using incorrect ranging data to the asteroid when predicting its trajectory.
I've studied your charts quite closely and I think I have spotted an important trend (this is tentative but I thought I would say it).
Here it is, mock me if you like:
Cases are growing quite fast in London.
Indeed, but look closer, the R rate rise had already begun to slow down at the end of last week. It's possible that Omicron is already running out of gas.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
The decision makers don't need to know what the relative chance of it hitting or missing is?
No. Not when the data is not good enough.
Low confidence in probability prediction and low confidence in impact prediction OR probability approaching 0 and impact approaching infinity BOTH mean that any numbers are meaningless and we have to adopt qualitative as opposed to quantitative approaches to risk management.
I agree. If the NHS does need action to help it through a tsunami surge, there's a right time and a wrong time to take that action.
I have read through so much stuff on PB about Omicron, but have also read here that circuit breakers don't work. However, if the PB collective mind is right that the difference with Omicron is rampant transmissibility - blowing in fast and huge, but then out quickly, and hopefully with less of the lung and organ damage in the wave's wake compared to Delta - then in this particular instance hasn't PB actually made the case for a short circuit breaker to protect the NHS during the rapid surge and peak of this wave?
Do you see my point? Different variant, different wave, so different assumed wisdom on how to manage it?
The key questions are:
1. What would restrictions be trying to achieve?
2. Would they work?
3. Does the collateral damage of the restrictions mean they are worse than doing nothing?
The answer to 1 is very clear: they'd be trying to limit the possible hit to the NHS from large numbers of admissions crowded into a short peak, and also they'd be trying to buy time for more boosters to go into arms, and for more already-administered boosters to become fully effective.
The answers to 2 and 3 are much more difficult, especially given the uncertainties on how bad Omicron will be.
What we do know, though, is that, if they are to work, they need to be done very quickly indeed.
My view is that it's probably too late already, but one can't have too much confidence in any conclusion, given the uncertainties.
I agree with you. And if the history books are hard on the government for never acting in time and for having their blocks of measures on the wrong part of the Gantt chart, then the scientists have to share the blame too.
I actually feel quite optimistic to be honest with you all.
If they say 2 weeks and you are thinking that's really three months, then, like a long business meeting where the first 10 minutes may have been useful and the rest pointless, sure you will have a negative view - negative without really knowing whether it is as bad as three months rather than 2 weeks. What if it was just that first useful bit, that short useful bit, timed perfectly and then stopped?
The experts seem to say covid will likely mutate into something less and less severe and end up like a cold. Whitty said it's not over, but break the next 18 months into 3 blocks and each 6 months will be better than the last.
Now, doesn't that feel positive - like coming out of the back of something, not starting into it? Like when the bad weather blows away and you can see the sun breaking through from the direction the weather is coming from?
If you close your eyes, can you see that? Can you see those rays of hope?
On that note, we’re walking to the pub whilst it’s still open. I’m horse riding tomorrow 🙋♀️
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
The chair of the Sage modelling programme isn't good at explaining things to non-experts*, which is a weakness, but he does seem to know what he's doing (as you would hope). And Fraser Nelson isn't as clever as he thinks he is. His questions are good ones, but if you are not trying to understand the responses of the expert but are instead looking for gotchas, you are likely to go astray, as he does. He fundamentally misunderstands the purpose of the models, when he could instead have helped readers get a better understanding of them, including their limitations, assumptions and any flaws.
* Chris Whitty is very good at explaining things to non-experts. He treats them as intelligent people who want to find out about the subject and doesn't patronise them.
Agreed. Listening to Whitty is to attend a masterclass in the art of communication.
- Cases rising. In London, skyrocketing. But still massively biased towards rises in the younger, less vulnerable (40 or less) groups. Interesting upticks in several areas outside London - but London is in a league of its own - see the regional R numbers below.
- Hospitalisations still rising, but very slowly.
- Deaths are still trending down.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
No, because what the decision makers need to know more than what the worst-case scenario looks like is how likely the worst case scenario is. If an asteroid was heading towards us, that's what we would want to know - what are the chances it'll hit us? Whether it wipes out humans completely or just reduces us to hunter-gatherer societies seems a secondary question.
But that's not what Nelson was complaining about, was it? He was complaining about the lack of a model for the scenario in which the government would have to do nothing. And we don't know enough to make that judgement at the moment - although thankfully it's looking like the 'best' scenarios might be the ones coming to pass.
But the politicians do need to know how bad it *could* be. Say the asteroid might hit the sea, in which case coastal areas might be affected - they can act on that. Say the asteroid hits on land: they can start mobilising to deal with the affected area. (And the two outcomes in your last line are not the only scenarios in this - they'd be interested in the ones they can actually do something about.)
But the problem is that, by not having scenarios with lower inputs, you get statements implying that the best-case scenario from Sage is a peak of 3,000 hospitalisations a day. And that is not correct.
I've studied your charts quite closely and I think I have spotted an important trend (this is tentative but I thought I would say it).
Here it is, mock me if you like:
Cases are growing quite fast in London.
Indeed, but look closer, the R rate rise had already begun to slow down at the end of last week. It's possible that Omicron is already running out of gas.
Even with UK wide number bouncing back up today, London's by reporting date cases barely rose.
The point is to compare the admissions ratio graph (left plot above) with this case ratio graph. The last 9 values here ⬇️ have all been huge (1.8+) - and I think if Omicron had the same hospitalisation rate as Delta then the admissions ratio should be heading up similarly by now.
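A rough sketch of that check in Python - the two series are illustrative stand-ins for the charts, not the real data, but they show the shape of the comparison:

```python
# Compare week-on-week case ratios (1.8+ lately) with week-on-week
# admissions ratios. If Omicron were as severe as Delta, the admissions
# ratios should be heading towards 1.8+ too, after the case-to-admission lag.
cases = [30000, 32000, 35000, 54000, 60000, 66000, 99000, 110000, 122000]
admissions = [150, 155, 160, 193, 220, 210]

def ratios(series, lag=3):  # lag=3 only because the toy "weeks" are 3 days long
    return [round(series[i] / series[i - lag], 2) for i in range(lag, len(series))]

print("case ratios:", ratios(cases))             # all 1.8+
print("admissions ratios:", ratios(admissions))  # ~1.3-1.4 so far
```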
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
The decision makers don't need to know what the relative chance of it hitting or missing is?
No. Not when the data is not good enough.
Low confidence in probability prediction and low confidence in impact prediction OR probability approaching 0 and impact approaching infinity BOTH mean that any numbers are meaningless and we have to adopt qualitative as opposed to quantitative approaches to risk management.
So it's okay to use one set of data that isn't good enough, but not another? That's the central issue, that the assumptions being used by the models are on the pessimistic side.
Only up 8000 since Friday. Has the exponential increase stopped?
I've always been suspicious that the doubling time would remain constant. Surely as more are infected there's fewer left to infect.
Cases have been roughly level since 16th December - that is five days in a row. They were supposed to be growing with a doubling time of two days or less. Cases should therefore now be over 360,000 reported cases per day - two doublings. Today's actual figure is 91,743. This is simply incompatible with the advice of SAGE's very eminent scientists that cases are exploding out of control. Could these eminent scientists have made incorrect assumptions? Science history shows that this has happened many times in the past.
Why should the evidence of the epidemic in South Africa be discounted, as SAGE have done? Until SAGE have fully explained and justified their case, it should not be accepted.
Somebody has told Javid that infections are x7 cases. Now are they trying to say that testing just can't keep up with the actual rate of infections, or that massive numbers are asymptomatic or that loads of people aren't testing / reporting themselves?
That x7 multiplier is enormous.
If cases stay level for just two more days, they will need to invoke a multiplier of x28 = x(7x4) to explain it. The much simpler explanation is that cases look level because they are level at the moment. For how long this will last is another question. We do know for a fact however that there has been very limited mortality from Omicron in SA compared to mortality from Delta.
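To make the multiplier point explicit (the doubling times are my assumptions; x28 corresponds to a 1-day doubling, x14 to a 2-day one):

```python
# If reported cases stay level for two more days while "true" infections
# keep doubling, Javid's x7 infections-to-cases multiplier has to grow.
base_multiplier = 7
extra_level_days = 2

for doubling_time in (1, 2):  # assumed doubling times in days
    required = base_multiplier * 2 ** (extra_level_days / doubling_time)
    print(f"doubling time {doubling_time}d -> needs x{required:.0f}")
```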
I increasingly believe that there will need to be a public inquiry in due course into the scientific advisory and decision-making process in this pandemic in the UK. The quality of the scientific debate has not lived up to the standards reached in other areas of the response to the pandemic.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
What a load of crap.
If the government are considering locking us down because of the virus then they need to know what's likely to happen with the virus. If the models say that the NHS isn't likely to be overwhelmed but those models are disregarded in favour of those that say it is, then that's operating with false information.
It's not (see TimT's reply).
It is.
If the government is weighing up their response then they need the full information. If the full information says, for instance, that there's a 99.9% chance that the NHS won't be overwhelmed, but a 0.1% chance that it will be - then do you seriously think the government should only be shown the 0.1% scenario, without any qualification, caveat or rating of how likely it is?
They should get the full information, and be allowed to judge with full knowledge whether the risk of these so-called "never events" are worth acting over or not. If they don't have the full information, then they can't weigh that up.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
The chair of the Sage modelling programme isn't good at explaining things to non-experts*, which is a weakness, but he does seem to know what he's doing (as you would hope). And Fraser Nelson isn't as clever as he thinks he is. His questions are good ones, but if you are not trying to understand the responses of the expert but are instead looking for gotchas, you are likely to go astray, as he does. He fundamentally misunderstands the purpose of the models, when he could instead have helped readers get a better understanding of them, including their limitations, assumptions and any flaws.
* Chris Whitty is very good at explaining things to non-experts. He treats them as intelligent people who want to find out about the subject and doesn't patronise them.
To be fair, some of the concepts are hard to explain within the time limit of the modern sound bite. Hats off to those who are good at it. Science really needs more of them.
There are some very good scientist communicators out there, but we haven’t seen many of them during the pandemic.
I recall one university (Bristol?) a few years ago having a lady with a job title that was something like Professor of Science Engagement, who was on TV a lot, very good at explaining complex concepts in simple language.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
No, because what the decision makers need to know more than what the worst-case scenario looks like is how likely the worst case scenario is. If an asteroid was heading towards us, that's what we would want to know - what are the chances it'll hit us? Whether it wipes out humans completely or just reduces us to hunter-gatherer societies seems a secondary question.
Although TBH I don't think this is a great example. Given the almost zero options we would have to actually do anything about an asteroid heading for us I don't see what difference it would actually make whether we knew in advance if it would hit us or not.
This is one of the great myths about asteroids: that a hit will be another event such as the one that may have ended the dinosaurs' reign on Earth (thanks, Hollywood!). It isn't: you might have (say) something the size of the Tunguska event (or a few times larger) - something that will devastate part of a region, or all of a region, but not the Earth. In which case, even with warning, and even with the possibility of diverting it before it hits, politicians may want to consider what they can do if it does hit near them.
Stockpile food and medicine. Mobilise the military, to help other countries or ourselves. Move people away from coastal regions before it hits. Try to make power and communications more rugged (and good luck with that!)
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
The decision makers don't need to know what the relative chance of it hitting or missing is?
No. Not when the data is not good enough.
Low confidence in probability prediction and low confidence in impact prediction OR probability approaching 0 and impact approaching infinity BOTH mean that any numbers are meaningless and we have to adopt qualitative as opposed to quantitative approaches to risk management.
That's what confidence intervals etc are for.
Considering the cost of what is being proposed, the scenarios where action isn't necessary absolutely should be included in the information before acting.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
What a load of crap.
If the government are considering locking us down because of the virus then they need to know what's likely to happen with the virus. If the models say that the NHS isn't likely to be overwhelmed but those models are disregarded in favour of those that say it is, then that's operating with false information.
It's not (see TimT's reply).
It is.
If the government is weighing up their response then they need the full information. If the full information says, for instance, that there's a 99.9% chance that the NHS won't be overwhelmed, but a 0.1% chance that it will be - then do you seriously think the government should only be shown the 0.1% scenario, without any qualification, caveat or rating of how likely it is?
They should get the full information, and be allowed to judge with full knowledge whether the risk of these so-called "never events" are worth acting over or not. If they don't have the full information, then they can't weigh that up.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
The chair of the Sage modelling programme isn't good at explaining things to non-experts*, which is a weakness, but he does seem to know what he's doing (as you would hope). And Fraser Nelson isn't as clever as he thinks he is. His questions are good ones, but if you are not trying to understand the responses of the expert but are instead looking for gotchas, you are likely to go astray, as he does. He fundamentally misunderstands the purpose of the models, when he could instead have helped readers get a better understanding of them, including their limitations, assumptions and any flaws.
* Chris Whitty is very good at explaining things to non-experts. He treats them as intelligent people who want to find out about the subject and doesn't patronise them.
To be fair, some of the concepts are hard to explain within the time limit of the modern sound bite. Hats off to those who are good at it. Science really needs more of them.
There are some very good scientist communicators out there, but we haven’t seen many of them during the pandemic.
I recall one university (Bristol?) a few years ago having a lady with a job title that was something like Professor of Science Engagement, who was on TV a lot, very good at explaining complex concepts in simple language.
Jim Al-Khalili is great at this in a non-patronising way. Brian Cox is ok too but a little too "wow science, amazing" for me.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
What a load of crap.
If the government are considering locking us down because of the virus then they need to know what's likely to happen with the virus. If the models say that the NHS isn't likely to be overwhelmed but those models are disregarded in favour of those that say it is, then that's operating with false information.
It's not (see TimT's reply).
It is.
If the government is weighing up their response then they need the full information. If the full information says, for instance, that there's a 99.9% chance that the NHS won't be overwhelmed, but a 0.1% chance that it will be - then do you seriously think the government should only be shown the 0.1% scenario, without any qualification, caveat or rating of how likely it is?
They should get the full information, and be allowed to judge with full knowledge whether the risk of these so-called "never events" are worth acting over or not. If they don't have the full information, then they can't weigh that up.
And the NHS "getting overwhelmed" is not an ELE (extinction-level event).
The NHS "getting overwhelmed" should have "triage" as a solution and not just lockdown.
Nearly half of London's top theatres had to cancel performances this weekend due to Covid cases, as Omicron disrupts live events. Of the 46 full members of the Society of London Theatre that had shows running, 22 of them scrapped performances.
Will the last person in London without COVID please remember to pick up some milk from the shops.....
Only up 8000 since Friday. Has the exponential increase stopped?
I've always been suspicious that the doubling time would remain constant. Surely as more are infected there's fewer left to infect.
Cases have been roughly level since 16th December - that is five days in a row. They were supposed to be growing with a doubling time of two days or less. Cases should therefore now be over 360,000 reported cases per day - two doublings. Today's actual figure is 91,743. This is simply incompatible with the advice of SAGE's very eminent scientists that cases are exploding out of control. Could these eminent scientists have made incorrect assumptions? Science history shows that this has happened many times in the past.
Why should the evidence of the epidemic in South Africa be discounted, as SAGE have done? Until SAGE have fully explained and justified their case, it should not be accepted.
I hope, I hope, you are not trying to fit Reporting Date data to reality there.
How long have we been doing this? When cases are rising they rise in weekly steps starting on Wednesday, when you look at the By Reporting Date figures.
This week (starting Wednesday) so far is an average of 87,513. Last week the average was 53,943. The week before: 48,127. The week before that: 42,936.
And each of those weeks was flat, often with the Tuesday figure lower than the preceding Wednesday figure.
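Those weekly steps in one small sketch (using the four averages just quoted):

```python
# Week-on-week growth from the Wednesday-to-Tuesday averages above.
weekly_averages = [42936, 48127, 53943, 87513]  # oldest week first

for prev, curr in zip(weekly_averages, weekly_averages[1:]):
    print(f"{prev} -> {curr}: x{curr / prev:.2f} week-on-week")
```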
Only up 8000 since Friday. Has the exponential increase stopped?
I've always been suspicious that the doubling time would remain constant. Surely as more are infected there's fewer left to infect.
Cases have been roughly level since 16th December - that is five days in a row. They were supposed to be growing with a doubling time of two days or less. Cases should therefore now be over 360,000 reported cases per day - two doublings. Today's actual figure is 91,743. This is simply incompatible with the advice of SAGE's very eminent scientists that cases are exploding out of control. Could these eminent scientists have made incorrect assumptions? Science history shows that this has happened many times in the past.
Why should the evidence of the epidemic in South Africa be discounted, as SAGE have done? Until SAGE have fully explained and justified their case, it should not be accepted.
Omicron as a component of the total has been growing far more quickly.
COVID-19 hospital admissions in London are still increasing rapidly with 7-day average up 38% week-on-week. North West is up 14% while other English regions saw a fall in admissions.
For England overall, admissions are up 4%, but London gives an indication of what to expect.
Fraser Nelson: "So we have an asteroid that may hit the Earth?" Scientist: "Yes. And we have no idea how likely it is - only that it's heading towards us. NORAD were too busy tracking Santa Claus." Fraser Nelson: "But it may miss." Scientist: "Yes." Fraser Nelson: "So why are you only modelling what will happen if it hits?" Scientist: "Because the decision-makers need to consider what to do if the worst comes to the worst." Fraser Nelson: "But they might not have to do anything if it doesn't hit." Scientist: "But it may. And they need to think about what they'd do." Fraser Nelson: "Why didn't you model the fact it might miss?" Scientist: "Because that doesn't really help the decision-makers."
That Fraser Nelson article in the Spectator is really a whole load of nothing IMO. What the scientist said makes sense.
The chair of the Sage modelling programme isn't good at explaining things to non-experts*, which is a weakness, but he does seem to know what he's doing (as you would hope). And Fraser Nelson isn't as clever as he thinks he is. His questions are good ones, but if you are not trying to understand the responses of the expert but are instead looking for gotchas, you are likely to go astray, as he does. He fundamentally misunderstands the purpose of the models, when he could instead have helped readers get a better understanding of them, including their limitations, assumptions and any flaws.
* Chris Whitty is very good at explaining things to non-experts. He treats them as intelligent people who want to find out about the subject and doesn't patronise them.
To be fair, some of the concepts are hard to explain within the time limit of the modern sound bite. Hats off to those who are good at it. Science really needs more of them.
But the problem here is not how they explain things in news conferences or Twitter soundbites but how they explain them to the decision-makers. And it seems from what has been said in public that they are basing everything - not just planning but action - on the unlikely worst-case scenarios. Planning for the worst is understandable. Actually enacting that planning before the worst has materialised, and when there is reasonable evidence that it never will, is unacceptable given the costs it incurs.
Plus the fact it is largely pointless. I, and millions like me, are now at the stage where we will simply ignore whatever the Government tells us to do. I have my plans for Christmas and New Year which involve seeing plenty of people in different households and I will be going ahead with those no matter what the status of lockdowns.
There are some very good scientist communicators out there, but we haven’t seen many of them during the pandemic.
I recall one university (Bristol?) a few years ago having a lady with a job title that was something like Professor of Science Engagement, who was on TV a lot, very good at explaining complex concepts in simple language.
Jim Al-Khalili is great at this in a non-patronising way. Brian Cox is ok too but a little too "wow science, amazing" for me.
There was a science-based comedy panel show a few years ago, called “Duck Quacks Don’t Echo”, which also featured scientists good at explaining things.
It would be really helpful to have the stats divided into "despite covid" and "because of covid", especially given how prevalent it is in the capital.
Yes, if 1 in 10 Londoners currently has COVID then 1 in 10 admissions will have it too, regardless of why they actually go to hospital (for COVID or for a broken arm).
The decision makers don't need to know what the relative chance of it hitting or missing is?
No. Not when the data is not good enough.
Low confidence in the probability prediction and low confidence in the impact prediction, OR a probability approaching 0 with an impact approaching infinity: BOTH mean that any numbers are meaningless, and we have to adopt qualitative rather than quantitative approaches to risk management.
So it's okay to use one set of data that isn't good enough, but not another? That's the central issue, that the assumptions being used by the models are on the pessimistic side.
Yes, because one set of data contains no useful information even if the data turn out to be correct; whereas the other set does contain useful information even if the data turn out to be incorrect.
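One way to see the point being made about meaningless numbers: when confidence in both the probability and the impact is low, the expected-cost figure spans orders of magnitude and so tells a decision-maker nothing. A minimal sketch, with entirely made-up ranges (nothing here comes from SAGE):

```python
# Hypothetical ranges standing in for "low confidence" in both inputs.
prob_low, prob_high = 1e-4, 1e-1       # plausible range for P(worst case)
impact_low, impact_high = 1e3, 1e6     # plausible range for cost if it happens

# Expected cost = probability x impact, evaluated at the extremes:
lo = prob_low * impact_low             # 0.1
hi = prob_high * impact_high           # 100,000.0
print(f"Expected cost anywhere from {lo} to {hi:,.0f}")
# Six orders of magnitude apart: the point estimate carries no information,
# which is the case for falling back on qualitative risk management.
```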
Quite a simple point, that one about incidental admissions, and surprising that some people here haven't clocked it.
I still can't get my head around the fact you'd want to base policy decisions on a model that is using incorrect input data/assumptions.
That's completely ridiculous and why governments make frankly stupid decisions.
So covid goes the way of Brexit - decisions on how to handle it are now being made entirely with just three constituencies in mind:
1. The Tory-supporting press
2. Tory backbenchers
3. Tory party members
The interests of the country are not of the remotest relevance.
I've studied your charts quite closely and I think I have spotted an important trend (this is tentative but I thought I would say it).
Here it is, mock me if you like:
Cases are growing quite fast in London.
One - Mock me not, sir. No man mocks me, sir. Two - The wise man mocks the man. The mocked man mocks the mocker.
The question is the rate of increase in R - is it slowing?
It seems to be.... but we need more data to be sure.
A lot will depend on how quickly the R drops back to 1 and how far below 1 it drops or if it does at all. That plus the non-incidental admissions rate.
The inputs are not incorrect - they are the worst case.
I guess where we have a slight misunderstanding between us is that I am assuming that, at the time of decision-making, we do not know which assumptions are correct and which are not. I think that is still the case with the Sage assumptions, even if evidence is pointing all in one direction at this time.
If we need restrictions, put them in now. If we need them next week, do we really need them?
It’s not clear we need them.
Given they are not cost-free, that argues for watchful waiting.
But if we wait until it does become clear, and we need to do something, then it is too late, and the costs of doing something are much greater (in terms of life and to the economy, as more people will die and we may need to be in lockdown for longer).
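The "too late" argument rests on exponential growth: every day of waiting multiplies the starting point of any later intervention. A minimal sketch, assuming for illustration a constant daily growth rate (the true rate, and whether it was constant at all, is exactly what was in dispute upthread):

```python
import math

growth_rate = 0.35     # assumed daily exponential growth rate (illustrative)
delay_days = 7         # waiting a week "until it becomes clear"

# Under N(t) = N0 * exp(r * t), a delay scales everything downstream:
multiplier = math.exp(growth_rate * delay_days)
print(f"A {delay_days}-day delay multiplies incidence by ~{multiplier:.1f}x")
# ~11.6x with these inputs -- the case for acting early. The counter-case:
# if growth is already slowing, the multiplier is far smaller, and the
# certain costs of restrictions are then paid for nothing.
```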
It's a really hideous choice to have to make. I don't envy the politicians.
This is connected to Javid's 1 million vaccinations tweet - I hadn't noticed that it has the Tory party logo on it.
Tom Peck
@tompeck
Honestly think putting their party logo on the national vaccination effort and its thousands of volunteers is the most shameless thing this lot have ever done.
There’s a madness among the people who want to implement severe restrictions “just in case”. Their logic means we will be facing these interventions most years for the rest of our lives, as there will always be a “just in case” argument.
We're getting the counterfactual. There's no appetite, in government if not within the population, for further restrictions at this point. It will be whatever Omicron throws at us. Que sera, sera, as someone once sang.
Goes away to check his tins of baked beans...
Surely you'd only need to check your tins if there was a lockdown?
Raises an interesting point. We hear a lot on these pages about people who will refuse to follow restrictions. What about those who think there should be restrictions even in the absence of government instruction, and who are acting accordingly - all the people cancelling restaurant bookings etc.? Not a small number of people. They won't be happy if they restrict themselves and the health service collapses anyway, or if avoidably stringent restrictions come in later. There's a potential political price to pay for getting this wrong - not just an economic one and one in health outcomes.
No, the assumptions are incorrect. I think these figures were presented as the central prediction, not the absolute worst-case scenario. It was 6,000 deaths per day at the peak if nothing is done.
To decide to do nothing is in itself both a decision and an action.
If the data says to do nothing then that is useful information. It is useful to know you shouldn't be acting.
To know when not to act is just as important as to know when you need to do so.
We normally agree on much, Max. But this is not ridiculous. This is decision-making in conditions of ignorance (i.e. low confidence in predictions of probability and impact). If you want some scientific articles on this, look up Andy Stirling's work, or anything on HROs (Weick and Sutcliffe, La Porte, and many more).
13,856,823 third doses left to give in England and Scotland, as of yesterday...
The target, regardless of whether or not we hit it, has been really useful. I think we will get another 4-6m knocked off that figure before Xmas day, then maybe another 2m by NYD, leaving just 8m or so for the first and second weeks of January.
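For what it's worth, the poster's arithmetic holds together; a trivial sketch using their own speculative figures:

```python
remaining = 13_856_823        # third doses left, per the post above
before_xmas = 4_000_000       # low end of the "4-6m" guess
xmas_to_nyd = 2_000_000       # the "maybe another 2m" guess

left_for_january = remaining - before_xmas - xmas_to_nyd
print(f"{left_for_january:,} left for early January")
# 7,856,823 -- "8m or so" on the low end of the Xmas guess; 5.9m on the high end.
```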
It's not operating with false information (see TimT's reply).
It is.
If the government is weighing up their response then they need the full information. If the full information says, for instance, that there's a 99.9% chance that the NHS won't be overwhelmed, but a 0.1% chance that it will be - then do you seriously think the government should only be shown the 0.1% scenario without any qualifying caveat or rating of how likely it is?
They should get the full information, and be allowed to judge with full knowledge whether the risk of these so-called "never events" is worth acting over or not. If they don't have the full information, then they can't weigh that up.
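To make the weighing-up concrete: a minimal sketch of the expected-cost comparison the poster says decision-makers should be allowed to make. The probabilities are the ones in the comment above; the costs are invented purely to show the structure of the trade-off.

```python
# Probabilities from the comment above; costs are made up for illustration.
p_overwhelmed = 0.001          # the 0.1% scenario
cost_overwhelmed = 500_000     # cost (arbitrary units) if the NHS is overwhelmed
cost_lockdown = 1_000          # certain cost of acting pre-emptively

expected_cost_of_waiting = p_overwhelmed * cost_overwhelmed   # 500
print(f"Act now: {cost_lockdown:,}   Wait (expected): {expected_cost_of_waiting:,.0f}")
# With these inputs waiting is cheaper in expectation -- but push the cost
# ratio past 1000:1 and the answer flips. That is exactly why the relative
# likelihood has to be shown alongside the worst case.
```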
"If the full information says for instance there's a 99.9% chance that the NHS won't be overwhelmed, but there's a 0.1% chance that it is"
But that's not what Nelson was talking about (although he moved onto that at the end). He was talking about a lack of a model that replicated some of JP Morgan's modelling, not the probabilities of any scenario.
This has been a very useful discussion - exposing the misunderstandings and incomprehensions.
Nelson was talking about modelling that used data the scientists had themselves recognised. But because this model didn't give "the right" answer, it was disregarded.
If you decide in advance to disregard all models that don't give a certain outcome, then you've prejudiced your work in advance.
Below those admissions thresholds - good news.
If it's burnt itself out then there's no problem.
If it was 2008-10 or 2016-19, I wouldn't be that surprised.
But if Bozza or NutNut hear that the patio looks cheap, we know where that will lead...
But the politicians do need to know how bad it *could* be. Say the asteroid might hit the sea, in which case coastal areas might be affected - they can act on that. Say the asteroid hits on land: they can start mobilising to deal with the affected area. (And those are not the only two scenarios - they'd be interested in any they can actually do something about.)
Hence the current test being conducted.
And if you look far enough ahead, there is a degree of uncertainty about various asteroid orbits, so modelling does come in to it. The lead times are, of course, a great deal longer, and the parameters rather better understood.
But there is no valid reason for one, given what we currently know, the implementation delay, and the fact that cases are already being reported at record levels.
Total number of cases according to Sky is 45,145, which is 8,000 more than yesterday. On two days' figures alone, that represents doubling every 3.5 days. I'm sure that if we were to use a week's worth of figures we'd get a more accurate doubling rate.
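The 3.5-day figure follows directly from those two data points; a minimal sketch of the calculation (and of the week-long version the poster suggests):

```python
import math

today, yesterday = 45_145, 45_145 - 8_000   # the two figures quoted from Sky

# Doubling time from a day-on-day ratio: T_d = ln(2) / ln(N1 / N0)
doubling_days = math.log(2) / math.log(today / yesterday)
print(f"Implied doubling time: {doubling_days:.2f} days")   # 3.55 -- the "every 3.5 days"

# The steadier week-long version: T_d = 7 * ln(2) / ln(N_today / N_week_ago),
# which averages out day-of-week reporting noise.
```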
It's as well to know what the worst is. But you also need to know how likely it is.
What if there is a tidal wave in the Atlantic, for example? Cardiff might be destroyed. So let's move Cardiff to Ebbw Vale. But in reality the chances of the worst-case scenario coming to pass are so slim that we leave Cardiff where it is.
[Interesting fact: Cardiff is the city in the UK most at risk from a tsunami].
I actually feel quite optimistic to be honest with you all.
If they say two weeks and you are assuming that really means three months then, like a long business meeting where the first ten minutes were useful and the rest pointless, of course you will take a negative view - without really knowing whether it will actually be three months rather than two weeks. What if it were just that first useful bit, timed perfectly, and then it stopped?
The experts seem to say covid will likely mutate into something less and less severe and end up like a cold. Whitty said it's not over, but break the next 18 months into three six-month blocks and each will be better than the last.
Doesn't that feel like coming out of the back of something rather than heading into it? Like when the bad weather blows away and you can see the sun breaking through from the direction the weather came from?
If you close your eyes, can you see that? Can you see those rays of hope?
On that note, we’re walking to the pub whilst it’s still open. I’m horse riding tomorrow 🙋♀️
- Cases rising. In London skyrocketing. But still massively biased towards rises in the younger, less vulnerable (40 or under) groups. Interesting upticks in several areas outside London - but London is in a league of its own - see the regional R numbers below.
- Hospitalisations still rising but very slowly.
- Deaths are still trending down.
https://twitter.com/BristOliver/status/1472960896171483144?s=20
The point is to compare the admissions ratio graph (left plot above) with this case ratio graph. The last 9 values here ⬇️ have all been huge (1.8+) - and I think if omicron had the same hospitalisation rate as delta then the admissions ratio should be heading up similarly by now.
https://twitter.com/BristOliver/status/1472964730369228803?s=20
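For anyone wanting to reproduce the comparison in those linked charts: a minimal sketch of the week-on-week ratio series being described, with a made-up daily series standing in for the real dashboard data.

```python
# A made-up daily series standing in for real admissions or case counts.
daily = [100, 104, 109, 113, 118, 124, 129, 180, 188, 196, 205, 214, 223, 233]

# Week-on-week ratio: today's value over the value seven days earlier.
ratios = [daily[i] / daily[i - 7] for i in range(7, len(daily))]
print([round(r, 2) for r in ratios])   # sustained values around 1.8
# The argument: if Omicron were as severe as Delta, the admissions-ratio
# series should by now show the same sustained 1.8+ values as the case
# ratios, allowing a lag for the infection-to-admission delay.
```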
https://www.theguardian.com/world/2021/dec/20/the-croatian-roots-of-chiles-leftist-president-gabriel-boric
I had not realised the Croatian diaspora in Chile is almost as large as that in the US.
I increasingly believe that there will need to be a public inquiry in due course into the scientific advisory and decision-making process in this pandemic in the UK. The quality of the scientific debate has not lived up to the standards reached in other areas of the response to the pandemic.
Stockpile food and medicine. Mobilise the military, to help other countries or ourselves. Move people away from coastal regions before it hits. Try to make power and communications more rugged (and good luck with that!)
Considering the cost of what is being proposed, the scenarios where action isn't necessary absolutely should be included in the information before acting.
Will the last person in London without COVID please remember to pick up some milk from the shops.....
https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1042235/20211219_OS_Daily_Omicron_Overview.pdf
Oh, and Neil deGrasse Tyson.
Anything major happened?
Autocorrupt more like
(But only because Australia didn’t enforce the follow-on)