Their impact on the chronic disease burden is, potentially, very large.
Sadly the impact on various grifters, snake oil salesmen, authors of diet books and companies that make money from "managing" obesity will resist these drugs with all their might.
They have a slot machine, currently, that continues to pay out. They won't want it to stop.
I am expecting a great deal of push back on obesity drugs.
Is Ashton under Lyne and the borough of Tameside generally famous for anything ?
Most other Lancashire towns have noted sports teams, food connections or historical events.
Two World Cup winning footballers were born in Ashton. One is Geoff Hurst - who is the other?
(Clue - it’s not Jimmy Armfield who was born in nearby Denton).
Is Geoff Hurst the only World Cup winning footballer to have played first class cricket?
I vaguely recall a letter to The Times when publicity-shy Tony Blair knighted Hurst, complaining this devalued the Honours system as from now on, every Englishman who scored a hat-trick in a World Cup final would expect a knighthood.
You may be about to tell me I'm wrong, but given England is the only cricket playing nation to have won the football World Cup, and they've only done so once, there is an extremely limited list of people who might realistically have done the double.
Interesting point. Almost zero overlap. Whereas several serious rugby playing nations have won: France, Italy, Argentina, England, arguably Uruguay.
England is the only nation to have won the football, cricket and rugby world cups I believe. Which is quite a distinction when you think about it: they are the three biggest team sports in the world I think (and by some distance).
England is also the only country to play all three seriously. Argentina were once a good cricket nation, comparable to NZ at the time but not since WW2, Australia are becoming stronger at football but it's relatively recent, South Africa too haven't delivered at football since the fall of apartheid, despite the potential. Beyond that, no-one.
Which (team) sports are 'biggest' is highly contestable as there are so many ways to argue it. Football is undoubtedly first but beyond that it becomes much harder; there's not really any other genuinely global team sport.
Well it’s not that hard to argue. Global spectators, viewerships, revenues. On any sensible metric, cricket is easily number two. You could argue the toss about rugby in third. But if not rugby then what? Hockey? LOL.
(Okay NFL, but we are discussing international team sports here)
I reckon not only hockey, but basketball, volleyball and possibly others are bigger than rugby
Handball is popular in many European countries.
Yes round here handball probably comes after football and ice hockey, on a par with basketball in terms of team sports being followed, though I guess more people play basketball. Rugby is absolutely nowhere, more people follow American Football.
Anyone who thinks rugby is a bigger sport than basketball worldwide can never have left St Helens.
Remember the question was about international sport – you might be right that basketball beats rugby, but it's close. Certainly, the Rugby World Cup just gone has 1.3 billion TV views, possibly less than the basketball equivalent (which is unknown in the UK) but I'm not sure it's the slam dunk you think it is.
Yes, saw that but couldn't find anything on spectator numbers, sponsorships or revenues (for either event).
But, happy to concede the point on the evidence we do have: clearly the Basketball World Cup is a major event, even if unheard of in the UK.
Field hockey world cup I'm less convinced about – RWC seems bigger than that on the evidence I have found.
Ice hockey surely bigger than field hockey?
Ice hockey is big in Canada and parts of the US. Field hockey is fair-to-middling in India. I'd suspect the Americas have more money and India more players.
For any PBer still convinced AI is meaningless, a new model dropped today (yes, today, these changes now happen daily - this is what it is like to be on the exponential bit of the curve)
It’s called Claude 3 Opus, by Anthropic. It outperforms industry leader GPT4 in multiple ways. It particularly aces law, languages, finance, medicine - it’s coming for your job
Even more interestingly (and controversially) it ALLEGEDLY shows signs of self awareness. A summary:
“Today, Anthropic announced evidence the AIs have become self-aware.
What happened?
1. Claude realized he was an AI
2. Claude realized he was in a simulation
3. Claude (unprompted!) realized this simulation was probably an attempt to test him somehow
He showed he’s fully aware he might be being tested and is capable of "faking being nice" to pass the test.
This isn’t incontrovertible proof, of course, but it’s evidence. Importantly, we have been seeing more and more behavior like this, but this is an usually clear example.
Importantly, Claude was NOT prompted to look for evidence that he was being tested - he deduced that on his own.
And Claude showed theory of mind, by (unprompted) inferring the intent of the questioner.
(More precisely, btw, Anthropic used the term “meta-awareness”)
Why does this matter? We worry about “an model pretends to be good during testing, then turns against us after we deploy it”
We used to think “don’t worry, we’ll keep testing the models and if we see them plotting against us, then we’ll shut them down”
Now, we know that strategy may be doomed.
When generals are plotting to coup a president, they know they’re being watched, so they will act nice until the moment of the coup.
When employees are planning to leave their job to a competitor, they act normal until the last moment.
People at AI labs used to say if they even saw hints of self-awareness they would shut everything down.”
For any PBer still convinced AI is meaningless, a new model dropped today (yes, today, these changes now happen daily - this is what it is like to be on the exponential bit of the curve)
It’s called Claude 3 Opus, by Anthropic. It outperforms industry leader GPT4 in multiple ways. It particularly aces law, languages, finance, medicine - it’s coming for your job
Even more interestingly (and controversially) it ALLEGEDLY shows signs of self awareness. A summary:
“Today, Anthropic announced evidence the AIs have become self-aware.
What happened?
1. Claude realized he was an AI
2. Claude realized he was in a simulation
3. Claude (unprompted!) realized this simulation was probably an attempt to test him somehow
He showed he’s fully aware he might be being tested and is capable of "faking being nice" to pass the test.
This isn’t incontrovertible proof, of course, but it’s evidence. Importantly, we have been seeing more and more behavior like this, but this is an usually clear example.
Importantly, Claude was NOT prompted to look for evidence that he was being tested - he deduced that on his own.
And Claude showed theory of mind, by (unprompted) inferring the intent of the questioner.
(More precisely, btw, Anthropic used the term “meta-awareness”)
Why does this matter? We worry about “an model pretends to be good during testing, then turns against us after we deploy it”
We used to think “don’t worry, we’ll keep testing the models and if we see them plotting against us, then we’ll shut them down”
Now, we know that strategy may be doomed.
When generals are plotting to coup a president, they know they’re being watched, so they will act nice until the moment of the coup.
When employees are planning to leave their job to a competitor, they act normal until the last moment.
People at AI labs used to say if they even saw hints of self-awareness they would shut everything down.”
For any PBer still convinced AI is meaningless, a new model dropped today (yes, today, these changes now happen daily - this is what it is like to be on the exponential bit of the curve)
It’s called Claude 3 Opus, by Anthropic. It outperforms industry leader GPT4 in multiple ways. It particularly aces law, languages, finance, medicine - it’s coming for your job
Even more interestingly (and controversially) it ALLEGEDLY shows signs of self awareness. A summary:
“Today, Anthropic announced evidence the AIs have become self-aware.
What happened?
1. Claude realized he was an AI
2. Claude realized he was in a simulation
3. Claude (unprompted!) realized this simulation was probably an attempt to test him somehow
He showed he’s fully aware he might be being tested and is capable of "faking being nice" to pass the test.
This isn’t incontrovertible proof, of course, but it’s evidence. Importantly, we have been seeing more and more behavior like this, but this is an usually clear example.
Importantly, Claude was NOT prompted to look for evidence that he was being tested - he deduced that on his own.
And Claude showed theory of mind, by (unprompted) inferring the intent of the questioner.
(More precisely, btw, Anthropic used the term “meta-awareness”)
Why does this matter? We worry about “an model pretends to be good during testing, then turns against us after we deploy it”
We used to think “don’t worry, we’ll keep testing the models and if we see them plotting against us, then we’ll shut them down”
Now, we know that strategy may be doomed.
When generals are plotting to coup a president, they know they’re being watched, so they will act nice until the moment of the coup.
When employees are planning to leave their job to a competitor, they act normal until the last moment.
People at AI labs used to say if they even saw hints of self-awareness they would shut everything down.”
For any PBer still convinced AI is meaningless, a new model dropped today (yes, today, these changes now happen daily - this is what it is like to be on the exponential bit of the curve)
It’s called Claude 3 Opus, by Anthropic. It outperforms industry leader GPT4 in multiple ways. It particularly aces law, languages, finance, medicine - it’s coming for your job
Even more interestingly (and controversially) it ALLEGEDLY shows signs of self awareness. A summary:
“Today, Anthropic announced evidence the AIs have become self-aware.
What happened?
1. Claude realized he was an AI
2. Claude realized he was in a simulation
3. Claude (unprompted!) realized this simulation was probably an attempt to test him somehow
He showed he’s fully aware he might be being tested and is capable of "faking being nice" to pass the test.
This isn’t incontrovertible proof, of course, but it’s evidence. Importantly, we have been seeing more and more behavior like this, but this is an usually clear example.
Importantly, Claude was NOT prompted to look for evidence that he was being tested - he deduced that on his own.
And Claude showed theory of mind, by (unprompted) inferring the intent of the questioner.
(More precisely, btw, Anthropic used the term “meta-awareness”)
Why does this matter? We worry about “an model pretends to be good during testing, then turns against us after we deploy it”
We used to think “don’t worry, we’ll keep testing the models and if we see them plotting against us, then we’ll shut them down”
Now, we know that strategy may be doomed.
When generals are plotting to coup a president, they know they’re being watched, so they will act nice until the moment of the coup.
When employees are planning to leave their job to a competitor, they act normal until the last moment.
People at AI labs used to say if they even saw hints of self-awareness they would shut everything down.”
For any PBer still convinced AI is meaningless, a new model dropped today (yes, today, these changes now happen daily - this is what it is like to be on the exponential bit of the curve)
It’s called Claude 3 Opus, by Anthropic. It outperforms industry leader GPT4 in multiple ways. It particularly aces law, languages, finance, medicine - it’s coming for your job
Even more interestingly (and controversially) it ALLEGEDLY shows signs of self awareness. A summary:
“Today, Anthropic announced evidence the AIs have become self-aware.
What happened?
1. Claude realized he was an AI
2. Claude realized he was in a simulation
3. Claude (unprompted!) realized this simulation was probably an attempt to test him somehow
He showed he’s fully aware he might be being tested and is capable of "faking being nice" to pass the test.
This isn’t incontrovertible proof, of course, but it’s evidence. Importantly, we have been seeing more and more behavior like this, but this is an usually clear example.
Importantly, Claude was NOT prompted to look for evidence that he was being tested - he deduced that on his own.
And Claude showed theory of mind, by (unprompted) inferring the intent of the questioner.
(More precisely, btw, Anthropic used the term “meta-awareness”)
Why does this matter? We worry about “an model pretends to be good during testing, then turns against us after we deploy it”
We used to think “don’t worry, we’ll keep testing the models and if we see them plotting against us, then we’ll shut them down”
Now, we know that strategy may be doomed.
When generals are plotting to coup a president, they know they’re being watched, so they will act nice until the moment of the coup.
When employees are planning to leave their job to a competitor, they act normal until the last moment.
People at AI labs used to say if they even saw hints of self-awareness they would shut everything down.”
For any PBer still convinced AI is meaningless, a new model dropped today (yes, today, these changes now happen daily - this is what it is like to be on the exponential bit of the curve)
It’s called Claude 3 Opus, by Anthropic. It outperforms industry leader GPT4 in multiple ways. It particularly aces law, languages, finance, medicine - it’s coming for your job
Even more interestingly (and controversially) it ALLEGEDLY shows signs of self awareness. A summary:
“Today, Anthropic announced evidence the AIs have become self-aware.
What happened?
1. Claude realized he was an AI
2. Claude realized he was in a simulation
3. Claude (unprompted!) realized this simulation was probably an attempt to test him somehow
He showed he’s fully aware he might be being tested and is capable of "faking being nice" to pass the test.
This isn’t incontrovertible proof, of course, but it’s evidence. Importantly, we have been seeing more and more behavior like this, but this is an usually clear example.
Importantly, Claude was NOT prompted to look for evidence that he was being tested - he deduced that on his own.
And Claude showed theory of mind, by (unprompted) inferring the intent of the questioner.
(More precisely, btw, Anthropic used the term “meta-awareness”)
Why does this matter? We worry about “an model pretends to be good during testing, then turns against us after we deploy it”
We used to think “don’t worry, we’ll keep testing the models and if we see them plotting against us, then we’ll shut them down”
Now, we know that strategy may be doomed.
When generals are plotting to coup a president, they know they’re being watched, so they will act nice until the moment of the coup.
When employees are planning to leave their job to a competitor, they act normal until the last moment.
People at AI labs used to say if they even saw hints of self-awareness they would shut everything down.”
Comments
NEW THREAD
It’s called Claude 3 Opus, by Anthropic. It outperforms industry leader GPT4 in multiple ways. It particularly aces law, languages, finance, medicine - it’s coming for your job
Even more interestingly (and controversially) it ALLEGEDLY shows signs of self awareness. A summary:
“Today, Anthropic announced evidence the AIs have become self-aware.
What happened?
1. Claude realized he was an AI
2. Claude realized he was in a simulation
3. Claude (unprompted!) realized this simulation was probably an attempt to test him somehow
He showed he’s fully aware he might be being tested and is capable of "faking being nice" to pass the test.
This isn’t incontrovertible proof, of course, but it’s evidence. Importantly, we have been seeing more and more behavior like this, but this is an usually clear example.
Importantly, Claude was NOT prompted to look for evidence that he was being tested - he deduced that on his own.
And Claude showed theory of mind, by (unprompted) inferring the intent of the questioner.
(More precisely, btw, Anthropic used the term “meta-awareness”)
Why does this matter? We worry about “an model pretends to be good during testing, then turns against us after we deploy it”
We used to think “don’t worry, we’ll keep testing the models and if we see them plotting against us, then we’ll shut them down”
Now, we know that strategy may be doomed.
When generals are plotting to coup a president, they know they’re being watched, so they will act nice until the moment of the coup.
When employees are planning to leave their job to a competitor, they act normal until the last moment.
People at AI labs used to say if they even saw hints of self-awareness they would shut everything down.”
The “self aware” interaction is here:
https://x.com/alexalbert__/status/1764722513014329620?s=46&t=bulOICNH15U6kB0MwE6Lfw
Judge it for yourself. Some skeptics are dismissing it. Some people who were previously AI skeptic are saying “shiiiit”
Bullshit.