He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.
He couldn't name a single person who peer-reviewed his book.
Books aren’t typically peer reviewed
He claimed it was by demographic experts.
Can you provide a link? I haven’t looked. But I’d be surprised that a trained academic (no matter how far he has strayed) would use a term of art like “peer reviewed” incorrectly
Goodwin is asked if the book was peer-reviewed, doesn't answer the question but talks about top demographers.
Academic books generally would be checked in a way that Matt G's clearly wasn't. Which is fine if you accept that it's an ill-informed polemic. His trouble is that he's trying to keep the protective veneer of the academy without adhering to its professional standards.
He’s been doing that for years now. He’ll say something which is just echo chamber nonsense like we’re all being replaced which would be brushed off normally but because he’s an “academic” he implies he’s got more knowledge.
In all honesty I just think he’s incredibly unlikeable. I despise Farage but he’s clearly got something. Goodwin does not.
It's common among conspiracy theorists in America - Richard Carrier springs to mind. He claimed one of his books on mathematical probability had been 'peer reviewed' when it turned out that he'd sent it himself to an expert for comment, then doctored the feedback to make it look like a peer review report he then sent to his publisher.
I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.
Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
The sense of entitlement from Labour is extreme.
The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
They generally are if they go through an academic publisher. Even at the end of the market where an (ex) academic is writing a "popular" book for the layman, you'd expect drafts to be read on a more informal basis by respected people in the field to avoid blunders.
But Goodwin left serious academia years ago and is away with the far right fairies.
This is palpable bullshit
I have good friends who are academics and writers, I know lots of publishers. This simply doesn't happen
Would be useful to have had a fact checker to confirm that the facts he used as the basis of the book actually had something to back them up.
Of course.
My point is that the “gotcha” question in the tweet is designed to mislead - you wouldn’t expect a book to be peer reviewed so “Goodwin can’t even name a single peer reviewer” is a meaningless statement that gives the wrong impression to the unwary
Yeah but it was Goodwin who used that gotcha, by claiming his book was peer reviewed but wasn't able to say who the reviewers are. He then fell back on saying peer reviews are anonymous, which isn't the case and goes against the whole point of peer reviews as public endorsement of the methods used.
Goodwin was doing the misleading and the tweet is relevant.
I've read your "journalism" and would concede that, among your good friends, it probably doesn't.
It won't make any difference to the mad Trump though will it ?
Depends if it's as big as expected
No matter how big
How do you remove him ?
Enthuse anti-Trump voters and ensure they vote in November.
He can, and most certainly will, do a whole lot of damage between now and then
Unquestionably, and wherever legal cases can be brought to slow him down they should be.
What is alarming is that the armed forces seem to be obeying him without any arguments. A mutiny in the Straits of Hormuz might, just might cause him to change course.
Cleverly is a donkey with no charisma
As I am not a Tory I would be delighted if Kemi was replaced, especially by Cleverly, as it will reduce the number of seats they will win.
I agree, I just do not know where his defence chiefs are in the strategy but then he apparently renamed the Strait of Hormuz as the Trump Strait yesterday
You just could not make this up in your wildest dreams
And you're comfortable, are you, with a party whose activists describe Jews as "an abomination on the planet" without a word of condemnation from its leader, a party which has spent the best part of half a million quid fighting legal battles caused by its failure to comply with the Equality Act, a party which has welcomed anti-semites even Corbyn's Labour expelled?
It is the worst type of populist party - led by a lying charlatan with few scruples and even fewer principles. Rather than being the opposite of Reform it is simply another version of the sort of stupid parties which are ruining this country's politics.
Well I've heard Jews say a Hell of a lot worse about Palestinians, including the Israeli President. What's more, if the Telegraph wants to besmirch a political party they could at least use names and quote what the emails actually said rather than their précis.
It's shit like that that causes ill feeling, and people on here should know better.
But I know what I'm talking about, and you don't. So there's that
Here in Birmingham, I'd say the main factors driving the exodus are the bin strike, the bankrupt council and dissatisfaction with Labour's policy on Israel/Gaza.
How many seats do you think the Gaza indies will pick up in Brum ?
What can Brummie councillors do about Gaza? I realise it's in a terrible, terrible mess, so there are some similarities, but that's about as far as it goes.
And Good Morning everyone. Lovely sunshine here, but an unusually cold West wind.
No but those by academics are. He was a lecturer/professor of political science at the University of Kent until very recently
No they're not, not unless they are aimed at an academic audience
Take someone like Scruton. If he wrote for the general public, the idea he'd get it "peer reviewed" is laughable
Sorry, that I accept entirely. The context of my comment was his apparent confusion over what “peer reviewed” meant and his claim it had been so.
IF Goodwin claimed that - there seems some confusion - then I'd be very surprised he bothered to get "peer review" for a book aimed at the public. No one else does. If he claimed that and he lied, then he's a damn fool
Coincidentally, hasn’t Goodwin attributed a quote to Scruton that no one else seems able to find? Even without peer review, I'm not sure one should think that particularly adds anything to his arguments.
Are you saying @ThomasNashe shouldn't be allowed to post links to major newspaper articles, and he should "know better"?
Apparently, according to CNBC, only semi-facetiously. Just looked up Apple Maps; they have Gulf of Mexico, with Gulf of America in brackets and smaller letters underneath.
Can’t see the point in replacing Badenoch. Iran War aside she’s doing just fine.
She's playing a poor hand reasonably well. She seems to be growing into the role. Can't see any of the other immediate options doing any better. Which might be damning with faint praise, but it is realistic.
There is a large body of the electorate still inclined to small "c" conservatism, if not currently to the Conservative Party. They could still come back if the policies of the Conservative Party are seen to be sensible and attuned to the needs of the country. Those needs could be markedly different by the time of the next election. I still think Labour the largest party is the sensible bet, just because of the vast number of seats they have to lose. But Labour is doing its very best to piss off the very most, so who knows.
One Nation conservatives are not going to win hearts
But neither is being a marginally less offensive version of Reform.
The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.
Reform are going to have a good round of elections in May.
Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.
I think a lot of people are concluding, not even midway into a Parliament, that because a party that is extreme (by the historical standards of who has won elections) is leading, that must be the settled outcome, and therefore the Tories must become the same.
But let’s get some perspective. That extreme party has hit 30% of the vote a few times in polls. Decent, no doubt about that. But the Tories have frequently polled in actual elections over 40%. And indeed during Covid they polled (not in a real election) 50%.
I think we run the risk of concluding what the majority wants is what Reform are selling. And I’m still totally unconvinced that’s true.
The Tories clearly ran out of steam towards the end. But there’s definitely room for a party that aims itself at the 25-50 age bracket. Badenoch has shown some signs of doing that.
If you believe YouGov the race for first place is hotting up too. And if Reform are perceived as underperforming a bit in May it could get interesting. On current trends (which are not remotely predictions!) the Greens crossover with Reform sometime in 2026/7.
SFAICS the mixture of being in bed with Putin and Trump +, for proper headbangers, the prospect of an even loonier party to the far right of them should see them off before the GE in 2029.
What rough beast is slouching towards the electorate instead of course remains to be seen.
A couple of points is hundreds and hundreds of seats not dozens.
I definitely do not hate Jews as Jews. I do find the present policies of the State of Israel abhorrent, to say the least. I don't hate Americans either, as Americans. But I find the policies and actions of the US Government 'nearly' as abhorrent.
Liz Truss has been on quite a journey! If anyone wonders why the Tories are tanking, look no further! The News Agents take you on a trip to the darkest recesses of Liz Truss's imagination, and it's not a pretty sight.....
The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
They certainly were shite under Liz Truss and were given a good and deserved kicking. Whether the voters still think they are shite, or are prepared to revisit them, remains to be seen.
The Conservatives are probably a Farage heart attack away from Downing Street. Good job he has such a healthy lifestyle, eh?
Goodwin self-published his tome. Having, unfortunately, some experience with self-publishing, the chances of the printer of his book suggesting any second thoughts on drafts are precisely zero.
Are we seriously saying that 20% of the Tory vote base has always been extreme and wanted to vote for somebody else or is it just that the Tories were perceived to have failed in what they set out to do?
Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the burka? Banning Muslim prayers in public? Ending the two-child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax, not just raising the threshold for it? Not that I have noticed, yet Farage has proposed all of those policies.
And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
What utter rubbish. Reform lead the polls; if I were really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation-style agenda than Kemi's more Farage-adjacent policies anyway.
One Nation conservatives are not going to win hearts
But neither is being a marginally less offensive version of Reform.
The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.
Reform are going to have a good round of elections in May.
Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.
A couple of points is hundreds and hundreds of seats not dozens.
Yep. If their NEV gets down towards 25% they will be on the wrong side of hundreds of close races
Liz Truss has been on quite a journey! If anyone wonders why the Tories are tanking, look no further! The News Agents take you on a trip to the darkest recesses of Liz Truss's imagination and it's not a pretty sight...
The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
They certainly were shite under Liz Truss and were given a good and deserved kicking. Whether the voters still think they are shite or are prepared to revisit them remains to be seen.
The Conservatives are probably a Farage heart attack away from Downing Street. Good job he has such a healthy lifestyle, eh?
Who to choose for a heart attack first, Trump or Farage? It's difficult to choose. I wonder if there are odds available.
If you believe YouGov the race for first place is hotting up too. And if Reform are perceived as underperforming a bit in May it could get interesting. On current trends (which are not remotely predictions!) the Greens crossover with Reform sometime in 2026/7.
SFAICS the mixture of being in bed with Putin and Trump plus, for the proper headbangers, the prospect of an even loonier party to the far right of them should see them off before the GE in 2029.
What rough beast is slouching towards the electorate instead of course remains to be seen.
The one big issue the Greens may find in the race for first is that a swathe of their surge appears to be amongst those least likely to vote in May: youth and habitual non-voters. They'd be better off if this were a GE driving up turnout. That might take the shine off an otherwise breakthrough-type night.
He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.
He couldn't name a single person who peer-reviewed his book.
Books aren’t typically peer reviewed
They generally are if they go through an academic publisher. Even at the end of the market where an (ex) academic is writing a "popular" book for the layman, you'd expect drafts to be read on a more informal basis by respected people in the field to avoid blunders.
But Goodwin left serious academia years ago and is away with the far right fairies.
In my (admittedly limited) experience, self publishers fall into two categories: those who've written a (possibly rather bad) book and can't get it accepted by a conventional publisher, and those who feel, rightly or wrongly, that they have the knowledge and expertise to ignore the mainstream.
Leaving power blocs and personalities entirely to one side, is there a recent account in existence of what the various factions in the Labour party (right, centrist, Blue Labour, mainstream, soft left, left, hard left, softish left, social democrat, socialist, Marxist, Blairite, pragmatic or whatever) actually believe and think by way of principles, underlying philosophy, policies, visions and goals? Is it possible to give such an account? I read the New Statesman (someone has to) and not even they seem to try very hard to elucidate.
Discussion seems to centre mostly around particular single issues - like bits of welfare reform, or little bits of cash to pensioners - and of course the personalities - Who Whom.
Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.
Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.
Researchers at Stony Brook University and Columbia Law School just proved it.
They fine-tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.
The models started reciting copyrighted books from memory.
Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.
Then it got worse.
The researchers fine-tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.
It unlocked verbatim recall of books from over 30 completely unrelated authors.
One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine-tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.
Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.
Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.
That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.
Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.
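For what it's worth, detecting this kind of verbatim memorization is at heart a substring-matching problem. A minimal sketch (not the researchers' actual method, and with a made-up two-sentence "book" standing in for a real novel) that finds the longest verbatim span shared between a source text and a model's output:

```python
from difflib import SequenceMatcher

def longest_verbatim_span(source: str, output: str) -> str:
    """Longest contiguous run of characters shared verbatim
    between a source text and a model's output."""
    m = SequenceMatcher(None, source, output, autojunk=False)
    match = m.find_longest_match(0, len(source), 0, len(output))
    return source[match.a : match.a + match.size]

# Toy stand-ins for a copyrighted book and a model's generation.
book = "It was the best of times, it was the worst of times."
generated = "He began: it was the best of times, it was the worst of nights."

span = longest_verbatim_span(book, generated)
print(repr(span))
print(len(span), "characters reproduced verbatim")
```

The paper's headline numbers (spans exceeding 460 words, 85 to 90% of whole novels) come from running comparisons of roughly this shape at scale, not from anything as simple as this sketch.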
When you have a machine that learns, how can you know what it does once it's started learning?
Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
That's good, but everybody makes mistakes.
It’s a probabilistic model. It will ALWAYS make mistakes.
Just like humans. Non-determinism is needed for creativity and innovation. That's how evolution and progress work.
It can be very useful for brainstorming and so on, I don’t disagree. In effect because its output is probabilistic it will provide a variety of things and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).
But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.
As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.
I just wish people would try and understand its limits and get away from the hype, that’s all.
That’s not the issue.
If an LLM regurgitates 460 words from a book without attribution then that’s a problem.
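The "probabilistic model" point a few posts up can be made concrete with a toy next-token sampler. The token scores here are invented for illustration; real models do this over vocabularies of tens of thousands of tokens:

```python
import math
import random

def sample_next_token(scores, temperature, rng):
    """Sample one token from a softmax over raw scores.
    Temperature near zero approaches a deterministic argmax;
    higher temperature flattens the distribution."""
    weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(weights.values())
    r = rng.random() * total
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # guard against float rounding at the top end

# Made-up scores for the word following "The cat sat on the".
scores = {"mat": 3.0, "sofa": 2.0, "roof": 1.0}
rng = random.Random(0)  # seeded only so the demo is repeatable
picks = [sample_next_token(scores, 1.0, rng) for _ in range(1000)]
counts = {tok: picks.count(tok) for tok in scores}
# "mat" dominates (about two-thirds of samples) but the others still
# appear: same prompt, different outputs, which is the source of both
# the useful variety and the occasional mistake.
print(counts)
```

This is why "it will ALWAYS make mistakes" isn't hyperbole: even a heavily favoured continuation is only probable, never guaranteed.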
LLMs are directed and controlled by prompts. Some are input by the user. "What are the current poll shares of the main UK political parties" etc. Many are provided by the AI owners/developers and are invisible to the ordinary users. They provide "guardrails" eg "Don't give bomb making instructions". Others provide behavioural guidance eg "Be nice and polite to users".
The last prompt can encourage an AI to provide false information to avoid disappointing the user. Hence "hallucinations" and incorrect info in an effort to please.
The solution is for the user to prompt "Say you don't know unless you are certain". I find this substantially reduces incorrect info and made-up stories. They are not malicious (yet). They are only trying to please. They are still children.
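To illustrate the layering described above, here is a minimal sketch in the common chat-messages convention. The wording is illustrative only, not any vendor's actual system prompt, and the model call itself is omitted:

```python
# Developer-supplied instructions (invisible to the ordinary user)
# and the user's own prompt are combined into one message list.
messages = [
    {
        "role": "system",  # guardrails plus behavioural guidance
        "content": (
            "Be nice and polite to users. "
            "Do not give bomb making instructions. "
            # The anti-hallucination nudge suggested above:
            "Say you don't know unless you are certain."
        ),
    },
    {
        "role": "user",
        "content": "What are the current poll shares of the main UK political parties?",
    },
]
# A chat model receives the whole list, so the hidden system text
# shapes every answer to the visible user question.
print(messages[0]["content"])
```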
They are certainly hundreds and hundreds of seats below where they were six months ago.
You have to squirm at the quality of Reform councillors we would have had elected if these elections had been held in mid-to-late 2025.
I'd say Farage is a better bet. At least there's a chance he has a heart.
He claimed it was by demographic experts.
Can you provide a link? I haven’t looked. But I’d be surprised that a trained academic (no matter how far he has strayed) would use a term of art like “peer reviewed” incorrectly
I think there are more successes in self-publishing than there used to be; in fact Goodwin will no doubt get a few sales because people want to believe what he writes, made up or not. Self-publishing is also a lot cheaper than it used to be. Unfortunately a component of my brother's mental illness is a belief that he's a fiction author, resulting in tens of thousands of pounds (not always his own) spent over the years on publishing his books.
I suspect they might do better in local elections than the national polls, as voters might see them as a free hit. Likewise I might vote Green but I certainly wouldn't in a GE.
Very sorry to read your third sentence. Sympathies; must be a strain on the family.
Guard rails do not prevent hallucinations. As I've explained, you can ask it to be as careful as you want; it will still have the ability to go off, because it's not deterministic.
It sounds like you understand that. But a lot of people do not.
That’s certainly a problem. It wasn’t the problem I was thinking of initially but it’s certainly one.
I’d not be confident the 460 words would even be accurate. Apparently Matthew Goodwin was.
I don't believe YouGov over and above what the average of all the other polls is showing.
He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.
He couldn't name a single person who peer-reviewed his book.
Books aren’t typically peer reviewed
They generally are if they go through an academic publisher. Even at the end of the market where an (ex) academic is writing a "popular" book for the layman, you'd expect drafts to be read on a more informal basis by respected people in the field to avoid blunders.
But Goodwin left serious academia years ago and is away with the far right fairies.
Goodwin self published his tome. Having unfortunately some experience with self publishing, the chances of the printer of his book suggesting any second thoughts on drafts is precisely zero.
In my (admittedly limited experience) self publishers fall into two categories, Those who've written a (possibly rather bad) book and can't get it accepted by a conventional publisher, and those who feel, rightly or wrongly, that they have the knowledge and expertise to ignore the mainstream.
I think there are more successes in self-publishing than there used to be, in fact Goodwin will no doubt get a few sales because people want to believe what he writes, made up or not. S.p. is also a lot cheaper than it used to be. Unfortunately a component of my brother’s mental illness is a belief that he’s a fiction author, resulting in tens of thousands of pounds (not always his own) over the years spent on publishing his books.
Very sorry to read your third sentence. Sympathies; must be a strain on the family.
I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.
Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
The sense of entitlement from Labour is extreme.
The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed, yet Farage has proposed all of those policies.
And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
What utter rubbish. Reform lead the polls; if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation-style agenda than Kemi's more Farage-adjacent policies anyway.
One Nation conservatives are not going to win hearts
But neither is being a marginally less offensive version of Reform.
The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.
Reform are going to have a good round of elections in May.
Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.
A couple of points is hundreds and hundreds of seats not dozens.
Yep. If their NEV gets down towards 25% they will be on the wrong side of hundreds of close races
I suspect they might do better in local elections than the national polls, as voters might see them as a free hit. Likewise I might vote Green but I certainly wouldn't in a GE.
I've posted before that I will probably vote Green in the forthcoming County Council elections, because I know and like the candidate. I will probably vote tactically in the next general election, though, assuming I'm still around.
Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him so is this a rehearsal for the first day of Passover?
Maybe. Maybe. Can't see it in London (Reform-wise), which is a third of seats up; I also think the relative incompetence shown in last year's councils will focus minds. I don't see them doing better than (and probably rather worse than) in local by-elections since 2025, which really are a 'free hit'.
Leaving power blocs and personalities entirely on one side, is there a recent account in existence of what the various factions in the Labour party (right, centrist, Blue, mainstream, soft left, left, hard left, softish left, social democrat, socialist, marxist, Blairite, pragmatic or whatever), actually believe and think by way of principle, underlying philosophy, policy, visions and goal? Is it possible to give such an account? I read the New Statesman (someone has to) and not even they seem to try very hard to elucidate.
Discussion seems to centre mostly around particular single issues - like bits of welfare reform, or little bits of cash to pensioners - and of course the personalities - Who Whom.
Is it possible to unravel this?
I'd be very attracted to a party with competing visions, but the problem with Labour is that the competing groups don't seem to go for that - they are all about giving more weight to this or that specific policy, as you say. It's the main reason why I'm drifting away.
Would be useful to have had a fact checker to confirm that the facts he used as the basis of the book actually had something to back them up.
Of course.
My point is that the “gotcha” question in the tweet is designed to mislead - you wouldn’t expect a book to be peer reviewed so “Goodwin can’t even name a single peer reviewer” is a meaningless statement that gives the wrong impression to the unwary
Yeah but it was Goodwin who used that gotcha, by claiming his book was peer reviewed but wasn't able to say who the reviewers are. He then fell back on saying peer reviews are anonymous, which isn't the case and goes against the whole point of peer reviews as public endorsement of the methods used.
Goodwin was doing the misleading and the tweet is relevant.
He didn’t quite claim that though.
He said something like: "You asked me if it was peer reviewed. I say it was sent to demographic experts and checked." Definitely misleading, but not the claim presented in the tweet.
As a committed pro-semite (I'm technically Jewish and I was brought up on accounts of the horrors of 30s Germany and the necessity of Israel), I've really had enough of Netanyahu and current Israeli policy, and that doesn't make me an anti-semite. Obviously burning Jewish ambulances is both wrong and stupid, but I don't think that being critical of Israeli policy qualifies at all.
Typical succinct comment from @NickPalmer that many should take on board
Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.
Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.
Researchers at Stony Brook University and Columbia Law School just proved it.
They fine-tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.
The models started reciting copyrighted books from memory.
Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.
Then it got worse.
The researchers fine-tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.
It unlocked verbatim recall of books from over 30 completely unrelated authors.
One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.
Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.
Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.
That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.
Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.
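Worth noting that the headline figure in claims like "spans exceeding 460 words" boils down to a concrete measurement: the longest run of text a model reproduces word-for-word from a source. A minimal sketch of that measurement, using toy strings rather than the study's actual method or data:

```python
def longest_verbatim_run(source_words, output_words):
    """Length (in words) of the longest contiguous span shared by two
    word sequences; the classic longest-common-substring DP, with a
    rolling row to keep memory at O(n)."""
    m, n = len(source_words), len(output_words)
    best = 0
    prev = [0] * (n + 1)
    for i in range(1, m + 1):
        cur = [0] * (n + 1)
        for j in range(1, n + 1):
            if source_words[i - 1] == output_words[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

book = "it was a bright cold day in april and the clocks were striking thirteen".split()
model_output = "the model wrote a bright cold day in april and the clocks kept time".split()

print(longest_verbatim_run(book, model_output))  # -> 9
```

Run that over a whole novel versus a model's output and you get the "longest unbroken span" statistic; summing all matched spans gives the "percent of the book" figure.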
When you have a machine that learns, how can you know what it does once it's started learning?
Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
That's good, but everybody makes mistakes.
It’s a probabilistic model. It will ALWAYS make mistakes.
Just like humans. Non-determinism is needed for creativity and innovation. That's how evolution and progress work.
It can be very useful for brainstorming and so on, I don’t disagree. In effect, because its output is probabilistic it will provide a variety of things, and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).
But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.
As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.
I just wish people would try and understand its limits and get away from the hype, that’s all.
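To make the "probabilistic, but not truly random" point concrete, here's a toy softmax at different temperatures. Nothing below is a real model; the "ideas" and their scores are invented purely for illustration:

```python
import math

def softmax(logits, temperature):
    """Convert raw scores into a probability distribution.
    Lower temperature sharpens it; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    mx = max(scaled)                      # subtract max for numeric stability
    exps = [math.exp(s - mx) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented "next idea" scores for a brainstorming step
ideas = ["obvious", "plausible", "left-field"]
logits = [3.0, 2.0, 0.5]

for t in (0.5, 1.0, 2.0):
    probs = softmax(logits, t)
    print(t, [f"{p:.2f}" for p in probs])
# At low temperature the obvious idea dominates almost completely;
# at high temperature the left-field option gets a real share.
# Variety for brainstorming, at the cost of predictability.
```

That's the whole trade-off in miniature: the same mechanism that makes it good at throwing out varied ideas guarantees it will sometimes pick a low-probability (wrong) continuation.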
LLMs are directed and controlled by prompts. Some are input by the user: "What are the current poll shares of the main UK political parties" etc. Many are provided by the AI owners/developers and are invisible to ordinary users. These provide "guardrails", e.g. "Don't give bomb-making instructions". Others provide behavioural guidance, e.g. "Be nice and polite to users".
The last kind of prompt can encourage an AI to provide false information to avoid disappointing the user. Hence "hallucinations" and incorrect info in an effort to please.
The solution is for the user to prompt "Say you don't know unless you are certain". I find this substantially reduces incorrect info and made-up stories. They are not malicious (yet). They are only trying to please. They are still children.
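The layering described above, with hidden developer instructions sitting in front of whatever the user types, is usually just an ordered message list. The sketch below follows the common chat-API convention of role-tagged messages; the exact field names and behaviour vary by provider, so treat it as a shape, not a spec:

```python
# Hidden "system" instructions are prepended to every conversation;
# the user only ever types the last entry.
hidden_instructions = (
    "Be polite. Refuse dangerous requests. "
    "Say you don't know unless you are certain."
)

def build_messages(user_question):
    """Assemble the full prompt list the model actually sees."""
    return [
        {"role": "system", "content": hidden_instructions},
        {"role": "user", "content": user_question},
    ]

msgs = build_messages("What are the current poll shares of the main UK parties?")
print(msgs[0]["role"])  # the model reads the system message first
```

The important bit is that the system message is still just text fed into the same probabilistic machinery, which is why (as the reply below says) it can steer behaviour but can't guarantee it.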
Guard rails do not prevent hallucinations. As I’ve explained you can ask it to be as careful as you want, it will still have an ability to go off. Because it’s not deterministic.
It sounds like you understand that. But a lot of people do not.
It's the same with humans. You ask them to be careful and stick to the facts but they still go off. See PB.
Needs a fellow Jew to take him out.
Improve the chance of global peace no end
It didn't work in Iran so no certainty it wouldn't strengthen Israel's resolve
You keep comparing it to humans.
We know the capital of France is Paris.
There is a non-zero chance if you ask ChatGPT/whatever that question, it will say New York.
As long as people understand that, go mad. But my feeling is a lot of people (not here) do not.
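The Paris/New York point can be made with numbers. Here's a made-up next-token distribution (invented scores, not anything extracted from a real model):

```python
import math

# Invented logits for candidate answers to "What is the capital of France?"
candidates = {"Paris": 9.0, "Lyon": 2.0, "New York": 0.0}

mx = max(candidates.values())
exps = {k: math.exp(v - mx) for k, v in candidates.items()}
total = sum(exps.values())
probs = {k: e / total for k, e in exps.items()}

for k, p in probs.items():
    print(f"{k}: {p:.6f}")
# "Paris" is overwhelmingly likely, but "New York" has a small,
# strictly non-zero probability: sample enough times and it will
# eventually come out.
```

A temperature-zero (greedy) decode would always pick "Paris" here, but even then a differently phrased question lands on a different distribution, so the guarantee never extends to "it will always be right".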
You would be amazed how few Jews are still sympathetic to Israel, which is extraordinary. Politicians who go chasing the 'Jewish' vote are looking in the wrong direction.
How can you post such a statement! PB contributors ignoring the facts; nonsense, it's just that some of us have different facts.
On top of this, are the current splits mostly ideological, or actually mostly just that half the MPs represent the government, and so are promoting policy within spending constraints, while the other half are free to promote policies without being at all responsible for making them work or for funding them by finding cuts or tax rises elsewhere?
If for some reason Streeting were outside the cabinet, would he perhaps be seen as centre-left rather than right, and portray himself differently? I suspect so, and similarly for people outside government now, including Rayner: when inside they were (or would be) more sympathetic to government policy.
I expect I'll still be voting LibDem in the next GE, unless the Tories come up with something good and not Reform-lite. Cleverly would help. In May I will probably vote LibDem for the county (the Tory administration needs an opposition) and Green for the District (the LibDem/localist administration likewise)
One of our local Tory councillors is going on about traffic improvements and even bus services, which I am deeply sceptical about as there is only a month to go and the Tory constituency is people who drive SUVs and can afford new EVs and wouldn't understand why some people need to catch a bus. Anyway I am in neither his District nor County ward so I don't have to decide whether to vote for him or not.
Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.
Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.
Researchers at Stony Brook University and Columbia Law School just proved it.
They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.
The models started reciting copyrighted books from memory.
Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.
Then it got worse.
The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.
It unlocked verbatim recall of books from over 30 completely unrelated authors.
One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.
Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.
Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.
That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.
Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.
When you have a machine that learns, how can you know what it does once it's started learning?
Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
That's good, but everybody makes mistakes.
It’s a probabilistic model. It will ALWAYS make mistakes.
Just like humans. Non-determinism is needed for creativity and innovation. That's how evolution and progress work.
It can be very useful for brainstorming and so on, I don’t disagree. In effect because its output is probabilistic it will provide a variety of things and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).
But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.
As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.
I just wish people would try and understand its limits and get away from the hype, that’s all.
LLMs are directed and controlled by prompts. Some are input by the user. "What are the current poll shares of the main UK political parties" etc. Many are provided by the AI owners/developers and are invisible to the ordinary users. They provide "guardrails" eg "Don't give bomb making instructions". Others provide behavioural guidance eg "Be nice and polite to users".
The last prompt can encourage an AI to provide false information to avoid disappointing the user. Hence "hallucinations" and incorrect info in an effort to please.
The solution is for the user to prompt "Say you don't know unless you are certain". I find this substantially reduces incorrect info and made-up stories. They are not malicious (yet). They are only trying to please. They are still children.
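The layering described above can be sketched using the widely used "messages" chat format. The system-prompt wording here is illustrative, not any vendor's actual guardrail text:

```python
# Sketch of prompt layering: developer-supplied system messages are
# invisible to the ordinary user; the user message is what they type.
def build_request(user_question: str, cautious: bool = False) -> list:
    messages = [
        {"role": "system",
         "content": "Be nice and polite to users. "
                    "Don't give bomb making instructions."},
        {"role": "user", "content": user_question},
    ]
    if cautious:
        # The user-level mitigation suggested above: tell the model to
        # admit ignorance rather than invent an answer to please you.
        messages.insert(1, {"role": "system",
                            "content": "Say you don't know unless you are certain."})
    return messages

req = build_request(
    "What are the current poll shares of the main UK political parties?",
    cautious=True)
print([m["role"] for m in req])  # -> ['system', 'system', 'user']
```

Note that all of these layers are just more text fed to the same probabilistic model — which is why they steer behaviour rather than guarantee it.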
Guard rails do not prevent hallucinations. As I’ve explained you can ask it to be as careful as you want, it will still have an ability to go off. Because it’s not deterministic.
It sounds like you understand that. But a lot of people do not.
It's the same with humans. You ask them to be careful and stick to the facts but they still go off. See PB.
You keep comparing it to humans.
We know the capital of France is Paris.
There is a non-zero chance if you ask ChatGPT/whatever that question, it will say New York.
As long as people understand that, go mad. But my feeling is a lot of people (not here) do not.
Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him so is this a rehearsal for the first day of Passover?
Why shouldn't he ?
I wouldn't care if he sat next to someone wearing a giraffe mask. I'm just curious to know why. I know of no Jewish Cabinet ministers sufficiently religious to wear a Yarmulka at work
Have you ever seen an afternoon quiz show?
My point is that people go to these things to get answers, assuming them to be correct.
They are mostly correct, just as human experts are mostly correct (they are already far more correct than average humans on things like capital cities). Anyone with the slightest bit of curiosity knows that LLMs aren't always correct.
There is a parameter in LLMs called temperature that can be set by the developer/user. It controls the randomness of the model's output by scaling the probabilities of the next possible words (tokens) before the model makes a final choice.
Low temperature (e.g., 0.1 to 0.3): the model heavily weights the most likely next word, making the output highly predictable, factual, and repetitive. At 0.0, the model will always choose the single highest-probability token, making it deterministic (and boring). It will always answer Paris as the capital of France.
High temperature (e.g., 0.7 to 1.0): the gap between the most likely word and the less likely ones shrinks, allowing the model to take "creative risks". This leads to more diverse, poetic, or surprising text, but also increases the chance of hallucinations or nonsensical rambling.
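A toy sketch of how temperature scaling works, with made-up logits for the "capital of France" example (real models operate over tens of thousands of tokens, but the mechanics are the same):

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Softmax over temperature-scaled logits, then sample one token.
    temperature == 0.0 degenerates to greedy (argmax) decoding."""
    if temperature == 0.0:
        return max(logits, key=logits.get)  # deterministic: always the top token
    m = max(l / temperature for l in logits.values())   # numerical stability
    weights = {t: math.exp(l / temperature - m) for t, l in logits.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # guard against float round-off

# Toy next-token logits for "The capital of France is ..."
logits = {"Paris": 6.0, "Lyon": 2.0, "New": 1.0}
print(sample_token(logits, 0.0, random.Random(42)))  # -> Paris, every time
```

At any temperature above zero the lower-probability tokens keep a non-zero chance of being drawn — which is the mechanical reason "it will say New York" can never be ruled out entirely.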
Some humans are very pedantic and boring. Others are creative and have flights of fancy. Their brains have different temperature parameters. You know who I mean.
He is UK PM and as such should not take sides
I think you are under a misapprehension. They are only worn by religious Jews or Jews in a holy place. I don't think a cabinet meeting could be described as either. As it happens I can only think of one male Jewish Cabinet Minister and he isn't religious
Keir Starmer holds a meeting with representatives of the Jewish community in Downing Street after four ambulances belonging to Hatzola, a Jewish community organisation, were set on fire in North London
Is it today? You haven't provided a link. In which case it is Shabbat, and in fact a special one as Passover starts next week
You think it odd that he's meeting with someone wearing a Yarmulka? Would you find it odd if he met with someone wearing a turban? Some sort of Islamic garb?
It won't make any difference to the mad Trump though will it ?
Depends if it's as big as expected
No matter how big
How do you remove him ?
Enthuse anti-Trump voters and ensure they vote in November.
He can, and most certainly will, do a whole lot of damage between now and then
Unquestionably, and wherever legal cases can be brought to slow him down they should be.
What is alarming is that the armed forces seem to be obeying him without any arguments. A mutiny in the Straits of Hormuz might, just might cause him to change course.
There are a lot of internet rumours about what exactly happened on the Ford.
F1: split a stake evenly between Hulk to beat Bortoleto at 2.8, and Norris to beat Piastri at 3.4. Largely based on suspecting car reliability is a bit shit.
On that note, I've hedged Hadjar (mentioned here at 5.25) to beat Verstappen, backing the Dutchman at 1.66.
I really do not care if Starmer wants to meet with Jews or any other religious group
They can be accurate with some degree of predictability. But they are not accurate, full stop.
You are curious. But the people shilling these things - like in my company - are not.
Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.
Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.
Researchers at Stony Brook University and Columbia Law School just proved it.
They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.
The models started reciting copyrighted books from memory.
Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.
Then it got worse.
The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.
It unlocked verbatim recall of books from over 30 completely unrelated authors.
One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.
Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.
Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.
That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.
Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.
When you have a machine that learns, how can you know what it does once it's started learning?
Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
That's good, but everybody makes mistakes.
It’s a probabilistic model. It will ALWAYS make mistakes.
Just like humans. Non- determinancy is needed for creativity and innovation. That's how evolution and progress works.
It can be very useful for brainstorming and so on, I don’t disagree. In effect because its output is probabilistic it will provide a variety of things and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).
But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.
As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.
I just wish people would try and understand its limits and get away from the hype, that’s all.
LLMs are directed and controlled by prompts. Some are input by the user. "What are the current poll shares of the main UK political parties" etc. Many are provided by the AI owners/developers and are invisible to the ordinary users. They provide "guardrails" eg "Don't give bomb making instructions". Others provide behavioural guidance eg "Be nice and polite to users".
The last prompt can encourage an AI to provide false information to avoid disappointing the user. Hence "hallucinations" and incorrect info in an effort to please.
The solution is for the user to prompt "Say you don't know unless you are certain". I find this substantially reduces incorrect info and made-up stories. They are not malicious (yet). They are only trying to please. They are still children.
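The layering of developer and user prompts described above is commonly expressed as a list of role-tagged messages sent to the model. A minimal sketch, with the wording assumed for illustration:

```python
# A typical chat-completion payload: hidden developer instructions
# ("system") are sent alongside the visible user question ("user").
messages = [
    # Invisible to the ordinary user: guardrails and behavioural guidance.
    {"role": "system",
     "content": ("Be polite to users. Refuse dangerous instructions. "
                 "Say you don't know unless you are certain.")},
    # What the user actually typed.
    {"role": "user",
     "content": "What are the current poll shares of the main UK political parties?"},
]

# The model receives both; the user only ever sees their own message.
print([m["role"] for m in messages])
```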
Guard rails do not prevent hallucinations. As I’ve explained you can ask it to be as careful as you want, it will still have an ability to go off. Because it’s not deterministic.
It sounds like you understand that. But a lot of people do not.
It's the same with humans. You ask them to be careful and stick to the facts but they still go off. See PB.
You keep comparing it to humans.
We know the capital of France is Paris.
There is a non-zero chance if you ask ChatGPT/whatever that question, it will say New York.
As long as people understand that, go mad. But my feeling is a lot of people (not here) do not.
There is a parameter in LLMs called temperature that can be set by the developer/user.
It controls the randomness of the model's output by scaling the probabilities of the next possible words (tokens) before the model makes a final choice.
Low Temperature (e.g., 0.1 to 0.3): The model heavily weights the most likely next word, making the output highly predictable, factual, and repetitive.
At 0.0, the model will always choose the single highest-probability token, making it deterministic (and boring). It will always answer Paris as the capital of France.
High Temperature (e.g., 0.7 to 1.0): The gap between the most likely word and the less likely ones shrinks, allowing the model to take "creative risks." This leads to more diverse, poetic, or surprising text, but also increases the chance of hallucinations or nonsensical rambling.
Some humans are very pedantic and boring. Others are creative and have flights of fancy. Their brains have different temperature parameters. You know who I mean.
I don’t believe that’s quite right. Even at 0.0 there’s no way to guarantee it will be correct. The likelihood is just relatively higher that it will choose the most likely next token. That token can still be incorrect.
If it had a poor/non-existent training set for a specialised topic on 0.0 it will still output nonsense.
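The temperature mechanism under discussion amounts to softmax sampling over scaled logits. A minimal toy sketch, with made-up logits for the next word (real models score tens of thousands of tokens):

```python
import math
import random

def sample_token(logits, temperature, rng=None):
    """Pick the next token from raw logits at a given temperature."""
    if temperature == 0.0:
        # Greedy decoding: always the single highest-scoring token.
        return max(logits, key=logits.get)
    rng = rng or random.Random()
    # Softmax over temperature-scaled logits.
    scaled = {tok: lg / temperature for tok, lg in logits.items()}
    top = max(scaled.values())
    exps = {tok: math.exp(v - top) for tok, v in scaled.items()}
    total = sum(exps.values())
    # Sample from the resulting distribution.
    r = rng.random()
    cum = 0.0
    for tok, e in exps.items():
        cum += e / total
        if r < cum:
            return tok
    return tok  # guard against float rounding

# Hypothetical logits for the word after "The capital of France is"
logits = {"Paris": 9.0, "Lyon": 4.0, "New York": 2.0}
print(sample_token(logits, 0.0))  # greedy: always "Paris"
```

Note this also illustrates the rebuttal above: temperature 0.0 only makes the choice deterministic; if the training left "New York" with the highest score, greedy decoding would deterministically output the wrong answer.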
Here is Gemini with temperature set at 1.0, musing about a few low-probability metaphors for human consciousness:
The Infinite Room of Echoing Pianos: Imagine a hall with ten thousand pianos. Every sensory input—a smell of rain, a distant siren—is a finger hitting a key. Consciousness isn’t the sound of one note; it’s the accidental harmony that emerges when the echoes overlap just right. We aren't the players; we are the shimmering "third tone" created by the interference of a billion vibrations.
The Scavenger's Stitched-Together Kite: Your mind is a massive, flimsy kite made of stolen scraps: a piece of your grandmother’s apron, a line from a cereal commercial, and the fear of falling. It’s held together by the "glue" of your ego. "You" are not the kite itself, but the erratic tension in the string that keeps the whole mess from disintegrating in the wind of the present moment.
The Bioluminescent Algae in a Ship’s Wake: Reality is a massive, dark ocean. The "self" is just the glowing disturbance left behind as the physical body plows through time. The light (thought) only exists because of the friction of the movement; it’s a beautiful, temporary chemical reaction to being shoved through the void, disappearing almost as soon as it's sparked.
The Sentient Dust Bunny in a Library: We are a collection of accidental debris—memories, genetic leftovers, and cultural lint—that has rolled around long enough to become "heavy." Eventually, this clump of static-charged nothingness starts reading the books it's resting on and mistakenly believes it wrote them.
I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.
Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
The sense of entitlement from Labour is extreme.
The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax, not just raising the threshold for it? Not that I have noticed. Yet Farage has proposed all of those policies.
And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
What utter rubbish. Reform lead the polls; if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation-style agenda than Kemi's more Farage-adjacent policies anyway.
Cleverly is a donkey with no charisma
As I am not a Tory I would be delighted if Kemi was replaced, especially by Cleverly, as it will reduce the number of seats they will win.
Kemi is currently projected to win about 50 to 70 seats, tactical anti Reform votes could hold more Tory seats
That’s also not true, the books themselves aren’t sitting inside ChatGPT. It’s been given a set of training data that contains these books (I assume) and it has been trained on the basis of them.
The information in those books is inside the LLM. All of it. And it can be retrieved, as has been demonstrated multiple times.
An amusing riff - write a prompt to get one LLM to tease out the large chunks of a given work from another LLM and reassemble them.
I don’t believe you are correct as I’ve said.
It has “learned” from a set of training data containing the books. And it has derived information from said data. But that’s not the same as just having the books.
It will still hallucinate and make up things that aren’t there. You cannot trust it to just blurt out a novel without very careful checking. Because it is probabilistic (something I wish the very worst rampers would understand), it CANNOT reproduce a novel accurately and consistently.
Yet people have demonstrated, repeatedly that you can get entire works back from it. Using automated stitching together of the big chunks of original text you can prod them to regurgitate.
You can, in fact, get one LLM to automate the process for you on another.
At which point it’s a philosophical question - the LLM training transforms the information into an internal representation. But the book(s) can be reconstituted.
The cherry on top is that they used pirate electronic versions
I don’t want to keep repeating this point but it’s a probabilistic model.
You cannot guarantee it will ever give you back correct information.
You stated it can give you back a whole novel. I’m not saying it cannot do that but that’s essentially the result of a fluke as opposed to actual knowledge. Because as I explained you can only ever say to some degree of PROBABILITY that what it provides is what we judge to be correct.
I know you and I disagree very strongly about AI but the facts are facts and we’d be good to understand those.
We may or may not disagree.
But if you can get back whole books with simple techniques, to 99%+ accuracy, isn’t that functionally equivalent to… getting the whole book?
To add to the fun - quite a few pirated books are OCR transcriptions from PDF. Complete with errors.
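The automated "stitching together" mentioned above amounts to merging regurgitated chunks via their overlaps. A toy greedy sketch of the overlap-merge idea (not the researchers' actual tooling), assuming adjacent chunks share some text:

```python
def stitch(chunks):
    """Greedily merge text chunks using the longest suffix/prefix overlap."""
    text = chunks[0]
    for nxt in chunks[1:]:
        best = 0
        # Find the longest prefix of the next chunk that the text ends with.
        for k in range(min(len(text), len(nxt)), 0, -1):
            if text.endswith(nxt[:k]):
                best = k
                break
        text += nxt[best:]  # append only the non-overlapping tail
    return text

parts = ["It was the best of times,",
         " of times, it was the worst",
         " the worst of times."]
print(stitch(parts))  # -> "It was the best of times, it was the worst of times."
```

In practice the extracted chunks would come from repeated prompting of the model, and the overlaps are what let you verify the spans really are contiguous original text.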
If you believe YouGov the race for first place is hotting up too. And if Reform are perceived as underperforming a bit in May it could get interesting. On current trends (which are not remotely predictions!) the Greens crossover with Reform sometime in 2026/7.
SFAICS the mixture of being in bed with Putin and Trump +, for proper headbangers, the prospect of an even loonier party to the far right of them should see them off before the GE in 2029.
What rough beast is slouching towards the electorate instead of course remains to be seen.
Most 2019 Boris voters are now voting Reform; over half of 2019 Corbyn voters are now voting Green. Given Boris won a landslide in 2019, on that basis Reform will stay ahead of the Greens.
Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.
Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.
Researchers at Stony Brook University and Columbia Law School just proved it.
They fine-tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.
The models started reciting copyrighted books from memory.
Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.
Then it got worse.
The researchers fine-tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.
It unlocked verbatim recall of books from over 30 completely unrelated authors.
One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.
Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.
Can someone invent a way to set the temperature of pb posters please.
My model, based on the EMA of recent polls, shows the following seats:
Have you ever seen an afternoon quiz show?
My point is that people go to these things to get answers, assuming them to be correct.
They are mostly correct, just as human experts are mostly correct (they are already far more correct than average humans on things like capital cities). Anyone with the slightest bit of curiosity knows that LLMs aren't always correct.
They can be accurate with some degree of predictability. But they are not accurate, full stop.
You are curious. But the people shilling these things - like in my company - are not.
Are you sure they are so uncurious as to be unaware? Far more likely I would imagine is they have a different tolerance level to mistakes than you do. Commercially an AI that is 95% accurate may well be better than a human who is 99% accurate, depending on the setting.
https://bsky.app/profile/huwcdavies.bsky.social/post/3mi2zbdzlls2e
Goodwin is asked if the book was peer-reviewed, doesn't answer the question but talks about top demographers.
Academic books generally would be checked in a way that Matt G's clearly wasn't. Which is fine if you accept that it's an ill-informed polemic. His trouble is that he's trying to keep the protective veneer of the academy without adhering to its professional standards.
In all honesty I just think he’s incredibly unlikeable. I despise Farage but he’s clearly got something. Goodwin does not.
Take someone like Scruton. If he wrote for the general public, the idea he'd get it "peer reviewed" is laughable.
I have good friends who are academics and writers, I know lots of publishers. This simply doesn't happen
Goodwin was doing the misleading and the tweet is relevant.
What is alarming is that the armed forces seem to be obeying him without any arguments. A mutiny in the Straits of Hormuz might, just might cause him to change course.
You just could not make this up in your wildest dreams
It's shit like that that causes ill feeling and people on here should know better.
https://www.youtube.com/watch?v=k9fuSOPjSXM
There is a large body of the electorate still inclined to small "c" Conservatism, if not currently to the Conservative Party. They could still come back if the policies of the Conservative Party are seen to be sensible and attuned to the needs of the country. Those needs could be markedly different by the time of the next election. I still think Labour the largest party is the sensible bet, just because of the vast number of seats they have to lose. But Labour is doing its very best to piss off the very most, so who knows.
But let’s get some perspective. That extreme party has hit 30% of the vote a few times in polls. Decent, no doubt about that. But the Tories have frequently polled in actual elections over 40%. And indeed during Covid they polled (not in a real election) 50%.
I think we run the risk of concluding what the majority wants is what Reform are selling. And I’m still totally unconvinced that’s true.
The Tories clearly ran out of steam towards the end. But there’s definitely room for a party that aims itself at the 25-50 age bracket. Badenoch has shown some signs of doing that.
I don't hate Americans either, as Americans. But I find the policies and actions of the US Government 'nearly' as abhorrent.
The Conservatives are probably a Farage heart attack away from Downing Street. Good job he has such a healthy lifestyle, eh?
Leaving power blocs and personalities entirely on one side, is there a recent account in existence of what the various factions in the Labour party (right, centrist, Blue, mainstream, soft left, left, hard left, softish left, social democrat, socialist, marxist, Blairite, pragmatic or whatever) actually believe and think by way of principle, underlying philosophy, policy, visions and goals? Is it possible to give such an account? I read the New Statesman (someone has to) and not even they seem to try very hard to elucidate.
Discussion seems to centre mostly around particular single issues - like bits of welfare reform, or little bits of cash to pensioners - and of course the personalities - Who Whom.
Is it possible to unravel this?
If an LLM regurgitates 460 words from a book without attribution then that’s a problem.
You have to squirm at the quality of Reform councillors we would have had elected if these elections were in mid-late 2025.
Unfortunately a component of my brother’s mental illness is a belief that he’s a fiction author, resulting in tens of thousands of pounds (not always his own) over the years spent on publishing his books.
I’d not be confident the 460 words would even be accurate. Apparently Matthew Goodwin was.
He said "you asked me if it was peer reviewed; I say it was sent to demographic experts and checked" (or something like that). Definitely misleading, but not the claim presented in the tweet.
Improve the chance of global peace no end
RFM: 24% (-1)
GRN: 20% (+1)
CON: 18% (+1)
LAB: 16% (=)
LDM: 12% (+1)
SNP: 3% (=)
Via @FindoutnowUK, 26-27 Mar.
Changes w/ 18 Mar.
Reform at their lowest with FoN in their weekly series since December 2024
If for some reason Streeting was outside the cabinet would he perhaps be seen as centre left rather than right and portray himself differently? I suspect so, and similarly for people outside government now, including Rayner, when inside they were/would be more sympathetic to govt policy.
One of our local Tory councillors is going on about traffic improvements and even bus services, which I am deeply sceptical about as there is only a month to go and the Tory constituency is people who drive SUVs and can afford new EVs and wouldn't understand why some people need to catch a bus. Anyway I am in neither his District nor County ward so I don't have to decide whether to vote for him or not.
Betting Post
F1: split a stake evenly between Hulk to beat Bortoleto at 2.8, and Norris to beat Piastri at 3.4. Largely based on suspecting car reliability is a bit shit.
On that note, I've hedged Hadjar (mentioned here at 5.25) to beat Verstappen, backing the Dutchman at 1.66.
https://morrisf1.blogspot.com/2026/03/japan-2026-pre-race.html
Enough! I must stop. I realise I'm doing a @Leon.
SNP 48
Con 50
Lab 55
Grn 74
LD 75
Ref 326!