
The end of the Keir show might be delayed – politicalbetting.com

13 Comments

  • FoxyFoxy Posts: 55,789

    Foxy said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    He claimed it was by demographic experts.
    Can you provide a link? I haven’t looked. But I’d be surprised that a trained academic (no matter how far he has strayed) would use a term of art like “peer reviewed” incorrectly
    Here is him claiming it:

    https://bsky.app/profile/huwcdavies.bsky.social/post/3mi2zbdzlls2e
  • OldKingColeOldKingCole Posts: 36,985

    Roger said:

    Roger said:

    Should be some good Demos today for anyone in the US.

    https://www.nokings.org/

    It won't make any difference to the mad Trump though will it ?
    Depends if it's as big as expected
    No matter how big

    How do you remove him ?
    Enthuse anti-Trump voters and ensure they vote in November.
  • StuartinromfordStuartinromford Posts: 22,010

    Foxy said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    He claimed it was by demographic experts.
    Can you provide a link? I haven’t looked. But I’d be surprised that a trained academic (no matter how far he has strayed) would use a term of art like “peer reviewed” incorrectly
    I think this is the relevant clip;

    https://bsky.app/profile/huwcdavies.bsky.social/post/3mi2zbdzlls2e

    Goodwin is asked if the book was peer-reviewed, doesn't answer the question but talks about top demographers.

    Academic books generally would be checked in a way that Matt G's clearly wasn't. Which is fine if you accept that it's an ill-informed polemic. His trouble is that he's trying to keep the protective veneer of the academy without adhering to its professional standards.
  • Big_G_NorthWalesBig_G_NorthWales Posts: 70,976

    Roger said:

    Roger said:

    Should be some good Demos today for anyone in the US.

    https://www.nokings.org/

    It won't make any difference to the mad Trump though will it ?
    Depends if it's as big as expected
    No matter how big

    How do you remove him ?
    Enthuse anti-Trump voters and ensure they vote in November.
    He can, and most certainly will, do a whole lot of damage between now and then
  • BatteryCorrectHorseBatteryCorrectHorse Posts: 5,597
    edited 11:04AM

    Foxy said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    He claimed it was by demographic experts.
    Can you provide a link? I haven’t looked. But I’d be surprised that a trained academic (no matter how far he has strayed) would use a term of art like “peer reviewed” incorrectly
    I think this is the relevant clip;

    https://bsky.app/profile/huwcdavies.bsky.social/post/3mi2zbdzlls2e

    Goodwin is asked if the book was peer-reviewed, doesn't answer the question but talks about top demographers.

    Academic books generally would be checked in a way that Matt G's clearly wasn't. Which is fine if you accept that it's an ill-informed polemic. His trouble is that he's trying to keep the protective veneer of the academy without adhering to its professional standards.
    He’s been doing that for years now. He’ll say something which is just echo chamber nonsense, like we’re all being replaced, which would normally be brushed off, but because he’s an “academic” he implies he’s got more knowledge.

    In all honesty I just think he’s incredibly unlikeable. I despise Farage but he’s clearly got something. Goodwin does not.
  • ydoethurydoethur Posts: 78,247

    Foxy said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    He claimed it was by demographic experts.
    Can you provide a link? I haven’t looked. But I’d be surprised that a trained academic (no matter how far he has strayed) would use a term of art like “peer reviewed” incorrectly
    It's common among conspiracy theorists in America - Richard Carrier springs to mind. He claimed one of his books on mathematical probability had been 'peer reviewed' when it turned out that he'd sent it himself to an expert for comment, then doctored the feedback to make it look like a peer review report he then sent to his publisher.
  • LeonLeon Posts: 67,368
    DougSeal said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    No but those by academics are. He was a lecturer/professor of political science at the University of Kent until very recently
    No they're not, not unless they are aimed at an academic audience

    Take someone like Scruton. If he wrote for the general public, the idea he'd get it "peer reviewed" is laughable
  • malcolmgmalcolmg Posts: 46,068
    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    Cleverly is a donkey with no charisma
  • LeonLeon Posts: 67,368

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    They generally are if they go through an academic publisher. Even at the end of the market where an (ex) academic is writing a "popular" book for the layman, you'd expect drafts to be read on a more informal basis by respected people in the field to avoid blunders.

    But Goodwin left serious academia years ago and is away with the far right fairies.
    This is palpable bullshit

    I have good friends who are academics and writers, I know lots of publishers. This simply doesn't happen
  • FF43FF43 Posts: 19,248
    edited 11:09AM

    eek said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    Would be useful to have had a fact checker to confirm that the facts he used as the basis of the book actually had something to back them up.
    Of course.

    My point is that the “gotcha” question in the tweet is designed to mislead - you wouldn’t expect a book to be peer reviewed so “Goodwin can’t even name a single peer reviewer” is a meaningless statement that gives the wrong impression to the unwary
    Yeah but it was Goodwin who used that gotcha, by claiming his book was peer reviewed but wasn't able to say who the reviewers are. He then fell back on saying peer reviews are anonymous, which isn't the case and goes against the whole point of peer reviews as public endorsement of the methods used.

    Goodwin was doing the misleading and the tweet is relevant.
  • Leon said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    They generally are if they go through an academic publisher. Even at the end of the market where an (ex) academic is writing a "popular" book for the layman, you'd expect drafts to be read on a more informal basis by respected people in the field to avoid blunders.

    But Goodwin left serious academia years ago and is away with the far right fairies.
    This is palpable bullshit

    I have good friends who are academics and writers, I know lots of publishers. This simply doesn't happen
    I've read your "journalism" and would concede that, among your good friends, it probably doesn't.
  • wooliedyedwooliedyed Posts: 16,970
    edited 11:09AM
    Andy_JS said:

    The contest for second place is hotting up.

    ElectionMaps polling average

    Ref 25.9%
    Lab 17.9%
    Con 17.7%
    Grn 17.4%
    LD 12.8%
    SNP 2.4%

    https://electionmaps.uk/polling/vi

    Reform slipping towards a level where vote efficiency disintegrates and the 3 'seconds' in a position where a 1% swing gets you 40 odd seats
  • OldKingColeOldKingCole Posts: 36,985

    Roger said:

    Roger said:

    Should be some good Demos today for anyone in the US.

    https://www.nokings.org/

    It won't make any difference to the mad Trump though will it ?
    Depends if it's as big as expected
    No matter how big

    How do you remove him ?
    Enthuse anti-Trump voters and ensure they vote in November.
    He can, and most certainly will, do a whole lot of damage between now and then
    Unquestionably, and wherever legal cases can be brought to slow him down they should be.

    What is alarming is that the armed forces seem to be obeying him without any arguments. A mutiny in the Strait of Hormuz might, just might, cause him to change course.
  • DougSealDougSeal Posts: 13,327
    Leon said:

    DougSeal said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    No but those by academics are. He was a lecturer/professor of political science at the University of Kent until very recently
    No they're not, not unless they are aimed at an academic audience

    Take someone like Scruton. If he wrote for the general public, the idea he'd get it "peer reviewed" is laughable
    Sorry, that I accept entirely. The context of my comment was his apparent confusion over what “peer reviewed” meant and his claim it had been so.
  • FairlieredFairliered Posts: 7,718
    malcolmg said:

    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    Cleverly is a donkey with no charisma
    As I am not a Tory I would be delighted if Kemi was replaced, especially by Cleverly, as it will reduce the number of seats they will win.
  • Big_G_NorthWalesBig_G_NorthWales Posts: 70,976

    Roger said:

    Roger said:

    Should be some good Demos today for anyone in the US.

    https://www.nokings.org/

    It won't make any difference to the mad Trump though will it ?
    Depends if it's as big as expected
    No matter how big

    How do you remove him ?
    Enthuse anti-Trump voters and ensure they vote in November.
    He can, and most certainly will, do a whole lot of damage between now and then
    Unquestionably, and wherever legal cases can be brought to slow him down they should be.

    What is alarming is that the armed forces seem to be obeying him without any arguments. A mutiny in the Strait of Hormuz might, just might, cause him to change course.
    I agree, I just do not know where his defence chiefs are in the strategy but then he apparently renamed the Strait of Hormuz as the Trump Strait yesterday

    You just could not make this up in your wildest dreams
  • Can’t see the point in replacing Badenoch. Iran War aside she’s doing just fine.
  • FairlieredFairliered Posts: 7,718

    Andy_JS said:

    The contest for second place is hotting up.

    ElectionMaps polling average

    Ref 25.9%
    Lab 17.9%
    Con 17.7%
    Grn 17.4%
    LD 12.8%
    SNP 2.4%

    https://electionmaps.uk/polling/vi

    Reform slipping towards a level where vote efficiency disintegrates and the 3 'seconds' in a position where a 1% swing gets you 40 odd seats
    2029 election night could be a long one, with multiple recounts.
  • RogerRoger Posts: 22,695

    Roger said:

    Cyclefree said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    And you're comfortable, are you, with a party whose activists describe Jews as "an abomination on the planet" without a word of condemnation from its leader, a party which has spent the best part of half a million quid fighting legal battles caused by its failure to comply with the Equality Act, a party which has welcomed anti-semites even Corbyn's Labour expelled?

    It is the worst type of populist party - led by a lying charlatan with few scruples and even fewer principles. Rather than being the opposite of Reform it is simply another version of the sort of stupid parties which are ruining this country's politics.
    Who said that?
    https://www.telegraph.co.uk/politics/2026/03/27/greens-for-palestine-antisemitic-whatsapp-messages/
    Well I've heard Jews say a Hell of a lot worse about Palestinians, including from the Israeli President. What's more, if the Telegraph wants to besmirch a political party they could at least use names and say what the emails actually said rather than their précis.

    It's shit like that that causes ill feeling and people on here should know better.
  • LeonLeon Posts: 67,368

    Leon said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    They generally are if they go through an academic publisher. Even at the end of the market where an (ex) academic is writing a "popular" book for the layman, you'd expect drafts to be read on a more informal basis by respected people in the field to avoid blunders.

    But Goodwin left serious academia years ago and is away with the far right fairies.
    This is palpable bullshit

    I have good friends who are academics and writers, I know lots of publishers. This simply doesn't happen
    I've read your "journalism" and would concede that, among your good friends, it probably doesn't.
    But I know what I'm talking about, and you don't. So there's that
  • malcolmgmalcolmg Posts: 46,068

    Taz said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Here in Birmingham, I'd say the main factors driving the exodus are the bin strike, the bankrupt council and dissatisfaction with Labour's policy on Israel/Gaza.
    How many seats do you think the Gaza indies will pick up in Brum ?
    What can Brummie councillors do about Gaza? I realise it's in a terrible, terrible mess, so there are some similarities, but that's about as far as it goes.

    And Good Morning everyone. Lovely sunshine here, but an unusually cold West wind.
    The clowns need to look closer to home
  • LeonLeon Posts: 67,368
    DougSeal said:

    Leon said:

    DougSeal said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    No but those by academics are. He was a lecturer/professor of political science at the University of Kent until very recently
    No they're not, not unless they are aimed at an academic audience

    Take someone like Scruton. If he wrote for the general public, the idea he'd get it "peer reviewed" is laughable
    Sorry, that I accept entirely. The context of my comment was his apparent confusion over what “peer reviewed” meant and his claim it had been so.
    IF Goodwin claimed that - there seems some confusion - then I'd be very surprised he bothered to get "peer review" for a book aimed at the public. No one else does. If he claimed that and he lied, then he's a damn fool
  • TheuniondivvieTheuniondivvie Posts: 47,241
    Leon said:

    DougSeal said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    No but those by academics are. He was a lecturer/professor of political science at the University of Kent until very recently
    No they're not, not unless they are aimed at an academic audience

    Take somenne like Scruton. If he wrote for the general public, the idea he'd get it "peer reviewed" is laughable
    Coincidentally, hasn’t Goodwin attributed a quote to Scruton that no one else seems to be able to find? Even without peer review, I'm not sure that particularly adds anything to his arguments.
  • LeonLeon Posts: 67,368
    Roger said:

    Roger said:

    Cyclefree said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    And you're comfortable, are you, with a party whose activists describe Jews as "an abomination on the planet" without a word of condemnation from its leader, a party which has spent the best part of half a million quid fighting legal battles caused by its failure to comply with the Equality Act, a party which has welcomed anti-semites even Corbyn's Labour expelled?

    It is the worst type of populist party - led by a lying charlatan with few scruples and even fewer principles. Rather than being the opposite of Reform it is simply another version of the sort of stupid parties which are ruining this country's politics.
    Who said that?
    https://www.telegraph.co.uk/politics/2026/03/27/greens-for-palestine-antisemitic-whatsapp-messages/
    Well I've heard Jews say a Hell of a lot worse about Palestinians, including from the Israeli President. What's more, if the Telegraph wants to besmirch a political party they could at least use names and say what the emails actually said rather than their précis.

    It's shit like that that causes ill feeling and people on here should know better.
    Are you saying @ThomasNashe shouldn't be allowed to post links to major newspaper articles, and he should "know better"?
  • wooliedyedwooliedyed Posts: 16,970

    Andy_JS said:

    The contest for second place is hotting up.

    ElectionMaps polling average

    Ref 25.9%
    Lab 17.9%
    Con 17.7%
    Grn 17.4%
    LD 12.8%
    SNP 2.4%

    https://electionmaps.uk/polling/vi

    Reform slipping towards a level where vote efficiency disintegrates and the 3 'seconds' in a position where a 1% swing gets you 40 odd seats
    2029 election night could be a long one, with multiple recounts.
    Yep. A hell of a lot of wins on 30.1% versus 30% type results across the country if this 5 party pattern persists
  • OldKingColeOldKingCole Posts: 36,985
    edited 11:20AM

    Roger said:

    Roger said:

    Should be some good Demos today for anyone in the US.

    https://www.nokings.org/

    It won't make any difference to the mad Trump though will it ?
    Depends if it's as big as expected
    No matter how big

    How do you remove him ?
    Enthuse anti-Trump voters and ensure they vote in November.
    He can, and most certainly will, do a whole lot of damage between now and then
    Unquestionably, and wherever legal cases can be brought to slow him down they should be.

    What is alarming is that the armed forces seem to be obeying him without any arguments. A mutiny in the Strait of Hormuz might, just might, cause him to change course.
    I agree, I just do not know where his defence chiefs are in the strategy but then he apparently renamed the Strait of Hormuz as the Trump Strait yesterday

    You just could not make this up in your wildest dreams
    Apparently, according to CNBC, only semi-facetiously. Just looked up Apple Maps; they have Gulf of Mexico, with Gulf of America in brackets and smaller letters underneath.
  • RogerRoger Posts: 22,695

    Roger said:

    Cyclefree said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    And you're comfortable, are you, with a party whose activists describe Jews as "an abomination on the planet" without a word of condemnation from its leader, a party which has spent the best part of half a million quid fighting legal battles caused by its failure to comply with the Equality Act, a party which has welcomed anti-semites even Corbyn's Labour expelled?

    It is the worst type of populist party - led by a lying charlatan with few scruples and even fewer principles. Rather than being the opposite of Reform it is simply another version of the sort of stupid parties which are ruining this country's politics.
    Who said that?
    https://www.telegraph.co.uk/politics/2026/03/27/greens-for-palestine-antisemitic-whatsapp-messages/
    How does this compare? Should we all hate Jews now? This happened two nights ago. This is a game for any number of players

    https://www.youtube.com/watch?v=k9fuSOPjSXM
  • MarqueeMarkMarqueeMark Posts: 58,938

    Can’t see the point in replacing Badenoch. Iran War aside she’s doing just fine.

    She's playing a poor hand reasonably well. She seems to be growing into the role. Can't see any of the other immediate options doing any better. Which might be damning with faint praise, but it is realistic.

    There is a large body of the electorate still inclined to small "c" conservatism, if not currently to the Conservative Party. They could still come back if the policies of the Conservative Party are seen to be sensible and attuned to the needs of the country. Those needs could be markedly different by the time of the next election. I still think Labour as the largest party is the sensible bet, just because of the vast number of seats they have to lose. But Labour is doing its very best to piss off the most people, so who knows.
  • MarqueeMarkMarqueeMark Posts: 58,938
    Foxy said:

    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    One Nation conservatives are not going to win hearts
    But neither is being a marginally less offensive version of Reform.

    The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.

    Reform are going to have a good round of elections in May.
    Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.
  • Can’t see the point in replacing Badenoch. Iran War aside she’s doing just fine.

    She's playing a poor hand reasonably well. She seems to be growing into the role. Can't see any of the other immediate options doing any better. Which might be damning with faint praise, but it is realistic.

    There is a large body of the electorate still inclined to small "c" conservatism, if not currently to the Conservative Party. They could still come back if the policies of the Conservative Party are seen to be sensible and attuned to the needs of the country. Those needs could be markedly different by the time of the next election. I still think Labour as the largest party is the sensible bet, just because of the vast number of seats they have to lose. But Labour is doing its very best to piss off the most people, so who knows.
    I think a lot of people are concluding, not even midway into a Parliament, that the rise of a party that is extreme (by the historical standards of who has won elections) is the settled outcome, and that the Tories must therefore become the same.

    But let’s get some perspective. That extreme party has hit 30% of the vote a few times in polls. Decent, no doubt about that. But the Tories have frequently polled in actual elections over 40%. And indeed during Covid they polled (not in a real election) 50%.

    I think we run the risk of concluding what the majority wants is what Reform are selling. And I’m still totally unconvinced that’s true.

    The Tories clearly ran out of steam towards the end. But there’s definitely room for a party that aims itself at the 25-50 age bracket. Badenoch has shown some signs of doing that.
  • algarkirk Posts: 16,918
    Andy_JS said:

    The contest for second place is hotting up.

    ElectionMaps polling average

    Ref 25.9%
    Lab 17.9%
    Con 17.7%
    Grn 17.4%
    LD 12.8%
    SNP 2.4%

    https://electionmaps.uk/polling/vi

    If you believe YouGov the race for first place is hotting up too. And if Reform are perceived as underperforming a bit in May it could get interesting. On current trends (which are not remotely predictions!) the Greens crossover with Reform sometime in 2026/7.

    SFAICS the mixture of being in bed with Putin and Trump +, for proper headbangers, the prospect of an even loonier party to the far right of them should see them off before the GE in 2029.

    What rough beast is slouching towards the electorate instead of course remains to be seen.
  • dixiedean Posts: 31,736

    Foxy said:

    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    One Nation conservatives are not going to win hearts
    But neither is being a marginally less offensive version of Reform.

    The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.

    Reform are going to have a good round of elections in May.
    Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.
    A couple of points is hundreds and hundreds of seats not dozens.
  • OldKingCole Posts: 36,985
    Roger said:

    Roger said:

    Cyclefree said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    And you're comfortable, are you, with a party whose activists describe Jews as "an abomination on the planet" without a word of condemnation from its leader, a party which has spent the best part of half a million quid fighting legal battles caused by its failure to comply with the Equality Act, a party which has welcomed anti-semites even Corbyn's Labour expelled?

    It is the worst type of populist party - led by a lying charlatan with few scruples and even fewer principles. Rather than being the opposite of Reform it is simply another version of the sort of stupid parties which are ruining this country's politics.
    Who said that?
    https://www.telegraph.co.uk/politics/2026/03/27/greens-for-palestine-antisemitic-whatsapp-messages/
    How does this compare? Should we all hate Jews now? This happened two nights ago. This is a game for any number of players.

    https://www.youtube.com/watch?v=k9fuSOPjSXM
    I definitely do not hate Jews as Jews. I do find the present policies of the State of Israel abhorrent, to say the least.
    I don't hate Americans either, as Americans. But I find the policies and actions of the US Government 'nearly' as abhorrent.
  • AnneJGP Posts: 5,043

    Roger said:

    Should be some good Demos today for anyone in the US.

    https://www.nokings.org/

    It won't make any difference to the mad Trump though will it ?
    He might be pleased, he likes to have big crowds out for him.
  • MarqueeMark Posts: 58,938

    Roger said:

    Liz Truss has been on quite a journey! If anyone wonders why the Tories are tanking look no further! The News Agents take you on a trip to the darkest recesses of Liz Truss's imagination and it's not a pretty sight.....

    https://www.youtube.com/watch?v=t3Y_ozT_p3g

    Liz Truss has nothing to do with the Tories.
    The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
    They certainly were shite under Liz Truss and were given a good and deserved kicking. Whether the voters still think they are shite or are prepared to revisit them is to be seen.

    The Conservatives are probably a Farage heart attack away from Downing Street. Good job he has such a healthy lifestyle, eh?
  • Theuniondivvie Posts: 47,241
    edited 11:33AM

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    They generally are if they go through an academic publisher. Even at the end of the market where an (ex) academic is writing a "popular" book for the layman, you'd expect drafts to be read on a more informal basis by respected people in the field to avoid blunders.

    But Goodwin left serious academia years ago and is away with the far right fairies.
    Goodwin self published his tome. Having unfortunately some experience with self publishing, the chances of the printer of his book suggesting any second thoughts on drafts are precisely zero.
  • Roger Posts: 22,695
    Leon said:

    Roger said:

    Roger said:

    Cyclefree said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    And you're comfortable, are you, with a party whose activists describe Jews as "an abomination on the planet" without a word of condemnation from its leader, a party which has spent the best part of half a million quid fighting legal battles caused by its failure to comply with the Equality Act, a party which has welcomed anti-semites even Corbyn's Labour expelled?

    It is the worst type of populist party - led by a lying charlatan with few scruples and even fewer principles. Rather than being the opposite of Reform it is simply another version of the sort of stupid parties which are ruining this country's politics.
    Who said that?
    https://www.telegraph.co.uk/politics/2026/03/27/greens-for-palestine-antisemitic-whatsapp-messages/
    Well I've heard Jews say a Hell of a lot worse about Palestinians, including from the Israeli President. What's more, if the Telegraph wants to besmirch a political party they could at least use names and say what the emails actually said rather than their précis.

    It's shit like that that causes ill feeling and people on here should know better.
    Are you saying @ThomasNashe shouldn't be allowed to post links to major newspaper articles, and he should "know better"?
    https://www.youtube.com/watch?v=k9fuSOPjSXM
  • Are we seriously saying that 20% of the Tory vote base has always been extreme and wanted to vote for somebody else, or is it just that the Tories were perceived to have failed in what they set out to do?
  • wooliedyed Posts: 16,970
    dixiedean said:

    Foxy said:

    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    One Nation conservatives are not going to win hearts
    But neither is being a marginally less offensive version of Reform.

    The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.

    Reform are going to have a good round of elections in May.
    Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.
    A couple of points is hundreds and hundreds of seats not dozens.
    Yep. If their NEV gets down towards 25% they will be on the wrong side of hundreds of close races
  • Daveyboy1961 Posts: 5,409

    Roger said:

    Liz Truss has been on quite a journey! If anyone wonders why the Tories are tanking look no further! The News Agents take you on a trip to the darkest recesses of Liz Truss's imagination and it's not a pretty sight.....

    https://www.youtube.com/watch?v=t3Y_ozT_p3g

    Liz Truss has nothing to do with the Tories.
    The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
    They certainly were shite under Liz Truss and were given a good and deserved kicking. Whether the voters still think they are shite or are prepared to revisit them is to be seen.

    The Conservatives are probably a Farage heart attack away from Downing Street. Good job he has such a healthy lifestyle, eh?
    Who to choose for a heart attack first, Trump or Farage? It's difficult to choose. I wonder if there are odds available.
  • wooliedyed Posts: 16,970
    algarkirk said:

    Andy_JS said:

    The contest for second place is hotting up.

    ElectionMaps polling average

    Ref 25.9%
    Lab 17.9%
    Con 17.7%
    Grn 17.4%
    LD 12.8%
    SNP 2.4%

    https://electionmaps.uk/polling/vi

    If you believe YouGov the race for first place is hotting up too. And if Reform are perceived as underperforming a bit in May it could get interesting. On current trends (which are not remotely predictions!) the Greens crossover with Reform sometime in 2026/7.

    SFAICS the mixture of being in bed with Putin and Trump +, for proper headbangers, the prospect of an even loonier party to the far right of them should see them off before the GE in 2029.

    What rough beast is slouching towards the electorate instead of course remains to be seen.
    The one big issue the Greens may find in the race for first is that a swathe of their surge appears to be amongst those least likely to vote in May: the young and habitual non-voters. They'd be better off if this were a GE driving up turnout. It might take the edge off an otherwise breakthrough-type night.
  • OldKingCole Posts: 36,985

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    They generally are if they go through an academic publisher. Even at the end of the market where an (ex) academic is writing a "popular" book for the layman, you'd expect drafts to be read on a more informal basis by respected people in the field to avoid blunders.

    But Goodwin left serious academia years ago and is away with the far right fairies.
    Goodwin self-published his tome. Having unfortunately some experience with self-publishing, the chances of the printer of his book suggesting any second thoughts on drafts are precisely zero.
    In my (admittedly limited) experience, self-publishers fall into two categories: those who've written a (possibly rather bad) book and can't get it accepted by a conventional publisher, and those who feel, rightly or wrongly, that they have the knowledge and expertise to ignore the mainstream.
  • algarkirk Posts: 16,918
    Question for Nick Palmer and others.

    Leaving power blocs and personalities entirely on one side, is there a recent account in existence of what the various factions in the Labour party (right, centrist, Blue, mainstream, soft left, left, hard left, softish left, social democrat, socialist, marxist, Blairite, pragmatic or whatever) actually believe and think by way of principle, underlying philosophy, policy, visions and goals? Is it possible to give such an account? I read the New Statesman (someone has to) and not even they seem to try very hard to elucidate.

    Discussion seems to centre mostly around particular single issues - like bits of welfare reform, or little bits of cash to pensioners - and of course the personalities - Who Whom.

    Is it possible to unravel this?
  • MarqueeMark Posts: 58,938

    Roger said:

    Liz Truss has been on quite a journey! If anyone wonders why the Tories are tanking look no further! The News Agents take you on a trip to the darkest recesses of Liz Truss's imagination and it's not a pretty sight.....

    https://www.youtube.com/watch?v=t3Y_ozT_p3g

    Liz Truss has nothing to do with the Tories.
    The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
    They certainly were shite under Liz Truss and were given a good and deserved kicking. Whether the voters still think they are shite or are prepared to revisit them is to be seen.

    The Conservatives are probably a Farage heart attack away from Downing Street. Good job he has such a healthy lifestyle, eh?
    Who to choose for a heart attack first, Trump or Farage? It's difficult to choose. I wonder if there are odds available.
    Farage is succeeded by Tice, Trump by Vance...
  • StillWaters Posts: 12,961

    Barnesian said:

    AnneJGP said:

    AnneJGP said:

    Nigelb said:

    This will get the copyright lawyers excited.

    Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.

    Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

    Researchers at Stony Brook University and Columbia Law School just proved it.

    They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.

    The models started reciting copyrighted books from memory.

    Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.

    Then it got worse.

    The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.

    It unlocked verbatim recall of books from over 30 completely unrelated authors.

    One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.

    Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

    Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.

    That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.

    Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.

    This paper says that is a lie. The books are still inside. And researchers just pulled them out.

    https://x.com/heynavtoor/status/2037638554374099409

    When you have a machine that learns, how can you know what it does once it's started learning?
    Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
    That's good, but everybody makes mistakes.
    It’s a probabilistic model. It will ALWAYS make mistakes.
    Just like humans.
    Non-determinism is needed for creativity and innovation.
    That's how evolution and progress works.
    It can be very useful for brainstorming and so on, I don’t disagree. In effect because its output is probabilistic it will provide a variety of things and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).

    But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.

    As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.

    I just wish people would try and understand its limits and get away from the hype, that’s all.
    That’s not the issue.

    If an LLM regurgitates 460 words from a book without attribution then that’s a problem.
  • Barnesian Posts: 9,840
    edited 11:46AM

    Barnesian said:

    AnneJGP said:

    AnneJGP said:

    Nigelb said:

    This will get the copyright lawyers excited.

    Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.

    Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

    Researchers at Stony Brook University and Columbia Law School just proved it.

    They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.

    The models started reciting copyrighted books from memory.

    Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.

    Then it got worse.

    The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.

    It unlocked verbatim recall of books from over 30 completely unrelated authors.

    One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.

    Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

    Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.

    That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.

    Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.

    This paper says that is a lie. The books are still inside. And researchers just pulled them out.

    https://x.com/heynavtoor/status/2037638554374099409

    When you have a machine that learns, how can you know what it does once it's started learning?
    Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
    That's good, but everybody makes mistakes.
    It’s a probabilistic model. It will ALWAYS make mistakes.
    Just like humans.
    Non-determinism is needed for creativity and innovation.
    That's how evolution and progress works.
    It can be very useful for brainstorming and so on, I don’t disagree. In effect because its output is probabilistic it will provide a variety of things and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).

    But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.

    As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.

    I just wish people would try and understand its limits and get away from the hype, that’s all.
    LLMs are directed and controlled by prompts.
    Some are input by the user. "What are the current poll shares of the main UK political parties" etc.
    Many are provided by the AI owners/developers and are invisible to the ordinary users.
    They provide "guardrails" eg "Don't give bomb making instructions".
    Others provide behavioural guidance eg "Be nice and polite to users".

    The last prompt can encourage an AI to provide false information to avoid disappointing the user.
    Hence "hallucinations" and incorrect info in an effort to please.

    The solution is for the user to prompt "Say you don't know unless you are certain".
    I find this substantially reduces incorrect info and made up stories.
    They are not malicious (yet). They are only trying to please. They are still children.
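    The approach described above can be sketched in a few lines. This is a minimal illustration using the chat-style role/content message convention common to current LLM APIs; the function name and message wording here are illustrative, not any particular provider's API, and the actual request/response call would depend on the vendor's SDK.

    ```python
    # Sketch: prepend a system-level instruction so the model is steered
    # towards "I don't know" rather than a confident guess. The exact API
    # call that consumes these messages varies by provider.

    def build_messages(user_question: str) -> list[dict]:
        """Assemble a chat request with an anti-hallucination system prompt."""
        system_prompt = (
            "Say you don't know unless you are certain. "
            "Do not invent facts, quotes, or citations."
        )
        return [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_question},
        ]

    # The system message rides ahead of the user's question on every request,
    # so the guardrail applies regardless of what the user asks.
    messages = build_messages(
        "What are the current poll shares of the main UK political parties?"
    )
    ```

    The point of putting the instruction in the system slot rather than the user's own text is that it persists across every turn of the conversation, which is how the invisible developer-side "guardrail" prompts mentioned above work.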
  • MarqueeMark Posts: 58,938
    edited 11:47AM
    dixiedean said:

    Foxy said:

    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    One Nation conservatives are not going to win hearts
    But neither is being a marginally less offensive version of Reform.

    The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.

    Reform are going to have a good round of elections in May.
    Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.
    A couple of points is hundreds and hundreds of seats not dozens.
    They are certainly hundreds and hundreds of seats below where they were six months ago.

    You have to squirm at the quality of Reform councillors we would have had elected had these elections been held in mid-late 2025.
  • ydoethur Posts: 78,247
    edited 11:48AM

    Roger said:

    Liz Truss has been on quite a journey! If anyone wonders why the Tories are tanking look no further! The News Agents take you on a trip to the darkest recesses of Liz Truss's imagination and it's not a pretty sight.....

    https://www.youtube.com/watch?v=t3Y_ozT_p3g

    Liz Truss has nothing to do with the Tories.
    The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
    They certainly were shite under Liz Truss and were given a good and deserved kicking. Whether the voters still think they are shite or are prepared to revisit them is to be seen.

    The Conservatives are probably a Farage heart attack away from Downing Street. Good job he has such a healthy lifestyle, eh?
    Who to choose for a heart attack first, Trump or Farage? It's difficult to choose. I wonder if there are odds available.
    I'd say Farage is a better bet. At least there's a chance he has a heart.
  • RochdalePioneers Posts: 31,870
    Facebook Business Suite was created by sadists
  • StillWaters Posts: 12,961
    Foxy said:

    Foxy said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    He claimed it was by demographic experts.
    Can you provide a link? I haven’t looked. But I’d be surprised that a trained academic (no matter how far he has strayed) would use a term of art like “peer reviewed” incorrectly
    Here is him claiming it:

    https://bsky.app/profile/huwcdavies.bsky.social/post/3mi2zbdzlls2e
    He doesn’t *quite* make the claim.
  • Theuniondivvie Posts: 47,241
    edited 11:58AM

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    They generally are if they go through an academic publisher. Even at the end of the market where an (ex) academic is writing a "popular" book for the layman, you'd expect drafts to be read on a more informal basis by respected people in the field to avoid blunders.

    But Goodwin left serious academia years ago and is away with the far right fairies.
    Goodwin self-published his tome. Having unfortunately some experience with self-publishing, the chances of the printer of his book suggesting any second thoughts on drafts are precisely zero.
    In my (admittedly limited) experience, self-publishers fall into two categories: those who've written a (possibly rather bad) book and can't get it accepted by a conventional publisher, and those who feel, rightly or wrongly, that they have the knowledge and expertise to ignore the mainstream.
    I think there are more successes in self-publishing than there used to be, in fact Goodwin will no doubt get a few sales because people want to believe what he writes, made up or not. S.p. is also a lot cheaper than it used to be.
    Unfortunately a component of my brother’s mental illness is a belief that he’s a fiction author, resulting in tens of thousands of pounds (not always his own) over the years spent on publishing his books.
  • JohnLilburneJohnLilburne Posts: 8,070

    dixiedean said:

    Foxy said:

    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    One Nation conservatives are not going to win hearts
    But neither is being a marginally less offensive version of Reform.

    The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.

    Reform are going to have a good round of elections in May.
    Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.
    A couple of points is hundreds and hundreds of seats not dozens.
    Yep. If their NEV gets down towards 25% they will be on the wrong side of hundreds of close races
    I suspect they might do better in local elections than the national polls, as voters might see them as a free hit. Likewise I might vote Green but I certainly wouldn't in a GE.
  • OldKingColeOldKingCole Posts: 36,985

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    They generally are if they go through an academic publisher. Even at the end of the market where an (ex) academic is writing a "popular" book for the layman, you'd expect drafts to be read on a more informal basis by respected people in the field to avoid blunders.

    But Goodwin left serious academia years ago and is away with the far right fairies.
    Goodwin self-published his tome. Having, unfortunately, some experience of self-publishing, I'd say the chances of the printer of his book suggesting any second thoughts on drafts are precisely zero.
    In my (admittedly limited) experience, self-publishers fall into two categories: those who've written a (possibly rather bad) book and can't get it accepted by a conventional publisher, and those who feel, rightly or wrongly, that they have the knowledge and expertise to ignore the mainstream.
    I think there are more successes in self-publishing than there used to be; in fact, Goodwin will no doubt get a few sales because people want to believe what he writes, made up or not. Self-publishing is also a lot cheaper than it used to be.
    Unfortunately a component of my brother’s mental illness is a belief that he’s a fiction author, resulting in tens of thousands of pounds (not always his own) over the years spent on publishing his books.
    Very sorry to read your third sentence. Sympathies; must be a strain on the family.
  • Barnesian said:

    Barnesian said:

    AnneJGP said:

    AnneJGP said:

    Nigelb said:

    This will get the copyright lawyers excited.

    Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.

    Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

    Researchers at Stony Brook University and Columbia Law School just proved it.

    They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.

    The models started reciting copyrighted books from memory.

    Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.

    Then it got worse.

    The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.

    It unlocked verbatim recall of books from over 30 completely unrelated authors.

    One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.

    Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

    Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.

    That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.

    Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.

    This paper says that is a lie. The books are still inside. And researchers just pulled them out.

    https://x.com/heynavtoor/status/2037638554374099409

    When you have a machine that learns, how can you know what it does once it's started learning?
    Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
    That's good, but everybody makes mistakes.
    It’s a probabilistic model. It will ALWAYS make mistakes.
    Just like humans.
    Non-determinacy is needed for creativity and innovation.
    That's how evolution and progress work.
    It can be very useful for brainstorming and so on, I don’t disagree. In effect because its output is probabilistic it will provide a variety of things and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).

    But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.

    As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.

    I just wish people would try and understand its limits and get away from the hype, that’s all.
    LLMs are directed and controlled by prompts.
    Some are input by the user. "What are the current poll shares of the main UK political parties" etc.
    Many are provided by the AI owners/developers and are invisible to the ordinary users.
    They provide "guardrails" eg "Don't give bomb making instructions".
    Others provide behavioural guidance eg "Be nice and polite to users".

    The last prompt can encourage an AI to provide false information to avoid disappointing the user.
    Hence "hallucinations" and incorrect info in an effort to please.

    The solution is for the user to prompt "Say you don't know unless you are certain".
    I find this substantially reduces incorrect info and made up stories.
    They are not malicious (yet). They are only trying to please. They are still children.
    Guard rails do not prevent hallucinations. As I’ve explained you can ask it to be as careful as you want, it will still have an ability to go off. Because it’s not deterministic.

    It sounds like you understand that. But a lot of people do not.
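The non-determinism point above can be made concrete with a toy sketch. This is my own illustration, not any real model's decoder: the three-token vocabulary, the logit values and the temperatures are invented for the example. At temperature zero (greedy decoding) the same input always yields the same token; with temperature sampling, the same input yields a spread of tokens.

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick one token index from raw scores ("logits").
    temperature == 0 means greedy argmax (fully deterministic);
    higher temperatures flatten the distribution and add variety."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax with temperature scaling (subtract max for numerical stability).
    scaled = [score / temperature for score in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    # Draw from the resulting categorical distribution.
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(exps) - 1

logits = [2.0, 1.5, 0.5]               # toy scores for three candidate tokens
rng = random.Random(42)
greedy = sample_token(logits, 0, rng)  # greedy decoding: always token 0
sampled = {sample_token(logits, 1.5, rng) for _ in range(200)}  # several distinct tokens
```

Guardrail prompts steer which continuations are likely, but as long as the decoder samples rather than always taking the argmax, the probability of an unwanted continuation never quite reaches zero.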
  • Barnesian said:

    AnneJGP said:

    AnneJGP said:

    Nigelb said:

    This will get the copyright lawyers excited.

    Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.

    Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

    Researchers at Stony Brook University and Columbia Law School just proved it.

    They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.

    The models started reciting copyrighted books from memory.

    Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.

    Then it got worse.

    The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.

    It unlocked verbatim recall of books from over 30 completely unrelated authors.

    One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.

    Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

    Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.

    That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.

    Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.

    This paper says that is a lie. The books are still inside. And researchers just pulled them out.

    https://x.com/heynavtoor/status/2037638554374099409

    When you have a machine that learns, how can you know what it does once it's started learning?
    Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
    That's good, but everybody makes mistakes.
    It’s a probabilistic model. It will ALWAYS make mistakes.
    Just like humans.
    Non-determinacy is needed for creativity and innovation.
    That's how evolution and progress work.
    It can be very useful for brainstorming and so on, I don’t disagree. In effect because its output is probabilistic it will provide a variety of things and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).

    But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.

    As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.

    I just wish people would try and understand its limits and get away from the hype, that’s all.
    That’s not the issue.

    If an LLM regurgitates 460 words from a book without attribution then that’s a problem.
    That’s certainly a problem. It wasn’t the problem I was thinking of initially but it’s certainly one.

    I’d not be confident the 460 words would even be accurate. Apparently Matthew Goodwin was.
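The "unbroken span" figure quoted upthread is a measurable quantity. As a rough illustration (my own sketch, not the researchers' code, and the example strings are invented), the longest contiguous run of words two texts share verbatim can be found with a standard dynamic programme:

```python
def longest_verbatim_run(text_a, text_b):
    """Length, in words, of the longest contiguous word sequence
    shared verbatim by the two texts -- a rough analogue of the
    'unbroken span' metric quoted above."""
    a, b = text_a.split(), text_b.split()
    best = 0
    # Longest-common-substring DP over word positions, one row at a time.
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

# Hypothetical example: the shared run is "the cat sat on the mat" (6 words).
source = "the cat sat on the mat and looked at the moon"
output = "he said the cat sat on the mat quietly"
```

Presumably the reported spans of 460+ words are something like this metric, computed between a model's output and the original book text.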
  • Andy_JSAndy_JS Posts: 39,665
    algarkirk said:

    Andy_JS said:

    The contest for second place is hotting up.

    ElectionMaps polling average

    Ref 25.9%
    Lab 17.9%
    Con 17.7%
    Grn 17.4%
    LD 12.8%
    SNP 2.4%

    https://electionmaps.uk/polling/vi

    If you believe YouGov the race for first place is hotting up too. And if Reform are perceived as underperforming a bit in May it could get interesting. On current trends (which are not remotely predictions!) the Greens crossover with Reform sometime in 2026/7.

    SFAICS the mixture of being in bed with Putin and Trump +, for proper headbangers, the prospect of an even loonier party to the far right of them should see them off before the GE in 2029.

    What rough beast is slouching towards the electorate instead of course remains to be seen.
    I don't believe YouGov over and above what the average of all the other polls are showing.
  • TheuniondivvieTheuniondivvie Posts: 47,241

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    They generally are if they go through an academic publisher. Even at the end of the market where an (ex) academic is writing a "popular" book for the layman, you'd expect drafts to be read on a more informal basis by respected people in the field to avoid blunders.

    But Goodwin left serious academia years ago and is away with the far right fairies.
    Goodwin self-published his tome. Having, unfortunately, some experience of self-publishing, I'd say the chances of the printer of his book suggesting any second thoughts on drafts are precisely zero.
    In my (admittedly limited) experience, self-publishers fall into two categories: those who've written a (possibly rather bad) book and can't get it accepted by a conventional publisher, and those who feel, rightly or wrongly, that they have the knowledge and expertise to ignore the mainstream.
    I think there are more successes in self-publishing than there used to be; in fact, Goodwin will no doubt get a few sales because people want to believe what he writes, made up or not. Self-publishing is also a lot cheaper than it used to be.
    Unfortunately a component of my brother’s mental illness is a belief that he’s a fiction author, resulting in tens of thousands of pounds (not always his own) over the years spent on publishing his books.
    Very sorry to read your third sentence. Sympathies; must be a strain on the family.
    Thanks, appreciated.
  • OldKingColeOldKingCole Posts: 36,985

    dixiedean said:

    Foxy said:

    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    One Nation conservatives are not going to win hearts
    But neither is being a marginally less offensive version of Reform.

    The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.

    Reform are going to have a good round of elections in May.
    Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.
    A couple of points is hundreds and hundreds of seats not dozens.
    Yep. If their NEV gets down towards 25% they will be on the wrong side of hundreds of close races
    I suspect they might do better in local elections than the national polls, as voters might see them as a free hit. Likewise I might vote Green but I certainly wouldn't in a GE.
    I've posted before that I will probably vote Green in the forthcoming County Council elections, because I know and like the candidate. I will probably vote tactically in the next general election, though, assuming I'm still around.
  • RogerRoger Posts: 22,695
    Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him so is this a rehearsal for the first day of Passover?
  • wooliedyedwooliedyed Posts: 16,970

    dixiedean said:

    Foxy said:

    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    One Nation conservatives are not going to win hearts
    But neither is being a marginally less offensive version of Reform.

    The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.

    Reform are going to have a good round of elections in May.
    Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.
    A couple of points is hundreds and hundreds of seats not dozens.
    Yep. If their NEV gets down towards 25% they will be on the wrong side of hundreds of close races
    I suspect they might do better in local elections than the national polls, as voters might see them as a free hit. Likewise I might vote Green but I certainly wouldn't in a GE.
    Maybe. Maybe. Can't see it in London (Reform-wise), which is a third of the seats up; I also think the relative incompetence shown in last year's councils will focus minds. I don't see them doing better than (and probably rather worse than) local by-elections since 2025, which really are a 'free hit'.
  • NickPalmerNickPalmer Posts: 21,996
    algarkirk said:

    Question for Nick Palmer and others.

    Leaving power blocs and personalities entirely on one side, is there a recent account in existence of what the various factions in the Labour party (right, centrist, Blue, mainstream, soft left, left, hard left, softish left, social democrat, socialist, marxist, Blairite, pragmatic or whatever), actually believe and think by way of principle, underlying philosophy, policy, visions and goal? Is it possible to give such an account? I read the New Statesman (someone has to) and not even they seem to try very hard to elucidate.

    Discussion seems to centre mostly around particular single issues - like bits of welfare reform, or little bits of cash to pensioners - and of course the personalities - Who Whom.

    Is it possible to unravel this?

    I'd be very attracted to a party with competing visions, but the problem with Labour is that the competing groups don't seem to go for that - they are all about giving more weight to this or that specific policy, as you say. It's the main reason why I'm drifting away.
  • StillWatersStillWaters Posts: 12,961
    FF43 said:

    eek said:

    https://x.com/i_ammukhtar/status/2037808586626080886

    Matt GPT got absolutely cooked

    He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.

    He couldn't name a single person who peer-reviewed his book.

    Books aren’t typically peer reviewed
    Would be useful to have had a fact checker to confirm that the facts he used as the basis of the book actually had something to back them up.
    Of course.

    My point is that the “gotcha” question in the tweet is designed to mislead - you wouldn’t expect a book to be peer reviewed so “Goodwin can’t even name a single peer reviewer” is a meaningless statement that gives the wrong impression to the unwary
    Yeah but it was Goodwin who used that gotcha, by claiming his book was peer reviewed but wasn't able to say who the reviewers are. He then fell back on saying peer reviews are anonymous, which isn't the case and goes against the whole point of peer reviews as public endorsement of the methods used.

    Goodwin was doing the misleading and the tweet is relevant.
    He didn’t quite claim that though.

    He said, "You asked me if it was peer reviewed. I say it was sent to demographic experts and checked" (or something like that). Definitely misleading, but not the claim presented in the tweet.
  • Brixian59Brixian59 Posts: 1,703

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    As a committed pro-semite (I'm technically Jewish and I was brought up on accounts of the horrors of 30s Germany and the necessity of Israel), I've really had enough of Netanyahu and current Israeli policy, and that doesn't make me an anti-semite. Obviously burning Jewish ambulances is both wrong and stupid, but I don't think that being critical of Israeli policy qualifies at all.
    Typical succinct comment from @NickPalmer that many should take on board
    Needs a fellow Jew to take him out.

    Improve the chance of global peace no end
  • wooliedyedwooliedyed Posts: 16,970
    Westminster Voting Intention:

    RFM: 24% (-1)
    GRN: 20% (+1)
    CON: 18% (+1)
    LAB: 16% (=)
    LDM: 12% (+1)
    SNP: 3% (=)

    Via @FindoutnowUK, 26-27 Mar.
    Changes w/ 18 Mar.

    Reform at their lowest with FoN in their weekly series since December 2024
  • BarnesianBarnesian Posts: 9,840

    Barnesian said:

    Barnesian said:

    AnneJGP said:

    AnneJGP said:

    Nigelb said:

    This will get the copyright lawyers excited.

    Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.

    Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

    Researchers at Stony Brook University and Columbia Law School just proved it.

    They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.

    The models started reciting copyrighted books from memory.

    Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.

    Then it got worse.

    The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.

    It unlocked verbatim recall of books from over 30 completely unrelated authors.

    One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.

    Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

    Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.

    That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.

    Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.

    This paper says that is a lie. The books are still inside. And researchers just pulled them out.

    https://x.com/heynavtoor/status/2037638554374099409

    When you have a machine that learns, how can you know what it does once it's started learning?
    Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
    That's good, but everybody makes mistakes.
    It’s a probabilistic model. It will ALWAYS make mistakes.
    Just like humans.
    Non-determinacy is needed for creativity and innovation.
    That's how evolution and progress work.
    It can be very useful for brainstorming and so on, I don’t disagree. In effect because its output is probabilistic it will provide a variety of things and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).

    But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.

    As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.

    I just wish people would try and understand its limits and get away from the hype, that’s all.
    LLMs are directed and controlled by prompts.
    Some are input by the user. "What are the current poll shares of the main UK political parties" etc.
    Many are provided by the AI owners/developers and are invisible to the ordinary users.
    They provide "guardrails" eg "Don't give bomb making instructions".
    Others provide behavioural guidance eg "Be nice and polite to users".

    The last prompt can encourage an AI to provide false information to avoid disappointing the user.
    Hence "hallucinations" and incorrect info in an effort to please.

    The solution is for the user to prompt "Say you don't know unless you are certain".
    I find this substantially reduces incorrect info and made up stories.
    They are not malicious (yet). They are only trying to please. They are still children.
    Guard rails do not prevent hallucinations. As I’ve explained you can ask it to be as careful as you want, it will still have an ability to go off. Because it’s not deterministic.

    It sounds like you understand that. But a lot of people do not.
    It's the same with humans.
    You ask them to be careful and stick to the facts but they still go off.
    See PB.
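The layered-prompt setup Barnesian describes can be sketched as a chat-messages list, the convention used by APIs such as OpenAI's. This makes no network call and the wording is invented for the example; it only shows how the developer's invisible guardrail and behaviour instructions are stacked ahead of the visible user prompt:

```python
def build_messages(user_question):
    """Assemble the message list an LLM API would receive.
    The system message holds developer-supplied instructions the
    ordinary user never sees; the user message is the visible prompt."""
    return [
        # Invisible to the ordinary user: guardrails and behavioural guidance.
        {"role": "system", "content": (
            "Be nice and polite to users. Refuse harmful requests. "
            "Say you don't know unless you are certain."  # anti-hallucination nudge
        )},
        # The visible user prompt.
        {"role": "user", "content": user_question},
    ]

msgs = build_messages("What are the current poll shares of the main UK political parties?")
```

The system message is exactly where the "say you don't know unless you are certain" nudge would go; as the exchange above notes, it reduces confabulation but cannot eliminate it.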
  • Big_G_NorthWalesBig_G_NorthWales Posts: 70,976
    Brixian59 said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    As a committed pro-semite (I'm technically Jewish and I was brought up on accounts of the horrors of 30s Germany and the necessity of Israel), I've really had enough of Netanyahu and current Israeli policy, and that doesn't make me an anti-semite. Obviously burning Jewish ambulances is both wrong and stupid, but I don't think that being critical of Israeli policy qualifies at all.
    Typical succinct comment from @NickPalmer that many should take on board
    Needs a fellow Jew to take him out.

    Improve the chance of global peace no end
    It didn't work in Iran so no certainty it wouldn't strengthen Israel's resolve

  • Barnesian said:

    Barnesian said:

    Barnesian said:

    AnneJGP said:

    AnneJGP said:

    Nigelb said:

    This will get the copyright lawyers excited.

    Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.

    Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

    Researchers at Stony Brook University and Columbia Law School just proved it.

    They fine-tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.

    The models started reciting copyrighted books from memory.

    Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.

    Then it got worse.

    The researchers fine-tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.

    It unlocked verbatim recall of books from over 30 completely unrelated authors.

    One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine-tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.

    Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

    Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.

    That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.

    Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.

    This paper says that is a lie. The books are still inside. And researchers just pulled them out.

    https://x.com/heynavtoor/status/2037638554374099409
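    The claims above rest on measuring verbatim overlap between a model's output and the original book. A minimal sketch of one way to measure it, the length of the longest unbroken word-for-word span shared by two texts (a simplified stand-in for the paper's methodology, not the researchers' actual code):

    ```python
    def longest_verbatim_span(generated, source):
        """Length, in words, of the longest unbroken word sequence that
        appears in both texts: longest-common-substring over word tokens."""
        a, b = generated.split(), source.split()
        best = 0
        prev = [0] * (len(b) + 1)  # common-suffix lengths for the previous row
        for i in range(1, len(a) + 1):
            curr = [0] * (len(b) + 1)
            for j in range(1, len(b) + 1):
                if a[i - 1] == b[j - 1]:
                    curr[j] = prev[j - 1] + 1
                    best = max(best, curr[j])
            prev = curr
        return best
    ```

    A paraphrase scores low on this measure; an unbroken span of 460+ words, as reported, is hard to explain as anything other than memorisation.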

    When you have a machine that learns, how can you know what it does once it's started learning?
    Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
    That's good, but everybody makes mistakes.
    It’s a probabilistic model. It will ALWAYS make mistakes.
    Just like humans.
    Non-determinacy is needed for creativity and innovation.
    That's how evolution and progress work.
    It can be very useful for brainstorming and so on, I don’t disagree. In effect, because its output is probabilistic, it will provide a variety of things, and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).

    But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.

    As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.

    I just wish people would try and understand its limits and get away from the hype, that’s all.
    LLMs are directed and controlled by prompts.
    Some are input by the user. "What are the current poll shares of the main UK political parties" etc.
    Many are provided by the AI owners/developers and are invisible to the ordinary users.
    They provide "guardrails" eg "Don't give bomb making instructions".
    Others provide behavioural guidance eg "Be nice and polite to users".

    The last prompt can encourage an AI to provide false information to avoid disappointing the user.
    Hence "hallucinations" and incorrect info in an effort to please.

    The solution is for the user to prompt "Say you don't know unless you are certain".
    I find this substantially reduces incorrect info and made-up stories.
    They are not malicious (yet). They are only trying to please. They are still children.
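    The layering just described, developer-set instructions the user never sees plus the user's own question, typically amounts to nothing more than an ordered list of role-tagged messages sent with each request. A schematic sketch (the roles follow the common chat-API convention; the hidden prompt wording is illustrative, not any vendor's actual text):

    ```python
    def build_messages(user_prompt, extra_instruction=None):
        """Assemble the message list a chat-style LLM call receives.
        The 'system' entries come from the developer and are invisible
        to the ordinary user; only user_prompt is the user's own text."""
        messages = [
            {"role": "system", "content": "Don't give bomb making instructions."},  # guardrail
            {"role": "system", "content": "Be nice and polite to users."},          # behavioural guidance
        ]
        if extra_instruction:
            # A user-level mitigation, e.g. "Say you don't know unless you are certain."
            messages.append({"role": "system", "content": extra_instruction})
        messages.append({"role": "user", "content": user_prompt})
        return messages

    msgs = build_messages(
        "What are the current poll shares of the main UK political parties?",
        extra_instruction="Say you don't know unless you are certain.",
    )
    ```

    All of these are just more text conditioning a probabilistic model, which is why they reduce, rather than eliminate, made-up answers.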
    Guard rails do not prevent hallucinations. As I’ve explained you can ask it to be as careful as you want, it will still have an ability to go off. Because it’s not deterministic.

    It sounds like you understand that. But a lot of people do not.
    It's the same with humans.
    You ask them to be careful and stick to the facts but they still go off.
    See PB.
    You keep comparing it to humans.

    We know the capital of France is Paris.

    There is a non-zero chance if you ask ChatGPT/whatever that question, it will say New York.

    As long as people understand that, go mad. But my feeling is a lot of people (not here) do not.
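    The "non-zero chance" point is literal: a language model picks each token by sampling from a probability distribution, so any continuation with non-zero weight can come out. A toy sketch (the probabilities are invented for illustration, not taken from any real model):

    ```python
    import random

    def sample_next_token(probs, temperature, rng):
        """Pick a next token from a probability distribution.
        At temperature ~0 this is greedy and deterministic; above that,
        every token with non-zero probability can be emitted."""
        if temperature <= 1e-6:
            return max(probs, key=probs.get)  # always the most likely token
        # Rescale: higher temperature flattens the distribution, lower sharpens it
        weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
        r = rng.random() * sum(weights.values())
        for tok, w in weights.items():
            r -= w
            if r <= 0:
                return tok
        return tok  # guard against floating-point rounding

    # Hypothetical next-token distribution after "The capital of France is":
    probs = {"Paris": 0.95, "Lyon": 0.03, "New York": 0.02}
    rng = random.Random(0)
    samples = [sample_next_token(probs, 1.0, rng) for _ in range(1000)]
    wrong = len(samples) - samples.count("Paris")
    # "Paris" dominates, but the wrong answers never reach exactly zero.
    ```

    Setting the temperature to zero makes the choice greedy and repeatable, but that only hides the underlying distribution; it doesn't make the most likely answer the true one.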
  • Big_G_NorthWales Posts: 70,976
    Roger said:

    Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him so is this a rehearsal for the first day of Passover?

    Why shouldn't he ?
  • Roger Posts: 22,695
    (snip)

    Typical succinct comment from @NickPalmer that many should take on board
    Needs a fellow Jew to take him out.

    Improve the chance of global peace no end
    You would be amazed how few Jews are still sympathetic to Israel, which is extraordinary. Politicians who go chasing the 'Jewish' vote are looking in the wrong direction
  • OldKingCole Posts: 36,985
    Barnesian said:

    (snip)

    It's the same with humans.
    You ask them to be careful and stick to the facts but they still go off.
    See PB.
    How can you post such a statement! PB contributors ignoring the facts; nonsense, it's just that some of us have different facts.
  • noneoftheabove Posts: 27,059
    algarkirk said:

    Question for Nick Palmer and others.

    Leaving power blocs and personalities entirely on one side, is there a recent account in existence of what the various factions in the Labour party (right, centrist, Blue, mainstream, soft left, left, hard left, softish left, social democrat, socialist, marxist, Blairite, pragmatic or whatever), actually believe and think by way of principle, underlying philosophy, policy, visions and goal? Is it possible to give such an account? I read the New Statesman (someone has to) and not even they seem to try very hard to elucidate.

    Discussion seems to centre mostly around particular single issues - like bits of welfare reform, or little bits of cash to pensioners - and of course the personalities - Who Whom.

    Is it possible to unravel this?

    On top of this: are the current splits mostly ideological, or mostly just that half the MPs represent the government and so must promote policy within spending constraints, while the other half are free to promote policies without being responsible for making them work or funding them through cuts or tax rises elsewhere?

    If for some reason Streeting was outside the cabinet would he perhaps be seen as centre left rather than right and portray himself differently? I suspect so, and similarly for people outside government now, including Rayner, when inside they were/would be more sympathetic to govt policy.
  • JohnLilburne Posts: 8,070

    dixiedean said:

    Foxy said:

    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    One Nation conservatives are not going to win hearts
    But neither is being a marginally less offensive version of Reform.

    The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.

    Reform are going to have a good round of elections in May.
    Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.
    A couple of points is hundreds and hundreds of seats not dozens.
    Yep. If their NEV gets down towards 25% they will be on the wrong side of hundreds of close races
    I suspect they might do better in local elections than the national polls, as voters might see them as a free hit. Likewise I might vote Green but I certainly wouldn't in a GE.
    I've posted before that I will probably vote Green in the forthcoming County Council elections, because I know and like the candidate. I will probably vote tactically in the next general election, though, assuming I'm still around.
    I expect I'll still be voting LibDem in the next GE, unless the Tories come up with something good and not Reform-lite. Cleverly would help. In May I will probably vote LibDem for the county (the Tory administration needs an opposition) and Green for the District (the LibDem/localist administration likewise)

    One of our local Tory councillors is going on about traffic improvements and even bus services, which I am deeply sceptical about as there is only a month to go and the Tory constituency is people who drive SUVs and can afford new EVs and wouldn't understand why some people need to catch a bus. Anyway I am in neither his District nor County ward so I don't have to decide whether to vote for him or not.
  • noneoftheabove Posts: 27,059

    (snip)

    You keep comparing it to humans.

    We know the capital of France is Paris.

    There is a non-zero chance if you ask ChatGPT/whatever that question, it will say New York.

    As long as people understand that, go mad. But my feeling is a lot of people (not here) do not.
    Have you ever seen an afternoon quiz show?
  • Roger Posts: 22,695

    Roger said:

    Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him so is this a rehearsal for the first day of Passover?

    Why shouldn't he ?
    I wouldn't care if he sat next to someone wearing a giraffe mask. I'm just curious to know why. I know of no Jewish Cabinet ministers sufficiently religious to wear a yarmulka at work
  • Barnesian said:

    (snip)

    Have you ever seen an afternoon quiz show?
    My point is that people go to these things to get answers, assuming them to be correct.
  • Big_G_NorthWales Posts: 70,976
    Roger said:

    Roger said:

    Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him so is this a rehearsal for the first day of Passover?

    Why shouldn't he ?
    I wouldn't care if he sat next to someone wearing a giraffe mask. I'm just curious to know why. I know of no Jewish Cabinet ministers sufficiently religious to wear a yarmulka at work
    He is UK PM and as such should not take sides
  • noneoftheabove Posts: 27,059

    Barnesian said:

    Barnesian said:

    Barnesian said:

    AnneJGP said:

    AnneJGP said:

    Nigelb said:

    This will get the copyright lawyers excited.

    Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.

    Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

    Researchers at Stony Brook University and Columbia Law School just proved it.

    They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.

    The models started reciting copyrighted books from memory.

    Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.

    Then it got worse.

    The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.

    It unlocked verbatim recall of books from over 30 completely unrelated authors.

    One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.

    Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

    Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.

    That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.

    Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.

    This paper says that is a lie. The books are still inside. And researchers just pulled them out.

    https://x.com/heynavtoor/status/2037638554374099409

    When you have a machine that learns, how can you know what it does once it's started learning?
    Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
    That's good, but everybody makes mistakes.
    It’s a probabilistic model. It will ALWAYS make mistakes.
    Just like humans.
    Non- determinancy is needed for creativity and innovation.
    That's how evolution and progress works.
    It can be very useful for brainstorming and so on, I don’t disagree. In effect because its output is probabilistic it will provide a variety of things and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).

    But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.

    As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.

    I just wish people would try and understand its limits and get away from the hype, that’s all.
    LLMs are directed and controlled by prompts.
    Some are input by the user. "What are the current poll shares of the main UK political parties" etc.
    Many are provided by the AI owners/developers and are invisible to the ordinary users.
    They provide "guardrails" eg "Don't give bomb making instructions".
    Others provide behavioural guidance eg "Be nice and polite to users".

    The last prompt can encourage an AI to provide false information to avoid disappointing the user.
    Hence "hallucinations" and incorrect info in an effort to please.

The solution is for the user to prompt "Say you don't know unless you are certain".
    I find this substantially reduces incorrect info and made up stories.
    They are not malicious (yet). They are only trying to please. They are still children.
    Guard rails do not prevent hallucinations. As I’ve explained you can ask it to be as careful as you want, it will still have an ability to go off. Because it’s not deterministic.

    It sounds like you understand that. But a lot of people do not.
    It's the same with humans.
    You ask them to be careful and stick to the facts but they still go off.
    See PB.
    You keep comparing it to humans.

    We know the capital of France is Paris.

    There is a non-zero chance if you ask ChatGPT/whatever that question, it will say New York.

    As long as people understand that, go mad. But my feeling is a lot of people (not here) do not.
    Have you ever seen an afternoon quiz show?
    My point is that people go to these things to get answers, assuming them to be correct.
    They are mostly correct, just as human experts are mostly correct (they are already far more correct than average humans on things like capital cities). Anyone with the slightest bit of curiosity knows that LLMs aren't always correct.
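The "say you don't know unless you are certain" prompt above can be thought of as an abstention rule. A toy sketch (the distributions and threshold are invented for illustration; real chat models don't expose these numbers to the user):

```python
# Toy sketch of the "say you don't know unless you are certain" instruction:
# abstain whenever the top answer's probability is below a threshold.
def answer(dist, threshold=0.8):
    """Return the most probable answer, or abstain when confidence is low."""
    best = max(dist, key=dist.get)
    return best if dist[best] >= threshold else "I don't know"

print(answer({"Paris": 0.97, "Lyon": 0.03}))            # "Paris"
print(answer({"1989": 0.5, "1990": 0.3, "1991": 0.2}))  # "I don't know"
# A confidently wrong distribution still slips through the threshold:
print(answer({"New York": 0.85, "Paris": 0.15}))        # "New York"
```

This is why such a prompt reduces, but cannot eliminate, made-up answers: abstention only helps when the model's uncertainty shows up in its own distribution.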
  • BarnesianBarnesian Posts: 9,840
    edited 12:36PM

    You keep comparing it to humans.

    We know the capital of France is Paris.

    There is a non-zero chance if you ask ChatGPT/whatever that question, it will say New York.

    As long as people understand that, go mad. But my feeling is a lot of people (not here) do not.
    There is a parameter in LLMs called temperature that can be set by the developer/user.
    It controls the randomness of the model's output by scaling the probabilities of the next possible words (tokens) before the model makes a final choice.
    Low Temperature (e.g., 0.1 to 0.3): The model heavily weights the most likely next word, making the output highly predictable, factual, and repetitive.
    At 0.0, the model will always choose the single highest-probability token, making it deterministic (and boring). It will always answer Paris as the capital of France.
    High Temperature (e.g., 0.7 to 1.0): The gap between the most likely word and the less likely ones shrinks, allowing the model to take "creative risks." This leads to more diverse, poetic, or surprising text, but also increases the chance of hallucinations or nonsensical rambling.

    Some humans are very pedantic and boring. Others are creative and have flights of fancy. Their brains have different temperature parameters.
    You know who I mean. :wink:
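The temperature mechanism described above can be sketched in a few lines: divide each logit by the temperature, then apply a softmax. The token names and logit values below are invented for illustration.

```python
import math

def sample_probs(logits, temperature):
    """Turn raw next-token logits into sampling probabilities at a given temperature."""
    if temperature == 0.0:
        # Greedy decoding: all probability mass on the single highest-logit token.
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the next token after "The capital of France is"
tokens = ["Paris", "Lyon", "New York"]
logits = [5.0, 2.0, 1.0]

for t in (0.0, 0.3, 1.0):
    print(t, [round(p, 3) for p in sample_probs(logits, t)])
```

At 0.0 the distribution collapses to [1.0, 0.0, 0.0] (always Paris); at 1.0 the tail tokens keep a real share of the probability mass, which is where the variety, and the occasional "New York", comes from.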
  • RogerRoger Posts: 22,695

    Roger said:

    Roger said:

    Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him so is this a rehearsal for the first day of Passover?

    Why shouldn't he ?
I wouldn't care if he sat next to someone wearing a giraffe mask. I'm just curious to know why. I know of no Jewish Cabinet ministers sufficiently religious to wear a Yarmulka at work
    He is UK PM and as such should not take sides
I think you are under a misapprehension. They are only worn by religious Jews or Jews in a holy place. I don't think a cabinet meeting could be described as either. As it happens I can only think of one male Jewish Cabinet Minister and he isn't religious
  • dixiedeandixiedean Posts: 31,736

    Westminster Voting Intention:

    RFM: 24% (-1)
    GRN: 20% (+1)
    CON: 18% (+1)
    LAB: 16% (=)
    LDM: 12% (+1)
    SNP: 3% (=)

    Via @FindoutnowUK, 26-27 Mar.
    Changes w/ 18 Mar.

    Reform at their lowest with FoN in their weekly series since December 2024

    Puts Labour sixth in seats.
  • TheScreamingEaglesTheScreamingEagles Posts: 127,126
    Roger said:

    Roger said:

    Roger said:

    Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him so is this a rehearsal for the first day of Passover?

    Why shouldn't he ?
I wouldn't care if he sat next to someone wearing a giraffe mask. I'm just curious to know why. I know of no Jewish Cabinet ministers sufficiently religious to wear a Yarmulka at work
    He is UK PM and as such should not take sides
I think you are under a misapprehension. They are only worn by religious Jews or Jews in a holy place. I don't think a cabinet meeting could be described as either. As it happens I can only think of one male Jewish Cabinet Minister and he isn't religious
    Keir Starmer holds a meeting with representatives of the Jewish community in Downing Street after four ambulances belonging to Hatzola, a Jewish community organisation, were set on fire in North London
  • JohnLilburneJohnLilburne Posts: 8,070
    Roger said:

    Roger said:

    Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him so is this a rehearsal for the first day of Passover?

    Why shouldn't he ?
I wouldn't care if he sat next to someone wearing a giraffe mask. I'm just curious to know why. I know of no Jewish Cabinet ministers sufficiently religious to wear a Yarmulka at work
    Is it today? You haven't provided a link. In which case it is Shabbat, and in fact a special one as Passover starts next week
  • FrankBoothFrankBooth Posts: 10,484
    Roger said:

    Roger said:

    Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him so is this a rehearsal for the first day of Passover?

    Why shouldn't he ?
I wouldn't care if he sat next to someone wearing a giraffe mask. I'm just curious to know why. I know of no Jewish Cabinet ministers sufficiently religious to wear a Yarmulka at work
    You think it odd that he's meeting with someone wearing a Yarmulka? Would you find it odd if he met with someone wearing a turban? Some sort of Islamic garb?
  • wooliedyedwooliedyed Posts: 16,970
    dixiedean said:


    Puts Labour sixth in seats.
And I don't see anything that drags the remaining core out on May 7th to vote. Very low teens NEV possible?
  • EabhalEabhal Posts: 13,828
    Dura_Ace said:


    Go back to your yurts, VW campers and treehouses AND PREPARE FOR GOVERNMENT.
    Hope he travels to the Palace on a cargo bike. The King would probably love that.
  • EabhalEabhal Posts: 13,828

    Roger said:

    Roger said:

    Should be some good Demos today for anyone in the US.

    https://www.nokings.org/

    It won't make any difference to the mad Trump though will it ?
    Depends if it's as big as expected
    No matter how big

    How do you remove him ?
    Enthuse anti-Trump voters and ensure they vote in November.
    He can, and most certainly will, do a whole lot of damage between now and then
    Unquestionably, and wherever legal cases can be brought to slow him down they should be.

    What is alarming is that the armed forces seem to be obeying him without any arguments. A mutiny in the Straits of Hormuz might, just might cause him to change course.
    There are a lot of internet rumours about what exactly happened on the Ford.
  • Morris_DancerMorris_Dancer Posts: 63,718
    Betting Post

    F1: split a stake evenly between Hulk to beat Bortoleto at 2.8, and Norris to beat Piastri at 3.4. Largely based on suspecting car reliability is a bit shit.

    On that note, I've hedged Hadjar (mentioned here at 5.25) to beat Verstappen, backing the Dutchman at 1.66.

    https://morrisf1.blogspot.com/2026/03/japan-2026-pre-race.html
  • Big_G_NorthWalesBig_G_NorthWales Posts: 70,976
    Roger said:

    Roger said:

    Roger said:

    Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him so is this a rehearsal for the first day of Passover?

    Why shouldn't he ?
I wouldn't care if he sat next to someone wearing a giraffe mask. I'm just curious to know why. I know of no Jewish Cabinet ministers sufficiently religious to wear a Yarmulka at work
    He is UK PM and as such should not take sides
I think you are under a misapprehension. They are only worn by religious Jews or Jews in a holy place. I don't think a cabinet meeting could be described as either. As it happens I can only think of one male Jewish Cabinet Minister and he isn't religious
    I really do not care if Starmer wants to meet with Jews or any other religious group
  • Barnesian said:

    Have you ever seen an afternoon quiz show?
    My point is that people go to these things to get answers, assuming them to be correct.
    They are mostly correct, just as human experts are mostly correct (they are already far more correct than average humans on things like capital cities). Anyone with the slightest bit of curiosity knows that LLMs aren't always correct.
They can be accurate with some degree of predictability. But they are not accurate, full stop.

You are curious. But the people shilling these things - like in my company - are not.
  • BatteryCorrectHorseBatteryCorrectHorse Posts: 5,597
    edited 12:49PM
    Barnesian said:

    There is a parameter in LLMs called temperature that can be set by the developer/user.
    It controls the randomness of the model's output by scaling the probabilities of the next possible words (tokens) before the model makes a final choice.
    Low Temperature (e.g., 0.1 to 0.3): The model heavily weights the most likely next word, making the output highly predictable, factual, and repetitive.
    At 0.0, the model will always choose the single highest-probability token, making it deterministic (and boring). It will always answer Paris as the capital of France.
    High Temperature (e.g., 0.7 to 1.0): The gap between the most likely word and the less likely ones shrinks, allowing the model to take "creative risks." This leads to more diverse, poetic, or surprising text, but also increases the chance of hallucinations or nonsensical rambling.

    Some humans are very pedantic and boring. Others are creative and have flights of fancy. Their brains have different temperature parameters.
    You know who I mean. :wink:
    I don’t believe that’s quite right. Even at 0.0 there’s no way to guarantee it will be correct. The likelihood is just relatively higher that it will choose the most likely next token. That token can still be incorrect.

If it has a poor or non-existent training set for a specialised topic, then even at 0.0 it will still output nonsense.
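The point that temperature 0.0 gives determinism but not correctness can be shown with a toy next-token table (all numbers invented): greedy decoding always returns the same answer, and that answer is only as good as what was learned.

```python
# Toy "model": next-token probabilities as learned from training data.
learned = {
    "The capital of France is": {"Paris": 0.90, "New York": 0.06, "Lyon": 0.04},
    # A topic the model barely saw: the most probable token is simply wrong.
    "The capital of Kiribati is": {"Suva": 0.45, "Tarawa": 0.30, "Honiara": 0.25},
}

def greedy(prompt):
    """Temperature-0 decoding: always pick the single most probable token."""
    dist = learned[prompt]
    return max(dist, key=dist.get)

print(greedy("The capital of France is"))    # always "Paris"
print(greedy("The capital of Kiribati is"))  # always "Suva": deterministic, and wrong (it's Tarawa)
```

Setting temperature to zero removes the randomness, not the errors baked into the learned distribution.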
  • BarnesianBarnesian Posts: 9,840
    Here is Gemini with temperature set at 1.0, musing about a few low-probability metaphors for human consciousness:

    The Infinite Room of Echoing Pianos: Imagine a hall with ten thousand pianos. Every sensory input—a smell of rain, a distant siren—is a finger hitting a key. Consciousness isn’t the sound of one note; it’s the accidental harmony that emerges when the echoes overlap just right. We aren't the players; we are the shimmering "third tone" created by the interference of a billion vibrations.

    The Scavenger's Stitched-Together Kite: Your mind is a massive, flimsy kite made of stolen scraps: a piece of your grandmother’s apron, a line from a cereal commercial, and the fear of falling. It’s held together by the "glue" of your ego. "You" are not the kite itself, but the erratic tension in the string that keeps the whole mess from disintegrating in the wind of the present moment.

    The Bioluminescent Algae in a Ship’s Wake: Reality is a massive, dark ocean. The "self" is just the glowing disturbance left behind as the physical body plows through time. The light (thought) only exists because of the friction of the movement; it’s a beautiful, temporary chemical reaction to being shoved through the void, disappearing almost as soon as it's sparked.

    The Sentient Dust Bunny in a Library: We are a collection of accidental debris—memories, genetic leftovers, and cultural lint—that has rolled around long enough to become "heavy." Eventually, this clump of static-charged nothingness starts reading the books it's resting on and mistakenly believes it wrote them.

    Enough! I must stop. I realise I'm doing a @Leon.
  • Anyway whatever you think about the pros and cons of AI, I’m very confident the bubble is going to burst before long. That can only be a good thing.
  • HYUFDHYUFD Posts: 135,041

    malcolmg said:

    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax not just raising the threshold for it? Not that I have noticed yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    Cleverly is a donkey with no charisma
    As I am not a Tory I would be delighted if Kemi was replaced, especially by Cleverly, as it will reduce the number of seats they will win.
    Kemi is currently projected to win about 50 to 70 seats, tactical anti Reform votes could hold more Tory seats
  • williamglennwilliamglenn Posts: 58,484
    Dura_Ace said:


    Go back to your yurts, VW campers and treehouses AND PREPARE FOR GOVERNMENT.
    Drain the Swampy.
  • MalmesburyMalmesbury Posts: 61,902

That’s also not true; the books themselves aren’t sitting inside ChatGPT. It’s been given a set of training data that contains these books (I assume) and it has been trained on the basis of them.

The information in those books is inside the LLM. All of it. And can be retrieved, as has been demonstrated multiple times.

    An amusing riff - write a prompt to get one LLM to tease out the large chunks of a given work from another LLM and reassemble them.
    I don’t believe you are correct as I’ve said.

    It has “learned” from a set of training data containing the books. And it has derived information from said data. But that’s not the same as just having the books.

    It will still hallucinate and make up things that aren’t there. You cannot trust it to just blurt out a novel without very careful checking. Because it is probabilistic (something I wish the very worst rampers would understand), it CANNOT represent a novel accurately and consistently.
    Yet people have demonstrated, repeatedly, that you can get entire works back from it, by automatically stitching together the big chunks of original text you can prod them into regurgitating.

    You can, in fact, get one LLM to automate the process for you on another.

    At which point it’s a philosophical question - the LLM training transforms the information into an internal representation. But the book(s) can be reconstituted.

    The cherry on top is that they used pirate electronic versions
    I don’t want to keep repeating this point but it’s a probabilistic model.

    You cannot guarantee it will ever give you back correct information.

    You stated it can give you back a whole novel. I’m not saying it cannot do that but that’s essentially the result of a fluke as opposed to actual knowledge. Because as I explained you can only ever say to some degree of PROBABILITY that what it provides is what we judge to be correct.

    I know you and I disagree very strongly about AI, but the facts are facts and we would do well to understand them.
    We may or may not disagree.

    But if you can get back whole books with simple techniques, to 99%+ accuracy, isn’t that functionally equivalent to… getting the whole book?

    To add to the fun - quite a few pirated books are OCR transcriptions from PDF. Complete with errors.
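    The "big unbroken verbatim spans" being argued over here can be measured mechanically. A minimal sketch (not the researchers' actual method) of the usual memorization signal: the longest word-for-word run shared between a source text and a model's output.

    ```python
    def longest_verbatim_span(source: str, output: str) -> int:
        """Length in words of the longest word-for-word run the two
        texts share -- a crude memorization signal of the kind the
        extraction studies report (unbroken spans of 460+ words)."""
        a, b = source.split(), output.split()
        best = 0
        prev = [0] * (len(b) + 1)
        # Dynamic-programming longest common substring over word tokens.
        for i in range(1, len(a) + 1):
            cur = [0] * (len(b) + 1)
            for j in range(1, len(b) + 1):
                if a[i - 1] == b[j - 1]:
                    cur[j] = prev[j - 1] + 1
                    if cur[j] > best:
                        best = cur[j]
            prev = cur
        return best
    ```

    Run against a novel and a model's "expanded plot summary", a span in the hundreds of words is hard to explain as anything other than memorization.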
  • HYUFD Posts: 135,041
    algarkirk said:

    Andy_JS said:

    The contest for second place is hotting up.

    ElectionMaps polling average

    Ref 25.9%
    Lab 17.9%
    Con 17.7%
    Grn 17.4%
    LD 12.8%
    SNP 2.4%

    https://electionmaps.uk/polling/vi

    If you believe YouGov the race for first place is hotting up too. And if Reform are perceived as underperforming a bit in May it could get interesting. On current trends (which are not remotely predictions!) the Greens crossover with Reform sometime in 2026/7.

    SFAICS the mixture of being in bed with Putin and Trump +, for proper headbangers, the prospect of an even loonier party to the far right of them should see them off before the GE in 2029.

    What rough beast is slouching towards the electorate instead of course remains to be seen.
    Most 2019 Boris voters are now voting Reform, and over half of 2019 Corbyn voters are now voting Green. Given Boris won a landslide in 2019, on that basis Reform will stay ahead of the Greens
  • noneoftheabove Posts: 27,059
    Barnesian said:

    Barnesian said:

    Barnesian said:

    Barnesian said:

    AnneJGP said:

    AnneJGP said:

    Nigelb said:

    This will get the copyright lawyers excited.

    Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.

    Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

    Researchers at Stony Brook University and Columbia Law School just proved it.

    They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.

    The models started reciting copyrighted books from memory.

    Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.

    Then it got worse.

    The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.

    It unlocked verbatim recall of books from over 30 completely unrelated authors.

    One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.

    Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

    Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.

    That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.

    Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.

    This paper says that is a lie. The books are still inside. And researchers just pulled them out.

    https://x.com/heynavtoor/status/2037638554374099409

    When you have a machine that learns, how can you know what it does once it's started learning?
    Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.
    That's good, but everybody makes mistakes.
    It’s a probabilistic model. It will ALWAYS make mistakes.
    Just like humans.
    Non-determinism is needed for creativity and innovation.
    That's how evolution and progress work.
    It can be very useful for brainstorming and so on, I don’t disagree. In effect because its output is probabilistic it will provide a variety of things and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).

    But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.

    As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.

    I just wish people would try and understand its limits and get away from the hype, that’s all.
    LLMs are directed and controlled by prompts.
    Some are input by the user. "What are the current poll shares of the main UK political parties" etc.
    Many are provided by the AI owners/developers and are invisible to the ordinary users.
    They provide "guardrails" eg "Don't give bomb making instructions".
    Others provide behavioural guidance eg "Be nice and polite to users".

    The last prompt can encourage an AI to provide false information to avoid disappointing the user.
    Hence "hallucinations" and incorrect info in an effort to please.

    The solution is for the user to prompt "Say you don't know unless you are certain".
    I find this substantially reduces incorrect info and made up stories.
    They are not malicious (yet). They are only trying to please. They are still children.
    Guard rails do not prevent hallucinations. As I’ve explained you can ask it to be as careful as you want, it will still have an ability to go off. Because it’s not deterministic.

    It sounds like you understand that. But a lot of people do not.
    It's the same with humans.
    You ask them to be careful and stick to the facts but they still go off.
    See PB.
    You keep comparing it to humans.

    We know the capital of France is Paris.

    There is a non-zero chance if you ask ChatGPT/whatever that question, it will say New York.

    As long as people understand that, go mad. But my feeling is a lot of people (not here) do not.
    There is a parameter in LLMs called temperature that can be set by the developer/user.
    It controls the randomness of the model's output by scaling the probabilities of the next possible words (tokens) before the model makes a final choice.
    Low Temperature (e.g., 0.1 to 0.3): The model heavily weights the most likely next word, making the output highly predictable, factual, and repetitive.
    At 0.0, the model will always choose the single highest-probability token, making it deterministic (and boring). It will always answer Paris as the capital of France.
    High Temperature (e.g., 0.7 to 1.0): The gap between the most likely word and the less likely ones shrinks, allowing the model to take "creative risks." This leads to more diverse, poetic, or surprising text, but also increases the chance of hallucinations or nonsensical rambling.

    Some humans are very pedantic and boring. Others are creative and have flights of fancy. Their brains have different temperature parameters.
    You know who I mean. :wink:
    Can someone invent a way to set the temperature of pb posters please.
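    The temperature mechanism described above can be sketched in a few lines. This is an illustrative toy (the token logits are made up, and real implementations also apply top-k/top-p cutoffs), but the scaling-then-softmax step is the same idea.

    ```python
    import math
    import random

    def sample_with_temperature(logits, temperature, rng=random):
        """Pick the next token from a {token: logit} dict.

        Logits are divided by the temperature before the softmax, so a
        low temperature sharpens the distribution and 0.0 is fully
        deterministic (always the top token), while a high temperature
        flattens it and lets unlikely tokens through.
        """
        if temperature == 0.0:
            return max(logits, key=logits.get)  # greedy decoding
        # Softmax over temperature-scaled logits (shifted for stability).
        m = max(l / temperature for l in logits.values())
        exps = {t: math.exp(l / temperature - m) for t, l in logits.items()}
        total = sum(exps.values())
        # Inverse-CDF sampling from the resulting distribution.
        r = rng.random() * total
        cum = 0.0
        for tok, e in exps.items():
            cum += e
            if r < cum:
                return tok
        return tok  # guard against float rounding at r == total

    logits = {"Paris": 9.0, "Lyon": 4.0, "New York": 2.0}
    print(sample_with_temperature(logits, 0.0))  # -> Paris
    ```

    At temperature 0.0 the capital of France is always Paris; at 1.0 there is a small but non-zero chance of New York, which is exactly the trade-off being discussed.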
  • Barnesian Posts: 9,840
    HYUFD said:

    malcolmg said:

    HYUFD said:

    HYUFD said:

    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping the family farm and family business tax completely, not just raising the threshold for it? Not that I have noticed. Yet Farage has proposed all of those policies
    And that is why you are a de facto Faragist hiding behind a pro Cleverly agenda
    What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage adjacent policies anyway
    Cleverly is a donkey with no charisma
    As I am not a Tory I would be delighted if Kemi was replaced, especially by Cleverly, as it will reduce the number of seats they will win.
    Kemi is currently projected to win about 50 to 70 seats, tactical anti Reform votes could hold more Tory seats
    My model, based on the EMA of recent polls, shows the following seats:

    SNP 48
    Con 50
    Lab 55
    Grn 74
    LD 75
    Ref 326!
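    The "EMA of recent polls" feeding that model is simple to compute. A minimal sketch, with illustrative shares and smoothing weight rather than Barnesian's actual parameters:

    ```python
    def ema(series, alpha=0.3):
        """Exponential moving average of a poll series, oldest first.

        Each new poll gets weight alpha; higher alpha reacts faster to
        movement but is noisier. alpha and the shares below are
        illustrative, not the model's actual inputs.
        """
        avg = series[0]
        for x in series[1:]:
            avg = alpha * x + (1 - alpha) * avg
        return avg

    print(round(ema([26.0, 25.0, 24.0]), 2))  # -> 25.19
    ```

    Note the smoothed figure lags the latest poll, which is the point: one outlier moves the average only a little.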
  • noneoftheabove Posts: 27,059

    You keep comparing it to humans.

    We know the capital of France is Paris.

    There is a non-zero chance if you ask ChatGPT/whatever that question, it will say New York.

    As long as people understand that, go mad. But my feeling is a lot of people (not here) do not.
    Have you ever seen an afternoon quiz show?
    My point is that people go to these things to get answers, assuming them to be correct.
    They are mostly correct, just as human experts are mostly correct (they are already far more correct than average humans on things like capital cities). Anyone with the slightest bit of curiosity knows that LLMs aren't always correct.
    They can be accurate with some degree of predictability. But they are not accurate, full stop.

    You are curious. But the people shilling these things - like in my company - are not.
    Are you sure they are so uncurious as to be unaware? Far more likely I would imagine is they have a different tolerance level to mistakes than you do. Commercially an AI that is 95% accurate may well be better than a human who is 99% accurate, depending on the setting.