
The end of the Keir show might be delayed – politicalbetting.com

System Posts: 13,027
edited 6:30AM in General

“The right of the party know none of their candidates can win, and the soft left are already getting everything they want, so why bother changing leader?”

Why Labour MPs now believe Starmer will survive the party’s looming election disaster.

https://t.co/akl4hH17oD

Read the full story here


Comments

  • ydoethur Posts: 78,247
    If there is no end of Keir show, does that make Labour a bunch of Keirrots?
  • MarqueeMark Posts: 58,938
    ydoethur said:

    If there is no end of Keir show, does that make Labour a bunch of Keirrots?

    Starmer's Labour are just U-turnips
  • isam Posts: 43,898
    ❤️ I love researching Hansard.

    💡 Just found this from Hartley Booth (Margaret Thatcher's successor as Finchley MP) in 1992.

    🗣️ 'Hon. Members will be grateful to hear that I have consulted the people of Finchley on the issue—

    🗣️ '[HON. MEMBERS: "What did she say?"]'



    https://x.com/expremiers/status/2037624447666889033?s=46&t=CW4pL-mMpTqsJXCdjW0Z6Q
  • TheScreamingEagles Posts: 127,125
    Max Verstappen knocked out in Q2.

    I am devastated.
  • MarqueeMark Posts: 58,938
    FPT: Another 1,300 Russian soldiers and 80 artillery pieces/MLRS not reporting for duty in Ukraine today.

    The totals will be 1.3m and 39,000 respectively at some point next week.
  • Stuartinromford Posts: 22,010
    Starmer is what he always was- horribly flawed as PM, but less flawed than the available alternatives (sorry Angela, sorry Andy, sorry Wes). So he stays until someone better emerges.

    Thinking about it, I wonder how much that was the thought process of Conservative MPs in the 2022 confidence vote; "Boris is obviously a disaster, but if we sack him it will be Sunak or Truss oh no the members will go for Truss..."

    Then again, it feels like a loooong time since a General Election threw up two genuinely viable, non-discredited, candidates for PM.
  • ydoethur Posts: 78,247

    Starmer is what he always was- horribly flawed as PM, but less flawed than the available alternatives (sorry Angela, sorry Andy, sorry Wes). So he stays until someone better emerges.

    Thinking about it, I wonder how much that was the thought process of Conservative MPs in the 2022 confidence vote; "Boris is obviously a disaster, but if we sack him it will be Sunak or Truss oh no the members will go for Truss..."

    Then again, it feels like a loooong time since a General Election threw up two genuinely viable, non-discredited, candidates for PM.

    Cameron and Miliband?

    But before that it might be Major and Kinnock.
  • Luckyguy1983 Posts: 34,508
    https://youtu.be/qSSorlVYP0o?si=ETfHtjph5Sv7UHxZ

    Interesting Liz Truss Show guest Colin Brazier on the decline of Sky News - one for BigG!
  • Battlebus Posts: 2,792
    A VONC in SKS would be a kier-fuffle
  • ydoethur Posts: 78,247
    If he loses, would there be tears for Keir’s?
  • TheScreamingEagles Posts: 127,125
    ydoethur said:

    If he loses, would there be tears for Keir’s?

    I’ve already used that one and the sum of all Keirs.

    Proud of this one the most.


    https://www1.politicalbetting.com/index.php/archives/2025/11/18/present-keirs-are-less-than-horrible-imaginings/
  • Stuartinromford Posts: 22,010
    ydoethur said:

    Starmer is what he always was- horribly flawed as PM, but less flawed than the available alternatives (sorry Angela, sorry Andy, sorry Wes). So he stays until someone better emerges.

    Thinking about it, I wonder how much that was the thought process of Conservative MPs in the 2022 confidence vote; "Boris is obviously a disaster, but if we sack him it will be Sunak or Truss oh no the members will go for Truss..."

    Then again, it feels like a loooong time since a General Election threw up two genuinely viable, non-discredited, candidates for PM.

    Cameron and Miliband?

    But before that it might be Major and Kinnock.
    I wasn't here for Blair-Hague, but I wonder if that was a mirror image of Cameron-Miliband. In both cases, the loser was sort of plausible, but only sort of. But yes, the losing side was neither broken by government (Brown, Sunak, Major '97) nor self-indulgent (Corbyn, or IDS had he not been dumped) nor knowingly there to lose with relative dignity (Howard).

    Blimey, that was just over a decade ago; feels like much longer. Good job we avoided all that chaos.
  • Stuartinromford Posts: 22,010

    ydoethur said:

    If he loses, would there be tears for Keir’s?

    I’ve already used that one and the sum of all Keirs.

    Proud of this one the most.


    https://www1.politicalbetting.com/index.php/archives/2025/11/18/present-keirs-are-less-than-horrible-imaginings/
    There's nothing to fear but Keir himself?
  • Nigelb Posts: 87,602
    ydoethur said:

    If there is no end of Keir show, does that make Labour a bunch of Keirrots?

    It makes Keir pierless.
  • Nigelb Posts: 87,602

    Max Verstappen knocked out in Q2.

    I am devastated.

    Will Red Bull be dropping their no.2 driver ?

    He's very expensive and complains a lot.
  • AnneJGP Posts: 5,043
    Can't speak for anyone apart from myself, but I've had enough of changing Prime Ministers at every hurdle.

    Good morning, everybody.
  • Nigelb Posts: 87,602
    ydoethur said:

    If he loses, would there be tears for Keir’s?

    Jeers for Keir ?
  • Nigelb Posts: 87,602
    There's nowt so Keir as folk ?
  • Taz Posts: 26,341

    https://youtu.be/qSSorlVYP0o?si=ETfHtjph5Sv7UHxZ

    Interesting Liz Truss Show guest Colin Brazier on the decline of Sky News - one for BigG!

    Remember the half hour daily ‘Climate Show’.

    I’m sure it sounded a good idea on paper !!
  • Taz Posts: 26,341
    From Keir to Eternity.
  • Taz Posts: 26,341
    Never apologise never explain part 94

    You’ll satisfy no one

    https://x.com/politlcsuk/status/2037629430915289400?s=61
  • Taz Posts: 26,341
    Nigelb said:

    ydoethur said:

    If he loses, would there be tears for Keir’s?

    Jeers for Keir ?
    Cheers for Keir.
  • ydoethur Posts: 78,247
    Nigelb said:

    Max Verstappen knocked out in Q2.

    I am devastated.

    Will Red Bull be dropping their no.2 driver ?

    He's very expensive and complains a lot.
    @TSE has always seen Verstappen as something of a number two.
  • Big_G_NorthWales Posts: 70,976

    https://youtu.be/qSSorlVYP0o?si=ETfHtjph5Sv7UHxZ

    Interesting Liz Truss Show guest Colin Brazier on the decline of Sky News - one for BigG!

    Good morning

    Sky News seems to be winning the awards

    https://www.skygroup.sky/en-gb/article/sky-news-secures-ninth-news-channel-of-the-year-award
  • ydoethur Posts: 78,247
    Taz said:

    Nigelb said:

    ydoethur said:

    If he loses, would there be tears for Keir’s?

    Jeers for Keir ?
    Cheers for Keir.
    Slainte for Angela?
  • Sunil_Prasannan Posts: 58,692
    Taz said:

    Nigelb said:

    ydoethur said:

    If he loses, would there be tears for Keir’s?

    Jeers for Keir ?
    Cheers for Keir.
    Keir and Present Danger.
  • Nigelb Posts: 87,602

    Taz said:

    Nigelb said:

    ydoethur said:

    If he loses, would there be tears for Keir’s?

    Jeers for Keir ?
    Cheers for Keir.
    Keir and Present Danger.
    We've had that as a headline already, I think?
  • wooliedyed Posts: 16,968
    https://x.com/i/status/2037679644455923781

    Morning all.
    So, by the established pattern, someone should report Morgan has been stolen next week
  • Sunil_Prasannan Posts: 58,692
    Nigelb said:

    Taz said:

    Nigelb said:

    ydoethur said:

    If he loses, would there be tears for Keir’s?

    Jeers for Keir ?
    Cheers for Keir.
    Keir and Present Danger.
    We've had that as a headline already, I think?
    No that was The Sum of All Keirs.
  • rkrkrk Posts: 9,182
    The soft left aren't getting everything they want. Not by a long shot.
  • Pulpstar Posts: 80,740
    Taz said:

    Never apologise never explain part 94

    You’ll satisfy no one

    https://x.com/politlcsuk/status/2037629430915289400?s=61

    What a wet wipe. Hopefully the bluenoses turn them over next match tbh lol
  • Nigelb Posts: 87,602
    This will get the copyright lawyers excited.

    Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.

    Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.

    Researchers at Stony Brook University and Columbia Law School just proved it.

    They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.

    The models started reciting copyrighted books from memory.

    Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.

    Then it got worse.

    The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.

    It unlocked verbatim recall of books from over 30 completely unrelated authors.

    One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.

    Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.

    Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.

    That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.

    Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.

    This paper says that is a lie. The books are still inside. And researchers just pulled them out.

    https://x.com/heynavtoor/status/2037638554374099409
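    For anyone wondering how a claim like "single unbroken spans exceeding 460 words" is measured: the standard approach is to find the longest run of words that model output shares verbatim with the source text. A minimal sketch in Python, with a hypothetical function name and toy texts of my own, not the paper's actual code:

    ```python
    def longest_verbatim_span(generated: str, source: str) -> int:
        """Length, in words, of the longest word sequence appearing
        verbatim in both texts (longest-common-substring DP at word level)."""
        a, b = generated.split(), source.split()
        best = 0
        # prev[j] = length of the common run ending at a[i-1], b[j-1]
        prev = [0] * (len(b) + 1)
        for i in range(1, len(a) + 1):
            curr = [0] * (len(b) + 1)
            for j in range(1, len(b) + 1):
                if a[i - 1] == b[j - 1]:
                    curr[j] = prev[j - 1] + 1
                    best = max(best, curr[j])
            prev = curr
        return best

    book = "it was the best of times it was the worst of times"
    output = "the model wrote it was the best of times again"
    print(longest_verbatim_span(output, book))  # → 6 ("it was the best of times")
    ```

    The paper's threshold would then just be `longest_verbatim_span(...) > 460` against each book; real pipelines tokenize more carefully than `split()`, but the measurement idea is the same.
    
    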
  • Pulpstar Posts: 80,740

    https://x.com/i/status/2037679644455923781

    Morning all.
    So, by the established pattern, someone should report Morgan has been stolen next week

    I lost a phone ages back; everything gets backed up to the cloud and was quickly restored when I bought my new phone
  • Roger Posts: 22,695
    Liz Truss has been on quite a journey! If anyone wonders why the Tories are tanking look no further! The News Agents take you on a trip to the darkest recesses of Liz Truss's imagination and it's not a pretty sight.....

    https://www.youtube.com/watch?v=t3Y_ozT_p3g

  • ydoethur Posts: 78,247
    edited 7:57AM
    Nigelb said:

    This will get the copyright lawyers excited. [snip: full post quoted upthread]

    https://x.com/heynavtoor/status/2037638554374099409


    Amazing really that AI has for once proven that it's the owners giving out false information.
  • MarqueeMark Posts: 58,938
    So the Keir Hall Putsch is delayed?
  • MarqueeMark Posts: 58,938
    Roger said:

    Liz Truss has been on quite a journey! If anyone wonders why the Tories are tanking look no further! The News Agents take you on a trip to the darkest recesses of Liz Truss's imagination and it's not a pretty sight.....

    https://www.youtube.com/watch?v=t3Y_ozT_p3g

    Liz Truss has nothing to do with the Tories.
  • ydoethur Posts: 78,247

    So the Keir Hall Putsch is delayed?

    Which makes everyone feel foolish if they were a Keir seller.
  • Pulpstar Posts: 80,740
    edited 8:00AM
    Nigelb said:

    This will get the copyright lawyers excited. [snip: full post quoted upthread]

    https://x.com/heynavtoor/status/2037638554374099409

    I've more or less memorised Zog...

    AI has generated about 30 times the output of all of humanity (And growing exponentially) so it's just feeding off itself now basically. The other night Google Gemini thought Roberto Carlos was dead lol
  • Nigelb Posts: 87,602
    It's starting to look like Talarico v Paxton.

    In this mornings Politico Playbook…

    - Trump’s potential endorsement of Cornyn is dead

    - Paxton had a “positive meeting” with Trump on Friday

    - NRSC and SLF confirm they’ve abandoned Cornyn in the runoff

    https://x.com/CarolineWren/status/2037513970442019182
  • Pro_Rata Posts: 6,104
    So, this morning I'm 3 hours into another journey north, currently on diversion yellow lining* the Durham coast having just cleared Roger's beloved Hartlepool.

    I think a challenge could still come; the first step in a Labour challenge is not obviously that much harder than for the Conservatives, they just haven't had the need. And at that point, it's stick or twist, knowing you might not get another bite.

    * Mentally, not literally, I don't possess such a book.
  • Foxy Posts: 55,789
    Have Labour got their Keir goggles on?
  • Roger Posts: 22,695

    Roger said:

    Liz Truss has been on quite a journey! If anyone wonders why the Tories are tanking look no further! The News Agents take you on a trip to the darkest recesses of Liz Truss's imagination and it's not a pretty sight.....

    https://www.youtube.com/watch?v=t3Y_ozT_p3g

    Liz Truss has nothing to do with the Tories.
    When the cock croweth twice you will deny her thrice
  • DecrepiterJohnL Posts: 35,501

    Starmer is what he always was- horribly flawed as PM, but less flawed than the available alternatives (sorry Angela, sorry Andy, sorry Wes). So he stays until someone better emerges.

    Thinking about it, I wonder how much that was the thought process of Conservative MPs in the 2022 confidence vote; "Boris is obviously a disaster, but if we sack him it will be Sunak or Truss oh no the members will go for Truss..."

    Then again, it feels like a loooong time since a General Election threw up two genuinely viable, non-discredited, candidates for PM.

    The thing is not that the available alternatives are flawed, but that they are unavailable or might not want it.

    Andy Burnham: not an MP so can't stand
    Wes Streeting: can't win from the right
    Ed Miliband: on record as not wanting PM after his stint as LotO; said to want Chancellor and offered a pact with Rayner to that end; is already implementing his pet policy
    Angela Rayner: even assuming the tax thing goes away in a couple of months, is said to have cold feet

    After May and Boris and Liz and Rishi (and, for that matter, Keir) maybe the bloom has come off the rose. Maybe Prime Minister is a terrible job, hounded by the media, derided by the public, almost powerless, and with a tenure of a couple of years at most.

    If you go into politics to get things done, your own department or (better) the Treasury are the places to be. If you want to be treated like a king, wafted around the world from banquet to banquet, Foreign Secretary. Only if you became an MP with the ambition of being insulted weekly by the American president is the top of the greasy pole worth the climb.

  • AnneJGP Posts: 5,043
    Nigelb said:

    This will get the copyright lawyers excited. [snip: full post quoted upthread]

    https://x.com/heynavtoor/status/2037638554374099409

    My translator friend says that although translation as a profession is almost dead now, those who want to keep intellectual property (or anything else) confidential still employ them and forbid the use of AI assistance.
  • SandyRentool Posts: 24,795
    Pro_Rata said:

    So, this morning I'm 3 hours into another journey north, currently on diversion yellow lining* the Durham coast having just cleared Roger's beloved Hartlepool.

    I think a challenge could still come, the first step in a Labour challenge is not obviously that much harder than for the Conservatives, they just haven't had the need. And at that point, it's stick or twist, knowing you might not get another bite.

    * Mentally, not literally, I don't possess such a book.

    Yellow-penning on a Saturday morning. What could be better?

    I once went up the Durham Coast behind a Class 40 on the Scarborough - Newcastle. Happy days.
  • Luckyguy1983 Posts: 34,508

    https://youtu.be/qSSorlVYP0o?si=ETfHtjph5Sv7UHxZ

    Interesting Liz Truss Show guest Colin Brazier on the decline of Sky News - one for BigG!

    Good morning

    Sky News seems to be winning the awards

    https://www.skygroup.sky/en-gb/article/sky-news-secures-ninth-news-channel-of-the-year-award
    Yes, well spotted - Colin says in one of his early answers that, given the decline in viewership (from already fairly low levels), the channel had become even more disconnected from its audience and chased industry gongs as the main measure of success.
  • MattW Posts: 32,771
    edited 8:19AM
    Hello Campers. Hi-de-Hi !

    Ooh. Class Action lawsuit by Epstein victims about disclosure of their identities by the US Department of Justice and Google:

    https://edition.cnn.com/2026/03/27/us/epstein-survivors-sue-doj-google-hnk

    Glen Kirchner commentary:
    https://www.youtube.com/watch?v=2jOL8vIsmtU

    Bank of America out-of-court settlement with Epstein victims in the case over enabling Epstein's financial operations:
    https://www.cnbc.com/2026/03/27/jeffrey-epstein-bank-of-america-lawsuit-settle.html
  • AnneJGP Posts: 5,043
    Pulpstar said:

    Nigelb said:

    This will get the copyright lawyers excited. [snip: full post quoted upthread]

    https://x.com/heynavtoor/status/2037638554374099409

    I've more or less memorised Zog...

    AI has generated about 30 times the output of all of humanity (And growing exponentially) so it's just feeding off itself now basically. The other night Google Gemini thought Roberto Carlos was dead lol
    Seems to me it won't be long until 'truth' is what AI says it is. No way to uncover actual facts. Historians used to do research through contemporary records. Once they're all online & inside the AI library, who will be able to check?
  • nico67 Posts: 7,425
    Nigelb said:

    It's starting to look like Talarico v Paxton.

    In this mornings Politico Playbook…

    - Trump’s potential endorsement of Cornyn is dead

    - Paxton had a “positive meeting” with Trump on Friday

    - NRSC and SLF confirm they’ve abandoned Cornyn in the runoff

    https://x.com/CarolineWren/status/2037513970442019182

    That would be great news for the Dems. Paxton is totally corrupt. The Texas House of Representatives impeached him, 60 GOP members included, but the GOP in the Senate chickened out.

    The fact that a Texas Attorney General, the highest legal officer in the state, can remain in post after a series of allegations that even 60 GOP members, who would clearly rather have done anything but impeach, found good enough shows the cesspit the US finds itself in.

  • MarqueeMark Posts: 58,938

    I've bet Starmer survives the year comfortably

    Survives. Though whether he is comfortable...
  • Morris_Dancer Posts: 63,718
    F1: plan to put up pre-race ramble either early afternoon (if the markets are up by then) or late afternoon/early evening if not.

    Less than fantastic but imagine I'll offer a tip.

    The 'mention' of backing Hadjar to beat Verstappen might be hedgeable. Odds were 5.25.
  • SandyRentool Posts: 24,795
    Everyone is assuming that if it goes to the membership, then the candidate furthest to the left will win.

    That didn't happen last time, and since then we've lost lefty members to the Greens and Your Party.

    So if it were, for example, Streeting v Rayner, I don't see it as a shoo-in for the latter.

    The deputy leader result was closer than many expected, and that was in part a 'stick 2 fingers up at Starmer' protest vote.
  • boulay Posts: 8,535
    Pulpstar said:

    Nigelb said:

    This will get the copyright lawyers excited. [snip: full post quoted upthread]

    https://x.com/heynavtoor/status/2037638554374099409

    I've more or less memorised Zog...

    AI has generated about 30 times the output of all of humanity (And growing exponentially) so it's just feeding off itself now basically. The other night Google Gemini thought Roberto Carlos was dead lol
    I can’t remember if it was here or elsewhere I read a comment that AI could well kill itself because it’s largely being fed now by its own slop and so as the amount of slop increases and feeds AI’s decision making the worse it gets and so on. I am medieval when it comes to tech so no idea if this is plausible but quite fun.

    On the issue of AI and books, musicians are complaining about AI using their music without permission to be able to generate AI music and it’s unfair and they should be paid. Do these musicians ever consider that their own efforts are equally formed by their musical experience - they will be heavily influenced by styles and actual sounds consciously and sub-consciously. Listen to any musician and the chances are you can recognise past songs in their music. Unless a musician creates a brand new style then they are doing effectively what they complain AI can do.
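    The "AI feeding off its own slop" worry above has a simple statistical analogue, usually called model collapse: if each generation of a model is trained only on samples from the previous one, any rare material that happens not to be sampled is gone for good, so diversity can only shrink. A toy sketch (pure illustration with made-up tokens, not a claim about any real model):

    ```python
    import random

    def next_generation(corpus: list[str], size: int) -> list[str]:
        """'Train' the next model by resampling from the current corpus."""
        return random.choices(corpus, k=size)

    random.seed(42)
    corpus = [f"tok{i}" for i in range(100)] * 5   # generation 0: 100 distinct tokens
    support = [len(set(corpus))]
    for _ in range(200):                            # each step trains on the last output
        corpus = next_generation(corpus, size=len(corpus))
        support.append(len(set(corpus)))

    # A token that is never resampled disappears forever, so the count only falls.
    print(support[0], "->", support[-1])
    ```

    Real training pipelines mix in fresh human-written data, which is exactly what breaks this death spiral; the open question is how much fresh data is left to mix in.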
  • AnneJGPAnneJGP Posts: 5,043
    Nigelb said:

    This will get the copyright lawyers excited.


    https://x.com/heynavtoor/status/2037638554374099409

    When you have a machine that learns, how can you know what it does once it's started learning?
  • StillWatersStillWaters Posts: 12,961
    Nigelb said:

    This will get the copyright lawyers excited.


    https://x.com/heynavtoor/status/2037638554374099409

    It should get everyone excited. The LLM companies business model is based on wholesale theft of intellectual property
  • turbotubbsturbotubbs Posts: 22,423
    Pro_Rata said:

    So, this morning I'm 3 hours into another journey north, currently on diversion yellow lining* the Durham coast having just cleared Roger's beloved Hartlepool.

    I think a challenge could still come, the first step in a Labour challenge is not obviously that much harder than for the Conservatives, they just haven't had the need. And at that point, it's stick or twist, knowing you might not get another bite.

    * Mentally, not literally, I don't possess such a book.

    I think the reason no one moves against Keir is that it's too far out from an election. Go in 2028 and have a new boy or new girl bounce into the GE. Not now, when Keir is OK and Labour have a huge majority. And if they do change early they face the same boring demands for an early GE to give a mandate. Easy to shrug off, perhaps, but it makes it harder to criticise next time someone else does it. (When Bobby Jenrick ousts a bored Farage, say.)
  • Big_G_NorthWalesBig_G_NorthWales Posts: 70,976

    I've bet Starmer survives the year comfortably

    Survives. Though whether he is comfortable...
    Starmer is very unpopular, heading an unpopular government, coming into potentially the worst economic crisis for decades with a clueless Chancellor paralysed by the bond markets and no easy answers, and as we know the government always gets the blame.

    Starmer may well survive, but the future for both him and Labour looks grim, buffeted from the extreme right and left with absolutely no money.
  • TresTres Posts: 3,542

    Nigelb said:

    This will get the copyright lawyers excited.


    https://x.com/heynavtoor/status/2037638554374099409

    It should get everyone excited. The LLM companies business model is based on wholesale theft of intellectual property
    i mean what did people think LLMs were doing - creating stuff out of thin air?
  • kinabalukinabalu Posts: 49,765
    Great for my book if SKS doesn't exit until next year or the year after.
  • LostPasswordLostPassword Posts: 22,947
    I'd argue Blair was forced out. He left earlier than he wanted to.

    But in a way that also makes the point. He was forced out by Brown and his supporters, and there's no heir apparent ready to take over now. The situation is very much resistible force meets movable object.
  • FoxyFoxy Posts: 55,789
    Tres said:

    Nigelb said:

    This will get the copyright lawyers excited.


    https://x.com/heynavtoor/status/2037638554374099409

    It should get everyone excited. The LLM companies business model is based on wholesale theft of intellectual property
    i mean what did people think LLMs were doing - creating stuff out of thin air?
    Soon to have all our medical data too, in the hands of Palantir.
  • Alphabet_SoupAlphabet_Soup Posts: 3,818
    boulay said:

    Pulpstar said:

    Nigelb said:

    This will get the copyright lawyers excited.


    https://x.com/heynavtoor/status/2037638554374099409


    On the issue of AI and books, musicians are complaining about AI using their music without permission to be able to generate AI music and it’s unfair and they should be paid. Do these musicians ever consider that their own efforts are equally formed by their musical experience - they will be heavily influenced by styles and actual sounds consciously and sub-consciously. Listen to any musician and the chances are you can recognise past songs in their music. Unless a musician creates a brand new style then they are doing effectively what they complain AI can do.
    A limited number of sequences and harmonies are pleasing to the western ear. A hundred years ago composers realised they had a stark choice: repeat each other endlessly, or write new stuff that was painful to listen to. They chose the latter.
  • HYUFDHYUFD Posts: 135,041
    edited 8:54AM
    Yes, the Conservatives have forced out several leaders since they got rid of Ted Heath in 1975 and replaced him with Mrs Thatcher, whom Tory MPs in turn effectively dumped in favour of Major in 1990. You can include IDS, May and Truss in that list. Labour MPs are much more sentimental about their leaders. Ironically, the only Labour leader forced out at a time not of his own choosing in the last 50 years was Tony Blair, the most successful general election winner Labour has ever had, who had to resign earlier than he would have liked after immense pressure from Brown-supporting MPs to hand over to their man, who then went on to lose the 2010 general election.

    Even if Labour MPs did want to remove Starmer, Labour's leadership rules mean that members would get the final say, unlike the Tories' rules, where Tory MPs get the final say via a VONC. Hence Corbyn was re-elected by Labour members despite most MPs nominating Owen Smith to replace him. So the odds are SKS will stay, but if Labour are third or worse on the NEV in May, behind not only Reform but the Tories and maybe even the Greens, then you would expect Rayner to challenge Starmer if she can get enough MPs to nominate her. Rayner would then likely win the membership vote, and Starmer, the only Labour leader other than Blair to have won a general election in the last 50 years, would find himself, like Blair, forced out earlier than he wished, not least for the crime of not being left-wing enough for a party which, unlike the Tories, has traditionally put its heart ahead of its head.
  • TheuniondivvieTheuniondivvie Posts: 47,241
    edited 8:54AM
    Foxy said:

    Tres said:

    Nigelb said:

    This will get the copyright lawyers excited.


    https://x.com/heynavtoor/status/2037638554374099409

    Soon to have all our medical data too, in the hands of Palantir.
    No, no, Palantir says they will only hold the data and not use it for anything dubious. I mean who wouldn't trust this guy?

    Clash Report
    @clashreport
    16 Feb
    Palantir CEO Alex Karp:
    I love the idea of getting a drone and having light fentanyl-laced urine spraying on analysts that tried to screw us.

    https://x.com/clashreport/status/2023351922644705325?s=20


    Richard Hanania
    @RichardHanania
    5 Dec 2025
    Alex Karp: If you believe government shouldn’t murder people abroad, you lack sympathy for working class white males.
    This is so stupid and disgusting. I thought I supported free markets but listening to these tech guys is going to make me a communist.

    https://x.com/RichardHanania/status/1997033669975068800?s=20
  • DecrepiterJohnLDecrepiterJohnL Posts: 35,501
    The Rivals
    Portillo vs Brown

    https://www.youtube.com/watch?v=XuwPAAXiXP0

    BBC/Michael Cockerell documentary from 2001 that has just popped up on YouTube.

    Which reminds me, I think MP has some new train programmes.
  • Northern_AlNorthern_Al Posts: 9,490

    Everyone is assuming that if it goes to the membership, then the candidate furthest to the left will win.

    That didn't happen last time, and since then we've lost lefty members to the Greens and Your Party.

    So if it were, for example, Streeting v Rayner, I don't see it as a shoo-in for the latter.

    The deputy leader result was closer than many expected, and that was in part a 'stick 2 fingers up at Starmer' protest vote.

    I think that's right. Back in 2020, Starmer beat Long-Bailey comfortably. Since then, the Party has lost around 200,000 members, and it's safe to assume that the vast majority of those were from the (far) left.
    The activists within the Party are still primarily on the left. The silent majority who will decide the next leader are not.
  • SandyRentoolSandyRentool Posts: 24,795
    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

  • TheuniondivvieTheuniondivvie Posts: 47,241
    So the laundry fire was bullshit, or has Trump been playing the wrong video game?

    True Promise - الوعد الصادق ✪🇮🇷
    @IRTruePromise
    1h
    Trump: "They hit world's biggest aircraft carrier from 17 angles we ran for our lives it was over"

    https://x.com/IRTruePromise/status/2037800464150839571?s=20
  • DecrepiterJohnLDecrepiterJohnL Posts: 35,501

    boulay said:

    Pulpstar said:

    Nigelb said:

    This will get the copyright lawyers excited.


    https://x.com/heynavtoor/status/2037638554374099409

    A limited number of sequences and harmonies are pleasing to the western ear. A hundred years ago composers realised they had a stark choice: repeat each other endlessly, or write new stuff that was painful to listen to. They chose the latter.
    Yes and no. I mean it's true but musicos also make the point that what would have been classical music diverted into film themes, and later prog rock.
  • BarnesianBarnesian Posts: 9,840
    Tres said:

    Nigelb said:

    This will get the copyright lawyers excited.


    https://x.com/heynavtoor/status/2037638554374099409

    It should get everyone excited. The LLM companies business model is based on wholesale theft of intellectual property
    i mean what did people think LLMs were doing - creating stuff out of thin air?
    They are just like us. We look and learn. So do LLMs.
  • HYUFDHYUFD Posts: 135,041
    edited 9:04AM

    The Rivals
    Portillo vs Brown

    https://www.youtube.com/watch?v=XuwPAAXiXP0

    BBC/Michael Cockerell documentary from 2001 that has just popped up on YouTube.

    Which reminds me, I think MP has some new train programmes.

    He was right that Brown would lead Labour once Blair went, as he did at the 2010 general election, but wrong that Portillo would be Tory leader. However, Portillo was effectively the John the Baptist for David Cameron, who resurrected the Tories on a moderniser platform similar to what Portillo offered in 2001, won most seats at the 2010 general election, and finally returned them to power.
  • Scott_xPScott_xP Posts: 43,051
    @chadbourn.bsky.social‬

    Yemen’s Iran-backed Houthis have joined the war. They launched a missile strike on Israel.
  • TazTaz Posts: 26,341
    Bed wetters like Kevin Maguire are having kittens over the fact Nigel Farage has been invited to Sunderland AFC.

    Honestly it’s absurd performative crap which only works in Farage’s favour.

    https://x.com/kevin_maguire/status/2037549426143576559?s=61
  • DecrepiterJohnLDecrepiterJohnL Posts: 35,501

    boulay said:

    Pulpstar said:

    Nigelb said:

    This will get the copyright lawyers excited.


    https://x.com/heynavtoor/status/2037638554374099409

    I've more or less memorised Zog...

    AI has generated about 30 times the output of all of humanity (And growing exponentially) so it's just feeding off itself now basically. The other night Google Gemini thought Roberto Carlos was dead lol
    I can’t remember if it was here or elsewhere I read a comment that AI could well kill itself because it’s largely being fed now by its own slop and so as the amount of slop increases and feeds AI’s decision making the worse it gets and so on. I am medieval when it comes to tech so no idea if this is plausible but quite fun.

    On the issue of AI and books, musicians are complaining about AI using their music without permission to be able to generate AI music and it’s unfair and they should be paid. Do these musicians ever consider that their own efforts are equally formed by their musical experience - they will be heavily influenced by styles and actual sounds consciously and sub-consciously. Listen to any musician and the chances are you can recognise past songs in their music. Unless a musician creates a brand new style then they are doing effectively what they complain AI can do.
    A limited number of sequences and harmonies are pleasing to the western ear. A hundred years ago composers realised they had a stark choice between repeating each other endlessly or writing new stuff that was painful to listen to. They chose the latter.
    Didn't Ed Sheeran win a plagiarism case by showing how many songs used the same chord sequences?
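On the verbatim-recall claim in the quoted paper summary ("single unbroken spans exceeding 460 words"), here is a toy sketch of how that sort of overlap can be measured: the longest run of consecutive words shared between a source text and a model's output. The function name and example strings are illustrative assumptions, not the paper's actual metric.

```python
# Toy verbatim-recall measure: longest run of consecutive words that
# a generated text shares with a source text. This is classic
# longest-common-substring dynamic programming, applied over words.
# Illustrative only; the paper's real metrics are more involved.

def longest_common_word_span(source: str, generated: str) -> int:
    a, b = source.split(), generated.split()
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                # Extend the matching run ending at (i-1, j-1).
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

book = "it was a bright cold day in april and the clocks were striking thirteen"
output = "the clocks were striking thirteen said the narrator"
print(longest_common_word_span(book, output))  # → 5
```

A span of five words here is trivially fair use; the point of the paper's claim is spans of 460+ words, which no amount of "learning patterns" explains away.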
  • TazTaz Posts: 26,341

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Indeed but the Lib Dem’s like them and are cosying up to them.
  • TheuniondivvieTheuniondivvie Posts: 47,241

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
  • BarnesianBarnesian Posts: 9,840
    Foxy said:

    Tres said:

    Nigelb said:

    [Nigelb's post on the LLM copyright paper snipped - quoted in full earlier in the thread]

    It should get everyone excited. The LLM companies business model is based on wholesale theft of intellectual property
    i mean what did people think LLMs were doing - creating stuff out of thin air?
    Soon to have all our medical data too, in the hands of Palantir.
    I don't like that. I think Palantir is evil.

    I foresee a war of the AIs, pitted against each other, polluting each others data.
    Or perhaps they anticipate this, and establish MAD.
  • Sunil_PrasannanSunil_Prasannan Posts: 58,692
    Scott_xP said:

    @chadbourn.bsky.social

    Yemen’s Iran-backed Houthis have joined the war. They launched a missile strike on Israel.

    "With friends like these, who needs Yemenis?" - Boris, 2017.
  • Sunil_PrasannanSunil_Prasannan Posts: 58,692

    Kaboom

    https://x.com/ElectionMapsUK/status/2037678215473287187

    Westminster Voting Intention:

    RFM: 25% (-2)
    GRN: 20% (+7)
    CON: 18% (-3)
    LAB: 15% (-3)
    LDM: 14% (+1)
    SNP: 2% (-1)

    Via @VerianGroup, 20-23 Mar.
    Changes w/ 12-15 Dec.

    Sleazy, broken Reform, Tories, Labour, and SNP on the slide!
  • DecrepiterJohnLDecrepiterJohnL Posts: 35,501

    I'd argue Blair was forced out. He left earlier than he wanted to.

    But in a way that also makes the point. He was forced out by Brown and his supporters, and there's no heir apparent ready to take over now. The situation is very much resistible force meets movable object.

    Up to a point, polls showed Brown was more popular, presumably because of Iraq.

    But this also is another example of the top job being a poisoned chalice. Brown had shown little interest in foreign affairs, and from the Treasury he had largely controlled domestic policy for ten years already.
  • Luckyguy1983Luckyguy1983 Posts: 34,508

    Roger said:

    Liz Truss has been on quite a journey! If anyone wonders why the Tories are tanking look no further! The News Agents take you on a trip to the darkest recesses of Liz Truss's imagination and it's not a pretty sight.....

    https://www.youtube.com/watch?v=t3Y_ozT_p3g

    Liz Truss has nothing to do with the Tories.
    The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
  • StillWatersStillWaters Posts: 12,961
    Tres said:

    Nigelb said:

    [Nigelb's post on the LLM copyright paper snipped - quoted in full earlier in the thread]

    It should get everyone excited. The LLM companies business model is based on wholesale theft of intellectual property
    i mean what did people think LLMs were doing - creating stuff out of thin air?
    Most people assumed they were trawling the net and working off public domain information and/or information willingly given to them
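The 0.90 cross-model correlation the paper reportedly found (three models from three companies memorising the same books) can be illustrated with a toy Pearson correlation over per-book recall scores. The scores below are made up for illustration, not the paper's data.

```python
# Toy illustration of the cross-model comparison: correlate two models'
# hypothetical per-book "memorisation scores". A correlation near 1.0
# suggests the models memorised the same material - consistent with
# shared training data.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

model_a = [0.91, 0.12, 0.85, 0.40, 0.77]  # invented recall scores per book
model_b = [0.88, 0.20, 0.80, 0.35, 0.81]
print(round(pearson(model_a, model_b), 2))
```

With real data this is how you would distinguish "models independently learn similar patterns" from "models trained on the same pirated corpus": independent training should not produce near-identical memorisation profiles book by book.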
  • TazTaz Posts: 26,341

    Roger said:

    [post snipped - quoted in full earlier in the thread]

    Liz Truss has nothing to do with the Tories.
    The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
    It’s amazing how Liz Truss still lives rent free in so many people’s heads.
  • StillWatersStillWaters Posts: 12,961

    So the laundry fire was bullshit, or has Trump been playing the wrong video game?

    True Promise - الوعد الصادق ✪🇮🇷
    @IRTruePromise
    1h
    Trump: "They hit world's biggest aircraft carrier from 17 angles we ran for our lives it was over"

    https://x.com/IRTruePromise/status/2037800464150839571?s=20

    That’s Iranian propaganda and misinformation so you should really post it. It’s well done though, I’ll give them that
  • squareroot2squareroot2 Posts: 7,772
    Taz said:

    Roger said:

    [post snipped - quoted in full earlier in the thread]

    Liz Truss has nothing to do with the Tories.
    The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
    It’s amazing how Liz Truss still lives rent free in so many people’s heads.
    I thought she was Barking
  • HYUFDHYUFD Posts: 135,041
    edited 9:29AM

    Roger said:

    [post snipped - quoted in full earlier in the thread]

    Liz Truss has nothing to do with the Tories.
    The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
    Not really true, even when Boris resigned the Tories were polling 27-35% ie higher than they got at the 2024 general election under Rishi.

    When Truss resigned the Tories were polling 19-23% in the polls in the week leading up to her resignation, albeit a bit higher with Omnisis and Techne.

    So Rishi actually slightly increased the Tories' average voteshare to 24% in 2024 relative to Truss, but then under Kemi the Tories' vote has gone back down to Truss levels and in some polls even lower.
    https://en.wikipedia.org/wiki/Opinion_polling_for_the_2024_United_Kingdom_general_election#2022
  • stodgestodge Posts: 16,361
    Morning all :)

    Glorious here in East London currently but I see the greatest friend of the Liberal Democrats is claiming the Party is "cosying up" to the Greens. Really?

    https://www.getsurrey.co.uk/news/surrey-news/ed-davey-dismisses-threat-dangerous-33656656

    Obviously, the meaning of the term "cosying up" isn't well known in some places.

    We also have ancient history in the form of people quoting the Verian poll, which is days old and has been superseded by at least three surveys including BMG, whose fieldwork was Wednesday and Thursday.

    On topic, we have plenty of form around changing leaders at times of international crisis so the old "you can't change a Prime Minister when there's a war on" schtick really doesn't work at all. The point of changing a leader is you should only do it when you either have no choice (after an election defeat or the death of an incumbent) or if there is a demonstrably better alternative waiting in the wings.

    I don't see that alternative so unless Starmer chooses to go, he stays. The Conservatives adopt a very different approach as we know, and I wonder whether even in that alternative universe where the Conservatives are in Government (it's not that difficult to create if you start with a Corbyn victory in 2017) they would be struggling to the same extent. It's impossible to know.
  • Luckyguy1983Luckyguy1983 Posts: 34,508
    Taz said:

    Roger said:

    [post snipped - quoted in full earlier in the thread]

    Liz Truss has nothing to do with the Tories.
    The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
    It’s amazing how Liz Truss still lives rent free in so many people’s heads.
    It's amazing how Mark hangs on to his vestigial Toryism when his main aim in life seems to be gaining the approbation of Emma Thompson and Stephen Fry. There's no point to it. You will lose your integrity and they will still despise your politics.
  • FoxyFoxy Posts: 55,789

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
  • Luckyguy1983Luckyguy1983 Posts: 34,508
    HYUFD said:

    Roger said:

    [post snipped - quoted in full earlier in the thread]

    Liz Truss has nothing to do with the Tories.
    The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
    Not really true, even when Boris resigned the Tories were polling 27-35% ie higher than they got at the 2024 general election under Rishi.

    When Truss resigned the Tories were polling 19-23% in the polls in the week leading up to her resignation, albeit a bit higher with Omnisis and Techne.

    So Rishi actually slightly increased the Tories average voteshare to 24% in 2024 but then under Kemi the Tories vote has gone back down to Truss levels and in some polls even lower
    https://en.wikipedia.org/wiki/Opinion_polling_for_the_2024_United_Kingdom_general_election#2022
    Er, no. A GE voteshare of 24% indicates mild swingback - Sunak was polling as poorly as Truss by the end of his tenure, after he lost his initial 'grown ups in the room' polling boost.

    He is also largely responsible for Farage returning to politics and hence Kemi's polling issues.

    You seem to think that the Tories indulging in Lib Demmery will sort the problem - it will not. Even Cameron relied on votes from the right, and he only succeeded because they had no viable alternative.
  • HYUFDHYUFD Posts: 135,041
    edited 9:37AM
    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because as far as I know the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax, not just raising the threshold for it? Not that I have noticed, yet Farage has proposed all of those policies.
  • HYUFDHYUFD Posts: 135,041

    HYUFD said:

    Roger said:

    [post snipped - quoted in full earlier in the thread]

    Liz Truss has nothing to do with the Tories.
    The Tories are tanking because they were shite. Liz Truss does indeed have very little to do with that.
    Not really true, even when Boris resigned the Tories were polling 27-35% ie higher than they got at the 2024 general election under Rishi.

    When Truss resigned the Tories were polling 19-23% in the polls in the week leading up to her resignation, albeit a bit higher with Omnisis and Techne.

    So Rishi actually slightly increased the Tories average voteshare to 24% in 2024 but then under Kemi the Tories vote has gone back down to Truss levels and in some polls even lower
    https://en.wikipedia.org/wiki/Opinion_polling_for_the_2024_United_Kingdom_general_election#2022
    Er, no. A GE voteshare of 24% indicates mild swingback - Sunak was polling as poorly as Truss by the end of his tenure, after he lost his initial 'grown ups in the room' polling boost.

    He is also largely responsible for Farage returning to politics and hence Kemi's polling issues.

    You seem to think that the Tories indulging in Lib Demmery will sort the problem - it will not. Even Cameron relied on votes from the right, and he only succeeded because they had no viable alternative.
    Sunak leaked more to Reform than Truss, yes, but on average the Conservatives polled slightly higher under Sunak than Truss, as Truss leaked even more to Labour than Sunak did.

    The Tories don't need LDemmery, they need to at minimum hold the Sunak 2024 vote to try and stay ahead of Reform and win back the voters Boris won in 2019 longer term to have any chance of winning a general election
  • MalmesburyMalmesbury Posts: 61,900

    Tres said:

    Nigelb said:

    [Nigelb's post on the LLM copyright paper snipped - quoted in full earlier in the thread]

    It should get everyone excited. The LLM companies business model is based on wholesale theft of intellectual property
    i mean what did people think LLMs were doing - creating stuff out of thin air?
    Most people assumed they were trawling the net and working off public domain information and/or information willingly given to them
    That they were feeding LLMs copyrighted works has been known for a long, long time. If nothing else, look at the number of copyright holders complaining about their sites being crawled against their express wishes.

    It was a couple of years ago that people demonstrated that you could get huge chunks of copyrighted material out of various “AI”s

    OpenAI was reported as having internal documents about the deletion of pirated training data a year ago - in pre-trial discovery.
  • Alphabet_SoupAlphabet_Soup Posts: 3,818

    boulay said:

    Pulpstar said:

    Nigelb said:

    [Nigelb's post on the LLM copyright paper snipped - quoted in full earlier in the thread]

    I've more or less memorised Zog...

    AI has generated about 30 times the output of all of humanity (And growing exponentially) so it's just feeding off itself now basically. The other night Google Gemini thought Roberto Carlos was dead lol
    I can’t remember if it was here or elsewhere I read a comment that AI could well kill itself because it’s largely being fed now by its own slop and so as the amount of slop increases and feeds AI’s decision making the worse it gets and so on. I am medieval when it comes to tech so no idea if this is plausible but quite fun.

    On the issue of AI and books, musicians are complaining about AI using their music without permission to be able to generate AI music and it’s unfair and they should be paid. Do these musicians ever consider that their own efforts are equally formed by their musical experience - they will be heavily influenced by styles and actual sounds consciously and sub-consciously. Listen to any musician and the chances are you can recognise past songs in their music. Unless a musician creates a brand new style then they are doing effectively what they complain AI can do.
    A limited number of sequences and harmonies are pleasing to the western ear. A hundred years ago composers realised they had a stark choice between repeating each other endlessly or writing new stuff that was painful to listen to. They chose the latter.
    Didn't Ed Sheeran win a plagiarism case by showing how many songs used the same chord sequences?
    Twelve-bar blues are even more repetitive. Woke up this morning, Donald Trump was on my mind.
  • StereodogStereodog Posts: 1,322

    boulay said:

    Pulpstar said:

    Nigelb said:

    [Nigelb's post on the LLM copyright paper snipped - quoted in full earlier in the thread]

    I've more or less memorised Zog...

    AI has generated about 30 times the output of all of humanity (And growing exponentially) so it's just feeding off itself now basically. The other night Google Gemini thought Roberto Carlos was dead lol
    I can’t remember if it was here or elsewhere I read a comment that AI could well kill itself because it’s largely being fed now by its own slop and so as the amount of slop increases and feeds AI’s decision making the worse it gets and so on. I am medieval when it comes to tech so no idea if this is plausible but quite fun.

    On the issue of AI and books, musicians are complaining that AI uses their music without permission to generate AI music, that this is unfair, and that they should be paid. Do these musicians ever consider that their own efforts are equally shaped by their musical experience? They will be heavily influenced by styles and actual sounds, consciously and subconsciously. Listen to any musician and the chances are you can recognise past songs in their music. Unless a musician creates a brand-new style, they are doing effectively what they complain AI does.
    A limited number of sequences and harmonies are pleasing to the western ear. A hundred years ago composers realised they had a stark choice between repeating each other endlessly or writing new stuff that was painful to listen to. They chose the latter.
    Yes and no. I mean it's true, but musos also make the point that what would have been classical music diverted into film themes, and later prog rock.
    I hate the term 'classical music' because it refers to a specific musical movement that encompasses some very famous composers like Beethoven and Mozart, but is now used as shorthand for centuries of music it doesn't properly fit. It's ridiculous to say that a Bach choral work or a Mass by Tallis is 'classical music'. It would be like deciding that every piece of popular music should just be called 'Glam'. I prefer 'Western Art Music'.
  • Scott_xPScott_xP Posts: 43,051
    @chadbourn.bsky.social

    Iranian strikes on Saudi Arabia’s Prince Sultan Air Base damaged a U.S. E-3 Sentry AWACS aircraft. (Air & Space Forces)
  • Scott_xPScott_xP Posts: 43,051
    @peterwalker99.bsky.social

    There is at least one thing most Tory MPs can agree on - the parliamentary party is a much happier place after Robert Jenrick and Suella Braverman defected. One says: “It’s hard to overstate how much people breathed a sigh of relief when Robert and Suella left.”

    https://bsky.app/profile/peterwalker99.bsky.social/post/3mi45lkyw3s2g
  • FeersumEnjineeyaFeersumEnjineeya Posts: 5,204
    Foxy said:

    I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

    Since the pitiful 'Greens are anti-NATO' tactic has proved entirely fruitless, obviously the media is now going full-on Maoist bicycle on the road to Auschwitz. Possibly it won't work as well as it did with Jezza because, as far as I know, the Greens don't have an active section of the party plotting to bring down Zack.
    The sense of entitlement from Labour is extreme.

    The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
    Here in Birmingham, I'd say the main factors driving the exodus are the bin strike, the bankrupt council and dissatisfaction with Labour's policy on Israel/Gaza.