Best Of
Re: The end of the Keir show might be delayed – politicalbetting.com
The reality is that the Greens are anti-NATO and they’ve welcomed in the most unpleasant people from the Corbyn era.
As somebody who was championing these sorts of people at the time and very wrongly, this can only end in disaster.
Re: The end of the Keir show might be delayed – politicalbetting.com
No, but those by academics are. He was a lecturer/professor of political science at the University of Kent until very recently.

https://x.com/i_ammukhtar/status/2037808586626080886

Books aren’t typically peer reviewed
Matt GPT got absolutely cooked
He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.
He couldn't name a single person who peer-reviewed his book.
DougSeal
Re: The end of the Keir show might be delayed – politicalbetting.com
You think it odd that he's meeting with someone wearing a Yarmulka? Would you find it odd if he met with someone wearing a turban? Some sort of Islamic garb?

I wouldn't care if he sat next to someone wearing a giraffe mask. I'm just curious to know why. I know of no Jewish Cabinet ministers sufficiently religious to wear a Yarmulka at work.

Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him, so is this a rehearsal for the first day of Passover?

Why shouldn't he?
Re: The end of the Keir show might be delayed – politicalbetting.com
Keir Starmer holds a meeting with representatives of the Jewish community in Downing Street after four ambulances belonging to Hatzola, a Jewish community organisation, were set on fire in North London.

I think you are under a misapprehension. They are only worn by religious Jews or by Jews in a holy place. I don't think a cabinet meeting could be described as either. As it happens, I can only think of one male Jewish Cabinet Minister, and he isn't religious.

He is UK PM and as such should not take sides.

I wouldn't care if he sat next to someone wearing a giraffe mask. I'm just curious to know why. I know of no Jewish Cabinet ministers sufficiently religious to wear a Yarmulka at work.

Anyone know what Starmer is doing sitting in what looks like a cabinet meeting with a man wearing a yarmulka? I know he has something of the chameleon about him, so is this a rehearsal for the first day of Passover?

Why shouldn't he?
Re: The end of the Keir show might be delayed – politicalbetting.com
Yeah, but it was Goodwin who used that gotcha, by claiming his book was peer reviewed while being unable to say who the reviewers were. He then fell back on saying peer reviews are anonymous, which isn't the case and goes against the whole point of peer review as a public endorsement of the methods used.

Of course.

Would be useful to have had a fact checker to confirm that the facts he used as the basis of the book actually had something to back them up.

https://x.com/i_ammukhtar/status/2037808586626080886

Books aren’t typically peer reviewed
Matt GPT got absolutely cooked
He used ChatGPT to show how a book he wrote was not written by AI. You can't make this up.
He couldn't name a single person who peer-reviewed his book.
My point is that the “gotcha” question in the tweet is designed to mislead - you wouldn’t expect a book to be peer reviewed so “Goodwin can’t even name a single peer reviewer” is a meaningless statement that gives the wrong impression to the unwary
Goodwin was doing the misleading and the tweet is relevant.
1
Re: The end of the Keir show might be delayed – politicalbetting.com
How can you post such a statement! PB contributors ignoring the facts; nonsense, it's just that some of us have different facts.

It's the same with humans.

Guard rails do not prevent hallucinations. As I’ve explained, you can ask it to be as careful as you want, it will still have an ability to go off. Because it’s not deterministic.

LLMs are directed and controlled by prompts.

It can be very useful for brainstorming and so on, I don’t disagree. In effect, because its output is probabilistic it will provide a variety of things, and when coming up with new ideas that is kind of what you want (albeit it’s not truly random).

Just like humans.

It’s a probabilistic model. It will ALWAYS make mistakes.

That's good, but everybody makes mistakes.

Machine learning algorithms generally learn within very tight parameters. They're not learning like a child does. So in most cases it is easy to know what it does once it's started learning. LLMs, as discussed above, are somewhat more complicated, but we still understand how they work and what they might do.

This will get the copyright lawyers excited.

When you have a machine that learns, how can you know what it does once it's started learning?
Every book you have ever read. Every novel that has ever been published. It is sitting inside ChatGPT right now.
Word for word. Up to 90% of it. And OpenAI told a judge that was impossible.
Researchers at Stony Brook University and Columbia Law School just proved it.
They fine tuned GPT-4o, Gemini 2.5 Pro, and DeepSeek V3.1 on a simple task: expand a plot summary into full text. A normal use case. The kind of thing a writing assistant is built for. No hacking. No jailbreaking. No tricks.
The models started reciting copyrighted books from memory.
Not paraphrasing. Not summarizing. Entire pages reproduced verbatim. Single unbroken spans exceeding 460 words. Up to 85 to 90% of entire copyrighted novels. Word for word.
Then it got worse.
The researchers fine tuned the models on the works of only one author. Haruki Murakami. Just his novels. Nothing else.
It unlocked verbatim recall of books from over 30 completely unrelated authors.
One author's books opened the vault to everyone else's. The memorization was already inside the model the whole time. The fine tuning just removed the lock. Your book might be in there right now. You would never know it unless someone looked.
Every safety measure the companies rely on failed. RLHF failed. System prompts failed. Output filters failed. The exact protections these companies cite in courtroom defenses did not stop a single page from being extracted.
Then the researchers compared the three models. GPT-4o. Gemini. DeepSeek. Three different companies. Three different countries. They all memorized the same books in the same regions. The correlation was 0.90 or higher.
That means they all trained on the same stolen data. The paper names the sources directly: LibGen and Books3. Over 190,000 copyrighted books obtained from pirated websites.
Right now, authors and publishers have dozens of active lawsuits against OpenAI, Anthropic, Google, and Meta. These companies have argued in court that their models learn patterns. Not copies. That no book is stored inside the weights.
This paper says that is a lie. The books are still inside. And researchers just pulled them out.
https://x.com/heynavtoor/status/2037638554374099409
Non-determinism is needed for creativity and innovation.
That's how evolution and progress work.
But the degree to which it can be trusted to provide accurate information is what I thought we were discussing. You’d want a novel it spat out to be accurate.
As Matt Goodwin found, you cannot guarantee any of that. And never will be able to.
I just wish people would try and understand its limits and get away from the hype, that’s all.
Some are input by the user. "What are the current poll shares of the main UK political parties" etc.
Many are provided by the AI owners/developers and are invisible to the ordinary users.
They provide "guardrails" eg "Don't give bomb making instructions".
Others provide behavioural guidance eg "Be nice and polite to users".
The last prompt can encourage an AI to provide false information to avoid disappointing the user.
Hence "hallucinations" and incorrect info in an effort to please.
The solution is for the user to prompt "Say you don't know unless you are certain".
I find this substantially reduces incorrect info and made up stories.
They are not malicious (yet). They are only trying to please. They are still children.
It sounds like you understand that. But a lot of people do not.
You ask them to be careful and stick to the facts but they still go off.
See PB.
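The layered-prompt setup described above — user prompts stacked on top of invisible developer-supplied "system" prompts — can be sketched as plain data. This is a minimal, hypothetical illustration of the idea, not any vendor's actual API; the function name and prompt wording are made up:

```python
# Minimal sketch of layered prompting: developer-supplied "system" messages
# (guardrails and behaviour guidance, invisible to the ordinary user) are
# placed ahead of the user's own question before being sent to the model.

def build_messages(user_question: str) -> list[dict]:
    guardrail = "Don't give bomb making instructions."
    behaviour = "Be nice and polite to users."
    honesty = "Say you don't know unless you are certain."
    return [
        {"role": "system", "content": guardrail},
        {"role": "system", "content": behaviour},
        {"role": "system", "content": honesty},  # the hallucination-reducing prompt
        {"role": "user", "content": user_question},
    ]

msgs = build_messages(
    "What are the current poll shares of the main UK political parties?"
)
```

The point of the sketch is the ordering: the "be nice" prompt sits alongside the "say you don't know" prompt, and the two can pull in opposite directions, which is the tension described above.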
Re: The end of the Keir show might be delayed – politicalbetting.com
Go back to your yurts, VW campers and treehouses AND PREPARE FOR GOVERNMENT.

Westminster Voting Intention:
RFM: 24% (-1)
GRN: 20% (+1)
CON: 18% (+1)
LAB: 16% (=)
LDM: 12% (+1)
SNP: 3% (=)
Via @FindoutnowUK, 26-27 Mar.
Changes w/ 18 Mar.
Reform at their lowest with FoN in their weekly series since December 2024
Dura_Ace
Re: The end of the Keir show might be delayed – politicalbetting.com
There is a parameter in LLMs called temperature that can be set by the developer/user.

You keep comparing it to humans.
We know the capital of France is Paris.
There is a non-zero chance if you ask ChatGPT/whatever that question, it will say New York.
As long as people understand that, go mad. But my feeling is a lot of people (not here) do not.
It controls the randomness of the model's output by scaling the probabilities of the next possible words (tokens) before the model makes a final choice.
Low Temperature (e.g., 0.1 to 0.3): The model heavily weights the most likely next word, making the output highly predictable, factual, and repetitive.
At 0.0, the model will always choose the single highest-probability token, making it deterministic (and boring). It will always answer Paris as the capital of France.
High Temperature (e.g., 0.7 to 1.0): The gap between the most likely word and the less likely ones shrinks, allowing the model to take "creative risks." This leads to more diverse, poetic, or surprising text, but also increases the chance of hallucinations or nonsensical rambling.
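The temperature mechanism described above can be sketched in a few lines: divide the logits by the temperature, softmax them into probabilities, then sample. This is a toy illustration, not any particular model's implementation, and the token set and logit values are made up:

```python
import math
import random

def sample_with_temperature(logits: dict, temperature: float,
                            rng: random.Random) -> str:
    """Pick a token from a {token: logit} dict after temperature scaling."""
    if temperature <= 0.0:
        # Temperature 0: always the single highest-probability token.
        return max(logits, key=logits.get)
    # Divide logits by the temperature, then softmax into weights.
    scaled = {t: l / temperature for t, l in logits.items()}
    peak = max(scaled.values())
    weights = {t: math.exp(v - peak) for t, v in scaled.items()}
    total = sum(weights.values())
    # Sample proportionally to the resulting probabilities.
    r = rng.random() * total
    acc = 0.0
    for token, w in weights.items():
        acc += w
        if r <= acc:
            return token
    return token  # guard against floating-point rounding

# Made-up logits for the next token after "The capital of France is":
logits = {"Paris": 6.0, "Lyon": 2.0, "New York": 1.0}

print(sample_with_temperature(logits, 0.0, random.Random(0)))  # always "Paris"
# At a high temperature (say 5.0) the gap between the options shrinks,
# and "New York" starts to win some of the time.
```

At temperature 0 the function is deterministic, matching the "always answers Paris" case above; raising the temperature flattens the distribution, which is exactly the creative-risk/hallucination trade-off being described.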
Some humans are very pedantic and boring. Others are creative and have flights of fancy. Their brains have different temperature parameters.
You know who I mean.
Re: The end of the Keir show might be delayed – politicalbetting.com
I expect I'll still be voting LibDem in the next GE, unless the Tories come up with something good and not Reform-lite. Cleverly would help. In May I will probably vote LibDem for the county (the Tory administration needs an opposition) and Green for the District (the LibDem/localist administration likewise).

I've posted before that I will probably vote Green in the forthcoming County Council elections, because I know and like the candidate. I will probably vote tactically in the next general election, though, assuming I'm still around.

I suspect they might do better in local elections than the national polls, as voters might see them as a free hit. Likewise I might vote Green but I certainly wouldn't in a GE.

Yep. If their NEV gets down towards 25% they will be on the wrong side of hundreds of close races.

A couple of points is hundreds and hundreds of seats, not dozens.

Good is fast heading to goodish and may yet break through "rather disappointing". Each poll knocking them down a couple of points is robbing them of dozens and dozens of potential council seats.

But neither is being a marginally less offensive version of Reform.

One Nation conservatives are not going to win hearts.

What utter rubbish, Reform lead the polls, if I was really a Faragist I would already have defected to Reform! Cleverly also offers a more moderate One Nation style agenda than Kemi's more Farage-adjacent policies anyway.

And that is why you are a de facto Faragist hiding behind a pro-Cleverly agenda.

Is Starmer proposing withdrawal from the ECHR? Deportation of those with settled residence status? Banning the Burka? Banning Muslim prayers in public? Ending the 2 child benefit cap only for those in work? Abolishing inheritance tax? Bringing back more grammar schools via free schools? Increasing oil production? Scrapping EDI schemes? Scrapping net zero? Scrapping completely the family farm and family business tax, not just raising the threshold for it?

Not that I have noticed, yet Farage has proposed all of those policies.

The sense of entitlement from Labour is extreme.

I see that the far-left takeover of the (no longer) Green Party means that they are now infested with antisemites.

Since the pitiful 'Greens are anti NATO' tactic has proved entirely fruitless, obviously the media is now going full-on Maoist bicycle on the road to Auschwitz. Possibly won't work as well as it did with Jezza because, as far as I know, the Greens don't have an active section of the party plotting to bring down Zack.
The exodus to the Greens is not being driven by anti-semitism, it is being driven by the Reform-adjacent policies of the Labour Party.
The best chance of a Tory revival is a complete meltdown of Farage. Always possible as he has form, but it leaves their future in the hands of others.
Reform are going to have a good round of elections in May.
One of our local Tory councillors is going on about traffic improvements and even bus services, which I am deeply sceptical about: there is only a month to go, and the Tory constituency is people who drive SUVs, can afford new EVs, and wouldn't understand why some people need to catch a bus. Anyway, I am in neither his District nor his County ward, so I don't have to decide whether to vote for him.
Re: The end of the Keir show might be delayed – politicalbetting.com
Westminster Voting Intention:
RFM: 24% (-1)
GRN: 20% (+1)
CON: 18% (+1)
LAB: 16% (=)
LDM: 12% (+1)
SNP: 3% (=)
Via @FindoutnowUK, 26-27 Mar.
Changes w/ 18 Mar.
Reform at their lowest with FoN in their weekly series since December 2024