I don't know. I hate to say this, but it might actually be pure bullshit. An AI-generated tweet issued by a SPAD who's never had a job with a yearly assessment, to preserve a Prime Minister who hasn't got a clue about anything without somebody telling him first, and who is just counting the days until he can bugger off abroad to mix with Important People and think Important Thoughts with people who aren't British.
AIGZ = special designated area, basically, with the right power supply etc. It's been in the works all year.
According to the government, AI Growth Zones are designated sites that are well-suited to housing AI-enabled datacentres and their supporting infrastructure. Ideally, these zones should have "enhanced access" to power supplies of at least 500MW and sympathetic planning support. This is because datacentres are notoriously power-hungry entities, and siting them in areas where energy is in short supply could slow down the time it takes to bring one of these new AI server farms online.
Targeting, in part, poor areas, e.g. deindustrialised ones, which are less likely to complain planning-wise. Not sure how much it will do for the long-term employment of the actual locals, or their leccy bills once regional pricing comes in, though (which may be one reason Labour don't like the latter).
In terms of locals' long-term employment from AI: the announcement today will be "generating power by the mid 2030s." So let's call it 2040... we will be on our 2nd or 3rd AI bubble bursting by then.
These announcements remind me a bit of how HS2 was always sold on speed, when it should have been capacity. The nuclear power as part of securing future energy supplies, particularly with much more renewables in the mix: absolutely. "Going to be loads of local AI jobs": bullshit. Even if datacentres are constructed nearby in 2035, they require only a handful of people to operate.
I had the (tbh fairly obvious) revelation about the nature of the AI bubble this week in the Bay Area. Everywhere you look are billboards advertising AI this and AI that. As numerous as the billboards advertising personal injury attorneys in Houston.
Dig deeper though, and you realise AI is just a new word for tech. These are all largely standard tech companies most of which would have been doing 98% of the same stuff 5 years ago when Gen AI wasn’t a thing, and calling it something else. Back then the fashionable things were cloud, and blockchain. Same billboards but just substitute the word AI.
We do need to invest in tech infrastructure. If that requires us to use the AI word to drum up excitement then fair enough.
Somebody sent me an analysis a few days ago, which was carried out in remarkably quick time. Giving credit where credit was due, they said it was "aided by my trusty AI assistant". Um, given the warnings that come with AI search-result summaries that the results may contain errors, I'm not so confident about the analysis after all.
Am I unduly pessimistic?
Friend of mine who is a lecturer tried an experiment on a very well known AI, on a subject he and I know well. He got the AI to chunter out some stuff about the subject and then started in on it. It's not long before this starts happening:
"You say X, can you please give me the evidence?" "Bloggs 1978 says so." "But I've looked at it and it doesn't say that at all. What is your evidence?" "I'm sorry, the correct evidence is in website so and so." "But that is about something else. What is your evidence that it is relevant?" "Jones 2008 ... "But that doesn't exist. Tell me ... ... and so on and so on for about 20 cycles. I paraphrase - but you get the idea. He makes a point of showing it to his students - I am sure as a warning to them not to make the AI write their essays.
The whole thing reminds me of nothing so much as the poor chap who was not quite all there mentally who just wanted to help the nice police and didn't want to admit he didn't know the answer to anything, so tried to keep them happy by answering yes etc. to their questions.
Trouble was, what they were asking him about was a murder down the road. And no adult present. He got banged up for life... when they had a grade-A suspect very close by.
This briefing story is starting to get quite murky. Surely McSweeney has to go if the Telegraph has receipts and Starmer has just denied it came from No.10? Or if something came from a Cabinet Minister and their identity is... conveniently... leaked, that makes their position very shaky, doesn't it, given SKS's public pronouncements on the matter?
Is it conceivable that the Telegraph could be making it up as they go along?
Did the Telegraph impersonate No10 staff and contact, and fool, Pippa Crerar, Alex Wickham and Chris Mason?
Seems unlikely
I'm talking about the ad hoc follow ups. Obviously the initial briefing happened.
See the Guardian.
The simple truth is that everyone in the press knows who briefed. They are just watching as various people tie themselves in bizarre knots about it.
A couple of days ago I mentioned that I was mystery shopping the Forest of Dean heritage railway wrt accessibility.
This is the reply, to which I give about 7.5 out of 10 - that is, quite good, especially around prompt individual attention given to my query. So well done the FODway. I'll copy to the other project I was talking about, and ask them to make sure that latest accessibility standards are a foundational aspect of their project.
There are other things for a 9/10 or a 10/10, such as secure, inclusive parking for adapted cycles and mobility aids, gradients, safe offsite routes to get there, etc., but I'd need a more detailed conversation to explore those.
----------------- Good morning, Matt
Many thanks for your enquiry.
We are able to accommodate both manual and electric wheelchairs. We provide a ramped access to the train for boarding and disembarking at our stations, and there are disabled toilets at our main station, Norchard, and also Lydney Junction and Parkend. Our porters will be happy to assist.
If you are thinking of visiting, especially during busier days, or booking an experience such as a Santa Special, we ask that you let us know when booking so we can check availability of wheelchair spaces and reserve these for you, as there is limited room on board. For those who are able to transfer from wheelchair to a seat, we are happy to store wheelchairs and reserve tables closest to the ramped access.
We regret there is no wheelchair access for the First Class saloon, due to the nature of the carriage.
Finally, we recommend starting your journey at Norchard station, as we have a large car park including disabled spaces, plus a wheelchair accessible museum, shop and café. There are also half price discounts for carers for our steam train rides, and an £8 discount per carer for our Santa Specials.
Please let us know if you have any further queries, or need help booking something.
Kind Regards
That looks to me to be a very good response and they are doing everything they reasonably can to accommodate wheelchair users.
Where did they fall down so that you docked 2.5/10 points?
Because accessibility is about far more than wheelchairs, and I would need to do a site survey to get a full impression. So it's more about not having enough information to go higher, rather than docking points.
e.g. is there an induction loop in the ticket office? Are gradients in paths less than 1 in 20? What types of surface? Is there safe access separate from motor vehicles? Is there secure parking for mobility aids, so people can do the actual train trip with sticks? Is that secure parking suitable, and wheel-in/wheel-out, for mobility aids with no reverse gear? Is a loan wheelchair available for people who tire easily? Is tactile paving laid to national guidelines?
There's loads of stuff to consider.
Seems harsh to score them down vs note the limitations of your work
I disagree, but I'm happy to withdraw the (good) score, and just leave the praise.
This is the reply. It is quite good, especially around prompt individual attention given to my query. So well done the FODway. I'll copy to the other project I was talking about, and ask them to make sure that latest accessibility standards are a foundational aspect of their project.
Thinking as a consumer (albeit not a disabled one, though I assume they would think the same as other consumers): if I see something with a 7 or 8 rating then it's a "maybe" and probably a bit "meh".
For all you know, they have all the pathways laid out in an optimal way for wheelchairs. And yet you are discouraging people from going.
Better to highlight what is good, what is bad and what you don't know. So give it a 10/10 rating (if that is what it deserves based on your observations) and note where you don't have the data to form a judgement.
I missed this reply from @StillWaters . Following up ...
For me, a 7.5 or 8 from 10 in this arena is "good/very good". There's a huge variety.
It's a fascinating conversation, all about different views and frames of reference, and how they need to be fully defined for mutual understanding to work. One problem with 10/10 is that everyone assumes it means perfect and finished and done, which is never true. In this case my 7.5/10 was just a note to PB and self, so I won't tell anyone else, and will use words in my feedback. That's the "too short a summary" problem, which skewered Ofsted when they tried one-word summaries.
A good example is a National Parks project called "Miles without Stiles" for accessible public footpaths. They label their routes as "for all", "for many", "for some", and "challenging", including gradients and surfaces.
The reference-frame problem is that even their "for all" category allows gradients of 1:10, described as "Suitable for everyone, including pushchairs and wheelchairs and mobility scooters". But they are not suitable for everyone. If we ask disabled organisations, a reasonable maximum permissible gradient for normal mobility aids is 1:20, with resting platforms every so often - and they prefer shallower gradients still.
In such circs, a consistent standard is the important thing if it is intended to be used widely - so people with different needs can judge it for themselves. And that needs a lot of information.
The same goes for barriers. Guidelines are that a 1.5m gap with a sealed surface and level approaches is the only accessible access point; anything else obstructs. I visited a project in Kenilworth in September where kissing gates had been used, on advice from a consultant engaged by the local council. Those were accepted in the 1990s or 2000s, but mobility-aid users know that they go rusty, or stiff, or overgrown, and cannot be relied upon unless perfectly maintained, which NEVER happens. And if users get stuck it becomes life-threatening in the countryside, because they cannot pick up their mobility aid and walk.
So it is a complex question, with no cut-and-dried solution.
Darren Tierney, who was at the time of Mandelson’s appointment in 2024 the Director General of the Propriety and Constitution Group, completed a full briefing. It included the following section:
“After Mr Epstein was first convicted of procuring an underage girl in 2008, their relationship continued through to 2011, beginning when Lord Mandelson was Business Minister and continued after the end of the then Labour government. Lord Mandelson stayed in Epstein’s House while he was in jail in June 2009. You will wish to consider his suitability given this and the other information in this note.”
It was brought in physical form to the PM’s office by a junior official in the Propriety team. It was prepared by civil servants and signed by Tierney.
You find me a blockchain solution that can go to an API docs site, interpret it, write an API call, write a lambda function and push the output into S3 then add the new S3 folder to the warehousing script and I'll accept your comparison.
Previously a custom pipeline would take a data engineer 2-3 days to code, test and merge. Now it's half a day to review, test and merge. LLMs are the real deal, but their success rates will depend hugely on what kind of jobs they are being applied to. Anyone working a desk job in front of a monitor should start making plans to not have the same career in five years.
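The pipeline step being described (call an API, reshape the payload, land it in S3 under a dated key) can be sketched roughly as below. The endpoint, field names and key layout are invented for illustration, and a plain object stands in for the S3 bucket so the sketch stays self-contained; in the real thing the write would be an `s3.putObject` call from the generated client code.

```javascript
// Reshape one API record into the row format the warehouse expects.
// Field names here are hypothetical.
function toWarehouseRow(record) {
  return {
    id: record.id,
    amount: Number(record.amount),
    loaded_at: record.updated ?? null,
  };
}

// Build the S3 key a daily partition would land under.
function s3KeyFor(dataset, isoDate) {
  return `raw/${dataset}/dt=${isoDate}/part-000.json`;
}

// "Push" rows to storage; `store` is a plain object standing in for a bucket.
function writePartition(store, dataset, isoDate, records) {
  const key = s3KeyFor(dataset, isoDate);
  store[key] = JSON.stringify(records.map(toWarehouseRow));
  return key;
}
```

The warehousing step would then just register the new `dt=` partition with whatever script scans the prefix.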
Keen to hear more anecdotes like this. I've heard Claude is the best AI coding assistant.
There was that paper out which suggested it actually takes people longer to fix/check the code, and so AI is actually killing productivity without people realising.
I guess the key test will be if people start letting coders go or stop hiring.
Scoop: Senior military officials on Wednesday presented Trump with updated options for potential operations in Venezuela, including land strikes, sources tell @JimLaPorta and me. Hegseth, Caine were at White House yday afternoon for briefings. No final decision made, two of the sources told @CBSNews
"When asked about the single attribute that make someone English, YouGov found that only 10% of white Britons believe being white is a requirement — compared with 24% of ethnic-minority Britons. Just 9% of white respondents said both family heritage and whiteness are necessary, while more than twice as many ethnic-minority Britons (21%) agreed. And when Englishness was framed as a mix of whiteness, family heritage and Christian values, only 4% of white Britons born in the UK endorsed that view, along with 5% of ethnic-minority Britons born in the UK."
Bay of Pigs in the offing?
Bay of Pigs was JFK doing what hardline anti-commies do.
A lot of it is people running it in agentic mode and letting it change vast portions of the code base in one go. That is near guaranteed to introduce bugs that are difficult to spot and will require time-consuming fixes. If you break down the problem and target small chunks which you test on their own (generally good software engineering regardless), it enables massive increases in productivity.
It's also great if you need to prototype something, particularly to try something you have little experience with.
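That "small chunks, each tested on their own" workflow is just the classic unit-test loop applied to generated code: ask for one small, pure function at a time and pin it down before moving on. A minimal sketch, with a made-up example function:

```javascript
// One small chunk: normalise a user-supplied tag.
function normaliseTag(tag) {
  return tag.trim().toLowerCase().replace(/\s+/g, "-");
}

// Its own tests, run before the next chunk is generated. If the model's
// version fails here, you find out while the change is still tiny.
function testNormaliseTag() {
  const cases = [
    ["  Machine Learning ", "machine-learning"],
    ["AI", "ai"],
  ];
  for (const [input, expected] of cases) {
    if (normaliseTag(input) !== expected) {
      throw new Error(`normaliseTag(${JSON.stringify(input)}) failed`);
    }
  }
}
testNormaliseTag();
```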
"When asked about the single attribute that make someone English, YouGov found that only 10% of white Britons believe being white is a requirement — compared with 24% of ethnic-minority Britons. Just 9% of white respondents said both family heritage and whiteness are necessary, while more than twice as many ethnic-minority Britons (21%) agreed. And when Englishness was framed as a mix of whiteness, family heritage and Christian values, only 4% of white Britons born in the UK endorsed that view, along with 5% of ethnic-minority Britons born in the UK."
The only truly British people are those who are Half Jewish, Half Polish, Half Ukrainian, Half French, Half American & Half Scottish. With a smidgin of English in there somewhere.
The thing is, both that anecdote and Max are correct.
There are scenarios where it's clear that AI can save time: read a specified API document and generate code to pull from the API.
The problem comes when you ask AI to generate code where the specification is ill-defined or has changed over the years. Ask it to write JavaScript and AIs struggle, because they will look at 30 years of examples and write poor-quality code based on things that are popular but 20 years out of date.
That's one reason it's bad at my day job: it sees information that was accurate in 2015 and assumes it's still valid now. It doesn't check the blog post and think "hmm, this is old so may be incorrect"; it sees that this is from 2015 and is still there, so it must be important and correct.
Edit: in fairness, AI can be useful in finding stuff - but I never, ever accept what it says without actual verification.
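To illustrate the "popular but 20 years out of date" point: both functions below do the same job, but the first is the sort of circa-2005 pattern a model can regurgitate because old tutorials dominate its training data, while the second is the modern equivalent. The example itself is invented.

```javascript
// Dated style: var, manual index loop, string concatenation. It runs,
// but it is what decades-old tutorials look like.
function labelsOld(items) {
  var out = [];
  for (var i = 0; i < items.length; i++) {
    if (items[i].active) {
      out.push(items[i].name + " (" + items[i].id + ")");
    }
  }
  return out;
}

// Modern equivalent: const, filter/map, template literals.
const labelsNew = (items) =>
  items.filter((it) => it.active).map((it) => `${it.name} (${it.id})`);
```

Both behave identically, which is exactly why a model trained mostly on the first style sees no reason to prefer the second.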
The use in speeding up boilerplate stuff is interesting, though. Sounds more likely.
AI has uses - the problem is that in its current form it's probably an $x0bn industry and not the $x000bn it's being hyped up to be.
We may need the belief that it's a $x000bn industry to get to the point where it's a $x0bn industry. It may end up like the web browser - browsers have value, which is why Google and Apple develop/fund them and Microsoft and others rebadge those with their own value-adds - but they don't have the value today that the industry suggested they did in the mid-to-late 90s.
Speaking seriously, AI really kicks in when it comes to helping me code in languages in which I don't have a depth of experience. I don't have a personal AI account, but I use the one my employer provides (CoPilot) or ones that are free (Perplexity.ai). Although I am keen to hear of the experience of others, I am reluctant to use AIs that need to be paid for and/or require an account.
Dig deeper though, and you realise AI is just a new word for tech. These are all largely standard tech companies most of which would have been doing 98% of the same stuff 5 years ago when Gen AI wasn’t a thing, and calling it something else. Back then the fashionable things were cloud, and blockchain. Same billboards but just substitute the word AI.
We do need to invest in tech infrastructure. If that requires us to use the AI word to drum up excitement then fair enough.
Somebody sent me an analysis a few days ago, which was carried out remarkably quickly. Giving credit where credit was due, they said it was "aided by my trusty AI assistant". Um, given the warning attached to AI search summaries that the results may contain errors, I'm not so confident about the analysis after all.
Am I unduly pessimistic?
A friend of mine who is a lecturer tried an experiment with a very well known AI on a subject he and I know well. He got the AI to chunter out some stuff about the subject and then started in on it. It's not long before this starts happening:
"You say X, can you please give me the evidence?" "Bloggs 1978 says so." "But I've looked at it and it doesn't say that at all. What is your evidence?" "I'm sorry, the correct evidence is in website so-and-so." "But that is about something else. What is your evidence that it is relevant?" "Jones 2008..." "But that doesn't exist. Tell me..." ... and so on for about 20 cycles. I paraphrase, but you get the idea. He makes a point of showing it to his students, I am sure as a warning to them not to make the AI write their essays.
The whole thing reminds me of nothing so much as the poor chap who was not quite all there mentally who just wanted to help the nice police and didn't want to admit he didn't know the answer to anything, so tried to keep them happy by answering yes etc. to their questions.
Trouble was, what they were visiting about was a murder down the road. And no adult present. He got banged up for life ... when they had a grade A suspect very close by.
Barely a week goes by without a hallucinated citation causing hilarity, embarrassment and compromised careers in the legal profession.
You find me a blockchain solution that can go to an API docs site, interpret it, write an API call, write a lambda function and push the output into S3 then add the new S3 folder to the warehousing script and I'll accept your comparison.
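The API-to-warehouse step being described can be sketched roughly like this in Python. Everything here is hypothetical: the endpoint URL, bucket name and key layout are made up for illustration, and a real pipeline would add auth, pagination, retries and error handling.

```python
# Sketch of the pipeline step above: pull from an API in a Lambda
# function and land the result in S3, under a date-partitioned
# "folder" the warehousing script can then be pointed at.
import json
from datetime import date, datetime

API_URL = "https://api.example.com/v1/records"  # hypothetical endpoint
BUCKET = "example-warehouse-landing"            # hypothetical bucket


def s3_key(dataset: str, run_date: date) -> str:
    """Build the date-partitioned key, i.e. the 'new S3 folder'
    that gets added to the warehousing script."""
    return f"raw/{dataset}/dt={run_date.isoformat()}/part-0000.json"


def handler(event, context):
    """AWS Lambda entry point: fetch the API, write the payload to S3."""
    import urllib.request
    import boto3  # provided in the Lambda Python runtime

    with urllib.request.urlopen(API_URL) as resp:
        payload = json.load(resp)

    key = s3_key("records", datetime.utcnow().date())
    boto3.client("s3").put_object(
        Bucket=BUCKET, Key=key, Body=json.dumps(payload).encode()
    )
    return {"written": f"s3://{BUCKET}/{key}"}
```

The point of the comparison stands either way: this is mundane glue code, but it is exactly the kind of thing an LLM can now draft from the API docs, which blockchain never could.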
You are missing out if you are only using Copilot or free tiers. The models behind the paywall from Google, OpenAI and Claude are significantly better. You don't need the $200-a-month tiers, though.
There are, of course, reasonably capable models that are open weights and run locally (with a good machine). But Claude is miles ahead of them in terms of coding ability.
Edit: in fairness, AI can be useful in finding stuff - but I never, ever accept what it says without actual verification.
The use in speeding up boilerplate stuff is interesting, though. Sounds more likely.
AI has uses - the problem is that in its current form it's probably a $x0bn industry and not the $x000bn one it's being hyped up to be.
The whole industry is based on more hype than just about anything I've ever seen. You can even see that in the name. It's not Intelligence in any meaningful sense because it's not self-aware.
But I suppose the Pattern Recognition industry is less appealing to gullible investors and corporate procurement offices than the Artificial Intelligence concept.
Won't somebody think of the airport thriller writers...
The problem could be that decision makers leap on AI, lay off or stop recruiting humans, and then, when AI doesn't perform, devote the resources to fixing AI rather than re-employing humans. AI doesn't have to be better than what is currently done to replace it; it just has to be fashionable.
OpenAI / Sam Altman is particularly bad. He hates Musk, but is engaging in exactly the same sort of over-hyping practices.
One thing that I think is a definite concern: these LLMs are very good at coding, better than a grad straight out of uni. So you don't hire many grads, and the senior devs get a productivity boost by using them (and by not having to "waste" time coaching juniors). All good... oh wait, the seniors have left for a different company or are retiring, and we don't have any juniors trained up to their skill levels. It's a bit like political parties at the moment.
Thanks. Those are really interesting - not just for the comedic value but for the wider implications. I suppose some think the law is nice and rigid and should be easy peasy for some overgrown Robby the Robot. And I suppose some are just used to making some junior do the work for them to sign off. But these results... oooh, I don't usually use the expression FAFO, but it sure applies.
The West is already doing that with overseas workers. Management really likes the idea of an Indonesian or Malaysian with 2-3 years of experience for less than the cost of a new Western grad.
The thing is, both that anecdote and Max are correct.
There are scenarios where it's clear that AI can save time: reading a specified API document and generating code to pull from the API.
The problem comes when you ask AI to generate code where the specification is ill defined or has changed over the years. Ask it to write JavaScript and AIs struggle, because they will look at 30 years of examples and write poor-quality code based on things that are popular but 20 years out of date.
That's one reason it's bad at my day job: it sees information that was accurate in 2015 and assumes it's still valid now. It doesn't check the blogpost and think "hmm, this is old, so it may be incorrect"; it sees that it's from 2015 and is still there, so it must be important and correct.
It will be interesting to see whether the number of unexplained outages starts increasing as service providers become more reliant on the assistants.
I'd guess that actively deskilling software developers will come with large downstream effects. Whether the large software houses will care might be a different matter.
There was the story of the significant number of North Koreans pretending to be from other places in Asia, working their outsourced (and sometimes in-sourced) jobs totally remotely.
Seems we removed most of our secondary industry on the promise of well-paid service sector positions; after we've outsourced the tertiary 'industries', what is going to be left?
I don't know. I hate to say this, but it might actually be pure bullshit. An AI-generated tweet issued by a SPAD who's never had a job with a yearly assessment to preserve a Prime Minister who hasn't got a clue about anything without somebody telling him first and is just counting the days when he can bugger off abroad so he can mix with Important People and think Important Thoughts with people who aren't British.
AIGZ = special designated area basically, with the right power supply etc. Been in the water all year.
According to the government, AI Growth Zones are designated sites that are well-suited to housing AI-enabled datacentres and their supporting infrastructure. Ideally, these zones should have “enhanced access” to power supplies of at least 500MW and sympathetic planning support. This is because datacentres are notoriously power-hungry entities, and siting them in areas where energy is in short-supply could slow down the time it takes to bring one of these new AI server farms online."
Targeting, in part, poor areas eg deinustralised ones, which are less likely to complain planning wise. Not sure how much it will do for long term employment of the actual locals, or their leccy bills once regional pricing comes in, though (which may be one reason Labour don't like the latter).
In terms of locals long term employment from AI. The announcement today will be "generating power by the mid 2030s." So lets call it 2040....we will be on our 2nd or 3rd AI bubble bursting by then.
These announcements remind me a bit of HS2 was always sold as speed, when it should have been capacity. The nuclear power as part of securing future energy supplies particularly with much more renewables in the mix, absolutely, going to be loads of local AI jobs, bullshit. Even if datacentres are constructed nearby in 2035, they require about 5 mole people to operate.
I had the (tbh fairly obvious) revelation about the nature of the AI bubble this week in the Bay Area. Everywhere you look are billboards advertising AI this and AI that. As numerous as the billboards advertising personal injury attorneys in Houston.
Dig deeper though, and you realise AI is just a new word for tech. These are all largely standard tech companies most of which would have been doing 98% of the same stuff 5 years ago when Gen AI wasn’t a thing, and calling it something else. Back then the fashionable things were cloud, and blockchain. Same billboards but just substitute the word AI.
We do need to invest in tech infrastructure. If that requires us to use the AI word to drum up excitement then fair enough.
You find me a blockchain solution that can go to an API docs site, interpret it, write an API call, write a lambda function and push the output into S3 then add the new S3 folder to the warehousing script and I'll accept your comparison.
Previously a custom pipeline would take a data engineer 2-3 days to code, test and merge. Now it's half a day to review, test and merge. LLMs are the real deal, but their success rates will depend hugely on what kind of jobs they are being applied to. Anyone working a desk job in front of a monitor should start making plans to not have the same career in five years.
Keen to hear more anecdotes like this. I've heard Claude is the best coding AI assistant.
There was that paper which suggested it actually takes people longer to fix and check the code, and so AI is actually killing productivity without people realising.
I guess the key test will be if people start letting coders go or stop hiring.
Speaking seriously, AI really kicks in when it comes to helping me code in languages in which I don't have a depth of experience. I don't have a personal AI account, but I use the one my employer provides (Copilot) or ones that are free (Perplexity.ai). Although I am keen to hear of the experience of others, I am reluctant to use AIs that need to be paid for and/or require an account.
You are missing out if you are only using Copilot or free tiers. The models behind the paywall from Google, OpenAI and Claude are significantly better. You don't need the $200-a-month tiers, though.
There are of course reasonably capable open-weights models that run locally (with a good machine), but Claude is miles ahead of them in terms of coding ability.
Google AI Premium plan. Cost: $19.99 per month
Google AI Ultra plan. Cost: $249.99 per month
OpenAI plan. Cost: priced per token, not per month
Claude AI Pro plan. Cost: $20 per month
Claude AI Max plan. Cost: "from" $100 per month
The lower paid tier is affordable; the one above it is not something I would stretch to without a very good reason. As of this moment, Claude AI Pro is unnecessary for me, but I will genuinely bear it in mind. Which languages do you use it to code in?
Following on from the debate the other day on types of mobility aids, and the Cycle to Work scheme this morning, here's an example of a good modern mobility aid. I think this should be on such a scheme, or on Motability, or both. This is a "clip-on e-handcycle" a friend uses. It sits in the unregulated gap (ie it has no formal legal classification, which I hope the Govt will now sort out). It clips onto a manual wheelchair, quicker than I can put on a pair of slip-on shoes, and makes it like a powerchair, but as an e-tricycle. It has an "autonomy" (ie range) of up to around 40-50km (depending on battery size), and quite a powerful motor for traction.
He upgraded to this one because UK cycle routes (eg the NCN) are full of steep hills *, have little investment (a total budget of a few million for 20,000km of route) and get essentially zero seasonal maintenance, so are currently mainly covered in wet leaves. This one costs about £8k. He has done 15,000km on this type of device over the last 8 years.
(He is currently wanting rear wheel drive too - that won't, I think, happen.)
* As an example, the pedestrian bridge near me over the M1 has a straight 60m 1-in-15 ramp one side with a packed earth surface, which needs an industrial strength mobility aid to tackle safely. That's in addition to 3 sets of chicane barriers.
Megyn Kelly: "I know somebody very close to this case…Jeffrey Epstein, in this person's view, was not a pedophile…He was into the barely legal type, like he liked 15 year old girls…He wasn't into like 8 year olds…There's a difference between a 15 year old and a 5 year old."
Somebody sent me an analysis a few days ago, which was carried out in remarkably quick time. Giving credit where credit was due, they said it was "aided by my trusty AI assistant". Um, given the warnings that come with AI search result summaries that the results may contain errors, I'm not so confident about the analysis after all.
Am I unduly pessimistic?
A friend of mine who is a lecturer tried an experiment on a very well known AI, on a subject he and I know well. He got the AI to chunter out some stuff about the subject and then started in on it. It's not long before this starts happening:
"You say X, can you please give me the evidence?" "Bloggs 1978 says so." "But I've looked at it and it doesn't say that at all. What is your evidence?" "I'm sorry, the correct evidence is in website so-and-so." "But that is about something else. What is your evidence that it is relevant?" "Jones 2008..." "But that doesn't exist. Tell me..." ...and so on for about 20 cycles. I paraphrase, but you get the idea. He makes a point of showing it to his students, I am sure as a warning to them not to let the AI write their essays.
The whole thing reminds me of nothing so much as the poor chap who was not quite all there mentally who just wanted to help the nice police and didn't want to admit he didn't know the answer to anything, so tried to keep them happy by answering yes etc. to their questions.
Trouble was, what they were visiting about was a murder down the road. And no adult present. He got banged up for life ... when they had a grade A suspect very close by.
I've had this experience (with AI, not the police). It would rather be wrong than vague.
Edit: in fairness, AI can be useful in finding stuff - but I never, ever accept what it says without actual verification.
The use in speeding up boilerplate stuff is interesting, though. Sounds more likely.
AI has uses - the problem is that in its current form it's probably an $x0bn industry and not the $x000bn one it's being hyped up to be.
Won't somebody think of the airport thriller writers...
The problem could be that decision makers leap on AI, lay off / stop recruiting humans, then when AI doesn't perform devote the resources to fixing AI rather than reemploying humans. AI doesn't have to be better than what is currently done to replace it, it just has to be fashionable.
One thing that I think is a definite concern: these LLMs are very good at coding, better than a grad straight out of uni. So you don't hire many grads, and the senior devs get a productivity boost from using the LLMs (and from not having to "waste" time coaching juniors). All good... oh wait, the seniors have left for a different company or are retiring, and we don't have any juniors trained up to their skill level. It's a bit like political parties at the moment.
That's probably also true at an individual level.
I could (and am sometimes encouraged to, including as government policy) take the misery out of teaching by using AI to generate lessons, or feedback on assignments.
And it's easy to see the logic. But leaving aside the issues of energy use, copyright ethics and many AIs drawing triangles that look like triangles but aren't...
Some of that misery is inseparable from the essential and fun stuff. Preparing resources includes thinking about the content so as to have fun things to say. Marking work is the only way to really know what has worked and what hasn't.
As a society and culture, we're pretty bad at valuing tangible and intangible correctly relative to each other.
"When asked about the single attribute that make someone English, YouGov found that only 10% of white Britons believe being white is a requirement — compared with 24% of ethnic-minority Britons. Just 9% of white respondents said both family heritage and whiteness are necessary, while more than twice as many ethnic-minority Britons (21%) agreed. And when Englishness was framed as a mix of whiteness, family heritage and Christian values, only 4% of white Britons born in the UK endorsed that view, along with 5% of ethnic-minority Britons born in the UK."
Barely a week goes by without a hallucinated citation causing hilarity, embarrassment and compromised careers in the legal profession.
AIs are as unreliable as humans, though unlike humans, they don't intend to deceive. I find it helps if you add to your prompt "if you are not certain, say you don't know. "
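For what it's worth, that tip is easy to bake in wherever prompts are assembled programmatically. A trivial sketch, with the wrapper name and exact wording being my own:

```python
# Prepend a hedging instruction so the model is explicitly licensed to
# say "I don't know" instead of inventing a confident answer.
HEDGE = "If you are not certain, say you don't know."


def build_prompt(question: str) -> str:
    """Return the question with the hedging instruction prepended."""
    return f"{HEDGE}\n\n{question}"
```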
I'm not at all sure the shite pretend robots you get on corporate website chat aren't intended to deceive. But then I always end up saying "I want to speak to a human being" before long. And they may not be AI anyway.
I've had this experience (with AI, not the police). It would rather be wrong than vague.
It simply aims to please.
We don't like to say it out loud, but humans largely respond better to confidently wrong than to correct diffidence.
In my experience its main utility is in summarising, rewriting and rephrasing - that’s where it saves you time. Get your thoughts down on a page in any old order then ask it to summarise or present it in a logical way.
Any research it does needs independently checking. Use it as a starting point only.
Megyn Kelly: "I know somebody very close to this case…Jeffrey Epstein, in this person's view, was not a pedophile…He was into the barely legal type, like he liked 15 year old girls…He wasn't into like 8 year olds…There's a difference between a 15 year old and a 5 year old."
There is a difference, although as my wife would say it probably isn't something you should discuss down the pub in case you are misunderstood.
Being attracted to a 5 or 8 year old is very weird.
Being attracted to a 16 or 17 year old isn't, but if you are significantly older than them then if you act upon it you are definitely a rather unpleasant creep.
"Germany calls up all men aged 18 for military medical exam. The country has stopped short of reintroducing mandatory conscription but hopes to boost numbers by giving potential recruits an insight into army life" (£)
"When asked about the single attribute that make someone English, YouGov found that only 10% of white Britons believe being white is a requirement — compared with 24% of ethnic-minority Britons. Just 9% of white respondents said both family heritage and whiteness are necessary, while more than twice as many ethnic-minority Britons (21%) agreed. And when Englishness was framed as a mix of whiteness, family heritage and Christian values, only 4% of white Britons born in the UK endorsed that view, along with 5% of ethnic-minority Britons born in the UK."
Can’t be true. From reading PB and social media I thought most white people are racist, and plastic patriots.
The reality of course is that white British people are some of the least racist people in the world.
They certainly are. Time after time we see that in polling on questions like ‘would you mind if your child married a person from another race’ or ‘would you mind if your neighbour is different race’.
I think we’re pretty tolerant and rub along fine, as long as we don’t think people are taking the piss and they are good neighbours.
Megyn Kelly: "I know somebody very close to this case…Jeffrey Epstein, in this person's view, was not a pedophile…He was into the barely legal type, like he liked 15 year old girls…He wasn't into like 8 year olds…There's a difference between a 15 year old and a 5 year old."
There is a difference, although as my wife would say it probably isn't something you should discuss down the pub in case you are misunderstood.
Being attracted to a 5 or 8 year old is very weird.
Being attracted to a 16 or 17 year old isn't, but if you are significantly older than them then if you act upon it you are definitely a rather unpleasant creep.
This is quite a modern view. For instance Tory MP Alan Clark got married to 16 year old Jane Beuttler in 1958 when he was 30.
same as "we are bringing a huge pile of steaming horse droppings to help Welsh Business grow"
In the beginning was the plan
And then came the assumptions
And the assumptions were without form
And the plan was completely without substance
And the darkness was upon the face of workers
And they spoke among themselves, saying "It is a crock of shit and it stinketh."
And the workers went upon their supervisors and sayeth, "It is a pail of dung and none may abide the odor thereof."
And the supervisors went unto their managers and sayeth unto them "It is a container of excrement and it is very strong, such that none may abide by it."
And the managers went unto their directors and sayeth, "It is a vessel of fertilizer, none may abide its strength."
And the directors spoke amongst themselves, saying one to another, "It contains that which aids plant growth, and it is very strong."
And the directors went unto the vice presidents and sayeth unto them, "It promotes growth and is exceedingly powerful."
And the vice presidents went unto the president and sayeth unto him, "This new plan will actively promote the growth and efficiency of this company, and these areas in particular."
And the president looked upon the plan,
And saw that it was good, and the plan became policy.
I don't know. I hate to say this, but it might actually be pure bullshit. An AI-generated tweet issued by a SPAD who's never had a job with a yearly assessment, to preserve a Prime Minister who hasn't got a clue about anything without somebody telling him first and is just counting the days until he can bugger off abroad, so he can mix with Important People and think Important Thoughts with people who aren't British.
AIGZ = special designated area basically, with the right power supply etc. Been in the water all year.
According to the government, AI Growth Zones are designated sites that are well-suited to housing AI-enabled datacentres and their supporting infrastructure. Ideally, these zones should have “enhanced access” to power supplies of at least 500MW and sympathetic planning support. This is because datacentres are notoriously power-hungry entities, and siting them in areas where energy is in short supply could slow down the time it takes to bring one of these new AI server farms online.
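To put that 500MW figure in context, a rough back-of-envelope sketch. The per-rack draw and overhead fraction are my assumptions for illustration, not figures from the announcement:

```python
# Rough sketch of why a 500MW grid connection matters for an AI datacentre.
# Assumed figures, not from the article: a modern AI rack draws tens of kW;
# we take 50kW as a round number, and assume ~70% of site power reaches the
# IT load (the rest goes to cooling and distribution, i.e. a PUE of ~1.4).

SITE_POWER_MW = 500
ASSUMED_RACK_KW = 50
ASSUMED_IT_FRACTION = 0.7  # roughly 1 / PUE

it_power_kw = SITE_POWER_MW * 1000 * ASSUMED_IT_FRACTION
racks = it_power_kw / ASSUMED_RACK_KW
print(f"~{racks:,.0f} racks supportable from a {SITE_POWER_MW}MW supply")
```

Even with generous assumptions, a supply of that size is thousands of racks, which is why a constrained local grid is the binding constraint rather than land or planning.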
Targeting, in part, poor areas, e.g. deindustrialised ones, which are less likely to complain planning-wise. Not sure how much it will do for the long-term employment of the actual locals, or their leccy bills once regional pricing comes in, though (which may be one reason Labour don't like the latter).
As for locals' long-term employment from AI: the announcement today will be "generating power by the mid 2030s". So let's call it 2040... we will be on our second or third AI bubble bursting by then.
These announcements remind me a bit of HS2, which was always sold on speed when it should have been capacity. Nuclear power as part of securing future energy supplies, particularly with much more renewables in the mix: absolutely. "Going to be loads of local AI jobs": bullshit. Even if datacentres are constructed nearby in 2035, they require about five people to operate.
I had the (tbh fairly obvious) revelation about the nature of the AI bubble this week in the Bay Area. Everywhere you look are billboards advertising AI this and AI that. As numerous as the billboards advertising personal injury attorneys in Houston.
Dig deeper though, and you realise AI is just a new word for tech. These are all largely standard tech companies most of which would have been doing 98% of the same stuff 5 years ago when Gen AI wasn’t a thing, and calling it something else. Back then the fashionable things were cloud, and blockchain. Same billboards but just substitute the word AI.
We do need to invest in tech infrastructure. If that requires us to use the AI word to drum up excitement then fair enough.
Somebody sent me an analysis a few days ago, which was carried out in remarkably quick time. Giving credit where credit was due, they said it was "aided by my trusty AI assistant". Um, given the warnings that come with AI search result summaries that the results may contain errors, I'm not so confident about the analysis after all.
Am I unduly pessimistic?
A friend of mine who is a lecturer tried an experiment on a very well-known AI, on a subject he and I know well. He got the AI to chunter out some stuff about the subject and then started in on it. It's not long before this starts happening:
"You say X, can you please give me the evidence?" "Bloggs 1978 says so." "But I've looked at it and it doesn't say that at all. What is your evidence?" "I'm sorry, the correct evidence is in website so and so." "But that is about something else. What is your evidence that it is relevant?" "Jones 2008 ... "But that doesn't exist. Tell me ... ... and so on and so on for about 20 cycles. I paraphrase - but you get the idea. He makes a point of showing it to his students - I am sure as a warning to them not to make the AI write their essays.
The whole thing reminds me of nothing so much as the poor chap who was not quite all there mentally who just wanted to help the nice police and didn't want to admit he didn't know the answer to anything, so tried to keep them happy by answering yes etc. to their questions.
Trouble was, what they were visiting about was a murder down the road. And no adult present. He got banged up for life ... when they had a grade A suspect very close by.
Barely a week goes by without a hallucinated citation causing hilarity, embarrassment, and compromised careers in the legal profession.
AIs are as unreliable as humans, though unlike humans they don't intend to deceive. I find it helps if you add to your prompt: "If you are not certain, say you don't know."
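That tip can be baked in mechanically rather than retyped every time. A minimal sketch; the wrapper function and the exact wording are mine, not from any vendor's API:

```python
# Minimal sketch: prepend an uncertainty instruction to every prompt before
# sending it to whichever chat model you use. Wording is illustrative only.

HEDGE = (
    "If you are not certain of a fact, say you don't know "
    "rather than guessing. Cite only sources you can verify."
)

def hedged_prompt(user_prompt: str) -> str:
    """Return the user's prompt with the uncertainty instruction prepended."""
    return f"{HEDGE}\n\n{user_prompt}"

print(hedged_prompt("Summarise Bloggs 1978 for me."))
```

Most APIs also accept a separate system prompt, which is the more idiomatic place for an instruction like this than prepending it to every user message.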
You find me a blockchain solution that can go to an API docs site, interpret it, write an API call, write a Lambda function and push the output into S3, then add the new S3 folder to the warehousing script, and I'll accept your comparison.
Previously a custom pipeline would take a data engineer 2-3 days to code, test and merge. Now it's half a day to review, test and merge. LLMs are the real deal, but their success rates will depend hugely on what kind of jobs they are being applied to. Anyone working a desk job in front of a monitor should start making plans to not have the same career in five years.
Keen to hear more anecdotes like this. I've heard Claude is the best coding AI assistant.
There was that paper which suggested it actually takes people longer to fix/check the code, so AI is actually killing productivity without people realising.
I guess the key test will be if people start letting coders go or stop hiring.
Speaking seriously, AI really kicks in when it comes to helping me code in languages in which I don't have a depth of experience. I don't have a personal AI account, but I use the one my employer provides (Copilot) or ones that are free (Perplexity.ai). Although I am keen to hear of the experience of others, I am reluctant to use AIs that need to be paid for and/or require an account.
You are missing out if you are only using Copilot or free tiers. The models behind the paywall from Google, OpenAI and Claude are significantly better. You don't need the $200-a-month tiers though.
There are of course models that are reasonably capable, open-weights, and run locally (with a good machine). But Claude is miles ahead of them in terms of coding ability.
Google AI Premium plan. Cost: $19.99 per month
Google AI Ultra plan. Cost: $249.99 per month
OpenAI plan: priced per token, not per month
Claude AI Pro plan. Cost: $20 per month
Claude AI Max plan. Cost: "from" $100 per month
The lower paid tier is affordable, the one above that is not something I would stretch to without a very good reason. As of this moment, Claude AI Pro is unnecessary but I will genuinely bear it in mind. Which languages do you use it to code in?
FYI, OpenAI offer per-month plans: $30 and $200 if I remember correctly. I pay for it but I can't honestly remember, as I have lots of subs for things.
Claude Pro at $20 will be more than enough for most people unless you are wanting to hammer it.
But you can also get API access and pay per token. Which is what a lot of people do if they are linking it into things like Cursor.
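For a sense of when per-token API pricing beats a flat subscription, a quick sketch. The per-million-token prices below are illustrative placeholders, not anyone's current rates:

```python
# Sketch: compare a flat monthly subscription against per-token API pricing.
# The per-million-token prices are assumptions for illustration; check the
# provider's current price list before relying on them.

ASSUMED_INPUT_PER_MTOK = 3.00    # $ per million input tokens (assumed)
ASSUMED_OUTPUT_PER_MTOK = 15.00  # $ per million output tokens (assumed)
FLAT_SUB = 20.00                 # $ per month, e.g. a Pro-tier plan

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Monthly API cost in dollars for the given token usage."""
    return (input_tokens / 1e6) * ASSUMED_INPUT_PER_MTOK + \
           (output_tokens / 1e6) * ASSUMED_OUTPUT_PER_MTOK

# A month of moderate use: 5M tokens in, 1M out.
monthly = api_cost(5_000_000, 1_000_000)
print(f"API: ${monthly:.2f}/month vs flat ${FLAT_SUB:.2f}/month")
```

Under these assumed rates, moderate chat use costs more via the API than a flat plan, which is why the per-token route mainly suits tool integrations (Cursor and the like) with bursty or very light usage.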
If you want to just try out different models you can sub to something like T3Chat that lets you have limited usage of basically all the models for a single sub of $10.
As I do ML, it's overwhelmingly Python, with some C++/CUDA.
It's hard to imagine anybody pays $200 a month for autocomplete but I guess that's how hype bubbles work...
The scary bit is $200/month is not enough for the likes of OpenAI to make a profit.
Google ended up designing its own value-engineered hardware to cut costs. OpenAI might have to go the same way, with all the implications for Nvidia that would entail.
It is very telling that Google won't sell their TPUs to anybody else, and that not much is known about them.
Well, I definitely think that is odd. I might have found a 16-year-old pretty, but I am pretty sure that once I was 18 or over I wouldn't have contemplated going out with one. At 30, definitely not.
Clark was a rum cove, even at the time.
But also, now is better than then in a number of important ways.
The Soviets cloned Western CPUs in the late 70s and through the 80s by sanding the tops off them and copying the layouts under (very good) microscopes. Google are probably worried about someone doing the modern equivalent with their TPUs.
The range of the Labour/Green polling numbers is the interesting bit across all the pollsters. For some, Labour aren't doing that badly all things considered; for others, they are in danger of single digits if the budget goes badly.
Google has the advantage of already possessing semiconductor design teams; OpenAI would need to start from scratch, and that's not easy. Their deal with AMD suggests they will try to play AMD and Nvidia off against each other.
But in the short to medium term all the AI providers are going to have to burn even more money, given how the costs of DRAM and flash memory have started to skyrocket.
I read somewhere that in the very early days Google built their own servers using velcro instead of screws, so they could swap out hardware as it failed.
They also did side-by-side comparisons of things like hard drives. One vendor's drives had a significantly longer lifespan than all of the others. I don't know if the findings were made public, but I strongly suspect the winner was IBM (whose drive business was then sold to Hitachi, and later to Western Digital).
The Green surge is bad news for Your Party to state the obvious.
Isn't it the enthusiasm gap, and the different ways that different pollsters interpret that?
Reform and the Greens have fans, whereas almost anyone voting Labour is currently doing so pretty reluctantly, I imagine.
Follow the divide by two and add seven rule and you can't go far wrong. Personally speaking, every woman I've slept with has been two months older than me.
The headline summarises it - "chances to prevent murder ‘lost to racial sensitivities’".
An appalling case. No-one did what they ought because they feared causing offence or being branded as racist. And so a young girl - Sara Sharif - was brutally abused and killed. 96 separate injuries on her body. Her back had been broken 10 times. Unimaginable suffering in a short life.
25 years ago - 25 years - similar reasons ("cultural reasons" they were called - the murderers then were black Africans not people from Pakistan) led to no-one taking action to prevent the abuse and murder of a young girl of a similar age - Victoria Climbie. Her murder led to a public inquiry and lots of new legislation.
Yet here we are, despite all that, reading the same horrific story and wondering when in God's name those charged with caring for our children will realise that putting children with men who are known to be violent is a fucking stupid idea; that worrying about being called racist simply should not be a consideration when a child's safety is at stake; and that if abusing a child is part of a "culture" (and not a pathetic excuse for violence and cruelty), then we should be calling that culture what it is, barbaric, and refusing to accept it as a defence or excuse for barbarism, instead of running scared of its sensitivities.
Incredibly funny that people watched Teslas explode and burn down to the frame for no good reason and decided they wanted that technology inside their house
Dig deeper though, and you realise AI is just a new word for tech. These are all largely standard tech companies most of which would have been doing 98% of the same stuff 5 years ago when Gen AI wasn’t a thing, and calling it something else. Back then the fashionable things were cloud, and blockchain. Same billboards but just substitute the word AI.
We do need to invest in tech infrastructure. If that requires us to use the AI word to drum up excitement then fair enough.
Somebody sent me an analysis a few days ago, which was carried out in remarkably quick time. Giving credit where credit was due, they said "aided by my trusty AI assistant". Um, given the warnings that come with AI search result summaries that the results may contain errors, I'm not so confident about the analysis after all.
Am I unduly pessimistic?
Friend of mine who is a lecturer tried an experiment on a very well known AI on a subject he and I know well. He got the AI to chunter out some stuff about the subject and then started in on it. It's not long before this starts happening:
"You say X, can you please give me the evidence?" "Bloggs 1978 says so." "But I've looked at it and it doesn't say that at all. What is your evidence?" "I'm sorry, the correct evidence is in website so and so." "But that is about something else. What is your evidence that it is relevant?" "Jones 2008..." "But that doesn't exist. Tell me..." ... and so on and so on for about 20 cycles. I paraphrase - but you get the idea. He makes a point of showing it to his students - I am sure as a warning to them not to make the AI write their essays.
The whole thing reminds me of nothing so much as the poor chap who was not quite all there mentally who just wanted to help the nice police and didn't want to admit he didn't know the answer to anything, so tried to keep them happy by answering yes etc. to their questions.
Trouble was, what they were visiting about was a murder down the road. And no adult present. He got banged up for life ... when they had a grade A suspect very close by.
Barely a week goes by without a hallucinated citation causing hilarity, embarrassment and compromised careers in the legal profession.
AIs are as unreliable as humans, though unlike humans, they don't intend to deceive. I find it helps if you add to your prompt "if you are not certain, say you don't know."
Humans making up citations is not “unreliability”
OK Humans lie to deceive. AIs try to please by making things up. Who is most trustworthy?
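The "say you don't know" tip a couple of posts up can be sketched as a tiny helper that bolts the instruction onto any prompt before it is sent off. The function name and wording are illustrative, not from any particular AI product's API:

```python
# Illustrative sketch: append an uncertainty instruction to a prompt,
# per the tip above. Nothing here calls a real AI service.
HEDGE = "If you are not certain, say you don't know."

def hedged(prompt: str) -> str:
    # Strip trailing whitespace so the instruction lands on its own line.
    return f"{prompt.rstrip()}\n\n{HEDGE}"

print(hedged("List three peer-reviewed citations for claim X."))
```

Whether the model actually obeys the instruction is another matter, but it costs nothing to add.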
The range of the Labour / Green polling numbers is the interesting bit across all the pollsters. On some, Labour aren't doing that badly all things considered; on others they are in danger of single digits if the budget goes badly.
The Green surge is bad news for Your Party to state the obvious.
Indeed and the Greens are eating Your Party’s lunch.
I see Zarah Sultana is on QT tonight. We will have to see how she performs. I don’t have high hopes but she may surprise. It would be remiss of Fiona Bruce not to ask about the state of their finances.
It's hard to imagine anybody pays $200 a month for autocomplete but I guess that's how hype bubbles work...
The scary bit is that $200/month is not enough for the likes of OpenAI to make a profit.
Google ended up designing its own value engineered hardware to cut costs. OpenAI might have to go the same way - with all the implications for Nvidia that would entail.
I read somewhere that in the very early days they built their own servers using Velcro instead of screws so they could swap out hardware as it failed.
They also did side-by-side comparisons of things like hard drives. One vendor had a significantly longer lifespan than all of the others. I don't know if the findings were made public, but I strongly suspect the winner was IBM (then sold to Hitachi, now sold to Western Digital).
Before they stopped talking about that kind of thing, I believe Google said it was more efficient just to remotely turn a server off and leave it until they replaced the entire rack's worth of equipment.
Backblaze do modern hard drive stats but they run them under odd conditions.
Megyn Kelly: "I know somebody very close to this case…Jeffrey Epstein, in this person's view, was not a pedophile…He was into the barely legal type, like he liked 15 year old girls…He wasn't into like 8 year olds…There's a difference between a 15 year old and a 5 year old."
There is a difference, although as my wife would say it probably isn't something you should discuss down the pub in case you are misunderstood.
Being attracted to a 5 or 8 year old is very weird.
Being attracted to a 16 or 17 year old isn't, but if you are significantly older than them then if you act upon it you are definitely a rather unpleasant creep.
This is quite a modern view. For instance Tory MP Alan Clark got married to 16 year old Jane Beuttler in 1958 when he was 30.
Well I definitely think that is odd. I might find a 16 year old pretty, but I am pretty sure that once I was 18 or over I wouldn't have contemplated going out with one. At 30 definitely not.
What is the guideline? Half your age plus seven. So at 18, it is 16. At 30 it is 22. At my age it is 48 - but that's younger than my daughters so a bit icky. In practice it is 66. OAPs.
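The rule of thumb above is simple arithmetic, and the post's own worked examples check out. A throwaway sketch (integer division, since ages are discussed in whole years):

```python
def youngest_acceptable(age: int) -> int:
    # "Half your age plus seven" rule of thumb, as stated above.
    return age // 2 + 7

# The worked examples from the post:
print(youngest_acceptable(18))  # 16
print(youngest_acceptable(30))  # 22
```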
People. Humans answer based on some conception of reality, even if it is fictional. AI is just pattern-matching words: it has no concept of the meaning of the words, just the pattern they form. A human can poison you, a human can burn the toast, but an AI might feed you a Rachel's Trifle and not recognise the problem.
That's what Rachel did.
AI is much more than pattern-matching. The emergent behaviour is already awesome and we are just at the beginning.
Nobel prize winner Geoffrey Hinton is knowledgeable and interesting on the subject if you want to learn more.
I am sorry but Hinton is not actually very good on SOTA LLMs. That isn't to downplay his previous work, but he isn't a leading light on SOTA AI. Before he retired, he had been working on what turned out to be a dead end for many years, and his predictions have been Prof Peston-esque for years.
People like Andrej Karpathy are much more in touch, as they have been at OpenAI and Tesla (on top of their impressive academic CVs) when all these big leaps forward have occurred.
Mm. Well, they need to sort out the electricity first. Which raises the question, how does this work with devolution when PC take over? London can't just keep plonking great chunks of such basic stuff into the planning system. LLafur have to suck it up pro tem, obvs.
Not without regional pricing. @Big_G_NorthWales is going to be the world's biggest bitcoin miner.
Hinton is a bit like Arsene Wenger. The EPL wouldn't be where it is today without Wenger, he was way ahead of the curve when he joined Arsenal. He revolutionised how the British game was played. However, by the end, he was miles off it. The likes of Arteta were telling him how out of date he was with tactics, training and analysis.
Dig deeper though, and you realise AI is just a new word for tech. These are all largely standard tech companies, most of which would have been doing 98% of the same stuff 5 years ago, when Gen AI wasn’t a thing, and calling it something else. Back then the fashionable things were cloud and blockchain. Same billboards, just substitute the word AI.
We do need to invest in tech infrastructure. If that requires us to use the AI word to drum up excitement then fair enough.
Somebody sent me an analysis a few days ago, which was carried out in remarkably quick time. Giving credit where credit was due, they said "aided by my trusty AI assistant". Um, given the warnings that come with AI search result summaries that they may contain errors, I'm not so confident about the analysis after all.
Am I unduly pessimistic?
Friend of mine who is a lecturer tried an experiment on a very well known AI on a subject he and I know well. He got the AI to chunter out some stuff about the subject and then started in on it. It's not long before this starts happening:
"You say X, can you please give me the evidence?"
"Bloggs 1978 says so."
"But I've looked at it and it doesn't say that at all. What is your evidence?"
"I'm sorry, the correct evidence is in website so and so."
"But that is about something else. What is your evidence that it is relevant?"
"Jones 2008 ..."
"But that doesn't exist. Tell me ..."
... and so on and so on for about 20 cycles. I paraphrase - but you get the idea. He makes a point of showing it to his students - I am sure as a warning to them not to make the AI write their essays.
The whole thing reminds me of nothing so much as the poor chap who was not quite all there mentally who just wanted to help the nice police and didn't want to admit he didn't know the answer to anything, so tried to keep them happy by answering yes etc. to their questions.
Yes - he took a biological neural net approach to learning rather than the expert systems and symbolic logic approaches to reasoning that dominated at the time, as he explains in the Royal Institution lecture that I linked to. We'll see about his predictions. He knows what he is talking about.
AI has no originality, no capacity to think critically, no ability to discern truth from falsehood, no capacity to research primary sources, and it relies heavily upon there being a lot of accurate written material online on any given subject. If that material does not exist, or if you feed it lies, the answers it generates will be ridiculous.
I am sorry, but he really doesn't when it comes to SOTA AI. I know what I am talking about on this. I was invited to a recent talk in person and he was wrong about lots of things.
If you want to know about SOTA stuff in an understandable way, Andrej Karpathy is more the person you want to listen to.
A small minority of humans lie to deceive. More people try to be honest than we give them credit for.
The LLMs can't help but make things up because it's an inherent part of how they work.
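That "inherent part of how they work" point can be shown with a toy. A next-word model scores what is *plausible*, not what is *true* — here the "model" is just a hand-written table of likely continuations (an assumption standing in for a real LLM), and it will happily emit a citation-shaped reference with no check that the reference exists:

```python
import random

# Toy next-word table: each word maps to plausible continuations. There is no
# lookup of which citations actually exist, only of what "looks right" next.
BIGRAMS = {
    "see": ["Bloggs", "Jones", "Smith"],       # citation-shaped surnames
    "Bloggs": ["1978"], "Jones": ["2008"], "Smith": ["1994"],
}

def continue_text(word: str, rng: random.Random) -> str:
    out = [word]
    while out[-1] in BIGRAMS:                  # keep extending while the table
        out.append(rng.choice(BIGRAMS[out[-1]]))  # offers a likely next word
    return " ".join(out)

# Fluent, citation-shaped, and entirely unverified - e.g. "see Jones 2008".
print(continue_text("see", random.Random(0)))
```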
I just did an AI Overview google search on the performance of the Spanish army in the Peninsular War - and got quite a neat summary of my own thesis on the subject.
The range of the Labour / Green polling numbers is the interesting bit across all the pollsters. In some, Labour aren't doing that badly all things considered; in others they are in danger of single digits if the Budget goes badly.
"If the Budget goes badly" - it seems very likely it will go badly.
Putting up taxes is always unpopular - but the normal excuse is "obviously we didn't have a choice, the position is much worse than we thought", etc.
But this time, the Government is going to choose to abolish the 2 child benefit cap.
However, keeping the 2 child benefit cap is massively popular - Support 60%, Oppose 24%, Don't Know 16%.
Now very few journalists have cottoned on to this - but Times Radio picked up on it yesterday. And everyone is going to pick up on it very quickly on Budget Day.
Mm. Well, they need to sort out the electricity first. Which raises the question: how does this work with devolution when PC take over? London can't just keep plonking great chunks of such basic stuff into the planning system. Llafur have to suck it up pro tem, obvs.
Not without regional pricing. @Big_G_NorthWales is going to be the world's biggest bitcoin miner.
Very rude - it's all muscle.
@Big_G_NorthWales will be delighted to still be on this planet when this is built
Mind you, more pressing for me is to have my blue badge soon [hopefully]
Got to say, announcing changes like that alongside massive tax rises is going to send their polling even further into the basement.
Yes, it’s probably needed to keep Labour MPs happy, but they have a majority of 169; allow the 50 or so that make up the awkward squad to leave.
Literacy means reading books, mostly. Messages on social media don't really count.
The most pressing need for people to learn to read back when literacy first expanded in England was to be able to read and write letters, and read legal documents.
Social media, whatsapp, and text messages are the modern equivalent of letters, and everybody has given up on reading legal documents, pretty much.
Edit: Although I'd agree that to describe someone as literate, would be to say that they read books widely (probably beyond the Black Library).
The use in speeding up boilerplate stuff is interesting, though. Sounds more likely.
Apart from their journalistic integrity of course.
For me, a 7.5 or 8 from 10 in this arena is "good/very good". There's a huge variety.
It's a fascinating conversation, all about different views and frames of reference, and how they need to be fully defined for mutual understanding to work. One problem with 10/10 is that everyone assumes it is perfect and finished and done, which is never true. In this case my 7.5/10 was just a note to PB and self, so I won't tell anyone else, and will use words in my feedback. That's the "too short a summary" problem, which skewered Ofsted when they tried one-word summaries.
A good example is a National Parks project called "Miles without Stiles" for accessible public footpaths. They label their routes as "for all", "for many", "for some", and "challenging", including gradients and surfaces.
https://www.lakedistrict.gov.uk/visiting/things-to-do/walking/mileswithoutstiles
The reference-frame problem is that even their "for all" category allows gradients of 1:10, described as "Suitable for everyone, including pushchairs and wheelchairs and mobility scooters". But they are not suitable for everyone. If we ask disabled organisations, a reasonable maximum permissible gradient for normal mobility aids is 1:20, with resting platforms every so often, and they prefer shallower gradients still.
In such circs, a consistent standard is the important thing if it is intended to be used widely - so people with different needs can judge it for themselves. And that needs a lot of information.
The same goes for barriers. Guidelines are that a 1.5m gap with a sealed surface and level approaches is the only accessible access point; anything else obstructs. I visited a project in Kenilworth in September where kissing gates had been used, on advice from a consultant engaged by the local council. Those were accepted in the 1990s or 2000s, but mobility-aid users know that they go rusty, or stiff, or overgrown, and cannot be relied upon unless perfectly maintained, which NEVER happens. And if they get stuck it becomes life-threatening if in the countryside, because they cannot pick up their mobility aid and walk.
So it is a complex question, with no cut-and-dried solution.
Evening. Surfing is too much effort. Nice day for a walk though.
“After Mr Epstein was first convicted of procuring an underage girl in 2008, their relationship continued through to 2011, beginning when Lord Mandelson was Business Minister and continued after the end of the then Labour government. Lord Mandelson stayed in Epstein’s House while he was in jail in June 2009. You will wish to consider his suitability given this and the other information in this note.”
It was brought in physical form to the PM’s office by a junior official in the Propriety team. It was prepared by civil servants and signed by Tierney.
https://order-order.com/2025/11/13/exc-the-advice-on-mandelson-and-epstein-which-starmer-chose-to-ignore/
I presume it still didn't cross Starmer's desk.
You’re in the mail… 😉
https://www.dailymail.co.uk/news/article-15287319/Fake-admiral-medals-veterans-Remembrance-Sunday.html
There was that paper out which suggested it actually takes people longer to fix/check the code, so AI is actually killing productivity without people realising.
I guess the key test will be if people start letting coders go, or stop hiring.
"When asked about the single attribute that make someone English, YouGov found that only 10% of white Britons believe being white is a requirement — compared with 24% of ethnic-minority Britons. Just 9% of white respondents said both family heritage and whiteness are necessary, while more than twice as many ethnic-minority Britons (21%) agreed. And when Englishness was framed as a mix of whiteness, family heritage and Christian values, only 4% of white Britons born in the UK endorsed that view, along with 5% of ethnic-minority Britons born in the UK."
https://unherd.com/newsroom/ethnic-minority-brits-twice-as-likely-to-tie-englishness-to-whiteness
https://en.wikipedia.org/wiki/Gulf_of_Tonkin_incident more like it ?
Or maybe https://en.wikipedia.org/wiki/War_of_Jenkins'_Ear ??
It's also great if you need to prototype something, particularly to try something you have little experience with.
The only truly British people are those who are Half Jewish, Half Polish, Half Ukrainian, Half French, Half American & Half Scottish. With a smidgin of English in there somewhere.
Like me.
There are scenarios where it's clear that AI can save time: read a specified API document and generate code to pull from the API.
The problem comes when you ask AI to generate details where the specification is ill-defined or has changed over the years. Ask it to write JavaScript and AIs struggle, because they will look at 30 years of examples and write poor-quality code based on things that were popular but are 20 years out of date.
That's one reason it's bad at my day job: it sees information that was accurate in 2015 and assumes it's still valid now, as it doesn't check the date on the blogpost and think "hmm, this is old, so it may be incorrect". It sees that this is from 2015 and is still there, so it must be important and correct.
https://www.legalcheek.com/2025/10/judge-finds-barrister-relied-on-entirely-fictitious-ai-generated-cases/
https://www.11kbw.com/knowledge-events/case/andrew-edge-successful-before-high-court-in-ai-fake-authorities-case/
https://www.legalfutures.co.uk/latest-news/law-firm-that-cited-fake-ai-generated-cases-to-pay-wasted-costs
There are of course ones that are reasonably capable that are open-weights and run locally (with a good machine). But Claude is miles ahead of them in terms of coding ability.
But I suppose the Pattern Recognition industry is less appealing to gullible investors and corporate procurement offices than the Artificial Intelligence concept.
AI doesn't have to be better than what is currently done to replace it, it just has to be fashionable.
I'd guess that actively deskilling software developers will come with large downstream effects. Whether the large software houses will care might be different matter.
- Google AI Premium plan. Cost: $19.99 per month
- Google AI Ultra plan. Cost: $249.99 per month
- OpenAI plan: priced per token, not per month
- Claude AI Pro plan. Cost: $20 per month
- Claude AI Max plan. Cost: "from" $100 per month
The lower paid tier is affordable; the one above that is not something I would stretch to without a very good reason. As of this moment, Claude AI Pro is unnecessary, but I will genuinely bear it in mind. Which languages do you use it to code in?

This is a "clip-on e-handcycle" a friend uses. It is in the unregulated gap (ie no formal legal classification, which I hope the Govt will now sort out). It clips onto a manual wheelchair and makes it like a powerchair, but as an e-tricycle, quicker than I can put on a pair of slip-on shoes. It has an "autonomy" (ie range) of up to around 40-50km (depending on battery size). It has quite a powerful motor for traction.
He upgraded to this one because UK cycle routes (eg the NCN) are full of steep hills *, and have little investment (total budget a few million for 20000 km of route), and essentially zero seasonal maintenance so are currently mainly covered in wet leaves. This one costs about £8k. He has done 15,000km using this type of device over the last 8 years.
(He is currently wanting rear wheel drive too - that won't, I think, happen.)
Web page: https://batec-mobility.com/en/handbike/batec-scrambler-2/
* As an example, the pedestrian bridge near me over the M1 has a straight 60m 1-in-15 ramp one side with a packed earth surface, which needs an industrial strength mobility aid to tackle safely. That's in addition to 3 sets of chicane barriers.
@accountablegop.bsky.social
Megyn Kelly: "I know somebody very close to this case…Jeffrey Epstein, in this person's view, was not a pedophile…He was into the barely legal type, like he liked 15 year old girls…He wasn't into like 8 year olds…There's a difference between a 15 year old and a 5 year old."
https://bsky.app/profile/accountablegop.bsky.social/post/3m5jnc3lgx622
N.B. Megyn has a 14 year old daughter
I could (and am sometimes encouraged to, including as government policy) take the misery out of teaching by using AI to generate lessons, or feedback on assignments.
And it's easy to see the logic. But leaving aside the issues of energy use, copyright ethics and many AIs drawing triangles that look like triangles but aren't...
Some of that misery is inseparable from the essential and fun stuff. Preparing resources includes thinking through the content so you have fun things to say. Marking work is the only way to really know what has worked and what hasn't.
As a society and culture, we're pretty bad at valuing tangible and intangible correctly relative to each other.
I find it helps if you add to your prompt "if you are not certain, say you don't know. "
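For what it's worth, that pattern is trivial to bake into any tooling that wraps a model. A minimal sketch, pure string handling with no particular vendor API assumed (the function name is mine, not from any library):

```python
# A hedging suffix appended to every prompt, nudging the model to
# admit uncertainty rather than invent an answer.
HEDGE = "If you are not certain, say you don't know."

def hedged_prompt(question: str) -> str:
    """Return the question with the anti-hallucination instruction appended."""
    return f"{question.strip()}\n\n{HEDGE}"

print(hedged_prompt("Which cases support this claim?"))
```

It doesn't stop hallucination, but it gives the model an explicit "I don't know" path that cheaper, more agreeable defaults tend to suppress.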
We don't like to say it out loud, but humans largely respond better to confidently wrong than to correct diffidence.
See lots and lots and lots of election results.
Any research it does needs independently checking. Use it as a starting point only.
Being attracted to a 5 or 8 year old is very weird.
Being attracted to a 16 or 17 year old isn't, but if you are significantly older than them then if you act upon it you are definitely a rather unpleasant creep.
"Germany calls up all men aged 18 for military medical exam
The country has stopped short of reintroducing mandatory conscription but hopes to boost numbers by giving potential recruits an insight into army life" (£)
https://www.thetimes.com/world/europe/article/germany-military-service-conscription-zzphth5ts
I think we're pretty tolerant and rub along, as long as we don't think people are taking the piss and they are good neighbours.
https://en.wikipedia.org/wiki/Alan_Clark#Personal_life
Hello Malky - at least you evidently haven't been drowned today.
And then came the assumptions
And the assumptions were without form
And the plan was completely without substance
And the darkness was upon the face of workers
And they spoke among themselves, saying "It is a crock of shit and it stinketh."
And the workers went upon their supervisors and sayeth, "It is a pail of dung and none may abide the odor thereof."
And the supervisors went unto their managers and sayeth unto them "It is a container of excrement and it is very strong, such that none may abide by it."
And the managers went unto their directors and sayeth, "It is a vessel of fertilizer, none may abide its strength."
And the directors spoke amongst themselves, saying one to another, "It contains that which aids plant growth, and it is very strong."
And the directors went unto the vice presidents and sayeth unto them, "It promotes growth and is exceedingly powerful."
And the vice presidents went unto the president and sayeth unto him, "This new plan will actively promote the growth and efficiency of this company, and these areas in particular."
And the president looked upon the plan,
And saw that it was good, and the plan became policy.
And this is how shit happens
https://youtu.be/jCEamUarOSI?si=yYIuA6HewXHeO_qF
Claude Pro at $20 will be more than enough for most people unless you are wanting to hammer it.
But you can also get API access and pay per token. Which is what a lot of people do if they are linking it into things like Cursor.
If you want to just try out different models you can sub to something like T3Chat that lets you have limited usage of basically all the models for a single sub of $10.
As I do ML, it's overwhelmingly Python, with some C++ / CUDA.
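On the pay-per-token point: a rough sketch of how API billing adds up. The $/million figures below are made-up placeholders for illustration, not any vendor's actual rates; per-token billing is typically quoted per million tokens, with input and output priced separately.

```python
# Rough API-cost estimator. Prices are illustrative placeholders --
# input and output tokens are usually billed at different per-million rates.
def api_cost(input_tokens: int, output_tokens: int,
             in_price_per_m: float, out_price_per_m: float) -> float:
    """Return the estimated cost in dollars for one request."""
    return (input_tokens * in_price_per_m +
            output_tokens * out_price_per_m) / 1_000_000

# e.g. 50k tokens in, 5k out, at hypothetical $3/$15 per million:
print(round(api_cost(50_000, 5_000, 3.0, 15.0), 3))  # 0.225
```

Which is why heavy agentic use (Cursor-style loops that re-send a large context on every step) can blow past a flat $20/month subscription very quickly.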
BBC Newsnight also doctored footage of a Donald Trump speech and ignored concerns that were raised about it, The Telegraph can reveal.
https://www.telegraph.co.uk/news/2025/11/13/bbc-doctored-trump-speech-second-time-newsnight/
@FindoutnowUK
Find Out Now voting intention:
🟦 Reform UK: 33% (-)
🟢 Greens: 17% (-1)
🔵 Conservatives: 16% (-)
🔴 Labour: 15% (-)
🟠 Lib Dems: 11% (-)
Changes from 5th November
[Find Out Now, 12th November, N=2,339]
https://x.com/FindoutnowUK/status/1988963847936618606
But also, now is better than then in a number of important ways.
But in the short to medium term all the AI providers are going to have to burn even more money, given how the costs of DRAM and flash memory have started to skyrocket.
They also did side-by-side comparisons of things like hard drives. One vendor had a significantly longer lifespan than all of the others. I don't know if the findings were made public, but I strongly suspect the winner was IBM (then sold to Hitachi, now sold to Western Digital).
Reform and the Greens have fans, whereas almost anyone voting Labour is currently doing so pretty reluctantly, I imagine.
The headline summarises it - "chances to prevent murder ‘lost to racial sensitivities’".
An appalling case. No-one did what they ought because they feared causing offence or being branded as racist. And so a young girl - Sara Sharif - was brutally abused and killed. 96 separate injuries on her body. Her back had been broken 10 times. Unimaginable suffering in a short life.
25 years ago - 25 years - similar reasons ("cultural reasons" they were called - the murderers then were black Africans not people from Pakistan) led to no-one taking action to prevent the abuse and murder of a young girl of a similar age - Victoria Climbie. Her murder led to a public inquiry and lots of new legislation.
Yet here we are - despite all that - reading the same horrific story and wondering when in God's name those charged with caring for our children realise that putting children with men who are known to be violent is a fucking stupid idea and that worrying about being called racist simply should not be a consideration when a child's safety is at stake and that if abusing a child is part of a "culture" (and not a pathetic excuse for violence and cruelty) then we should be calling that culture what it is - barbaric - and refusing to accept it as a defence or excuse for barbarism instead of running scared of its sensitivities.
https://www.theverge.com/news/820123/tesla-recall-uscpsc-powerwall-2-batteries-overheat-fire-burn-hazard
@coachfinstock.bsky.social
Incredibly funny that people watched Teslas explode and burn down to the frame for no good reason and decided they wanted that technology inside their house
https://www.facebook.com/reel/2330787364092344
I see Zarah Sultana is on QT tonight. We will have to see how she performs. I don't have high hopes, but she may surprise. It would be remiss of Fiona Bruce not to ask about the state of their finances.
http://infolab.stanford.edu/pub/voy/museum/pictures/display/0-4-Google.htm
Before they stopped talking about that kind of thing, I believe Google said it was more efficient just to remotely turn a server off and leave it until they replaced the entire racks worth of equipment.
Backblaze do modern hard drive stats but they run them under odd conditions.
So at 18, it is 16.
At 30 it is 22.
At my age it is 48 - but that's younger than my daughters so a bit icky. In practice it is 66. OAPs.
AI is much more than pattern-matching. The emergent behaviour is already awesome and we are just at the beginning.
Nobel prize winner Geoffrey Hinton is knowledgeable and interesting on the subject if you want to learn more.
https://www.youtube.com/watch?v=IkdziSLYzHw
People like Andrej Karpathy are much more in touch, as they were at OpenAI and Tesla (on top of an impressive earlier academic CV) when all these big leaps forward occurred.
We'll see about his predictions. He knows what he is talking about.
If you want to know about SOTA stuff in an understandable way, Andrej Karpathy, is more the person you want to listen to.
The LLMs can't help but make things up because it's an inherent part of how they work.
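A toy illustration of why: generation is just sampling from a probability distribution over next tokens, so a fluent, confident-looking continuation need not correspond to anything real. The vocabulary and scores below are invented for illustration:

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution summing to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented next-token candidates after "Bloggs 19": the model has no
# notion of "true", only of what is statistically likely to follow.
vocab = ["78", "80", "[nonexistent year]"]
probs = softmax([2.0, 1.5, 0.5])

random.seed(0)
print(random.choices(vocab, weights=probs)[0])
```

The fake citation is just a low-probability sample that happens to look like every real citation the model was trained on.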