This will make you even more confused about HS2 – politicalbetting.com

Comments

  • Sean_F Posts: 37,068
    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I'd make the Conservatives slight favourites to hold Tamworth, on the back of that poll.
  • algarkirk said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up), then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    That's an interesting observation. Universities are shitting themselves (or should be) about AI and students using it to cheat in assessment. We have already had to ditch most online assessment due to cheating issues (students chatting on WhatsApp during exams etc) and now any essay based questions in an online exam are so easily answered with ChatGPT and the like.

    So it will be back to exam halls, and handwritten papers. We use software such as Turnitin to detect plagiarism in work, but as far as I know Turnitin does not pick up ChatGPT, and will struggle to. Your test is an interesting example where cheating can be found.

    However, I think humans can detect ChatGPT answers at the moment, at least in my limited field. We had some examples in a chemistry re-sit exam. The language used to answer some of the longer-form questions was clearly not the students' own (students of non-English extraction).
    I started university 50 years ago exactly. It is usually in the top 10 or so UK unis in the current lists. We had a stellar outcome in my department in finals - 1976. Nearly 8% got firsts; the rest were equally divided between 2.1 and 2.2. In many departments there were no firsts at all. In those days that was a sign of a truly rigorous academic department.

    The classification depended entirely on 9 three-hour papers in an exam room over 2 weeks of that lovely summer.

    There is much to be said for both elements of this experience - Firsts being really rare, and performance completely immune from the possibility of cheating.

    There was the added bliss of knowing that you could spend quite a bit of time doing extra curricular stuff without pressures of graded coursework, dissertations and modular exams every fortnight. Our much maligned and wonderful young people could do with a bit of that.
    I think you have to be very clear what it is that you are testing and make sure the test reflects that.

    Sometimes, if the student can use AI to get the correct results then that's fine. People in the real world can use AI too. If you want to test that someone holds the relevant knowledge in their head, then test in exam conditions.

    I think it forces you to think more carefully about what the purpose of the test is, rather than simply to set an essay question.
  • turbotubbs Posts: 17,114
    Farooq said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up), then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    They get told not to cheat in exams using WhatsApp but they do it anyway. We have electronic resources for our content, and when students access them, it leaves a log, i.e. we can see who accesses them and when. This summer a student went to the toilet during an exam (allowed) for 20 minutes (suspicious) and was shown to have accessed the content online from a hidden phone (very much not allowed). A toy sketch of that kind of log check follows this comment.

    Anyone who cheats generally thinks they are (a) cleverer than they actually are and (b) not going to get caught.

    They are very often wrong.
    Can you say "often", when by definition you don't know how many successful cheating attempts went undetected?
    Which is a fair comment, and I did think about saying it.

    I think we have very little cheating during in person, invigilated exams.

    We had a lot more during online exams, which we had no way of invigilating (unlike some online exams, we did not even try to see what the students did). We detected a fair amount of it: in one exam of mine, the same wrong answer occurred where a chemical structure had been copied from the paper incorrectly, and the chances of more than ten students making exactly the same mistake are small.

    We had collusion in other papers and some of the culprits coughed to it.

    But yes, I am certain some did get away with it.

    And so it's back to 19th-century methods of assessment...
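    A minimal sketch of the kind of log check described above: cross-referencing resource-access timestamps against an exam window to flag students who opened the online content mid-exam. The log format, field names and the flag_access_during_exam helper are illustrative assumptions for the sketch, not any particular system's real export or API.

    from datetime import datetime

    # Illustrative log entries (student ID, access timestamp) as an export might provide them.
    access_log = [
        ("student_017", "2023-06-12 10:42:00"),
        ("student_042", "2023-06-12 14:03:00"),
    ]

    def parse(ts: str) -> datetime:
        return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

    def flag_access_during_exam(log, exam_start: str, exam_end: str):
        """Return the IDs of students whose resource access falls inside the exam window."""
        start, end = parse(exam_start), parse(exam_end)
        return sorted({sid for sid, ts in log if start <= parse(ts) <= end})

    # Exam ran 10:00-13:00, so student_017's 10:42 access would be flagged.
    print(flag_access_during_exam(access_log, "2023-06-12 10:00:00", "2023-06-12 13:00:00"))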
  • 148grss said:

    148grss said:

    Leon said:

    148grss said:

    darkage said:

    148grss said:

    Leon said:

    148grss said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    There are tantalising rumours on TwitterX that we have met aliens who can travel across space to visit us.

    It's bizarre that more people aren't talking about this; if it is true* it is one of the biggest news stories in human history.

    * It is not true
    It really might be true

    When I bang on about AI, @Benpointer always says “get back to me when a robot can stack my dishwasher”. And it’s a fair point

    Well, now a robot can easily stack a dishwasher, and what’s more it can learn this simply by watching you do it first

    https://x.com/tesla_optimus/status/1705728820693668189?s=46&t=bulOICNH15U6kB0MwE6Lfw

    “With enough strength and dexterity, Tesla's Bot could handle almost all physical tasks by simply looking at video clips of people doing said tasks.

    Picking up a vacuum and running it through the house. Sorting and folding laundry. Tidying up the house. Moving material from point A to point B. Picking up trash and placing it in a bin. Pushing a lawnmower. Monitor an area for safety-related concerns. Laying bricks. Hammering nails. Using power tools. Clean dishes... etcetera, etcetera, etcetera.”

    https://x.com/farzyness/status/1706006003135779299?s=46&t=bulOICNH15U6kB0MwE6Lfw
    I can find you a parrot that can recite poetry - doesn't mean it can write you any.

    Are learning machines cool? Yes. But at the end of the day they're automatons that can, at a basic level, do simple tasks in relatively stable environments. Complex tasks in other scenarios are out of reach. All the hype is just sales - of course people who own stocks in AI companies would claim it would end the world or be the gadget of the future, because they'll rake the money in.

    It is not coincidental that the new "AI is going to be able to do everything" line came after the "Meta is going to be the new frontier" line fell through and the "NFTs and the blockchain are going to revolutionise everything" idea was proven false. Capitalism always needs a new frontier to exploit, sell and commodify, and tech bros think they can build the next one. So far they're failing.
    In my view the free chat programmes available online have written reasoning capabilities that exceed those of most graduate-level professionals with over 20 years of high-level report-writing experience. They can write better than people who have been doing decision making and report writing for their entire career. From a management point of view they surpass most humans in knowing how to respond to situations in difficult correspondence exercises.

    It is an inevitable human reaction to deny this or not look at it, but it won't help.
    Do you mean ChatGPT, or a specific programme? I think the most convincing ones are good at creating an approximation of human writing, until you learn it is either just lying (making up references, quotes and general facts) or spewing nonsense (this often happens with coding, where the code looks correct but is really just nonsense).

    The way this stuff currently works is by taking the input, analysing words that are associated with the words relevant to that topic, and picking each word based on the likelihood that it follows the previous words (a toy version of this next-word trick is sketched at the end of this exchange). That requires it to read (and arguably steal) the work of existing people. It cannot think - it is not creating. It is a parrot - a big parrot, a complex parrot, a parrot that can maybe do some simple things - but a parrot. And that's selling parrots short, because I believe parrots are capable of cognition.
    So, is the robot video real, or not?

    You still haven't told us, and you still don't realise the significance of that
    Do I personally believe in the reality of a video on Twitter? I don't. Do I personally know it isn't real? No - that's why I have said I will wait for credible sources to do some reporting rather than just trust people chatting on a notoriously untrustworthy social media platform about a topic where there is so much undue hype.
    That's nothing. They also developed a robot that can play ping-pong!

    https://twitter.com/i/status/1687690852456402944
    I hear they have also devised an automaton that can do the work of a police officer - although it does need some organic matter. Apparently it is half man, half machine - all cop.
    That was devised before you were born!
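    A toy illustration of the "big parrot" point made a few posts up: a next-word generator built from nothing but observed word counts, which can only re-emit continuations it has already seen. This is a deliberately crude sketch of the general idea, not how modern LLMs actually work (they use learned neural networks over tokens, not raw bigram counts), and the corpus is invented for the example.

    import random
    from collections import Counter, defaultdict

    # "Reading other people's work": count which word follows which in a tiny corpus.
    corpus = "the robot stacks the dishwasher and the robot folds the laundry".split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def continue_text(start: str, length: int = 5) -> str:
        """Repeatedly pick a plausible next word given only the previous word."""
        words = [start]
        for _ in range(length):
            options = followers.get(words[-1])
            if not options:
                break  # a word never seen before anything else: the "parrot" has nothing to say
            nxt = random.choices(list(options), weights=list(options.values()))[0]
            words.append(nxt)
        return " ".join(words)

    print(continue_text("the"))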
  • Leon Posts: 54,557
    rcs1000 said:

    Regarding the video, I don't think it's AI generated, but I do think it is sped-up compared to reality. If you look at the moments when a block is pushed by a human, or when one falls slightly, they happen far too rapidly. (Indeed, they look like the block is snapping into place.)

    That's a tell-tale sign that it has been sped up.

    Edit to add: I think the speeding up is also the reason it looks fake. There are too many moments where it seems slightly off.

    I think you're right - I just had a go at various speeds (a quick way to do this is sketched below). It looks most realistic at somewhere between 0.5x and 0.75x of the original speed, so it has been accelerated, but not massively.

    What an odd thing to do. The robot is impressive enough as is?
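    For anyone who wants to repeat that experiment, a minimal sketch of re-timing a clip with OpenCV by writing the same frames out at a reduced frame rate. The 0.67 factor and the file names are placeholders, not anything from the thread.

    import cv2  # pip install opencv-python

    def retime(src: str, dst: str, factor: float = 0.67) -> None:
        """Write a copy of src that plays back at `factor` times the original speed."""
        cap = cv2.VideoCapture(src)
        fps = cap.get(cv2.CAP_PROP_FPS)
        size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
                int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
        out = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"), fps * factor, size)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            out.write(frame)  # same frames at a lower playback rate = slower video
        cap.release()
        out.release()

    # e.g. retime("optimus_clip.mp4", "optimus_two_thirds_speed.mp4", 0.67)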
  • Sean_F Posts: 37,068
    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up), then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    Certainly, when I was doing my MA, I was impressed by just how much useful material there is online, which you can find quite easily by typing in certain key words (albeit most archival material is not online, and much of it is not even catalogued, e.g. the Clinton Papers at Manchester University, which I found very useful).
  • glw Posts: 9,855
    Farooq said:

    If I were running cheating detection as a service (CDaaS), I would treat large volumes of inputs as a likely attempt to train a CaaS and feed it nonsense results, or some weirdly overfitted responses that punished the use of semi-common words like "study", "confidence", or "historical", forcing the CaaS to write weirdly contorted essays excluding those words (a toy version of this idea is sketched after this comment).

    For sure it will be an arms race. But nobody should kid themselves that there will be easy fixes to the abuse of AI.
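    A toy sketch of the counter-strategy described in the quoted comment: treat an unusually high submission volume as a sign that someone is training a cheating service against the detector, and hand that client back feedback which penalises harmless common words. Everything here (threshold, word list, scoring) is made up purely for illustration.

    from collections import defaultdict

    POISON_WORDS = {"study", "confidence", "historical"}  # innocuous words to "punish"
    VOLUME_THRESHOLD = 50  # submissions per client before we suspect automated probing

    submission_counts = defaultdict(int)

    def review_essay(client_id: str, essay: str) -> dict:
        """Return normal feedback, or poisoned feedback for suspected probers."""
        submission_counts[client_id] += 1
        words = [w.strip(".,;:") for w in essay.lower().split()]
        if submission_counts[client_id] > VOLUME_THRESHOLD:
            # Suspected attempt to train a CaaS: punish innocuous words so anything
            # tuned against this feedback learns to write contorted essays without them.
            penalty = sum(w in POISON_WORDS for w in words)
            return {"score": max(0, 60 - 5 * penalty), "note": "overuse of weak vocabulary"}
        return {"score": 60, "note": "no issues detected"}

    # e.g. review_essay("client_a", "A historical study builds confidence in the method.")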
  • Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again, and a good sunny afternoon to all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, saving a lot of time while simultaneously outwitting the staff. This is apparently the trendy new skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
  • Farooq said:

    Can we agree on what to call Twitter now please?
    I've seen Twitter, X, TwitterX, the artist formerly known as Twitter.

    How about we settle on Twix?

    In the rare event that I have to refer to the website in question I find that, "Elon Musk's fascist-friendly plaything," does the job perfectly.
  • Nigelb Posts: 70,216

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up), then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again, and a good sunny afternoon to all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, saving a lot of time while simultaneously outwitting the staff. This is apparently the trendy new skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    Is that a problem - or a useful skill ?
  • Taz Posts: 14,100

    Not long now before Suella takes to the boards.

    Let's see how her audition to become next PM works out.

    Are we all excited?

    I’m so excited
    I just can’t hide it
    I’m about to lose control
    And I think I like it…….
  • darkage said:



    (Snip)

    I had a similar view on GPT to you prior to this. But a lawyer who runs an outsourcing firm explained they thought it was rapidly advancing and indicated they were starting to use it in work contracted out to them by local authorities, automatically generating correspondence relating to civil law breaches and also drafting legal notices, albeit corrected by humans. So what is clear is that it is already here and taking over work done by professionals.

    That helps with productivity initially - you replace the low level legal work done by associates and reviewed by partners. Saves money and makes the partners richer.

    But means we don’t train up the next generation of lawyers.

    Some might argue that is a good thing…
  • eek said:

    darkage said:

    Eabhal said:

    Taz said:

    Foxy said:
    So.

    The market is deciding. The hysteria about the announcement last week was partly synthetic and partly misplaced. Just because people can sell something doesn’t mean they will.

    Auto makers work on cycle times of years on products and platforms. They'd not be likely to chop and change at the govt's whim.
    But this can't be true. It was Keir Starmer forcing Nissan et al to ditch petrol. Sunak saved people from having to buy an electric car, it was in all the right newspapers and TV news shows. Thanks to Rishi making Long-Term Decisions for a Brighter Future, the dread threat of all EV by 2030 was removed.

    Nissan must be mistaken.
    Nissan don't make new cars in the £13k entry-level range of the market, like the Kia Picanto etc.

    The Nissan Juke is their cheapest car at £21k: https://www.nissan.co.uk/vehicles/new-vehicles.html

    By 2030 it seems entirely plausible that an electric Juke will be as cheap as a petrol Juke, but it does not look likely that an electric Picanto would be available as cheaply as a petrol Picanto.

    So again, if in six years' time you could get a cheap petrol vehicle like the Picanto for £13k in real terms, but the cheapest electric is £21k in real terms (currently £27k is the cheapest), should the Picanto be outlawed and people who want to buy it be forced to pay eight grand more?

    We need to continue with what the market has been doing from Tesla onwards, which is to start at the top of the market and work down with electrification, not the other way around. If in 2030 the only petrol vehicles the market still offers are 1.0-litre runarounds like the Picanto, simply because electrification of them isn't affordably ready yet, then what's the harm in that?
    We shouldn't really go by capital price but by the cost paid per month by the purchaser.
    Given that only a very small fraction of people buying new cars pay cash for the full price, and the large majority get PCP (we can debate the wisdom of going the PCP route, but for this we simply recognise that it is the default route to new car purchase at the moment and therefore what the market will be following), we need to look at the main monthly expenditure of the purchaser.

    Which is PCP monthly payment plus petrol or electricity costs.

    Petrol comes in at c. £1.50 per litre at the moment.

    The majority of those buying electric cars will be recharging at home overnight (70%+; there's a need to address those who cannot do this, but again, the overall market is driven by those who can, and the core need would be to fill in the gap for those who can't). At the moment, an EV tariff from Octopus gives £0.075 per kWh overnight.

    The Picanto does c. 13 miles per litre. Assuming the default given by Kia on their finance calculator of 10,000 miles per year, that costs £1,155 per year in petrol, or £96.30 per month. The finance calculator for the Picanto gives (at 10% down payment of £1,350) a cost of £206.58 per month on PCP. This leads to a cost on PCP plus fuel of £302.88 per month to the purchaser.

    The Ceed comes in at £21k, so the finance for a putative £21k Kia EV can be looked at on the same site (which helps) and comes out at £342.02 per month (using the same £1,350 deposit, which is under 10% this time and probably incurs a slightly higher interest rate, but we need it to be comparable for the purchaser). If the EV has an efficiency similar to the MGZ4 (3.8 miles/kWh), it would cost £197.37 per year in electricity, or £16.45 per month.
    Cost is then £358.47 per month to the purchaser for PCP plus electricity.

    The £21k EV therefore works out about 18% more expensive per month to the purchaser than the £13k ICE, rather than the 61% difference in sticker price. To all intents and purposes, the EV's price only needs to fall to about £18k to match the affordability of a £13k ICE for the purchaser (see the worked sketch at the end of this comment).
    A large portion of people buying cheap small cars are parking on the street not off road. I certainly was.

    Even on your own figures, the 30% who will not be recharging at home overnight need to be included in the maths, and I strongly suspect that 30% is disproportionately those buying smaller, cheaper vehicles.

    Compare like-for-like by comparing recharging rates at commercial charging stations and redo your maths.

    Want to fix electric for everyone? Addressing charging is the biggest issue to tackle, not quibbling over a year or two in the transition to electric.
    Perhaps we should ban on-road parking, like the Japanese?

    Would free up space equivalent to 16 motorways.
    My Spanish Father-in-Law doesn't understand why the UK doesn't build underground carparks as all the towns in Spain seem to have. Simples - because we are incompetent and corrupt. And they are not.

    Hang on - I hear right wing voices say - the Spanish ARE corrupt. And that is true. And yet they can stick underground car parks into their towns and we can't afford to...
    Planners don’t approve them because more parking encourages driving. Or something.

    It's not entirely this. They are also expensive to build, with complex land-assembly issues etc.
    Costs money, and because of the way the Treasury looks at everything, only the cheapest options are allowed...
    Friend of a friend owned some land in west London and wanted to develop underground parking. Kensington & Chelsea said no because they were looking to reduce the number of parking spaces in the borough.
    Camden is similarly anti-car, on ideological grounds.
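    A worked sketch of the PCP-plus-running-cost comparison quoted above, using only the figures given there (£206.58/month PCP and 13 miles per litre at £1.50 for the £13k Picanto; £342.02/month PCP and 3.8 miles/kWh at £0.075 overnight for a putative £21k EV; 10,000 miles a year). The numbers come from the comment; the function and variable names are mine.

    MILES_PER_YEAR = 10_000

    def monthly_cost(pcp_per_month: float, miles_per_unit: float, price_per_unit: float) -> float:
        """PCP payment plus fuel or electricity, per month."""
        energy_per_year = (MILES_PER_YEAR / miles_per_unit) * price_per_unit
        return pcp_per_month + energy_per_year / 12

    picanto = monthly_cost(pcp_per_month=206.58, miles_per_unit=13.0, price_per_unit=1.50)  # petrol, £/litre
    ev_21k = monthly_cost(pcp_per_month=342.02, miles_per_unit=3.8, price_per_unit=0.075)   # overnight £/kWh

    print(f"£13k Picanto: £{picanto:.2f} per month")   # about £302.73 (the comment rounds the petrol to £302.88)
    print(f"£21k EV:      £{ev_21k:.2f} per month")    # about £358.47
    print(f"EV premium:   {ev_21k / picanto - 1:.0%}") # about 18%, versus 61% on sticker price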
  • Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up), then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again, and a good sunny afternoon to all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, saving a lot of time while simultaneously outwitting the staff. This is apparently the trendy new skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
  • Taz Posts: 14,100
    An interesting Twitter thread.

    University tuition fees: in 2019 a quarter of the cost of universities was going towards pensions.

    Yet the students merrily support the strikers, because, Tories innit.


    https://x.com/ironeconomist/status/1693597906299756810?s=61&t=s0ae0IFncdLS1Dc7J0P_TQ
  • Nigelb said:

    Meanwhile, Weekend at Bernie's, or not?

    Admiral Viktor Sokolov, the commander of Russia’s Black Sea Fleet, is apparently not dead, according to this photo released by the MOD today, despite Ukraine’s claims to have killed him last week.
    https://twitter.com/maxseddon/status/1706624970535669817

    That chair looks like a hospital bed that has been cranked up
  • WhisperingOracle Posts: 9,042
    edited September 2023
    Nigelb said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up), then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again, and a good sunny afternoon to all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, saving a lot of time while simultaneously outwitting the staff. This is apparently the trendy new skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    Is that a problem - or a useful skill ?
    I'm not quite sure, because, as OnlyLivingBoy says, there are all sorts of issues.

    Copying and then re-editing content in our own voice is something that we all increasingly do. But is that a good thing?

    But there's another issue, because my nephew also mentioned some quite specific things.

    Rather than only re-editing in your own voice, it seems a lot of students enjoy handing in essays where about half the paragraphs are entirely the work of ChatGPT and about half their own, and enjoy re-editing to make it all read as a whole. I do think some of these things will raise future issues about how we learn, and what we devote time to learning to do.
  • TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    That is NOT a constituency poll. It is an extrapolation from national polling.
  • kinabalu Posts: 41,903
    Sean_F said:

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I'd make the Conservatives slight favourites to hold Tamworth, on the back of that poll.
    I got 4.4 on them the other day. Seemed generous - even to a Labour Landslide pundit like my good self.
  • Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up), then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again, and a good sunny afternoon to all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, saving a lot of time while simultaneously outwitting the staff. This is apparently the trendy new skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
  • bondegezou Posts: 10,640
    Andy_JS said:

    Dura_Ace said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    Thanks, mate. Keep us posted.
    I can keep you posted on this.

    It’s not happening today or this year, and there are a lot of gullible people on Twitter.
    What would it look like if/when it does happen?
    It's hard to say what something will look like when we haven't built it, or anything remotely like it.

    I would guess there will be multiple steps to an AGI. It's not just going to appear overnight fully formed. There will be impressive jumps in what LLMs and generative AI can do along the way. An AGI will be able to reason from first principles, which means solving tasks without having these vast databases of everything that's ever been on the Internet. An AGI also won't need prompts! ChatGPT is great, but it answers you. AGI would, by definition, be like a person, able to hold up its end of a conversation!
  • Mexicanpete Posts: 27,993
    ...
    Taz said:

    An interesting Twitter thread.

    University tuition fees: in 2019 a quarter of the cost of universities was going towards pensions.

    Yet the students merrily support the strikers, because, Tories innit.


    https://x.com/ironeconomist/status/1693597906299756810?s=61&t=s0ae0IFncdLS1Dc7J0P_TQ

    So you're back on board with the Conservatives. Excellent, well done!
  • Malmesbury Posts: 49,411
    Farooq said:

    Can we agree on what to call Twitter now please?
    I've seen Twitter, X, TwitterX, the artist formerly known as Twitter.

    How about we settle on Twix?

    Twatter (TM by D. Cameron)
  • Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up), then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again, and a good sunny afternoon to all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, saving a lot of time while simultaneously outwitting the staff. This is apparently the trendy new skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Yes. There are lots of qualities and abilities that you can't test with an essay. No-one would think of doling out driving licenses to people who wrote a good essay on the fundamentals of safe driving.

    Why is it the test of choice for so much else?
  • Farooq said:

    Can we agree on what to call Twitter now please?
    I've seen Twitter, X, TwitterX, the artist formerly known as Twitter.

    How about we settle on Twix?

    Left Twix or Right Twix?
  • TimS said:

    New Yougov out:

    Con: 27% (+3 from 13-14 Sep)
    Lab: 43% (-2)
    Lib Dems: 10% (+1)
    Reform UK: 8% (=)
    Green: 7% (-2)
    SNP: 4% (+1)

    https://x.com/YouGov/status/1706625693302325430?s=20

    Fieldwork all post Rishi's announcement cancelling climate change. Three polls now, so I think we can declare a bounce of around 4-5%.

    I think this shows the impact when you have one party opening up a contentious issue where the public is probably more evenly split than polling VI. It got a huge amount of press, and if even say 35% of people agreed with Rishi that might have been enough to push the polling up.

    Long term it's a reversion to the polling numbers from earlier in the summer, before quite a marked dip the week before the net zero announcements.

    Broken, sleazy Labour and Greens on the slide :lol:
  • Malmesbury Posts: 49,411

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, and the teachers are trying to train themselves to recognise it when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and lazier students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Most ultra-brainiacs, in my experience, couldn't organise a piss-up in a brewery. They can find the Higgs, given a lot of real, expert project managers to turn their ideas into steel, copper, concrete etc.

    Without Oppenheimer to cat-herd the scientists, the Bomb might have taken longer. Without General Groves, the project might well have meandered to failure.

    I like the story of his first meeting with the principals. He hammered out how he was going to run the thing, then said he had to dash, cutting the meeting short. The dash was to catch a train to finalise buying the zillion acres of land needed for the project - something that had been held up for months.
  • Leon Posts: 54,557

    Andy_JS said:

    Dura_Ace said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    Thanks, mate. Keep us posted.
    I can keep you posted on this.

    It’s not happening today or this year, and there are a lot of gullible people on Twitter.
    What would it look like if/when it does happen?
    It's hard to say what something we haven't built will look like because we haven't built it or anything remotely like it.

    I would guess there will be multiple steps to an AGI. It's not just going to appear overnight fully formed. There will be impressive jumps in what LLMs and generative AI can do along the way. An AGI will be able to reason from first principles, which means solving tasks without having these vast databases of everything that's ever been on the Internet. An AGI also won't need prompts! ChatGPT is great, but it answers you. AGI would, by definition, be like a person, able to hold up its end of a conversation!
    Metaculus thinks AGI will arrive around 2026-2030. Elon Musk reckons by 2029, possibly sooner


    https://venturebeat.com/ai/elon-musk-reveals-xai-efforts-predicts-full-agi-by-2029/



    Intriguingly that was Kurzweil's prediction 6 years ago, years before ChatGPT

    "At the 2017 SXSW Conference in Austin, Texas, Kurzweil gave a typically pinpoint prediction.

    “By 2029, computers will have human-level intelligence,” he said. “That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are. Today, that’s not just a future scenario. It’s here, in part, and it’s going to accelerate.”"


    The DeepMind founder says "in the next few years, at most a decade", others say 5 years, and so on and so forth

    So the idea this is "remote" is either fanciful - or wishful thinking. This is now close
  • Andy_JS Posts: 32,006
    HS2 should have linked up with HS1, and going to Euston was always a stupid idea, according to this article.

    https://reaction.life/mark-bostock-has-been-proved-totally-right-about-hs2/

    "It is hard to imagine a greater procurement disaster than HS2, the transformative high speed rail line between London and Scotland, currently being axed bit by bit, as the costs go through the roof.

    Mark Bostock, a former Arup consultant who successfully led the construction of HS1 from St Pancras to the Channel Tunnel and a former client of ours, would have had a few things to say about it. Sadly he passed away in August but he has been proven totally right about HS2. In fact, it is the greatest vindication in UK transport policy since promoters of the Stockton & Darlington Railway said it would be better than relying on canals.

    Mark led a proposal on behalf of Arup which would have seen HS2 go via a different route. It would link up with HS1 north of St Pancras. The route would have gone via a hub station connecting with Heathrow and the Great Western Railway near Iver. As now, the route would come into Old Oak Common, but never come into Euston which is simply too small. I can hear him saying now “They’ve got the alignment wrong, the most important decision in a railway. It is going to be a disaster.”"
  • AlsoLei Posts: 1,415

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, and the teachers are trying to train themselves to recognise it when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and lazier students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Harold Wilson? Youngest C20th Oxford don. Probably also one of the highest-rating PMs on most of the other requirements you mention, at least for his first term in office.

    ...but despite that, I'm not sure many would put him at the top of their personal "best PMs" list.
  • Andy_JS Posts: 32,006
    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I wish I'd posted my prediction for this seat yesterday because it was very similar to this, with Con + Reform very likely to get about 50% of the vote between them.
  • Mexicanpete Posts: 27,993
    AlsoLei said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, and the teachers are trying to train themselves to recognise it when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and lazier students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Harold Wilson? Youngest C20th Oxford don. Probably also one of the highest-rating PMs on most of the other requirements you mention, at least for his first term in office.

    ...but despite that, I'm not sure many would put him at the top of their personal "best PMs" list.
    Here's one who would.
  • Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, and the teachers are trying to train themselves to recognise it when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and lazier students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
  • rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
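    For anyone curious what that "feedback, not authorship" workflow looks like in practice, here is a minimal sketch. It assumes the pre-1.0 openai Python package, an API key in the OPENAI_API_KEY environment variable and an illustrative model name; the prompt wording and the feedback_on_draft helper are invented for the example, not a description of what anyone above actually ran.

```python
# A minimal sketch of asking an LLM what a draft misses, rather than asking it to write.
# Assumes the pre-1.0 openai package and an OPENAI_API_KEY environment variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def feedback_on_draft(question: str, draft: str) -> str:
    """Ask the model what the draft leaves out, without asking it to rewrite anything."""
    prompt = (
        f"Essay question: {question}\n\n"
        f"My draft answer:\n{draft}\n\n"
        "List the important points or counterarguments I have not covered. "
        "Do not rewrite the essay."
    )
    response = openai.ChatCompletion.create(
        model="gpt-4",  # illustrative; any chat model the account can access works
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(feedback_on_draft(
        "Why did the 1832 Reform Act pass?",
        "Draft text goes here...",
    ))
```

    The design point is that the model is never asked to produce the essay, only to critique a draft, so the substance remains the student's own work.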
  • Mexicanpete Posts: 27,993
    edited September 2023
    Andy_JS said:

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I wish I'd posted my prediction for this seat yesterday because it was very similar to this, with Con + Reform very likely to get about 50% of the vote between them.
    Tories most likely to win Tamworth, but on their own terms. Why are Reform going to fall into line? Some will, surely many won't.
  • AlsoLei Posts: 1,415
    .

    Andy_JS said:

    Dura_Ace said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    Thanks, mate. Keep us posted.
    I can keep you posted on this.

    It’s not happening today or this year, and there are a lot of gullible people on Twitter.
    What would it look like if/when it does happen?
    It's hard to say what something we haven't built will look like because we haven't built it or anything remotely like it.

    I would guess there will be multiple steps to an AGI. It's not just going to appear overnight fully formed. There will be impressive jumps in what LLMs and generative AI can do along the way. An AGI will be able to reason from first principles, which means solving tasks without having these vast databases of everything that's ever been on the Internet. An AGI also won't need prompts! ChatGPT is great, but it answers you. AGI would, by definition, be like a person, able to hold up its end of a conversation!
    I suspect that one of the next milestones on the path from current generative AI towards AGI will be some form of continuous re-training. Retrieval Augmentation / RETRO is the hot new thing, and certainly points in that direction - but it's going to take much, much more computational power to get there.
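    To make the retrieval-augmentation idea concrete, here is a toy sketch of the mechanism: fetch the most relevant passage at query time and put it in the prompt, so the generator does not have to have memorised it. Everything here (the passages, the bag-of-words similarity) is invented for illustration; real systems such as RETRO use learned embeddings, a vector store and vastly larger corpora.

```python
# A toy retrieval-augmented prompt builder: retrieve a relevant passage, then inject it
# as context. Bag-of-words cosine similarity stands in for a learned embedding model.
import math
from collections import Counter

PASSAGES = [
    "HS1 runs from St Pancras to the Channel Tunnel.",
    "Old Oak Common is a planned interchange in west London.",
    "Metaculus aggregates forecasts from many users.",
]


def vectorise(text: str) -> Counter:
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str) -> str:
    qv = vectorise(query)
    return max(PASSAGES, key=lambda p: cosine(qv, vectorise(p)))


def build_prompt(query: str) -> str:
    # The retrieved passage rides along in the prompt, so the generator need not
    # have the fact baked into its weights.
    return f"Context: {retrieve(query)}\n\nQuestion: {query}\nAnswer:"


print(build_prompt("Where does HS1 terminate?"))
```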
  • Andy_JS Posts: 32,006
    edited September 2023
    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I don't think this is a constituency poll as such. It's a projection based on other things, which may be national polling, demographics, etc. Happy to be corrected if wrong.
  • bondegezou Posts: 10,640
    edited September 2023

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    That is NOT a constituency poll. It is an extrapolation from national polling.
    If that just represents national polling, then we have to add on a by-election factor. By-elections usually show bigger swings. In which case, this should be a walk in the park for Labour.

    Or have they already done that?
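    For those wondering what "an extrapolation from national polling" means mechanically, here is a minimal poststratification sketch (the "P" in MRP) with invented numbers: estimate support within each demographic cell from national data, then weight by the constituency's own cell counts. MRP proper fits a multilevel regression to produce the cell-level estimates, and, as noted above, none of this captures a by-election swing.

```python
# Poststratification sketch: project modelled national cell-level support onto one
# constituency by weighting with that constituency's demographic make-up.
# All numbers below are invented for illustration.
NATIONAL_SUPPORT = {  # modelled P(vote Con) by (age band, 2019 vote)
    ("18-34", "Con"): 0.55, ("18-34", "Other"): 0.10,
    ("35-64", "Con"): 0.70, ("35-64", "Other"): 0.15,
    ("65+", "Con"): 0.80, ("65+", "Other"): 0.20,
}

TAMWORTH_CELLS = {  # electors per cell in the constituency (illustrative)
    ("18-34", "Con"): 6000, ("18-34", "Other"): 9000,
    ("35-64", "Con"): 18000, ("35-64", "Other"): 14000,
    ("65+", "Con"): 12000, ("65+", "Other"): 6000,
}


def poststratify(support: dict, cells: dict) -> float:
    """Weight cell-level support estimates by the constituency's cell counts."""
    total = sum(cells.values())
    return sum(support[cell] * n for cell, n in cells.items()) / total


print(f"Projected Con share: {poststratify(NATIONAL_SUPPORT, TAMWORTH_CELLS):.1%}")
```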
  • Malmesbury Posts: 49,411

    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
    Yes - and that is how ChatGPT is actually useful for various tasks. Asking it to write more than simple bits of code gets you code that does the wrong thing. But it can suggest chunks of code - ideas, things to follow up on.
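    A sketch of that "starting point, not finished code" approach: treat whatever chunk the model suggests as untrusted and wrap it in quick checks before relying on it. The suggested_rolling_mean helper below merely stands in for a model suggestion; the tests are the part that matters.

```python
# Treat model-suggested code as a draft: keep it only if it passes quick checks.
def suggested_rolling_mean(values, window):
    """Hypothetical model-suggested helper: rolling mean over a fixed window."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [sum(values[i:i + window]) / window for i in range(len(values) - window + 1)]


def test_suggestion():
    # Happy-path behaviour.
    assert suggested_rolling_mean([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
    assert suggested_rolling_mean([5], 1) == [5.0]
    # Edge case: an invalid window must be rejected.
    try:
        suggested_rolling_mean([1, 2], 0)
    except ValueError:
        pass
    else:
        raise AssertionError("window=0 should be rejected")


if __name__ == "__main__":
    test_suggestion()
    print("suggested helper passes the quick checks")
```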
  • TimS Posts: 12,660
    Andy_JS said:

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I don't think this is a constituency poll as such. It's a projection based on other things, which may be national polling, demographics, etc.
    Oh, MRP is it? That's annoying. If that's the case then I think Labour have a very good chance. A by-election should always give a worse result for the incumbent than MRP.
  • Andy_JS said:

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I wish I'd posted my prediction for this seat yesterday because it was very similar to this, with Con + Reform very likely to get about 50% of the vote between them.
    Tories most likely to win Tamworth, but on their own terms. Why are Reform going to fall into line? Some will, surely many won't.
    One thing you have to admire about the typical Reform voter. Agree or disagree, they will defend to the death the right to free non-woke speech.
  • Andy_JS said:

    HS2 should have linked up with HS1, and going to Euston was always a stupid idea, according to this article.

    https://reaction.life/mark-bostock-has-been-proved-totally-right-about-hs2/

    "It is hard to imagine a greater procurement disaster than HS2, the transformative high speed rail line between London and Scotland, currently being axed bit by bit, as the costs go through the roof.

    Mark Bostock, a former Arup consultant who successfully led the construction of HS1 from St Pancras to the Channel Tunnel and a former client of ours, would have had a few things to say about it. Sadly he passed away in August but he has been proven totally right about HS2. In fact, it is the greatest vindication in UK transport policy since promoters of the Stockton & Darlington Railway said it would be better than relying on canals.

    Mark led a proposal on behalf of Arup which would have seen HS2 go via a different route. It would link up with HS1 north of St Pancras. The route would have gone via a hub station connecting with Heathrow and the Great Western Railway near Iver. As now, the route would come into Old Oak Common, but never come into Euston which is simply too small. I can hear him saying now “They’ve got the alignment wrong, the most important decision in a railway. It is going to be a disaster.”"

    Sure - what we are building is a little mad. But the way to make it less mad is to give it a purpose. Building it for the assumed new 400kph standard and then running at 300kph or less, building it for a lot of trains an hour running to a lot of destinations and then barely running any - that is truly bonkers.
  • bondegezou Posts: 10,640

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, and the teachers are trying to train themselves to recognise it when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and lazier students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    Oxbridge is great at burnishing egos. Which is good. Most people need an ego boost and a half. Not so good, however, when you start with someone who's already a narcissist.
  • rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
    Yes - and that is how ChatGPT is actually useful for various tasks. Asking it to write more than simple bits of code gets you code that does the wrong thing. But it can suggest chunks of code - ideas, things to follow up on.
    Is there a record of what's gone through ChatGPT or is it private?

    E.g. I believe universities, for instance, are concerned about this and trying to crack down on it, but presumably only in the case of getting it to write the work. If you were to, say, put the draft of an essay in and ask "what have I missed?" or something like that, would it be able to handle that? And would that risk getting done for cheating?
  • Andy_JS Posts: 32,006

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    That is NOT a constituency poll. It is an extrapolation from national polling.
    Explains why they've got the Greens on 6% which I'm 100% certain isn't going to happen.
  • Nigelb Posts: 70,216
    No question that Hunter Biden is a dodgy character.
    But now that his plea bargain has been blown up, and he is facing trial in court, his lawyers are not unreasonably asking questions about the reliability of some of the evidence against him.

    The reliability of the digital evidence - mobile phones as well as the laptop - and the legal propriety of some of the searches (quite possibly criminal) are seriously in question.

    I understand most people are bored rigid by the details of this case, but for those who aren't, this is a very interesting account.
    https://www.emptywheel.net/2023/07/18/wapo-is-suppressing-information-that-might-debunk-devlin-barretts-latest-spin/
  • Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, and the teachers are trying to train themselves to recognise it when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and lazier students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    Oxbridge is great at burnishing egos. Which is good. Most people need an ego boost and a half. Not so good, however, when you start with someone who's already a narcissist.
    Please don’t tarnish Cambridge with Oxford.

    Cambridge creates nothing but modest, self-effacing people.
  • Leon Posts: 54,557

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, and the teachers are trying to train themselves to recognise it when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and lazier students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self-awareness which you touch on.

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: i.e. into Oxford, onto a First, into Number 10.

    It was only in Number 10 that his mediocrity became apparent.
  • Andy_Cooke Posts: 4,980

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Yes. There are lots of qualities and abilities that you can't test with an essay. No-one would think of doling out driving licenses to people who wrote a good essay on the fundamentals of safe driving.

    Why is it the test of choice for so much else?
    Because it's an easy way of doing the assessment. Whether or not it accurately reflects the knowledge or skills of the testee is secondary.

    IMHO, the best way of doing a test is scenario-based. "You are x, in situation y. You need to provide outcome z. You have access to everything you would have in a real life situation [eg open book/access to internet, etc] other than contacting someone else to get them to do it for you. You have three hours to provide z."

    Because that's what employers or anyone wanting your output will be wanting. Only thing is that it's difficult and resource-intensive to provide this way of doing things.

    So we do what behavioural psychologists call "changing the question," which is what we do when the answer is too hard: we come up with something that we can convince ourselves provides similar outcomes that's far easier to do. Hence essays and closed-book exams.
  • Farooq said:

    Can we agree on what to call Twitter now please?
    I've seen Twitter, X, TwitterX, the artist formerly known as Twitter.

    How about we settle on Twix?

    Twatter (TM by D. Cameron)
    Too many Twix make a Twax?
  • Foxy Posts: 48,356

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    Oxbridge is great at burnishing egos. Which is good. Most people need an ego boost and a half. Not so good, however, when you start with someone who's already a narcissist.
    Please don’t tarnish Cambridge with Oxford.

    Cambridge creates nothing but modest self effacing people.
    And Soviet spies of course.
  • Verulamius Posts: 1,535
    Last week the results of the Crossbencher hereditary peers by-election were announced, with Lord Meston and Lord De Clifford elected.

    https://www.parliament.uk/globalassets/documents/lords-information-office/2023/hereditary-peers-by-election-result-palmer-hylton.pdf

    The election was by STV, and it is a good example of transferring the surplus votes of Lord Meston, who was elected on first preferences, before the elimination of lower-ranked peers.
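
    For anyone curious about the mechanics of that transfer, here is a rough Python sketch of a Gregory-style surplus transfer. The ballots below are made up purely for illustration (the real by-election papers and the Lords' detailed counting rules will differ), but it shows the basic idea: each of the elected candidate's ballots passes on to its next preference at a fractional value, so that exactly the surplus is transferred.

```python
from fractions import Fraction

# Hypothetical ballots for illustration only -- not the real by-election papers.
ballots = [
    ["Meston", "De Clifford", "Rochdale"],
    ["Meston", "Rochdale"],
    ["Meston", "De Clifford"],
    ["De Clifford", "Meston"],
    ["Rochdale", "Meston"],
]
seats = 2
quota = len(ballots) // (seats + 1) + 1   # Droop quota

# Count first preferences.
firsts = {}
for b in ballots:
    firsts[b[0]] = firsts.get(b[0], 0) + 1

winner = max(firsts, key=firsts.get)      # "Meston" on these made-up ballots
surplus = firsts[winner] - quota

if surplus > 0:
    # Gregory-style transfer: every one of the winner's ballots moves to its
    # next preference at a fractional value, so the total passed on equals the surplus.
    value = Fraction(surplus, firsts[winner])
    transfers = {}
    for b in ballots:
        if b[0] == winner and len(b) > 1:
            transfers[b[1]] = transfers.get(b[1], Fraction(0)) + value
    print(f"quota={quota}, surplus={surplus}, transfer value={value}")
    for name, v in transfers.items():
        print(f"  {name}: +{v}")
```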
  • bondegezou Posts: 10,640

    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
    Yes - and it is how ChatGPT is actually useful for various tasks. Asking it to write more than simple bits of code gets you code that does the wrong thing. But it can suggest chunks of code - ideas, things to follow up on.
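
    That "what did I miss?" pattern is also easy to script if you want to reuse it. A minimal sketch against the OpenAI chat completions API as it worked at the time; the model name, file name and prompt wording are purely illustrative, and obviously the school's or university's rules on AI use apply first.

```python
# Sketch: ask the model for feedback on a draft rather than asking it to write one.
# Requires the openai package; the API key is read from the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

question = "Why did the Weimar Republic collapse?"   # illustrative essay question
with open("essay_draft.txt") as f:                   # hypothetical draft file
    draft = f.read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",                           # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a history teacher giving feedback on essay drafts."},
        {"role": "user",
         "content": (
             f"The essay question was: {question}\n\n"
             f"Here is my draft:\n{draft}\n\n"
             "Do not rewrite it. Just list the important points I have missed."
         )},
    ],
)

# A list of missed points, for the student to research and write up themselves.
print(response["choices"][0]["message"]["content"])
```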
    Is there a record of what's gone through ChatGPT or is it private?

    E.g. I believe universities are concerned about this and trying to crack down on it, but presumably only in the case of getting it to write the work. If you were to put the draft of an essay in and say "what have I missed?" or something like that, would it be able to handle that? And would that risk getting you done for cheating?
    ChatGPT has a record, but I don't think anyone can see it. (Maybe with a court order?)

    ChatGPT can handle this usage. The university has to lay out what the rules are. If the university says no ChatGPT, then it's cheating. If the university says this usage is allowed, then it's not cheating. My university has this: https://www.ucl.ac.uk/teaching-learning/generative-ai-hub/using-ai-tools-assessment Basically, there are three tiers and we state at the beginning which is being applied for each assignment.

    Tier 1: you can't use LLMs
    Tier 2: you can't get the LLM to write your assignment, but you can use it in support (e.g. the use case described above), but have to declare this
    Tier 3: the assignment intimately uses generative AI as part of the task

    While I'm here, https://openai.com/blog/chatgpt-can-now-see-hear-and-speak is presumably what got people excited on Twitter. It's not remotely AGI, but it's a nice (and expected) increase in ChatGPT's functionality.
  • Andy_JS said:

    HS2 should have linked up with HS1, and going to Euston was always a stupid idea, according to this article.

    https://reaction.life/mark-bostock-has-been-proved-totally-right-about-hs2/

    "It is hard to imagine a greater procurement disaster than HS2, the transformative high speed rail line between London and Scotland, currently being axed bit by bit, as the costs go through the roof.

    Mark Bostock, a former Arup consultant who successfully led the construction of HS1 from St Pancras to the Channel Tunnel and a former client of ours, would have had a few things to say about it. Sadly he passed away in August but he has been proven totally right about HS2. In fact, it is the greatest vindication in UK transport policy since promoters of the Stockton & Darlington Railway said it would be better than relying on canals.

    Mark led a proposal on behalf of Arup which would have seen HS2 go via a different route. It would link up with HS1 north of St Pancras. The route would have gone via a hub station connecting with Heathrow and the Great Western Railway near Iver. As now, the route would come into Old Oak Common, but never come into Euston which is simply too small. I can hear him saying now “They’ve got the alignment wrong, the most important decision in a railway. It is going to be a disaster.”"

    I vaguely remember that Arup proposal, and it was interesting - especially as I was never fully happy with either the Brum or London terminals wrt connections (or the Leeds or Manchester, either...)

    But I can see why the decisions were made, and the idea that *any* proposal in London would not face mahoosive compromises if they wanted any connectivity with the rest of the transport network is rather fantastic. We can all draw lines on a map; lines that can actually achieve what we want in reality is a very different matter.

    Personally, I would go back thirty years and plan Crossrail to be able to take two HS2 trains per hour; HS2 is next to Crossrail at Old Oak Common, and Crossrail is near HS1 at Stratford. That would have been really cool, and properly connected-up thinking. It would also have increased Crossrail's costs a bit.
  • Andy_JS Posts: 32,006
    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    The simple fact that he called a referendum on EU membership shows that he isn't the brightest person out there.
  • Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    Surely his crap autobiography is down to the crap ghost writer he enlisted. Do politicians ever write any of these things themselves?
  • Leon Posts: 54,557
    Relatedly, you will be able to talk to ChatGPT and even show it pictures, inside the next two weeks

    https://x.com/OpenAI/status/1706280618429141022?s=20

    The tech is speeding away at an incredible rate
  • Leon Posts: 54,557
    "Just had a quite emotional, personal conversation w/ ChatGPT in voice mode, talking about stress, work-life balance. Interestingly I felt heard & warm. Never tried therapy before but this is probably it? Try it especially if you usually just use it as a productivity tool."

    https://x.com/lilianweng/status/1706544602906530000?s=20

    IT'S HERE

    BRACE
  • Last week the results of the Crossbencher hereditary peers by-election were announced, with Lord Meston and Lord De Clifford elected.

    https://www.parliament.uk/globalassets/documents/lords-information-office/2023/hereditary-peers-by-election-result-palmer-hylton.pdf

    The election was by STV, and it is a good example of transferring the surplus votes of Lord Meston, who was elected on first preferences, before the elimination of lower-ranked peers.

    Can I express my outrage that John Durival Kemp was not elected? If we are going to make a choice of who gets to make laws based on who their daddy was, it is an outrage that we didn't stick Viscount Rochdale in.

    No, strike that. What an absurd spectacle. We shouldn't have a house of peers, and certainly not members who are there because an ancestor was mates with the king.
  • bondegezou Posts: 10,640

    Last week the results of the Crossbencher hereditary peers by-election were announced, with Lord Meston and Lord De Clifford elected.

    https://www.parliament.uk/globalassets/documents/lords-information-office/2023/hereditary-peers-by-election-result-palmer-hylton.pdf

    The election was by STV, and it is a good example of transferring the surplus votes of Lord Meston, who was elected on first preferences, before the elimination of lower-ranked peers.

    When you want the best electoral system, as for the Lords or for the tense situation in Northern Ireland, you of course go for STV.
  • Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    Maybe. On the one occasion I met him - an encounter that lasted an hour or two, and apologies for the name-dropping - he certainly came across as intelligent, but not remarkably so. He didn't say anything exceptionally interesting or insightful, but he didn't say anything stupid, and he appeared able to think quickly on his feet and respond fluently. And boy was he confident. Scarily confident.
  • Foxy said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    Oxbridge is great at burnishing egos. Which is good. Most people need an ego boost and a half. Not so good, however, when you start with someone who's already a narcissist.
    Please don’t tarnish Cambridge with Oxford.

    Cambridge creates nothing but modest self effacing people.
    And Soviet spies of course.
    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    Vernon Bogdanor said Cameron was the cleverest student he'd ever taught. Imagine how all the other Brasenose boys must have squirmed when they read that.
  • LostPassword Posts: 17,972
    edited September 2023

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Yes. There are lots of qualities and abilities that you can't test with an essay. No-one would think of doling out driving licenses to people who wrote a good essay on the fundamentals of safe driving.

    Why is it the test of choice for so much else?
    Because it's an easy way of doing the assessment. Whether or not it accurately reflects the knowledge or skills of the testee is secondary.

    IMHO, the best way of doing a test is scenario-based. "You are x, in situation y. You need to provide outcome z. You have access to everything you would have in a real life situation [eg open book/access to internet, etc] other than contacting someone else to get them to do it for you. You have three hours to provide z."

    Because that's what employers or anyone wanting your output will be wanting. Only thing is that it's difficult and resource-intensive to provide this way of doing things.

    So we do what behavioural psychologists call "changing the question," which is what we do when the answer is too hard: we come up with something that we can convince ourselves provides similar outcomes that's far easier to do. Hence essays and closed-book exams.
    Kinda ironically, it's the exact same laziness and "you're only cheating yourself" sort of behaviour that the kids using AI are being accused of.

    Especially with teaching becoming ever more dominated by teaching to the test, because the teachers are judged by the results too. Such an effing waste of time.
  • Jim_Miller Posts: 2,933
    The commenter who suggested helicopters instead of HS2 is, I think, on to something. Although I guessed he was being sarcastic, I think, in many places, some of the newer forms of air travel make more sense -- for people -- than trains do. Not helicopters, but aircraft that can do what helicopters do, more safely and more cheaply.

    For instance, recently I happened to see an experimental craft which can fly at about 60 miles an hour, and do 30 on a road. So you could fly from your home to work in it, and then park it in an (underground, of course) garage.

    (As it happens, I have ridden on trains in many places, and have generally enjoyed the experience. But I don't see why taxpayers should subsidize my trips.)
  • Phil Posts: 2,225
    edited September 2023
    Taz said:

    An interesting twitter thread.

    University tuition fees. In 2019 a quarter of the cost of universities was going towards pensions.

    Yet the students merrily support the strikers, because, Tories innit.


    https://x.com/ironeconomist/status/1693597906299756810?s=61&t=s0ae0IFncdLS1Dc7J0P_TQ

    And? You could say the same about any job with decent pension provision - 25% of the cost of employment will be going into the pension fund to pay for the pension entitlements that come with the job.

    (Excepting those government jobs where the pensions are paid out of general taxation of course.)
  • bondegezou Posts: 10,640
    Phil said:

    Taz said:

    An interesting twitter thread.

    University tuition fees. In 2019 a quarter of the cost of universities was going towards pensions.

    Yet the students merrily support the strikers, because, Tories innit.


    https://x.com/ironeconomist/status/1693597906299756810?s=61&t=s0ae0IFncdLS1Dc7J0P_TQ

    And? You could say the same about any job with decent pension provision - 25% of the cost of employment will be going into the pension fund to pay for the pension entitlements that come with the job.

    (Excepting those government jobs where the pensions are paid out of general taxation of course.)
    And most of the cost of a university and of higher education is staff.
  • Leon Posts: 54,557

    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    Maybe. On the one occasion I met him - an encounter that lasted an hour or two and apologies for the name-dropping - he certainly came across as intelligent but not remarkably so. He didn't say anything exceptionally interesting or insightful but he didn't say anything stupid and appeared able to think quickly on his feet and respond fluently. And boy was he confident. Scarily confident.
    Ah, I don't doubt his confidence, I just can't see serious intelligence

    But maybe I set too much store by good writing. His lacklustre, boring and feeble memoir is the main basis for this opinion
  • The weather has denied England a world record ODI total.
  • rcs1000rcs1000 Posts: 56,690
    Leon said:

    Andy_JS said:

    Dura_Ace said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    Thanks, mate. Keep us posted.
    I can keep you posted on this.

    It’s not happening today or this year, and there are a lot of gullible people on Twitter.
    What would it look like if/when it does happen?
    It's hard to say what something we haven't built will look like because we haven't built it or anything remotely like it.

    I would guess there will be multiple steps to an AGI. It's not just going to appear overnight fully formed. There will be impressive jumps in what LLMs and generative AI can do along the way. An AGI will be able to reason from first principles, which means solving tasks without having these vast databases of everything that's ever been on the Internet. An AGI also won't need prompts! ChatGPT is great, but it answers you. AGI would, by definition, be like a person, able to hold up its end of a conversation!
    Metaculus thinks AGI will arrive around 2026-2030. Elon Musk reckons by 2029, possibly sooner


    https://venturebeat.com/ai/elon-musk-reveals-xai-efforts-predicts-full-agi-by-2029/



    Intriguingly that was Kurzweil's prediction 6 years ago, years before ChatGPT

    "At the 2017 SXSW Conference in Austin, Texas, Kurzweil gave a typically pinpoint prediction.

    “By 2029, computers will have human-level intelligence,” he said. “That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are. Today, that’s not just a future scenario. It’s here, in part, and it’s going to accelerate.”"


    The DeepMind founder says "in the next few years, at most a decade", others say 5 years, and so on and so forth

    So the idea this is "remote" is either fanciful - or wishful thinking. This is now close
    I know Demis, so I will ask him for his more nuanced views on when AGI is reached :-)

  • rcs1000rcs1000 Posts: 56,690
    edited September 2023
    AlsoLei said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, thus saving a lot of time while simultaneously outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Harold Wilson? Youngest C20th Oxford don. Probably also one of the highest-rating PMs on the most of the other requirements you mention, at least for his first term in office.

    ...but despite that, I'm not sure many would put him at the top of their personal "best PMs" list.
    He's top of Lady Falkender's list.
  • rcs1000rcs1000 Posts: 56,690

    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
    Yes - and it is how ChatGPT is actually useful for various tasks. Asking it to write more than simple bits of code gets you code that does the wrong thing. But it can suggest chunks of code - ideas, things to follow up on.
    Is there a record of what's gone through ChatGPT or is it private?

    EG I believe universities for instance are concerned about this and trying to crack down on it, but presumably in the case of getting it to write the work. If you were to eg put the draft of an essay in and say "what have I missed" or something like that, would it be able to handle that? And would that be risking getting done for cheating?
    Do you remember this: https://en.wikipedia.org/wiki/AOL_search_log_release
  • MaxPBMaxPB Posts: 38,518
    Good news, my mother-in-law is on the plane back to Switzerland! My wife has suggested we don't have her visit again until baby number two arrives next year; I concurred.
  • MexicanpeteMexicanpete Posts: 27,993
    Suella stirring the migration pot for Trump.

    God, I detest this woman.
  • MalmesburyMalmesbury Posts: 49,411

    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
    Yes - and it is how ChatGPT is actually useful for various tasks. Asking it to write more than simple bits of code gets you code that does the wrong thing. But it can suggest chunks of code for simple tasks.
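    A minimal sketch of that "ask for feedback, not answers" pattern, assuming the OpenAI Python SDK's chat-completions interface; the model name, essay question, file name and prompt wording below are placeholders, not anyone's actual setup.

    ```python
    # Hypothetical sketch: send a draft answer to a chat model and ask only for
    # feedback, not a rewrite. Requires the "openai" package (>=1.0) and an
    # OPENAI_API_KEY in the environment; the question and file name are made up.
    from openai import OpenAI

    client = OpenAI()

    question = "To what extent was the League of Nations doomed from the start?"
    with open("essay_draft.txt") as f:
        draft = f.read()

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a history teacher. Do not rewrite the essay; "
                        "only list points or evidence the student has missed."},
            {"role": "user",
             "content": f"Question: {question}\n\nMy draft answer:\n{draft}\n\n"
                        "What did I miss?"},
        ],
    )
    print(response.choices[0].message.content)
    ```

    The same shape works for code: paste a function and ask what edge cases or simpler approaches it has missed, rather than asking the model to write the thing outright.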
  • MexicanpeteMexicanpete Posts: 27,993

    Suella stirring the migration pot for Trump.

    God, I detest this woman.

    Edit. Is she pitching for Prime Minister or President?
  • ChrisChris Posts: 11,705
    Leon said:

    Relatedly, you will be able to talk to ChatGPT and even show it pictures, inside the next two weeks

    Can we show it naked selfies?
  • MalmesburyMalmesbury Posts: 49,411

    The commenter who suggested helicopters instead of HS2 is, I think, on to something. Although I guessed he was being sarcastic, I think, in many places, some of the newer forms of air travel make more sense -- for people -- than trains do. Not helicopters, but aircraft that can do what helicopters do, more safely and more cheaply.

    For instance, recently I happened to see an experimental craft which can fly at about 60 miles an hour, and do 30 on a road. So you could fly from your home to work in it, and then park it in an (underground, of course) garage.

    (As it happens, I have ridden on trains in many places, and have generally enjoyed the experience. But I don't see why taxpayers should subsidize my trips.)

    More that things like people-carrying drones with a range of 100 miles are well on the way to reality. It's not hard to imagine that they would be popular for airport transfers and similar.
  • LeonLeon Posts: 54,557
    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    Thanks, mate. Keep us posted.
    I can keep you posted on this.

    It’s not happening today or this year, and there are a lot of gullible people on Twitter.
    What would it look like if/when it does happen?
    It's hard to say what something we haven't built will look like because we haven't built it or anything remotely like it.

    I would guess there will be multiple steps to an AGI. It's not just going to appear overnight fully formed. There will be impressive jumps in what LLMs and generative AI can do along the way. An AGI will be able to reason from first principles, which means solving tasks without having these vast databases of everything that's ever been on the Internet. An AGI also won't need prompts! ChatGPT is great, but it answers you. AGI would, by definition, be like a person, able to hold up its end of a conversation!
    Metaculus thinks AGI will arrive around 2026-2030. Elon Musk reckons by 2029, possibly sooner


    https://venturebeat.com/ai/elon-musk-reveals-xai-efforts-predicts-full-agi-by-2029/



    Intriguingly that was Kurzweil's prediction 6 years ago, years before ChatGPT

    "At the 2017 SXSW Conference in Austin, Texas, Kurzweil gave a typically pinpoint prediction.

    “By 2029, computers will have human-level intelligence,” he said. “That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are. Today, that’s not just a future scenario. It’s here, in part, and it’s going to accelerate.”"


    The DeepMind founder says "in the next few years, at most a decade", others say 5 years, and so on and so forth

    So the idea this is "remote" is either fanciful - or wishful thinking. This is now close
    I know Demis, so I will ask him for his more nuanced views on when AGI is reached :-)

    Tell him I’m enjoying his book
  • LeonLeon Posts: 54,557
    If ChatGPT is good at voice chat that is absolutely going to destroy Alexa, Siri etc

    I was thinking yesterday how (relatively) crap they are. Incapable of proper conversation. I mainly use them for cooking timers, weather, switching things on, etc

    Imagine a voice assistant that will really listen and give you serious or kind or funny or helpful answers, and continue a dialogue indefinitely. That’s quite revolutionary
  • MalmesburyMalmesbury Posts: 49,411
    Andy_JS said:

    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, thus saving a lot of time while simultaneously outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    The simple fact that he called a referendum on EU membership shows that he isn't the brightest person out there.
    Events. Even a year or two earlier, the referendum would have been 65-35 Remain and would have settled the question emphatically.
  • LeonLeon Posts: 54,557

    Andy_JS said:

    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, thus saving a lot of time while simultaneously outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    The simple fact that he called a referendum on EU membership shows that he isn't the brightest person out there.
    Events. Even a year or two earlier, the referendum would have been 65-35 Remain and would have settled the question emphatically.
    Osborne told him not to call it. Apparently. Of the two, Osborne seems notably cleverer and sharper.
  • Suella stirring the migration pot for Trump.

    God, I detest this woman.

    Edit. Is she pitching for Prime Minister or President?
    Führer.
  • Sean_FSean_F Posts: 37,068

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    That is NOT a constituency poll. It is an extrapolation from national polling.
    If that just represents national polling, then we have to add on a by-election factor. By-elections usually show bigger swings. In which case, this should be a walk in the park for Labour.

    Or have they already done that?
    Nationally, the Conservative vote share is down about 17% from 2019, not the 26% shown in this projection, and the Labour vote share is up about 12%, not 18%. So it would seem that any boost from a by-election is already being factored in.
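    (For the arithmetic, a quick sketch of the conventional Butler swing - half the gap between the two parties' changes - using only the percentage-point figures above; nothing else is assumed.)

    ```python
    # Butler swing to Labour = (Labour change - Conservative change) / 2,
    # using only the percentage-point changes quoted in the post above.
    def butler_swing(con_change, lab_change):
        return (lab_change - con_change) / 2

    print(butler_swing(-26, +18))  # projection in the graphic: 22.0-point swing
    print(butler_swing(-17, +12))  # change implied by national polling: 14.5
    ```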
  • Can I get some recommendations for restaurants in Henley-on-Thames please?
  • MexicanpeteMexicanpete Posts: 27,993
    edited September 2023

    Can I get some recommendations for restaurants in Henley-on-Thames please?

    If you have a vehicle with you, Tom Kerridge's places in Marlow, just up the road, are as good as the hype. Bird in Hand is great, and he has a butcher's shop that doubles as a restaurant (Butcher's Tap), which was a good lunch option.
  • LeonLeon Posts: 54,557
    Another prediction: self-teaching AI next year, the economy overturned by 2026

    https://x.com/simeon_cps/status/1706621453100048410?s=46&t=bulOICNH15U6kB0MwE6Lfw
  • nico679nico679 Posts: 6,141
    Multiculturalism has failed, according to Braverman, who fails to see the irony of her argument given that she’s the child of immigrant parents!

  • ChrisChris Posts: 11,705
    Leon said:

    If ChatGPT is good at voice chat that is absolutely going to destroy Alexa, Siri etc

    I was thinking yesterday how (relatively) crap they are. Incapable of proper conversation. I mainly use them for cooking timers, weather, switching things on, etc

    Imagine a voice assistant that will really listen and give you serious or kind or funny or helpful answers, and continue a dialogue indefinitely. That’s quite revolutionary

    Imagine a device that could simulate the actions of the human hand, coupled with a voice assistant that could give you kind, funny or helpful conversation while working that device - while simulating a voice of your choice - you'd have heaven on earth, wouldn't you? Or the closest thing to it, pending the whole-body version.
  • NigelbNigelb Posts: 70,216

    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, thus saving a lot of time while simultaneously outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    Maybe. On the one occasion I met him - an encounter that lasted an hour or two and apologies for the name-dropping - he certainly came across as intelligent but not remarkably so. He didn't say anything exceptionally interesting or insightful but he didn't say anything stupid and appeared able to think quickly on his feet and respond fluently. And boy was he confident. Scarily confident.
    Confidence is a massively underrated quality regarding climbing the greasy pole.
    It's somewhat overrated with respect to making good decisions.
  • Suella stirring the migration pot for Trump.

    God, I detest this woman.

    Edit. Is she pitching for Prime Minister or President?
    Leader of the Opposition.
  • VerulamiusVerulamius Posts: 1,535

    Last week the results of the Crossbench hereditary peers' by-election were announced, with Lord Meston and Lord De Clifford elected.

    https://www.parliament.uk/globalassets/documents/lords-information-office/2023/hereditary-peers-by-election-result-palmer-hylton.pdf

    The election was by STV and is a good example of transferring the surplus votes of Lord Meston, who was elected on first preferences, before the elimination of lower-ranked peers (a sketch of that surplus-transfer step follows at the end of this post).

    Can I express my outrage that John Durival Kemp was not elected? If we are going to make a choice of who gets to make laws based on who their daddy was, it is an outrage that we didn't stick Viscount Rochdale in.

    No, strike that. What an absurd spectacle. We shouldn't have a house of peers, and certainly not members who are there because an ancestor was mates with the king.
    The two peerages have quite different pedigrees.

    The first Baron Meston was a financial civil servant in India and became a peer in 1919. He later became President of the Liberal Party.

    In contrast the first Baron De Clifford was a soldier and was sent to quell the Scots in 1296 and became Lord Warden of the Marches. He received the writ of peerage in 1299. He died at the Battle of Bannockburn.
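    A minimal sketch of the surplus-transfer step mentioned above, assuming a simple Gregory-style fractional transfer; the real Lords by-election counting rules differ in detail, and the quota and ballots below are a toy illustration rather than the actual figures.

    ```python
    # Toy Gregory-style surplus transfer: an elected candidate's ballots are
    # passed to their next continuing preference at a reduced weight.
    from collections import defaultdict
    from fractions import Fraction

    def transfer_surplus(ballots, weights, winner, quota, continuing):
        """Redistribute the winner's surplus over quota to later preferences."""
        held = [i for i, b in enumerate(ballots) if b and b[0] == winner]
        total = sum((weights[i] for i in held), Fraction(0))
        surplus = total - quota
        gains = defaultdict(lambda: Fraction(0))
        if surplus <= 0:
            return gains
        factor = surplus / total              # fraction of each vote passed on
        for i in held:
            weights[i] *= factor
            nxt = next((c for c in ballots[i][1:] if c in continuing), None)
            if nxt is not None:
                gains[nxt] += weights[i]
        return gains

    # Toy count: "Meston" reaches a quota of 2 on first preferences, so the
    # surplus of 1 vote flows on at one third weight per ballot.
    ballots = [["Meston", "De Clifford"], ["Meston", "Rochdale"], ["Meston"],
               ["De Clifford"], ["Rochdale"]]
    weights = [Fraction(1)] * len(ballots)
    print(dict(transfer_surplus(ballots, weights, "Meston", 2,
                                {"De Clifford", "Rochdale"})))
    # {'De Clifford': Fraction(1, 3), 'Rochdale': Fraction(1, 3)}
    ```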
  • NigelbNigelb Posts: 70,216
    There are eight Democratic seats up in swing or red states in 2024. Six of the Dems in/running for those seats have called for Menendez to resign (Brown, Tester, Casey, Baldwin, Rosen, Slotkin)

    The only two who haven't: Joe Manchin and Kyrsten Sinema

    https://twitter.com/stevemorris__/status/1706682579569565713
  • Can I get some recommendations for restaurants in Henley-on-Thames please?

    I have been to the Villa Marina a few times during the Regatta, it was good
  • Jim_MillerJim_Miller Posts: 2,933
    edited September 2023
    Farooq said: "when you drive somewhere, taxpayers subsidise that trip too"

    In the US, users pay for most roads, through motor fuel taxes. In some places they also pay through tolls on the roads, and they often pay tolls on new bridges.

    Many states use part of their motor fuel taxes to subsidize public transit, including rail transit: https://reason.org/policy-brief/how-much-gas-tax-money-states-divert-away-from-roads/

    (I think the first question one should ask about any transportation project is whether it can be paid for by user fees. Not the only question, of course; you will also want to look at both negative and positive externalities. But if you care at all about efficiency, you should start with that user fee question.)



  • NigelbNigelb Posts: 70,216
    Host range, transmissibility and antigenicity of a pangolin coronavirus

    https://www.nature.com/articles/s41564-023-01476-x
    The pathogenic and cross-species transmission potential of SARS-CoV-2-related coronaviruses (CoVs) remain poorly characterized. Here we recovered a wild-type pangolin (Pg) CoV GD strain including derivatives encoding reporter genes using reverse genetics. In primary human cells, PgCoV replicated efficiently but with reduced fitness and showed less efficient transmission via airborne route compared with SARS-CoV-2 in hamsters. PgCoV was potently inhibited by US Food and Drug Administration approved drugs, and neutralized by COVID-19 patient sera and SARS-CoV-2 therapeutic antibodies in vitro. A pan-Sarbecovirus antibody and SARS-CoV-2 S2P recombinant protein vaccine protected BALB/c mice from PgCoV infection. In K18-hACE2 mice, PgCoV infection caused severe clinical disease, but mice were protected by a SARS-CoV-2 human antibody. Efficient PgCoV replication in primary human cells and hACE2 mice, coupled with a capacity for airborne spread, highlights an emergence potential. However, low competitive fitness, pre-immune humans and the benefit of COVID-19 countermeasures should impede its ability to spread globally in human populations...
  • carnforthcarnforth Posts: 4,468
    nico679 said:

    Multiculturalism has failed, according to Braverman, who fails to see the irony of her argument given that she’s the child of immigrant parents!

    Yes, immigrants must hold political views which are crude extrapolations of the circumstances of their birth. Otherwise they are being naughty. Can't have them thinking for themselves.
  • "This is going to be "fun".

    Developer gets permission to build two blocks of flats.

    Developer builds flats to a different design and starts selling homes in flats.

    Council issues order to demolish the blocks and rebuild them correctly."

    https://twitter.com/ianvisits/status/1706647481814069378

    "The two buildings differ in both external and internal design. Councillors approved two glass-clad blocks, but instead they were given metal features and grey cladding.

    Greenwich says that other breaches include:
    * Residents have poorer quality accommodation than was promised
    * Promised roof gardens and children’s play areas have not been built
    * The footprint of the towers is bigger than was promised
    * “accessible” apartments for wheelchair users have steps to their balconies, meaning residents cannot use them
    * car parking has replaced a promised landscaped garden
    * a residents’ gym has replaced commercial floorspace"

    The developer will, I expect, go bust, leaving the flat owners with an even bigger problem. But my question is why the council did not notice these massive breaches of the planning permission much earlier; say, before anyone moved in.
  • nico679nico679 Posts: 6,141
    Only 1.5% of asylum applications last year were related to sexual orientation. So it's hardly the problem that the odious witch has implied.
  • bondegezoubondegezou Posts: 10,640
    Nigelb said:

    Host range, transmissibility and antigenicity of a pangolin coronavirus

    https://www.nature.com/articles/s41564-023-01476-x
    The pathogenic and cross-species transmission potential of SARS-CoV-2-related coronaviruses (CoVs) remain poorly characterized. Here we recovered a wild-type pangolin (Pg) CoV GD strain including derivatives encoding reporter genes using reverse genetics. In primary human cells, PgCoV replicated efficiently but with reduced fitness and showed less efficient transmission via airborne route compared with SARS-CoV-2 in hamsters. PgCoV was potently inhibited by US Food and Drug Administration approved drugs, and neutralized by COVID-19 patient sera and SARS-CoV-2 therapeutic antibodies in vitro. A pan-Sarbecovirus antibody and SARS-CoV-2 S2P recombinant protein vaccine protected BALB/c mice from PgCoV infection. In K18-hACE2 mice, PgCoV infection caused severe clinical disease, but mice were protected by a SARS-CoV-2 human antibody. Efficient PgCoV replication in primary human cells and hACE2 mice, coupled with a capacity for airborne spread, highlights an emergence potential. However, low competitive fitness, pre-immune humans and the benefit of COVID-19 countermeasures should impede its ability to spread globally in human populations...

    Don't snog a sneezing pangolin - got it.
This discussion has been closed.