This will make you even more confused about HS2 – politicalbetting.com


Comments

  • algarkirk Posts: 10,706

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    That's an interesting observation. Universities are shitting themselves (or should be) about AI and students using it to cheat in assessment. We have already had to ditch most online assessment due to cheating issues (students chatting on WhatsApp during exams etc) and now any essay based questions in an online exam are so easily answered with ChatGPT and the like.

    So it will be back to exam halls, and handwritten papers. We use software such as Turnitin to detect plagiarism in work, but as far as I know Turnitin does not pick up ChatGPT, and will struggle to. Your test is an interesting example where cheating can be found.

    However, I think humans can detect ChatGPT answers at the moment, at least in my limited field. We had some examples in a chemistry re-sit exam. The language used to answer some of the longer-form questions is clearly not from the student (non-English extraction).
    I started university 50 years ago exactly. It is usually in the top 10 or so UK unis in the current lists. We had a stellar outcome in my department in finals - 1976. Nearly 8% got firsts; the rest equally divided between 2.1 and 2.2. In many departments there were no firsts at all. In those days that was a sign of a truly rigorously academic department.

    The classification depended entirely on 9 three hour papers in an exam room over 2 weeks of that lovely summer.

    There is much to be said for both elements of this experience - Firsts being really rare, and performance completely immune from the possibility of cheating.

    There was the added bliss of knowing that you could spend quite a bit of time doing extra curricular stuff without pressures of graded coursework, dissertations and modular exams every fortnight. Our much maligned and wonderful young people could do with a bit of that.
  • turbotubbs Posts: 15,513
    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    They get told not to cheat in exams using WhatsApp but they do it anyway. We have electronic resources for our content, and when students access it, it leaves a log, i.e. we can see who accesses it and when. This summer a student went to the toilet during an exam (allowed) for 20 minutes (suspicious) and was shown to have accessed the content online from a hidden phone (very much not allowed).

    Anyone who cheats generally thinks they are (a) cleverer than they actually are and (b) not going to get caught.

    They are very often wrong.
  • AlsoLei Posts: 749
    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    I've just refreshed my CV, and turned to ChatGPT, Bard, and Bloom for help.

    They were most useful for compressing what I'd written to convey the same points in about 3/4 of the space, and also did a good job with making some of my achievements sound a bit more concrete.

    On the other hand, they tended to undersell me a bit - one turned every instance of "Principal" and "Lead" into "Senior". They also had a tendency to fiddle around with numbers, turning 12 years' experience into 5, and a team of 45 people into one of 20.

    So, as ever: pretty useful, as long as you keep an eye on them.
  • Chris Posts: 11,150

    ...

    Andy_JS said:

    HYUFD said:

    TimS said:

    New Yougov out:

    Con: 27% (+3 from 13-14 Sep)
    Lab: 43% (-2)
    Lib Dems: 10% (+1)
    Reform UK: 8% (=)
    Green: 7% (-2)
    SNP: 4% (+1)

    https://x.com/YouGov/status/1706625693302325430?s=20

    Fieldwork all post Rishi's announcement cancelling climate change. Three polls now so I think we can declare a bounce, of around 4-5%.

    I think this shows the impact when you have one party opening up a contentious issue where the public is probably more evenly split than polling VI. It got a huge amount of press, and if even say 35% of people agreed with Rishi that might have been enough to push the polling up.

    Long term it's a reversion to the polling numbers earlier in the summer, before quite a marked dip the week before the net zero announcements.

    Conservatives back ahead in the South too, 35% to 34% for Labour and 14% for the LDs.

    The Conservatives' lead amongst Leavers extends to 47% to 21% over Labour, with 17% for RefUK.

    The Conservatives also lead 51% to 20% with over-65s, and narrow Labour's lead among 50-64s to 37% to 32%.

    https://d3nkl3psvxxpe9.cloudfront.net/documents/TheTimes_VI_230922_W.pdf
    Con + Reform is relatively close to Labour in a lot of recent polls. And most of those voters (Reform) may go back to the Tories in certain circumstances.
    Call me cynical, but I have my doubts that a decent proportion of Reform voters are enthusiastic about having a Prime Minister and Home Secretary whose families hail from the Indian subcontinent. Relying on those with racist tendencies to come "home" under such circumstances may be wishful thinking. Mind you, it is clearly a tactic that gives Suella some confidence.
    Thankfully, right-wing zanies are often not very bright.
  • Options
    eek said:

    darkage said:

    Eabhal said:

    Taz said:

    Foxy said:
    So.

    The market is deciding. The hysteria about the announcement last week was partly synthetic and partly misplaced. Just because people can sell something doesn’t mean they will.

    Auto makers work on cycle times of years on products and platforms. They’d not be likely to chop and change at the govt’s whim.
    But this can't be true. It was Keir Starmer forcing Nissan et al to ditch petrol. Sunak saved people from having to buy an electric car, it was in all the right newspapers and TV news shows. Thanks to Rishi making Long-Term Decisions for a Brighter Future, the dread threat of all EV by 2030 was removed.

    Nissan must be mistaken.
    Nissan don't make new cars at the £13k entry level range of the market like the Kia Picanto etc

    The Nissan Juke is their cheapest car at £21k: https://www.nissan.co.uk/vehicles/new-vehicles.html

    By 2030 it seems entirely plausible that an electric Juke will be as cheap as a petrol Juke, but it does not look likely that an electric Picanto would be available as cheaply as a petrol Picanto.

    So again, if in six years' time you could get a cheap petrol vehicle like the Picanto for £13k in real terms, but the cheapest electric is £21k in real terms (currently £27k is the cheapest), then should the Picanto be outlawed and people who want to buy it be forced to pay eight grand more?

    We need to continue with what the market has been doing from Tesla onwards, which is to start at the top of the market and work down with electrification, not the other way around. If in 2030 the only petrol vehicles the market still offers are 1.0 litre runarounds like the Picanto, simply because electrification of them isn't affordably ready yet, then what's the harm in that?
    We shouldn't really go by capital price but by the cost paid per month by the purchaser.
    Given that only a very small fraction of people buying new cars pay cash on the full price, and 80%+ get PCP (we can debate the wisdom of going the PCP route, but for this we simply recognise that it is the default route to new car purchase at the moment, and therefore what the market will be following), we need to look at the main monthly expenditure of the purchaser.

    Which is PCP monthly payment plus petrol or electricity costs.

    Petrol comes in at c. £1.50 per litre at the moment.

    The majority of those buying electric cars will be recharging at home overnight (70%+; there's a need to address those who cannot do this but, again, the overall market is driven by those who can, and the core need would be to fill in the gap for those who can't). At the moment, an EV tariff from Octopus gives £0.075 per kWh overnight.

    The Picanto does c. 13 miles per litre. Assuming the default given by Kia on their finance calculator of 10,000 miles per year, that costs £1,155 per year in petrol, or £96.30 per month. The finance calculator for the Picanto gives (at 10% down payment of £1,350) a cost of £206.58 per month on PCP. This leads to a cost on PCP plus fuel of £302.88 per month to the purchaser.

    The Ceed comes in at £21k, so the finance for a putative £21k Kia EV can be looked at on the same site (which helps) and comes out at £342.02 per month (using the same £1,350 deposit, which is under 10% this time and probably incurs a slightly higher interest rate, but we need it to be comparable for the purchaser). If the EV has an efficiency similar to the MGZ4 (3.8 miles/kWh), it would cost £197.37 per year in electricity, or £16.45 per month.
    Cost is then £358.47 per month to the purchaser for PCP plus electricity.

    The £21k EV is therefore about 18% more expensive per month to the purchaser than the £13k ICE, rather than the 61% premium implied by the sticker prices (a quick sketch of these sums is at the end of this comment). The EV price only needs to fall to about £18k to match the affordability of a £13k ICE to the purchaser, to all intents and purposes.
    A large portion of people buying cheap small cars are parking on the street, not off-road. I certainly was.

    Even on your own average, the 30% who will not be recharging at home overnight need to be included in the maths, and I strongly suspect that 30% is disproportionately those buying smaller, cheaper vehicles.

    Compare like-for-like by comparing recharging rates at commercial charging stations and redo your maths.

    Want to fix electric for everyone? Addressing the charging issue is the biggest thing to tackle, not quibbling over a year or two in the transition to electric.
    Perhaps we should ban on-road parking,
    like the Japanese?

    Would free up space equivalent to 16 motorways.
    My Spanish Father-in-Law doesn't understand why the UK doesn't build underground carparks as all the towns in Spain seem to have. Simples - because we are incompetent and corrupt. And they are not.

    Hang on - I hear right wing voices say - the Spanish ARE corrupt. And that is true. And yet they can stick underground car parks into their towns and we can't afford to...
    Planners don’t approve them because more parking encourages driving. Or something.
    It's not entirely this. They are also expensive to build, and there are complex land assembly issues etc.
    Costs money, and because of the way the Treasury looks at everything, only the cheapest options are allowed...
    Friend of a friend owned some land in west London and wanted to develop underground parking. Kensington & Chelsea said no because they were looking to reduce the number of parking spaces in the borough.
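    Since the numbers above are doing all the work, here is a minimal Python sketch of that monthly-cost comparison, using only the figures quoted in the post (the Kia finance-calculator PCP payments, 13 miles per litre at £1.50/litre, 3.8 miles/kWh at £0.075/kWh, 10,000 miles a year). The PCP payments are taken as quoted rather than derived, so treat it as an illustration of the arithmetic, not a pricing model; small rounding differences against the quoted totals are expected.

    MILES_PER_YEAR = 10_000

    def monthly_running_cost(miles_per_unit, price_per_unit):
        """Petrol or electricity cost per month, assuming 10,000 miles a year."""
        return MILES_PER_YEAR / miles_per_unit * price_per_unit / 12

    # Petrol Picanto: c. 13 miles/litre at £1.50/litre, plus £206.58/month PCP.
    picanto_monthly = 206.58 + monthly_running_cost(13, 1.50)    # ~£302.73

    # Putative £21k Kia EV: 3.8 miles/kWh at £0.075/kWh overnight, plus £342.02/month PCP.
    ev_monthly = 342.02 + monthly_running_cost(3.8, 0.075)       # ~£358.47

    print(f"Picanto: £{picanto_monthly:.2f}/month, EV: £{ev_monthly:.2f}/month")
    print(f"EV premium to the purchaser: {ev_monthly / picanto_monthly - 1:.0%}")
    # Roughly an 18% monthly premium, versus the 61% gap in sticker prices.

    On those figures the quoted conclusion follows: the gap the purchaser actually feels is the monthly one, not the sticker-price one.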
  • Andy_JS said:

    HYUFD said:

    TimS said:

    New Yougov out:

    Con: 27% (+3 from 13-14 Sep)
    Lab: 43% (-2)
    Lib Dems: 10% (+1)
    Reform UK: 8% (=)
    Green: 7% (-2)
    SNP: 4% (+1)

    https://x.com/YouGov/status/1706625693302325430?s=20

    Fieldwork all post Rishi's announcement cancelling climate change. Three polls now so I think we can declare a bounce, of around 4-5%.

    I think this shows the impact when you have one party opening up a contentious issue where the public is probably more evenly split than polling VI. It got a huge amount of press, and if even say 35% of people agreed with Rishi that might have been enough to push the polling up.

    Long term it's a reversion to the polling numbers earlier in the summer, before quite a marked dip the week before the net zero announcements.

    Conservatives back ahead in the South too, 35% to 34% for Labour and 14% for the LDs.

    The Conservatives' lead amongst Leavers extends to 47% to 21% over Labour, with 17% for RefUK.

    The Conservatives also lead 51% to 20% with over-65s, and narrow Labour's lead among 50-64s to 37% to 32%.

    https://d3nkl3psvxxpe9.cloudfront.net/documents/TheTimes_VI_230922_W.pdf
    Con + Reform is relatively close to Labour in a lot of recent polls. And most of those voters (Reform) may go back to the Tories in certain circumstances.
    If you're going to assume a slippage of RefUK to the Tories, I think you probably also need to assume a similar slippage of Green to Labour.

    I'd be surprised not to see both parties do a fair bit worse when push comes to shove than they are polling now. Greens have done well in local elections recently but will have their work cut out holding their single current seat, and neither party has any real prospect of making gains at the General Election. So they are both going to struggle to cut through and are very likely to be squeezed, particularly in the relatively marginal seats that matter.

    One benefit the Tories do have here is that, whereas the Greens are pretty likely to stand in most places, it's not clear RefUK will.
  • turbotubbs Posts: 15,513

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
  • Farooq Posts: 10,798

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    They get told not to cheat in exams using WhatsApp but they do it anyway. We have electronic resources for our content, and when students access it, it leaves a log, i.e. we can see who accesses it and when. This summer a student went to the toilet during an exam (allowed) for 20 minutes (suspicious) and was shown to have accessed the content online from a hidden phone (very much not allowed).

    Anyone who cheats generally thinks they are (a) cleverer than they actually are and (b) not going to get caught.

    They are very often wrong.
    Can you say "often"? By definition, you don't know how many successful cheating attempts went undetected.
  • Leon Posts: 47,730
    rcs1000 said:

    Regarding the video, I don't think it's AI generated, but I do think it is sped-up compared to reality. If you look at the moments when a block is pushed by a human, or when one falls slightly, they happen far too rapidly. (Indeed, they look like the block is snapping into place.)

    That's a tell tale sign that it has been sped up.

    Edit to add: I think the speeding up is also the reason it looks fake. There are too many moments where it seems slightly off.

    Fascinating. Someone else said this on TwitterX: slow it down to .25 speed

    I'm gonna have a go now
  • Mexicanpete Posts: 25,475
    edited September 2023
    Not long now before Suella takes to the boards.

    Let's see how her audition to become next PM works out.

    Are we all excited?
  • glw Posts: 9,556

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    That's an interesting observation. Universities are shitting themselves (or should be) about AI and students using it to cheat in assessment. We have already had to ditch most online assessment due to cheating issues (students chatting on WhatsApp during exams etc) and now any essay based questions in an online exam are so easily answered with ChatGPT and the like.

    So it will be back to exam halls, and handwritten papers. We use software such as Turnitin to detect plagiarism in work, but as far as I know Turnitin does not pick up ChatGPT, and will struggle to. Your test is an interesting example where cheating can be found.

    However, I think humans can detect ChatGPT answers at the moment, at least in my limited field. We had some examples in a chemistry re-sit exam. The language used to answer some of the longer-form questions is clearly not from the student (non-English extraction).
    I think TurnItIn were looking at a ChatGPT detection tool, but nothing they’ve done is very accurate yet. We’ve decided not to trust any AI-detection software.
    Hypothetically, if I were going to offer CaaS (Cheating as a Service), I would definitely train and test against detection tools. ChatGPT isn't the software you should worry about (they at least want to appear to be good), but there are almost certainly other groups training AI to do bad things like generate essays on demand. So being able to catch ChatGPT offers a false sense of security, a bit like all the rubbish about watermarking AI-generated images when the tools we should be worrying about will be watermark-free.
  • Farooq Posts: 10,798
    Can we agree on what to call Twitter now please?
    I've seen Twitter, X, TwitterX, the artist formerly known as Twitter.

    How about we settle on Twix?
  • rcs1000 Posts: 54,245
    AlsoLei said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    I've just refreshed my CV, and turned to ChatGPT, Bard, and Bloom for help.

    They were most useful for compressing what I'd written to convey the same points in about 3/4 of the space, and also did a good job with making some of my achievements sound a bit more concrete.

    On the other hand, they tended to undersell me a bit - one turned every instance of "Principal" and "Lead" into "Senior". They also had a tendency to fiddle around with numbers, turning 12 years' experience into 5, and a team of 45 people into one of 20.

    So, as ever: pretty useful, as long as you keep an eye on them.
    Have you considered the possibility that the AI knows about your habit of embellishing, and was just trying to make things more... realistic?
  • Sean_F Posts: 36,013
    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I'd make the Conservatives slight favourites to hold Tamworth, on the back of that poll.
  • algarkirk said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    That's an interesting observation. Universities are shitting themselves (or should be) about AI and students using it to cheat in assessment. We have already had to ditch most online assessment due to cheating issues (students chatting on WhatsApp during exams etc) and now any essay based questions in an online exam are so easily answered with ChatGPT and the like.

    So it will be back to exam halls, and handwritten papers. We use software such as Turnitin to detect plagiarism in work, but as far as I know Turnitin does not pick up ChatGPT, and will struggle to. Your test is an interesting example where cheating can be found.

    However, I think humans can detect ChatGPT answers at the moment, at least in my limited field. We had some examples in a chemistry re-sit exam. The language used to answer some of the longer-form questions is clearly not from the student (non-English extraction).
    I started university 50 years ago exactly. It is usually in the top 10 or so UK unis in the current lists. We had a stellar outcome in my department in finals - 1976. Nearly 8% got firsts; the rest equally divided between 2.1 and 2.2. In many departments there were no firsts at all. In those days that was a sign of a truly rigorously academic department.

    The classification depended entirely on 9 three hour papers in an exam room over 2 weeks of that lovely summer.

    There is much to be said for both elements of this experience - Firsts being really rare, and performance completely immune from the possibility of cheating.

    There was the added bliss of knowing that you could spend quite a bit of time doing extra curricular stuff without pressures of graded coursework, dissertations and modular exams every fortnight. Our much maligned and wonderful young people could do with a bit of that.
    I think you have to be very clear what it is that you are testing and make sure the test reflects that.

    Sometimes, if the student can use AI to get the correct results then that's fine. People in the real world can use AI too. If you want to test that someone holds the relevant knowledge in their head, then test in exam conditions.

    I think it forces you to think more carefully about what the purpose of the test is, rather than simply to set an essay question.
  • turbotubbs Posts: 15,513
    Farooq said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    They get told not to cheat in exams using WhatsApp but they do it anyway. We have electronic resources for our content, and when students access it, it leaves a log, i.e. we can see who accesses it and when. This summer a student went to the toilet during an exam (allowed) for 20 minutes (suspicious) and was shown to have accessed the content online from a hidden phone (very much not allowed).

    Anyone who cheats generally thinks they are (a) cleverer than they actually are and (b) not going to get caught.

    They are very often wrong.
    Can you say "often"? By definition, you don't know how many successful cheating attempts went undetected.
    Which is a fair comment, and I did think about saying it.

    I think we have very little cheating during in person, invigilated exams.

    We had a lot more during online exams, which we had no way of invigilating (unlike some online exams, we did not even try to see what the students did). We detected a fair amount of it: in one exam of mine, the same wrong answer occurred where a chemical structure had been copied from the paper incorrectly. The chances of more than 10 students making exactly the same mistake are small.

    We had collusion in other papers and some of the culprits coughed to it.

    But yes, I am certain some did get away with it.

    And so it's back to 19th-century methods of assessment...
  • 148grss said:

    148grss said:

    Leon said:

    148grss said:

    darkage said:

    148grss said:

    Leon said:

    148grss said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    There are tantalising rumours on TwitterX that we have met aliens who can travel across space to visit us.

    It's bizarre that more people aren't talking about this; if it is true* it is one of the biggest news stories in human history.





    * It is not true
    It really might be true

    When I bang on about AI, @Benpointer always says “get back to me when a robot can stack my dishwasher”. And it’s a fair point

    Well, now a robot can easily stack a dishwasher, and what’s more it can learn this simply by watching you do it first

    https://x.com/tesla_optimus/status/1705728820693668189?s=46&t=bulOICNH15U6kB0MwE6Lfw

    “With enough strength and dexterity, Tesla's Bot could handle almost all physical tasks by simply looking at video clips of people doing said tasks.

    Picking up a vacuum and running it through the house. Sorting and folding laundry. Tidying up the house. Moving material from point A to point B. Picking up trash and placing it in a bin. Pushing a lawnmower. Monitor an area for safety-related concerns. Laying bricks. Hammering nails. Using power tools. Clean dishes... etcetera, etcetera, etcetera.”

    https://x.com/farzyness/status/1706006003135779299?s=46&t=bulOICNH15U6kB0MwE6Lfw
    I can find you a parrot that can recite poetry - doesn't mean it can write you any.

    Are learning machines cool? Yes. But at the end of the day they're automatons that can, at a basic level, do simple tasks in relatively stable environments. Complex tasks in other scenarios are out of reach. All the hype is just sales - of course people who own stocks in AI companies would claim it would end the world or be the gadget of the future, because they'll rake the money in.

    It is not coincidental that the new "AI is going to be able to do everything" line came after the "Meta is going to be the new frontier" idea fell through and the "NFTs and the blockchain are going to revolutionise everything" idea was proven false. Capitalism always needs a new frontier to exploit and sell and commodify, and tech bros think they can build the next one. So far they're failing.
    In my view the free chat programmes available online have written-reasoning capabilities that exceed those of most graduate-level professionals with over 20 years of high-level report-writing experience. They can write better than people who have been doing decision making and report writing for their entire careers. From a management point of view they surpass most humans in knowing how to respond to situations in difficult correspondence exercises.

    It is an inevitable human reaction to deny this or not look at it, but it won't help.
    Do you mean ChatGPT, or is there a specific programme you have in mind? I think the most convincing ones are good at creating an approximation of human writing, until you learn it is either just lying (making up references, quotes and general facts) or spewing nonsense (this often happens with coding, where the code looks correct but is really just nonsense).

    The way this stuff currently works is by taking the input, analysing which words are associated with the words relevant to that topic, and picking each word based on the likelihood that it is the most common word to follow the previous ones (a toy sketch of that idea is at the end of this comment). That requires it to read (and arguably steal) the work of existing people. It cannot think - it is not creating. It is a parrot - a big parrot, a complex parrot, a parrot that can maybe do some simple things - but a parrot. And that's selling parrots short, because I believe parrots are capable of cognition.
    So, is the robot video real, or not?

    You still haven't told us, and you still don't realise the significance of that
    Do I personally believe in the reality of a video on Twitter? I don't. Do I personally know it isn't real? No - that's why I have said I will wait for credible sources to do some reporting rather than just trust people chatting on a notoriously untrustworthy social media platform about a topic where there is so much undue hype.
    That's nothing. They also developed a robot that can play ping-pong!

    https://twitter.com/i/status/1687690852456402944
    I hear they have also devised an automaton that can do the work of a police officer - although it does need some organic matter. Apparently it is half man, half machine - all cop.
    That was devised before you were born!
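    On the "pick the most likely next word" description a few posts up, here is a toy Python sketch of that idea, using a hand-counted bigram table over a tiny made-up corpus. Real systems predict tokens with a neural network over a long context, so this is only a caricature of the principle, not how ChatGPT is implemented.

    from collections import Counter, defaultdict

    # Tiny made-up corpus; count which words follow each word, and how often.
    corpus = ("the cat sat on the mat . the dog sat on the rug . "
              "the cat chased the dog .").split()

    next_counts = defaultdict(Counter)
    for word, following in zip(corpus, corpus[1:]):
        next_counts[word][following] += 1

    def continue_text(prompt, length=8):
        """Greedily append the most common next word, starting from the prompt."""
        words = prompt.split()
        for _ in range(length):
            counts = next_counts.get(words[-1])
            if not counts:
                break  # a word never seen in the corpus: the toy model has nothing to say
            words.append(counts.most_common(1)[0][0])
        return " ".join(words)

    print(continue_text("the dog"))  # remixes word sequences already present in the corpus

    Even this caricature makes the parrot point: such a system can only recombine patterns it has already seen in its training text.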
  • Leon Posts: 47,730
    rcs1000 said:

    Regarding the video, I don't think it's AI generated, but I do think it is sped-up compared to reality. If you look at the moments when a block is pushed by a human, or when one falls slightly, they happen far too rapidly. (Indeed, they look like the block is snapping into place.)

    That's a tell tale sign that it has been sped up.

    Edit to add: I think the speeding up is also the reason it looks fake. There are too many moments where it seems slightly off.

    I think you're right, I just had a go at various speeds. Looks most realistic at somewhere between 0.5x and 0.75x the original speed, so it has been accelerated - but not massively

    What an odd thing to do. The robot is impressive enough as is?
  • Farooq Posts: 10,798
    glw said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    That's an interesting observation. Universities are shitting themselves (or should be) about AI and students using it to cheat in assessment. We have already had to ditch most online assessment due to cheating issues (students chatting on WhatsApp during exams etc) and now any essay based questions in an online exam are so easily answered with ChatGPT and the like.

    So it will be back to exam halls, and handwritten papers. We use software such as Turnitin to detect plagiarism in work, but as far as I know Turnitin does not pick up ChatGPT, and will struggle to. Your test is an interesting example where cheating can be found.

    However, I think humans can detect ChatGPT answers at the moment, at least in my limited field. We had some examples in a chemistry re-sit exam. The language used to answer some of the longer-form questions is clearly not from the student (non-English extraction).
    I think TurnItIn were looking at a ChatGPT detection tool, but nothing they’ve done is very accurate yet. We’ve decided not to trust any AI-detection software.
    Hypothetically, if I were going to offer CaaS (Cheating as a Service), I would definitely train and test against detection tools. ChatGPT isn't the software you should worry about (they at least want to appear to be good), but there are almost certainly other groups training AI to do bad things like generate essays on demand. So being able to catch ChatGPT offers a false sense of security, a bit like all the rubbish about watermarking AI-generated images when the tools we should be worrying about will be watermark-free.
    If I were running cheating detection as a service (CDaaS), I would treat large volumes of inputs as a likely attempt to train a CaaS and feed it nonsense results, or some weirdly overfitted responses that punished the use of semi-common words like "study", "confidence", or "historical", forcing your CaaS to write weirdly contorted essays excluding those words.
  • Sean_F Posts: 36,013
    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    Certainly, when I was doing my MA, I was impressed by just how much useful material there is online, which you can find quite easily by typing in certain key words (albeit most archival material is not online, and much of it is not even catalogued, e.g. the Clinton Papers at Manchester University, which I found very useful).
  • glw Posts: 9,556
    Farooq said:

    If I were running cheating detection as a service (CDaaS), I would treat large volumes of inputs as a likely attempt to train a CaaS and feed it nonsense results, or some weirdly overfitted responses that punished the use of semi-common words like "study", "confidence", or "historical", forcing your CaaS to write weirdly contorted essays excluding those words.

    For sure it will be an arms race. But nobody should kid themselves that there will be easy fixes to the abuse of AI.
  • Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
  • Farooq said:

    Can we agree on what to call Twitter now please?
    I've seen Twitter, X, TwitterX, the artist formerly known as Twitter.

    How about we settle on Twix?

    In the rare event that I have to refer to the website in question I find that, "Elon Musk's fascist-friendly plaything," does the job perfectly.
  • Nigelb Posts: 63,154

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is, a word that looks like a French or Russian word but which I've just made up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made-up word. The ones who try to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    Is that a problem - or a useful skill?
  • Options
    TazTaz Posts: 11,411

    Not long now before Suella takes to the boards.

    Let's see how her audition to become next PM works out.

    Are we all excited?

    I’m so excited
    I just can’t hide it
    I’m about to lose control
    And I think I like it…….
  • Options
    darkage said:



    (Snip)

    I had a similar view on GPT to yours prior to this. But a lawyer who runs an outsourcing firm explained that they thought it was rapidly advancing, and that they were starting to use it in work contracted out to them by local authorities: automatically generating correspondence relating to civil law breaches and drafting legal notices, albeit corrected by humans. So what is clear is that it is already here and taking over work done by professionals.

    That helps with productivity initially - you replace the low level legal work done by associates and reviewed by partners. Saves money and makes the partners richer.

    But means we don’t train up the next generation of lawyers.

    Some might argue that is a good thing…
  • Options

    eek said:

    darkage said:

    Eabhal said:

    Taz said:

    Foxy said:
    So.

    The market is deciding. The hysteria about the announcement last week was partly synthetic and partly misplaced. Just because people can sell something doesn’t mean they will.

    Auto makers work on cycle times of years on products and platforms. They’d not be likely to chop and change at the govt's whim.
    But this can't be true. It was Keir Starmer forcing Nissan et al to ditch petrol. Sunak saved people from having to buy an electric car, it was in all the right newspapers and TV news shows. Thanks to Rishi making Long-Term Decisions for a Brighter Future, the dread threat of all EV by 2030 was removed.

    Nissan must be mistaken.
    Nissan don't make new cars at the £13k entry level range of the market like the Kia Picanto etc

    The Nissan Juke is their cheapest car at £21k: https://www.nissan.co.uk/vehicles/new-vehicles.html

    By 2030 it seems entirely plausible that an electric Juke will be as cheap as a petrol Juke, but it does not look likely that an electric Picanto would be available as cheaply as a petrol Picanto.

    So again, if in six years' time you could get a cheap petrol vehicle like the Picanto for £13k in real terms, but the cheapest electric was £21k in real terms (currently the cheapest is £27k), should the Picanto be outlawed and people who want to buy it be forced to pay eight grand more?

    We need to continue with what the market has been doing from Tesla onwards, which is to start at the top of the market and work down with electrification, not the other way around. If in 2030 the only petrol vehicles the market still offers are 1.0 litre runarounds like the Picanto, simply because electrification of them isn't affordably ready yet, then what's the harm in that?
    We shouldn't really go by capital price but by the cost paid per month by the purchaser.
    Given that only a very small fraction of people buying new cars pay cash in full, and roughly 80-90% take out PCP (we can debate the wisdom of going the PCP route, but for this we simply recognise that it is the default route to new car purchase at the moment and therefore what the market will be following), we need to look at the main monthly expenditure of the purchaser.

    Which is PCP monthly payment plus petrol or electricity costs.

    Petrol comes in at c. £1.50 per litre at the moment.

    The majority of those buying electric cars will be recharging at home overnight (70%+: there's a need to address those who cannot do this, but, again, the overall market is driven by those who can, and the core task is to fill in the gap for those who can't). At the moment, an EV tariff from Octopus gives £0.075 per kWh overnight.

    The Picanto does c. 13 miles per litre. Assuming the default given by Kia on their finance calculator of 10,000 miles per year, that costs £1,155 per year in petrol, or £96.30 per month. The finance calculator for the Picanto gives (at 10% down payment of £1,350) a cost of £206.58 per month on PCP. This leads to a cost on PCP plus fuel of £302.88 per month to the purchaser.

    The Ceed comes in at £21k, so the finance for a putative £21k Kia EV can be looked at on the same site (which helps) and comes out at £342.02 per month (using the same £1,350 deposit, which is under 10% this time and probably incurs a slightly higher interest rate, but we need it to be comparable for the purchaser). If the EV has an efficiency similar to the MGZ4 (3.8 miles/kWh), it would cost £197.37 per year in electricity, or £16.45 per month.
    Cost is then £358.47 per month to the purchaser for PCP plus electricity.

    The EV is therefore about 18% more expensive per month to the purchaser (£358.47 vs £302.88) than the £13k ICE, rather than the 61% premium suggested by the sticker prices. The EV price only needs to fall to about £18k to match the monthly affordability of a £13k ICE, to all intents and purposes.
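    For anyone who wants to check or tweak those numbers, here's a minimal sketch of the monthly-cost arithmetic in Python. It simply assumes the figures quoted above (the Kia PCP quotes, £1.50/litre petrol, the 7.5p/kWh overnight tariff, 13 miles per litre, 3.8 miles/kWh and 10,000 miles a year); the monthly_energy_cost helper is just an illustrative name, not anything official.

```python
# Rough monthly cost comparison for a petrol Kia Picanto vs a putative £21k Kia EV,
# using the assumptions quoted in the post above (not authoritative figures).

MILES_PER_YEAR = 10_000
PETROL_PER_LITRE = 1.50      # £/litre, as quoted above
OVERNIGHT_TARIFF = 0.075     # £/kWh, Octopus overnight EV rate quoted above

def monthly_energy_cost(miles_per_year, miles_per_unit, price_per_unit):
    """Fuel or electricity cost per month, given efficiency in miles per litre/kWh."""
    units_per_year = miles_per_year / miles_per_unit
    return units_per_year * price_per_unit / 12

# Petrol Picanto: £206.58/month PCP (Kia calculator, £1,350 deposit), ~13 miles/litre
picanto_total = 206.58 + monthly_energy_cost(MILES_PER_YEAR, 13.0, PETROL_PER_LITRE)

# Putative £21k EV: £342.02/month PCP (same site, same deposit), ~3.8 miles/kWh
ev_total = 342.02 + monthly_energy_cost(MILES_PER_YEAR, 3.8, OVERNIGHT_TARIFF)

print(f"Picanto (petrol): £{picanto_total:.2f}/month")   # ~£302.7
print(f"£21k EV:          £{ev_total:.2f}/month")         # ~£358.5
print(f"Monthly premium:  {ev_total / picanto_total - 1:.0%}")  # ~18%, vs ~61% on sticker price
```

    Swapping price_per_unit for a commercial rapid-charging rate reruns the same sums for drivers who can't charge at home.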
    A large portion of people buying cheap small cars are parking on the street not off road. I certainly was.

    Even on your own average, the 30% who will not be recharging at home overnight need to be included in the maths, and I strongly suspect that 30% is disproportionately made up of those buying smaller, cheaper vehicles.

    Compare like-for-like by comparing recharging rates at commercial charging stations and redo your maths.

    Want to fix electric for everyone? Addressing the charging issue is the biggest thing to tackle, rather than quibbling over a year or two in the transition to electric.
    Perhaps we should ban on-road parking,
    like the Japanese?

    Would free up space equivalent to 16 motorways.
    My Spanish Father-in-Law doesn't understand why the UK doesn't build underground carparks as all the towns in Spain seem to have. Simples - because we are incompetent and corrupt. And they are not.

    Hang on - I hear right wing voices say - the Spanish ARE corrupt. And that is true. And yet they can stick underground car parks into their towns and we can't afford to...
    Planners don’t approve them because more parking encourages driving. Or something.

    It's not entirely this. They are also expensive to build, and there are complex land assembly issues etc.
    Costs money and because of the way the Treasury looks at everything only the cheapest options are allowed...
    Friend of a friend owned some land in west London and wanted to develop underground parking. Kensington & Chelsea said no because they were looking to reduce the number of parking spaces in the borough
    Camden is similarly anti-car, on ideological grounds.
  • Options

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
  • Options
    TazTaz Posts: 11,411
    An interesting twitter thread.

    University tuition fees. In 2019 a quarter of the cost of universities was going towards pensions.

    Yet the students merrily support the strikers, because, Tories innit.


    https://x.com/ironeconomist/status/1693597906299756810?s=61&t=s0ae0IFncdLS1Dc7J0P_TQ
  • Options
    Nigelb said:

    Meanwhile, Weekend at Bernie's, or not ?

    Admiral Viktor Sokolov, the commander of Russia’s Black Sea Fleet, is apparently not dead, according to this photo released by the MOD today, despite Ukraine’s claims to have killed him last week.
    https://twitter.com/maxseddon/status/1706624970535669817

    That chair looks like a hospital bed that has been cranked up
  • Options
    WhisperingOracleWhisperingOracle Posts: 8,503
    edited September 2023
    Nigelb said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    Is that a problem - or a useful skill?
    I'm not quite sure, because, as OnlyLivingBoy says, there are all sorts of issues.

    Copying and then re-editing content in our own voice is something that we all increasingly do. But is that a good thing?

    But there's another issue, because my nephew also mentioned some quite specific things.

    Rather than only re-editing in your own voice, it seems a lot of students enjoy handing in essays where about half the paragraphs are entirely the work of ChatGPT and about half their own, then re-editing it to read as a whole. I do think some of these things will raise future questions about how we learn, and what we are devoting time to learning to do.
  • Options
    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    That is NOT a constituency poll. It is an extrapolation from national polling.
  • Options
    kinabalukinabalu Posts: 39,501
    Sean_F said:

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I'd make the Conservatives slight favourites to hold Tamworth, on the back of that poll.
    I got 4.4 on them the other day. Seemed generous - even to a Labour Landslide pundit like my good self.
  • Options

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
  • Options
    bondegezoubondegezou Posts: 7,998
    Andy_JS said:

    Dura_Ace said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    Thanks, mate. Keep us posted.
    I can keep you posted on this.

    It’s not happening today or this year, and there are a lot of gullible people on Twitter.
    What would it look like if/when it does happen?
    It's hard to say what something will look like when we haven't built it, or anything remotely like it.

    I would guess there will be multiple steps to an AGI. It's not just going to appear overnight fully formed. There will be impressive jumps in what LLMs and generative AI can do along the way. An AGI will be able to reason from first principles, which means solving tasks without having these vast databases of everything that's ever been on the Internet. An AGI also won't need prompts! ChatGPT is great, but it answers you. AGI would, by definition, be like a person, able to hold up its end of a conversation!
  • Options
    MexicanpeteMexicanpete Posts: 25,475
    ...
    Taz said:

    An interesting twitter thread.

    University tuition fees. In 2019 a quarter of the cost of universities was going towards pensions.

    Yet the students merrily support the strikers, because, Tories innit.


    https://x.com/ironeconomist/status/1693597906299756810?s=61&t=s0ae0IFncdLS1Dc7J0P_TQ

    So you're back on board with the Conservatives. Excellent, well done!
  • Options
    MalmesburyMalmesbury Posts: 44,816
    Farooq said:

    Can we agree on what to call Twitter now please?
    I've seen Twitter, X, TwitterX, the artist formerly known as Twitter.

    How about we settle on Twix?

    Twatter (TM by D. Cameron)
  • Options

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Yes. There are lots of qualities and abilities that you can't test with an essay. No-one would think of doling out driving licenses to people who wrote a good essay on the fundamentals of safe driving.

    Why is it the test of choice for so much else?
  • Options
    Farooq said:

    Can we agree on what to call Twitter now please?
    I've seen Twitter, X, TwitterX, the artist formerly known as Twitter.

    How about we settle on Twix?

    Left Twix or Right Twix?
  • Options
    TimS said:

    New Yougov out:

    Con: 27% (+3 from 13-14 Sep)
    Lab: 43% (-2)
    Lib Dems: 10% (+1)
    Reform UK: 8% (=)
    Green: 7% (-2)
    SNP: 4% (+1)

    https://x.com/YouGov/status/1706625693302325430?s=20

    Fieldwork all post Rishi's announcement cancelling climate change. Three polls now so I think we can declare a bounce, of around 4-5%.

    I think this shows the impact when you have one party opening up a contentious issue where the public is probably more evenly split than polling VI. It got a huge amount of press, and if even say 35% of people agreed with Rishi that might have been enough to push the polling up.

    Long term it's a reversion to the polling numbers earlier in the summer, before quite a marked dip the week before the net zero announcements.

    Broken, sleazy Labour and Greens on the slide :lol:
  • Options
    MalmesburyMalmesbury Posts: 44,816

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Most ultra-brainiacs, in my experience, couldn't organise a piss up in a brewery. They can find the Higgs, given a lot of real, expert, project managers to turn their ideas into steel, copper, concrete etc.

    Without Oppenheimer to cat herd the scientists, the Bomb might have taken longer. Without General Groves, the project might well have meandered to failure.

    I like the story of his first meeting with the principals. He hammered out how he was going to run the thing, then said he had to dash (cutting the meeting short). The dash was to catch a train to finalise the purchase of a zillion acres of land needed for the project - something that had been held up for months.
  • Options
    LeonLeon Posts: 47,730

    Andy_JS said:

    Dura_Ace said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    Thanks, mate. Keep us posted.
    I can keep you posted on this.

    It’s not happening today or this year, and there are a lot of gullible people on Twitter.
    What would it look like if/when it does happen?
    It's hard to say what something we haven't built will look like because we haven't built it or anything remotely like it.

    I would guess there will be multiple steps to an AGI. It's not just going to appear overnight fully formed. There will be impressive jumps in what LLMs and generative AI can do along the way. An AGI will be able to reason from first principles, which means solving tasks without having these vast databases of everything that's ever been on the Internet. An AGI also won't need prompts! ChatGPT is great, but it answers you. AGI would, by definition, be like a person, able to hold up its end of a conversation!
    Metaculus thinks AGI will arrive around 2026-2030. Elon Musk reckons by 2029, possibly sooner.


    https://venturebeat.com/ai/elon-musk-reveals-xai-efforts-predicts-full-agi-by-2029/



    Intriguingly that was Kurzweil's prediction 6 years ago, years before ChatGPT

    "At the 2017 SXSW Conference in Austin, Texas, Kurzweil gave a typically pinpoint prediction.

    “By 2029, computers will have human-level intelligence,” he said. “That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are. Today, that’s not just a future scenario. It’s here, in part, and it’s going to accelerate.”"


    The DeepMind founder says "in the next few years, at most a decade", others say 5 years, and so on and so forth

    So the idea this is "remote" is either fanciful - or wishful thinking. This is now close
  • Options
    Andy_JSAndy_JS Posts: 27,157
    HS2 should have linked up with HS1, and going to Euston was always a stupid idea, according to this article.

    https://reaction.life/mark-bostock-has-been-proved-totally-right-about-hs2/

    "It is hard to imagine a greater procurement disaster than HS2, the transformative high speed rail line between London and Scotland, currently being axed bit by bit, as the costs go through the roof.

    Mark Bostock, a former Arup consultant who successfully led the construction of HS1 from St Pancras to the Channel Tunnel and a former client of ours, would have had a few things to say about it. Sadly he passed away in August but he has been proven totally right about HS2. In fact, it is the greatest vindication in UK transport policy since promoters of the Stockton & Darlington Railway said it would be better than relying on canals.

    Mark led a proposal on behalf of Arup which would have seen HS2 go via a different route. It would link up with HS1 north of St Pancras. The route would have gone via a hub station connecting with Heathrow and the Great Western Railway near Iver. As now, the route would come into Old Oak Common, but never come into Euston which is simply too small. I can hear him saying now “They’ve got the alignment wrong, the most important decision in a railway. It is going to be a disaster.”"
  • Options
    AlsoLeiAlsoLei Posts: 749

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Harold Wilson? Youngest C20th Oxford don. Probably also one of the highest-rating PMs on most of the other requirements you mention, at least for his first term in office.

    ...but despite that, I'm not sure many would put him at the top of their personal "best PMs" list.
  • Options
    Andy_JSAndy_JS Posts: 27,157
    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I wish I'd posted my prediction for this seat yesterday because it was very similar to this, with Con + Reform very likely to get about 50% of the vote between them.
  • Options
    MexicanpeteMexicanpete Posts: 25,475
    AlsoLei said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Harold Wilson? Youngest C20th Oxford don. Probably also one of the highest-rated PMs on most of the other requirements you mention, at least for his first term in office.

    ...but despite that, I'm not sure many would put him at the top of their personal "best PMs" list.
    Here's one who would.

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
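    For anyone who wants to try that second pattern, here is a minimal sketch in Python of the "feedback, not answers" prompt, using the openai package (v1.x style). The model name, prompt wording and the feedback_on_essay helper are illustrative assumptions rather than anything described above.

        # Minimal sketch of the "ask for feedback, not the answer" pattern.
        # Assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the
        # environment; the model name and prompt wording are illustrative only.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        def feedback_on_essay(question: str, draft: str) -> str:
            """Ask the model what a draft answer has missed, without asking it to rewrite."""
            prompt = (
                f"The essay question was: {question}\n\n"
                "My draft answer is below. Do not rewrite it. "
                f"List the main points or arguments I have missed.\n\n{draft}"
            )
            response = client.chat.completions.create(
                model="gpt-4",  # assumed model name
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content

        # Hypothetical usage:
        # print(feedback_on_essay("Why did the Weimar Republic fail?", my_draft_text))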
    Mexicanpete Posts: 25,475
    edited September 2023
    Andy_JS said:

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I wish I'd posted my prediction for this seat yesterday because it was very similar to this, with Con + Reform very likely to get about 50% of the vote between them.
    Tories most likely to win Tamworth, but on their own terms. Why are Reform going to fall into line? Some will, surely many won't.
    AlsoLei Posts: 749
    .

    Andy_JS said:

    Dura_Ace said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    Thanks, mate. Keep us posted.
    I can keep you posted on this.

    It’s not happening today or this year, and there are a lot of gullible people on Twitter.
    What would it look like if/when it does happen?
    It's hard to say what something we haven't built will look like because we haven't built it or anything remotely like it.

    I would guess there will be multiple steps to an AGI. It's not just going to appear overnight fully formed. There will be impressive jumps in what LLMs and generative AI can do along the way. An AGI will be able to reason from first principles, which means solving tasks without having these vast databases of everything that's ever been on the Internet. An AGI also won't need prompts! ChatGPT is great, but it answers you. AGI would, by definition, be like a person, able to hold up its end of a conversation!
    I suspect that one of the next milestones on the path from current generative AI towards AGI will be some form of continuous re-training. Retrieval Augmentation / RETRO is the hot new thing, and certainly points in that direction - but it's going to take much, much more computational power to get there.
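    For anyone curious what that retrieval idea looks like at its crudest, here is a toy Python sketch of retrieval-augmented prompting: fetch the most relevant stored passages and prepend them to the question before it reaches the model. RETRO proper wires retrieval into the model's own layers, so this prompt-level version is only a loose analogue, and the documents and bag-of-words similarity below are invented for illustration.

        # Toy retrieval-augmented prompting: rank stored passages against the
        # query with a bag-of-words cosine similarity, then prepend the best
        # matches to the prompt. Real systems use learned embeddings and far
        # larger stores; everything below is illustrative only.
        from collections import Counter
        import math

        documents = [
            "HS2 is a planned high-speed railway between London and the North of England.",
            "The Droop quota is used to allocate seats under the single transferable vote.",
            "Large language models are trained on text scraped from the internet.",
        ]

        def bag_of_words(text: str) -> Counter:
            return Counter(text.lower().split())

        def cosine(a: Counter, b: Counter) -> float:
            dot = sum(a[w] * b[w] for w in a)
            norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
            return dot / norm if norm else 0.0

        def retrieve(query: str, k: int = 1) -> list[str]:
            q = bag_of_words(query)
            return sorted(documents, key=lambda d: cosine(q, bag_of_words(d)), reverse=True)[:k]

        def build_prompt(query: str) -> str:
            context = "\n".join(retrieve(query))
            return f"Use the context below to answer.\n\nContext:\n{context}\n\nQuestion: {query}"

        print(build_prompt("How are seats filled under the single transferable vote?"))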
    Andy_JS Posts: 27,157
    edited September 2023
    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I don't think this is a constituency poll as such. It's a projection based on other things, which may be national polling, demographics, etc. Happy to be corrected if wrong.
    bondegezou Posts: 7,998
    edited September 2023

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    That is NOT a constituency poll. It is an extrapolation from national polling.
    If that just represents national polling, then we have to add on a by-election factor. By-elections usually show bigger swings. In which case, this should be a walk in the park for Labour.

    Or have they already done that?
    Malmesbury Posts: 44,816

    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
    Yes - and it is how ChatGPT is actually useful for various tasks. Asking it to write more than simple bits of code gets you code that does the wrong thing. But it can suggest chunks of code - ideas, things to follow up on.
    TimS Posts: 9,962
    Andy_JS said:

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I don't think this is a constituency poll as such. It's a projection based on other things, which may be national polling, demographics, etc.
    Oh, MRP is it? That's annoying. If that's the case then I think Labour have a very good chance. A by-election should always give a worse result for the incumbent than MRP.

    Andy_JS said:

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    I wish I'd posted my prediction for this seat yesterday because it was very similar to this, with Con + Reform very likely to get about 50% of the vote between them.
    Tories most likely to win Tamworth, but on their own terms. Why are Reform going to fall into line? Some will, surely many won't.
    One thing you have to admire about the typical Reform voter. Agree or disagree, they will defend to the death the right to free non-woke speech.
    Andy_JS said:

    HS2 should have linked up with HS1, and going to Euston was always a stupid idea, according to this article.

    https://reaction.life/mark-bostock-has-been-proved-totally-right-about-hs2/

    "It is hard to imagine a greater procurement disaster than HS2, the transformative high speed rail line between London and Scotland, currently being axed bit by bit, as the costs go through the roof.

    Mark Bostock, a former Arup consultant who successfully led the construction of HS1 from St Pancras to the Channel Tunnel and a former client of ours, would have had a few things to say about it. Sadly he passed away in August but he has been proven totally right about HS2. In fact, it is the greatest vindication in UK transport policy since promoters of the Stockton & Darlington Railway said it would be better than relying on canals.

    Mark led a proposal on behalf of Arup which would have seen HS2 go via a different route. It would link up with HS1 north of St Pancras. The route would have gone via a hub station connecting with Heathrow and the Great Western Railway near Iver. As now, the route would come into Old Oak Common, but never come into Euston which is simply too small. I can hear him saying now “They’ve got the alignment wrong, the most important decision in a railway. It is going to be a disaster.”"

    Sure - what we are building is a little mad. But the way to make it less mad is to give it a purpose. Building it to the assumed new 400kph standard and then running at 300kph or less, building it for a lot of trains an hour running to a lot of destinations and then barely running any - that is truly bonkers.
    bondegezou Posts: 7,998

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    Oxbridge is great at burnishing egos. Which is good. Most people need an ego boost and a half. Not so good, however, when you start with someone who's already a narcissist.

    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
    Yes - and it is how ChatGPT is actually useful for various tasks. Asking it to write more than simple bits of code gets you code that does the wrong thing. But it can suggest chunks of code - ideas, things to follow up on.
    Is there a record of what's gone through ChatGPT, or is it private?

    E.g. I believe universities are concerned about this and trying to crack down on it, but presumably only where students get it to write the work itself. If you were to, say, put the draft of an essay in and ask "what have I missed?", would it be able to handle that? And would that be risking getting done for cheating?
    Andy_JS Posts: 27,157

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    That is NOT a constituency poll. It is an extrapolation from national polling.
    Explains why they've got the Greens on 6%, which I'm 100% certain isn't going to happen.
    Nigelb Posts: 63,154
    No question that Hunter Biden is a dodgy character.
    But now that his plea bargain has been blown up, and he is facing trial in court, his lawyers are not unreasonably asking questions about the reliability of some of the evidence against him.

    The reliability of the digital evidence - mobile phones as well as laptop - and the legal propriety of some of the searches (quite possibly criminal), are seriously in question.

    I understand most people are bored rigid by the details of this case, but for those who aren't, this is a very interesting account.
    https://www.emptywheel.net/2023/07/18/wapo-is-suppressing-information-that-might-debunk-devlin-barretts-latest-spin/

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    Oxbridge is great at burnishing egos. Which is good. Most people need an ego boost and a half. Not so good, however, when you start with someone who's already a narcissist.
    Please don’t tarnish Cambridge with Oxford.

    Cambridge creates nothing but modest self effacing people.
    Leon Posts: 47,730

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    Andy_Cooke Posts: 4,819

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Yes. There are lots of qualities and abilities that you can't test with an essay. No-one would think of doling out driving licences to people who wrote a good essay on the fundamentals of safe driving.

    Why is it the test of choice for so much else?
    Because it's an easy way of doing the assessment. Whether or not it accurately reflects the knowledge or skills of the testee is secondary.

    IMHO, the best way of doing a test is scenario-based. "You are x, in situation y. You need to provide outcome z. You have access to everything you would have in a real life situation [eg open book/access to internet, etc] other than contacting someone else to get them to do it for you. You have three hours to provide z."

    Because that's what employers or anyone wanting your output will be wanting. Only thing is that it's difficult and resource-intensive to provide this way of doing things.

    So we do what behavioural psychologists call "changing the question," which is what we do when the answer is too hard: we come up with something that we can convince ourselves provides similar outcomes that's far easier to do. Hence essays and closed-book exams.

    Farooq said:

    Can we agree on what to call Twitter now please?
    I've seen Twitter, X, TwitterX, the artist formerly known as Twitter.

    How about we settle on Twix?

    Twatter (TM by D. Cameron)
    Too many Twix make a Twax?
    Foxy Posts: 44,995

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of the cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one the teachers are trying to train themselves to recognise when they see it.
    So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a starting point, checks the facts, rewrites it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    Oxbridge is great at burnishing egos. Which is good. Most people need an ego boost and a half. Not so good, however, when you start with someone who's already a narcissist.
    Please don’t tarnish Cambridge with Oxford.

    Cambridge creates nothing but modest self effacing people.
    And Soviet spies of course.
    Verulamius Posts: 1,438
    Last week the results of the Crossbencher hereditary peers' by-election were announced, with Lord Meston and Lord De Clifford elected.

    https://www.parliament.uk/globalassets/documents/lords-information-office/2023/hereditary-peers-by-election-result-palmer-hylton.pdf

    The election was by STV, and the count is a good example of how surplus votes are transferred: Lord Meston was elected on first preferences, and his surplus was distributed before any lower-ranked peers were eliminated.
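    For anyone unfamiliar with how a surplus transfer works, here is a simplified Python sketch of the principle (Droop quota plus a Gregory-style fractional transfer). The ballot figures are invented, and the counting rules actually used in Lords by-elections may differ in detail.

        # Simplified STV surplus transfer: compute the Droop quota, then pass the
        # winner's surplus to each of their ballots' next preferences at a reduced
        # (fractional) value. Figures below are invented for illustration.

        def droop_quota(valid_votes: int, seats: int) -> int:
            return valid_votes // (seats + 1) + 1

        def transfer_surplus(ballots, winner, totals, seats=2):
            """ballots: list of (weight, preference list); totals: current vote tallies."""
            quota = droop_quota(int(sum(totals.values())), seats)
            surplus = totals[winner] - quota
            if surplus <= 0:
                return totals  # elected exactly on quota, or not elected: nothing to pass on
            factor = surplus / totals[winner]  # value at which each of the winner's ballots moves on
            for weight, prefs in ballots:
                if prefs and prefs[0] == winner and len(prefs) > 1:
                    totals[prefs[1]] += weight * factor
            totals[winner] = quota
            return totals

        # Invented example: "Meston" elected on first preferences, surplus shared out.
        ballots = ([(1, ["Meston", "De Clifford"])] * 12
                   + [(1, ["Meston", "Another"])] * 6
                   + [(1, ["Another"])] * 9)
        totals = {"Meston": 18.0, "De Clifford": 0.0, "Another": 9.0}
        print(transfer_surplus(ballots, "Meston", totals))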
    bondegezou Posts: 7,998

    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
    Yes - and it is how ChatGPT is actually useful for various tasks. Asking it to write more than simple bits of code gets you code that does the wrong thing. But it can suggest chunks of code - ideas, things to follow up on.
    Is there a record of what's gone through ChatGPT, or is it private?

    E.g. I believe universities are concerned about this and trying to crack down on it, but presumably only where students get it to write the work itself. If you were to, say, put the draft of an essay in and ask "what have I missed?", would it be able to handle that? And would that be risking getting done for cheating?
    ChatGPT has a record, but I don't think anyone can see it. (Maybe with a court order?)

    ChatGPT can handle this usage. The university has to lay out what the rules are. If the university says no ChatGPT, then it's cheating. If the university says this usage is allowed, then it's not cheating. My university has this: https://www.ucl.ac.uk/teaching-learning/generative-ai-hub/using-ai-tools-assessment Basically, there are three tiers and we state at the beginning which is being applied for each assignment.

    Tier 1: you can't use LLMs
    Tier 2: you can't get the LLM to write your assignment, but you can use it in support (e.g. the use case described above), but have to declare this
    Tier 3: the assignment intimately uses generative AI as part of the task

    While I'm here, https://openai.com/blog/chatgpt-can-now-see-hear-and-speak is presumably what got people excited on Twitter. It's not remotely AGI, but it's a nice (and expected) increase in ChatGPT's functionality.
    Andy_JS said:

    HS2 should have linked up with HS1, and going to Euston was always a stupid idea, according to this article.

    https://reaction.life/mark-bostock-has-been-proved-totally-right-about-hs2/

    "It is hard to imagine a greater procurement disaster than HS2, the transformative high speed rail line between London and Scotland, currently being axed bit by bit, as the costs go through the roof.

    Mark Bostock, a former Arup consultant who successfully led the construction of HS1 from St Pancras to the Channel Tunnel and a former client of ours, would have had a few things to say about it. Sadly he passed away in August but he has been proven totally right about HS2. In fact, it is the greatest vindication in UK transport policy since promoters of the Stockton & Darlington Railway said it would be better than relying on canals.

    Mark led a proposal on behalf of Arup which would have seen HS2 go via a different route. It would link up with HS1 north of St Pancras. The route would have gone via a hub station connecting with Heathrow and the Great Western Railway near Iver. As now, the route would come into Old Oak Common, but never come into Euston which is simply too small. I can hear him saying now “They’ve got the alignment wrong, the most important decision in a railway. It is going to be a disaster.”"

    I vaguely remember that Arup proposal, and it was interesting - especially as I was never fully happy with either the Brum or London terminals wrt connections (or the Leeds or Manchester, either...)

    But I can see why the decisions were made, and the idea that *any* proposal in London would not face mahoosive compromises if they wanted any connectivity with the rest of the transport network is rather fantastic. We can all draw lines on a map; lines that can actually achieve what we want in reality is a very different matter.

    Personally, I would go back thirty years and plan Crossrail to be able to take two HS2 trains per hour; HS2 is next to Crossrail at Old Oak Common, and Crossrail is near HS1 at Stratford. That would have been really cool, and connected-up thinking. It would also have increased Crossrail's costs a bit.
    Andy_JS Posts: 27,157
    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, rewrites it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    The simple fact that he called a referendum on EU membership shows that he isn't the brightest person out there.
  • Options
    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, rewrites it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    Surely his crap autobiography is down to the crap ghost writer he enlisted. Do politicians ever write any of these things themselves?
  • Options
    LeonLeon Posts: 47,730
    Relatedly, you will be able to talk to ChatGPT and even show it pictures, inside the next two weeks

    https://x.com/OpenAI/status/1706280618429141022?s=20

    The tech is speeding away at an incredible rate
  • Options
    LeonLeon Posts: 47,730
    "Just had a quite emotional, personal conversation w/ ChatGPT in voice mode, talking about stress, work-life balance. Interestingly I felt heard & warm. Never tried therapy before but this is probably it? Try it especially if you usually just use it as a productivity tool."

    https://x.com/lilianweng/status/1706544602906530000?s=20

    IT'S HERE

    BRACE
  • Options

    Last week the results of the Crossbencher hereditary peers by-election were announced, with Lord Meston and Lord De Clifford elected.

    https://www.parliament.uk/globalassets/documents/lords-information-office/2023/hereditary-peers-by-election-result-palmer-hylton.pdf

    The election was by STV and is a good example of transferring the surplus votes for Lord Meston who was elected on first preferences before the elimination of lower ranked peers.

    Can I express my outrage that John Durival Kemp was not elected? If we are going to make a choice of who gets to make laws based on who their daddy was, it is an outrage that we didn't stick Viscount Rochdale in.

    No, strike that. What an absurd spectacle. We shouldn't have a house of peers, and certainly not members who are there because an ancestor was mates with the king.
  • Options
    bondegezoubondegezou Posts: 7,998

    Last week the results of the Crossbencher hereditary peers by-election were announced, with Lord Meston and Lord De Clifford elected.

    https://www.parliament.uk/globalassets/documents/lords-information-office/2023/hereditary-peers-by-election-result-palmer-hylton.pdf

    The election was by STV and is a good example of transferring the surplus votes for Lord Meston who was elected on first preferences before the elimination of lower ranked peers.

    When you want the best electoral system, as for the Lords or for the tense situation in Northern Ireland, you of course go for STV.
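    The surplus transfer described a couple of posts up is easy to show with a toy count. A minimal sketch, assuming the Gregory (fractional) method and entirely hypothetical ballots and quota rather than the actual Lords figures:

        # Toy surplus transfer under STV (Gregory method): every ballot held by the elected
        # candidate passes to its next preference at a value of surplus / total votes held.
        from collections import defaultdict

        def transfer_surplus(ballots, elected, quota):
            """ballots: list of (preference_list, weight) pairs; returns per-candidate transfers."""
            held = [(prefs, w) for prefs, w in ballots if prefs and prefs[0] == elected]
            total = sum(w for _, w in held)
            surplus = total - quota
            if surplus <= 0:
                return {}
            value = surplus / total
            transfers = defaultdict(float)
            for prefs, w in held:
                nxt = next((c for c in prefs[1:] if c != elected), None)  # next usable preference
                if nxt is not None:
                    transfers[nxt] += w * value
            return dict(transfers)

        # Hypothetical ballots and quota, loosely borrowing the names from the thread.
        ballots = [(["Meston", "De Clifford"], 12), (["Meston", "Rochdale"], 6), (["De Clifford"], 4)]
        print(transfer_surplus(ballots, "Meston", quota=10))  # roughly {'De Clifford': 5.33, 'Rochdale': 2.67}

    Real counts add rules for non-transferable papers and repeated surpluses, but the transfer-value idea above is the whole trick.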
  • Options
    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, rewrites it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    Maybe. On the one occasion I met him - an encounter that lasted an hour or two and apologies for the name-dropping - he certainly came across as intelligent but not remarkably so. He didn't say anything exceptionally interesting or insightful but he didn't say anything stupid and appeared able to think quickly on his feet and respond fluently. And boy was he confident. Scarily confident.
  • Options
    Foxy said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, rewrites it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    Oxbridge is great at burnishing egos. Which is good. Most people need an ego boost and a half. Not so good, however, when you start with someone who's already a narcissist.
    Please don’t tarnish Cambridge with Oxford.

    Cambridge creates nothing but modest self effacing people.
    And Soviet spies of course.
    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, rewrites it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    Vernon Bogdanor said Cameron was the cleverest student he'd ever taught. Imagine how all the other Brasenose boys must have squirmed when they read that.
  • Options
    LostPasswordLostPassword Posts: 15,605
    edited September 2023

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, rewrites it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Yes. There are lots of qualities and abilities that you can't test with an essay. No-one would think of doling out driving licenses to people who wrote a good essay on the fundamentals of safe driving.

    Why is it the test of choice for so much else?
    Because it's an easy way of doing the assessment. Whether or not it accurately reflects the knowledge or skills of the testee is secondary.

    IMHO, the best way of doing a test is scenario-based. "You are x, in situation y. You need to provide outcome z. You have access to everything you would have in a real life situation [eg open book/access to internet, etc] other than contacting someone else to get them to do it for you. You have three hours to provide z."

    Because that's what employers or anyone wanting your output will be wanting. Only thing is that it's difficult and resource-intensive to provide this way of doing things.

    So we do what behavioural psychologists call "changing the question" - what we do when the answer is too hard: we come up with something far easier to do that we can convince ourselves provides similar outcomes. Hence essays and closed-book exams.
    Kinda ironically, it's the exact same laziness and "you're only cheating yourself" sort of behaviour that the kids using AI are being accused of.

    Especially with teaching becoming ever more dominated by teaching to the test, because the teachers are judged by the results too. Such an effing waste of time.
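    Since the scenario format described a couple of replies up is essentially a parameterised template, here is a throwaway sketch of it; every field name and example value is mine, purely illustrative:

        # Render the "you are x, in situation y, produce z" assessment format as a template.
        def scenario_prompt(role, situation, required_output, resources, time_limit):
            return (
                f"You are {role}, in {situation}. You need to provide {required_output}. "
                f"You have access to {resources}, but you may not ask anyone else to do it for you. "
                f"You have {time_limit} to provide it."
            )

        print(scenario_prompt(
            role="a junior analyst",
            situation="a client meeting that starts tomorrow morning",
            required_output="a one-page market summary",
            resources="the internet and your own notes",
            time_limit="three hours",
        ))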
  • Options
    Jim_MillerJim_Miller Posts: 2,550
    The commenter who suggested helicopters instead of HS2 is, I think, on to something. Although I guessed he was being sarcastic, I think, in many places, some of the newer forms of air travel make more sense -- for people -- than trains do. Not helicopters, but aircraft that can do what helicopters do, more safely and more cheaply.

    For instance, recently I happened to see an experimental craft which can fly at about 60 miles an hour, and do 30 on a road. So you could fly from your home to work in it, and then park it in an (underground, of course) garage.

    (As it happens, I have ridden on trains in many places, and have generally enjoyed the experience. But I don't see why taxpayers should subsidize my trips.)
  • Options
    PhilPhil Posts: 1,953
    edited September 2023
    Taz said:

    An interesting twitter thread.

    University tuition fees. In 2019 a quarter of the cost of universities was going towards pensions.

    Yet the students merrily support the strikers, because, Tories innit.


    https://x.com/ironeconomist/status/1693597906299756810?s=61&t=s0ae0IFncdLS1Dc7J0P_TQ

    And? You could say the same about any job with decent pension provision - 25% of the cost of employment will be going into the pension fund to pay for the pension entitlements that come with the job.

    (Excepting those government jobs where the pensions are paid out of general taxation of course.)
  • Options
    bondegezoubondegezou Posts: 7,998
    Phil said:

    Taz said:

    An interesting twitter thread.

    University tuition fees. In 2019 a quarter of the cost of universities was going towards pensions.

    Yet the students merrily support the strikers, because, Tories innit.


    https://x.com/ironeconomist/status/1693597906299756810?s=61&t=s0ae0IFncdLS1Dc7J0P_TQ

    And? You could say the same about any job with decent pension provision - 25% of the cost of employment will be going into the pension fund to pay for the pension entitlements that come with the job.

    (Excepting those government jobs where the pensions are paid out of general taxation of course.)
    And most of the cost of a university and of higher education is staff.
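    One arithmetic wrinkle worth making explicit: a contribution quoted as a percentage of salary is not the same as a share of the total cost of employment. A toy conversion, with figures that are purely hypothetical:

        # Convert a pension contribution quoted as a % of salary into a share of the
        # total cost of employment (salary + employer contribution). Hypothetical numbers.
        salary = 50_000
        employer_rate = 0.25                      # hypothetical: employer pays 25% of salary
        contribution = salary * employer_rate
        employment_cost = salary + contribution
        print(f"{contribution / employment_cost:.0%} of the cost of employment")  # 20%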
  • Options
    LeonLeon Posts: 47,730

    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, rewrites it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    Maybe. On the one occasion I met him - an encounter that lasted an hour or two and apologies for the name-dropping - he certainly came across as intelligent but not remarkably so. He didn't say anything exceptionally interesting or insightful but he didn't say anything stupid and appeared able to think quickly on his feet and respond fluently. And boy was he confident. Scarily confident.
    Ah, I don't doubt his confidence, I just can't see serious intelligence

    But maybe I set too much store by good writing. His lacklustre, boring and feeble memoir is the main basis for this opinion
  • Options
    The weather has denied England a world record ODI total.
  • Options
    FarooqFarooq Posts: 10,798

    The commenter who suggested helicopters instead of HS2 is, I think, on to something. Although I guessed he was being sarcastic, I think, in many places, some of the newer forms of air travel make more sense -- for people -- than trains do. Not helicopters, but aircraft that can do what helicopters do, more safely and more cheaply.

    For instance, recently I happened to see an experimental craft which can fly at about 60 miles an hour, and do 30 on a road. So you could fly from your home to work in it, and then park it in an (underground, of course) garage.

    (As it happens, I have ridden on trains in many places, and have generally enjoyed the experience. But I don't see why taxpayers should subsidize my trips.)

    when you drive somewhere, taxpayers subsidise that trip too
  • Options
    rcs1000rcs1000 Posts: 54,245
    Leon said:

    Andy_JS said:

    Dura_Ace said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    Thanks, mate. Keep us posted.
    I can keep you posted on this.

    It’s not happening today or this year, and there are a lot of gullible people on Twitter.
    What would it look like if/when it does happen?
    It's hard to say what something we haven't built will look like because we haven't built it or anything remotely like it.

    I would guess there will be multiple steps to an AGI. It's not just going to appear overnight fully formed. There will be impressive jumps in what LLMs and generative AI can do along the way. An AGI will be able to reason from first principles, which means solving tasks without having these vast databases of everything that's ever been on the Internet. An AGI also won't need prompts! ChatGPT is great, but it answers you. AGI would, by definition, be like a person, able to hold up its end of a conversation!
    Metaculus thinks AGI will arrive around 2026-2030. Elon Musk reckons by 2029, possibly sooner


    https://venturebeat.com/ai/elon-musk-reveals-xai-efforts-predicts-full-agi-by-2029/



    Intriguingly that was Kurzweil's prediction 6 years ago, years before ChatGPT

    "At the 2017 SXSW Conference in Austin, Texas, Kurzweil gave a typically pinpoint prediction.

    “By 2029, computers will have human-level intelligence,” he said. “That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are. Today, that’s not just a future scenario. It’s here, in part, and it’s going to accelerate.”"


    The DeepMind founder says "in the next few years, at most a decade", others say 5 years, and so on and so forth

    So the idea this is "remote" is either fanciful - or wishful thinking. This is now close
    I know Demis, so I will ask him for his more nuanced views on when AGI is reached :-)

  • Options
    rcs1000rcs1000 Posts: 54,245
    edited September 2023
    AlsoLei said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

    My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the more lazy but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, which the teachers are trying to train themselves to recognise.
    So I think using ChatGPT and similar as a resource is just a step further on from using google, wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, rewrites it into their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input produced in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    "essay-crisis Prime Ministers like Cameron and Johnson."

    TBF I don't think Brown or Blair were much better.

    Besides, I'd argue that raw knowledge and intelligence are only minor characteristics a PM requires. There are much more important requirements, such as being able to persuade people (in cabinet, the party, the civil service and the wider public), knowing who to trust, having good ideas, being able to organise effectively, etc, etc.

    None of these are directly based on intelligence or knowledge.

    Which is probably why ultra-brainiac professors have never been PMs. (I think?)
    Harold Wilson? Youngest C20th Oxford don. Probably also one of the highest-rating PMs on most of the other requirements you mention, at least for his first term in office.

    ...but despite that, I'm not sure many would put him at the top of their personal "best PMs" list.
    He's top of Lady Falkender's list.
  • Options
    rcs1000rcs1000 Posts: 54,245

    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
    Yes - and it is how ChatGPT is actually useful for various tasks. Asking it to write more than simple bits of code gets you code that does the wrong thing. But it can suggest chunks of code - ideas, things to follow up on.
    Is there a record of what's gone through ChatGPT or is it private?

    EG I believe universities are concerned about this and trying to crack down on it, but presumably in the case of getting it to write it. If you were to eg put the draft of an essay in and say "what have I missed" or something like that, would it be able to handle that? And would that be risking getting done for cheating?
    Do you remember this: https://en.wikipedia.org/wiki/AOL_search_log_release
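    For what it's worth, the "feedback, not authorship" pattern in the quoted example (2) above is a one-liner against the API. A minimal sketch, assuming the openai Python package (v1.x) with an OPENAI_API_KEY set; the model name, prompts and draft text are all illustrative, not a prescription:

        # Ask the model what a draft answer has missed, without letting it write the answer.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        question = "Why did the Weimar Republic collapse?"                       # hypothetical question
        draft = "It collapsed because of hyperinflation and the Depression..."   # student's own draft

        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "You review essays. Do not rewrite them; "
                                              "list points or evidence the author has missed."},
                {"role": "user", "content": f"Question: {question}\n\nMy answer:\n{draft}\n\nWhat did I miss?"},
            ],
        )
        print(response.choices[0].message.content)

    Whether that counts as declared support or as cheating is exactly the Tier 2 question from earlier in the thread.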
  • Options
    MaxPBMaxPB Posts: 37,667
    Good news, my mother in law is on the plane back to Switzerland! My wife has suggested we don't have her visit again until baby number two arrives next year, I concurred.
  • Options
    MexicanpeteMexicanpete Posts: 25,475
    Suella stirring the migration pot for Trump.

    God, I detest this woman.
  • Options
    MalmesburyMalmesbury Posts: 44,816

    rcs1000 said:

    Sean_F said:

    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Skills change.

    Now the key skill is coming up with the right prompts for ChatGPT, and being able to make sure what it produces doesn't look AI generated.
    As of now, AI is at the standard of a pretty average A Level student.
    If you know how to use AI tools like ChatGPT, they can be very powerful tools.

    Let me give two examples.

    (1) I was writing a proposal for a European insurance company, and wanted to write a summary of a particular country's market. I asked ChatGPT to summarise market size, major players, key industry dynamics, etc. I used that as a template for my work. Essentially nothing from ChatGPT survived the rounds of edits, fact checking and the like, but it saved me a couple of hours because I was starting from work that was not terrible.

    (2) My son was writing a history essay for school. I told him he couldn't use AI to write his answer, but he could use it to provide feedback. So, he said (roughly): the question was this, and this was my answer, what did I miss? ChatGPT gave him two or three points that he hadn't written about, that he went away and wrote about. He came top of the class. Would he have done so without ChatGPT telling him about things he'd missed? Probably not.
    That second one is a really clever use.
    Yes - and it is how ChatGPT is actually useful for various tasks. Asking it to write more than simple bits of code gets you code that does the wrong thing. But it can suggest chunks of code for simple tasks.
  • Options
    MexicanpeteMexicanpete Posts: 25,475

    Suella stirring the migration pot for Trump.

    God, I detest this woman.

    Edit. Is she pitching for Prime Minister or President?
  • Options
Chris Posts: 11,150
    Leon said:

    Relatedly, you will be able to talk to ChatGPT and even show it pictures, inside the next two weeks

    Can we show it naked selfies?
  • Options
Malmesbury Posts: 44,816

    The commenter who suggested helicopters instead of HS2 is, I think, on to something. Although I guessed he was being sarcastic, I think, in many places, some of the newer forms of air travel make more sense -- for people -- than trains do. Not helicopters, but aircraft that can do what helicopters do, more safely and more cheaply.

    For instance, recently I happened to see an experimental craft which can fly at about 60 miles an hour, and do 30 on a road. So you could fly from your home to work in it, and then park it in an (underground, of course) garage.

    (As it happens, I have ridden on trains in many places, and have generally enjoyed the experience. But I don't see why taxpayers should subsidize my trips.)

More that things like people-carrying drones with a range of 100 miles are well on the way to reality. It's not hard to imagine that they would be popular for airport transfers and similar.
  • Options
Leon Posts: 47,730
    rcs1000 said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    Leon said:

    There are tantalising rumours on TwitterX that we are alarmingly close to AGI - true Artificial Intelligence - or, that OpenAI have actually achieved it already

    It’s bizarre that more people aren’t talking about this; if it is true it is one of the biggest news stories in human history

    Thanks, mate. Keep us posted.
    I can keep you posted on this.

    It’s not happening today or this year, and there are a lot of gullible people on Twitter.
    What would it look like if/when it does happen?
It's hard to say what something will look like when we haven't built it, or anything remotely like it.

    I would guess there will be multiple steps to an AGI. It's not just going to appear overnight fully formed. There will be impressive jumps in what LLMs and generative AI can do along the way. An AGI will be able to reason from first principles, which means solving tasks without having these vast databases of everything that's ever been on the Internet. An AGI also won't need prompts! ChatGPT is great, but it answers you. AGI would, by definition, be like a person, able to hold up its end of a conversation!
Metaculus thinks AGI will arrive around 2026-2030. Elon Musk reckons by 2029, possibly sooner


    https://venturebeat.com/ai/elon-musk-reveals-xai-efforts-predicts-full-agi-by-2029/



    Intriguingly that was Kurzweil's prediction 6 years ago, years before ChatGPT

    "At the 2017 SXSW Conference in Austin, Texas, Kurzweil gave a typically pinpoint prediction.

    “By 2029, computers will have human-level intelligence,” he said. “That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are. Today, that’s not just a future scenario. It’s here, in part, and it’s going to accelerate.”"


    The DeepMind founder says "in the next few years, at most a decade", others say 5 years, and so on and so forth

    So the idea this is "remote" is either fanciful - or wishful thinking. This is now close
    I know Demis, so I will ask him for his more nuanced views on when AGI is reached :-)

    Tell him I’m enjoying his book
  • Options
Leon Posts: 47,730
    If ChatGPT is good at voice chat that is absolutely going to destroy Alexa, Siri etc

    I was thinking yesterday how (relatively) crap they are. Incapable of proper conversation. I mainly use them for cooking timers, weather, switching things on, etc

    Imagine a voice assistant that will really listen and give you serious or kind or funny or helpful answers, and continue a dialogue indefinitely. That’s quite revolutionary
  • Options
Malmesbury Posts: 44,816
    Andy_JS said:

    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    The simple fact that he called a referendum on EU membership shows that he isn't the brightest person out there.
    Events. Even a year or two earlier, the referendum would have been 65-35 Remain and would have settled the question emphatically.
  • Options
Leon Posts: 47,730

    Andy_JS said:

    Leon said:

    Leon said:

    Andy_JS said:

    Dura_Ace said:

    On the boring subject of 'AI'... (I don't know how that differs from normal software and I don't care to find out) I've noticed that if I give my students a translation exercise with a completely fictitious word (that is a word looks like a French or Russian word but I've just made it up) then the ones who cheat with ChatGPT (or whatever) submit a translation in which the 'AI' has tried to infer the meaning of my made up word. The ones who tried to do it for real leave it blank and ask what the word means.

    Maybe I'm very naive but I find it odd that some students who've presumably been told not to use ChatGPT decide to go ahead and use it anyway.
    Soon there will be chatbots that are entirely indistinguishable from humans, and undetectable as AI. Lord knows what educators (and others) do then

    My older daughter has been composing her personal statement for Uni application. She did a REALLY good job and I was proud of her. And yet, as I read it, I got the sinking feeling that in about 6 months ChatGPT5 will be able to outdo her - it can already outdo a few of her friends (she showed me some other statements when I asked)
    Hello again and a good sunny afternoon again, all.

My nephew teaches at one of the more liberal-intellectual of the top public schools (St Paul's, Westminster, Winchester etc, without naming which). He says that ChatGPT is a "growing problem", particularly among the lazier but also more able students, which I found interesting. It seems a lot of cleverer students enjoy the challenge of successfully integrating ChatGPT's work with their own, simultaneously saving a lot of time and outwitting the staff. This is apparently the latest trendy skill among the pupils, one which the teachers are trying to train themselves to recognise when they see it.
So I think using ChatGPT and similar as a resource is just a step further on from using Google, Wikipedia etc. All that has happened is that the search engine has taken the hits and written the essay too. If a student takes that as a start point, checks the facts, re-writes it in their own voice and adds appropriate referencing, then I have no issue. I am fairly sure my next research article will have some input done in just this way.

    Sadly the weaker and more lazy students will just take the ChatGPT answer and try to use it as their own.

    As generations of teachers would say, "you are only cheating yourselves..."
    Indeed,
    Surely the essay is there to demonstrate to yourself and your teacher that you know stuff. If you get AI to write the essay, even if you then edit the content, then you probably don't know the stuff. This will surely be demonstrated when it comes to the exam. The reality is, they are only cheating themselves.
    Also, writing an essay really isn't that hard. And if you do find it hard you're not going to get any better at it if you never practice. And, if you find it hard to structure an essay, and never practice it, then go into an exam and try to do it under exam conditions... Again, utterly self-defeating, like all forms of cheating.
    I guess the only question is whether AI makes the acquisition of knowledge and the structuring of our thoughts and composition of an argument superfluous. But if it does, then we might as well just declare human civilisation to be at an end.
    Using essays as the yardstick by which to judge knowledge and intelligence is how we ended up with essay-crisis Prime Ministers like Cameron and Johnson.

    It was already a pretty poor way of judging whether people had the desired knowledge, but it was a convenient default to avoid thinking of a more creative and useful way to structure a test.

    If essay-crisis AIs lead to better ways of testing knowledge and proficiency then that will be a good thing.
    I think that the essays Cameron and Johnson wrote in their Oxford examinations (in which they earned a 1st in PPE and a 2:1 in classics respectively) were an accurate yardstick for measuring their intellectual abilities. Both are clearly intelligent men. Cameron's problem is that he overestimates himself, a typical characteristic of those with an elitist upbringing, and this led him to be lazy and take stupid risks like the EU referendum. Johnson's problem is that he is a congenital liar and narcissist. In both cases these are flaws of character, not intelligence, and I would argue were apparent before either of them took the top job. I wouldn't blame Oxford for this, except to the extent that it further burnished their egos and provided them with additional elite contacts to further their political goals.
    I cannot see evidence of Cameron having notable intelligence. His autobiography was alarmingly poor in terms of prose, and it also revealed that total lack of self awareness which you touch on

    Indeed, I reckon he is living proof that a really good education can punt a fairly mediocre brain an awful long way: ie into Oxford, onto a First, into Number 10

    It was only in Number 10 that his mediocrity became apparent
    The simple fact that he called a referendum on EU membership shows that he isn't the brightest person out there.
    Events. Even a year or two earlier, the referendum would have been 65-35 Remain and would have settled the question emphatically.
Osborne told him not to call it. Apparently. Of the two, Osborne seems notably cleverer and sharper.
  • Options

    Suella stirring the migration pot for Trump.

    God, I detest this woman.

    Edit. Is she pitching for Prime Minister or President?
    Führer.
  • Options
Sean_F Posts: 36,013

    TimS said:

    New constituency poll alert:

    Lab and Con neck and neck in Tamworth

    https://x.com/BNHWalker/status/1706656062571483487?s=20

    A fairly healthy 11% Green and LD vote to squeeze if those numbers are correct, with a 10% Ref vote who I suspect might not turn out unless they're suddenly drawn to Motorists' Friend and scourge of woke climatologists Sunak.

    That is NOT a constituency poll. It is an extrapolation from national polling.
    If that just represents national polling, then we have to add on a by-election factor. By-elections usually show bigger swings. In which case, this should be a walk in the park for Labour.

    Or have they already done that?
The Conservative vote share is down about 17 points from 2019, not the 26 points shown in this projection. The Labour vote share is up about 12 points, not 18. So it would seem that any boost from a by-election is already being factored in.
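If those figures are right, a quick back-of-the-envelope check makes the point: the projection has the Conservatives down about 26 points against roughly 17 in national polling, and Labour up about 18 against roughly 12, so something like 9 and 6 points respectively of extra swing look like the by-election adjustment already built in.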
This discussion has been closed.