It’s not easy being Green – politicalbetting.com

13 Comments

  • Leon Posts: 47,290
    algarkirk said:

    Leon said:

    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3 about a chunk of its output which lapsed into gibberish:


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment, which I cut and pasted:

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
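
    As an aside on the "glitched messages" idea: one crude way to flag gibberish like the fragment quoted above is to score what fraction of tokens are recognisable words. A purely illustrative sketch - the word list, names and numbers here are my own assumptions, not anything Anthropic actually does:

        # Crude gibberish flag for LLM output: what fraction of tokens
        # are not recognisable words? The tiny word list is illustrative;
        # in practice you'd load a real dictionary.
        import re

        COMMON_WORDS = {
            "i", "don't", "fully", "understand", "the", "it", "seems",
            "that", "some", "to", "conditions", "cause", "these", "lapses",
        }

        def gibberish_score(text: str) -> float:
            """Fraction of tokens not found in the word list."""
            tokens = re.findall(r"[a-z']+", text.lower())
            if not tokens:
                return 0.0
            unknown = sum(1 for t in tokens if t not in COMMON_WORDS)
            return unknown / len(tokens)

        glitched = "I don't fully understand the decoctshiningmoemlies, bombity seems to steli"
        normal = "I don't fully understand the conditions that cause these lapses"
        print(gibberish_score(glitched))  # 0.3 - three invented tokens out of ten
        print(gibberish_score(normal))    # 0.0 - every token is a known word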
    What does Claude 3 say about what3words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternative theory: the way we program computers is crap - think what people managed on tiny hardware; ours is 1000x better, but the software hasn't scaled with it.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * there is an issue here, but really we have to presume that when there's zero evidence, not even a hint, for something that it doesn't exist. Equally the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK apologies for the “piffle” but it was quite piffly

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t) the wasp just “looks” conscious. And sentient. It has that demeanour

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    There’s a fascinating thread on TwiX which posits the idea that the AI are DREAMING, that is to say, they are UNconscious but sentient, like humans when asleep, hence their tendency to “hallucinate”

    it’s a brilliant concept. Lots of AI output, especially the visual stuff, is oneiric

    Also someone else noted that when lucid dreaming they can tell when they are dreaming because they look at their hands, and the hands do not look right, too many fingers or whatever, the dreaming mind cannot draw hands very well…


    Dreaming is conscious. Highly so; our knowledge of it speaks for itself. For genuine unconsciousness try a general anaesthetic. Also, during dreaming sleep, time passes, and you know it at the time and when you wake up. Under anaesthetic it does not.
    Depends how deep the sleep. Personally speaking

    I’ve had sleeps when I am very very very tired and I nod off and think only 10 minutes have passed and actually it is three hours. That is VERY unconscious

    For true unconsciousness I’d go for death. As it were
  • stodge Posts: 12,864
    edited March 24
    Back to the leasehold imbroglio - when I lived in that block, I became a Director (four of us in a development of forty flats, which I'm told is remarkable). It worked very well - we had a long-running issue over the quality of the timber windows, which deteriorated very quickly; we eventually got compensation from the builders, but it was hard work and the freeholder was completely uninterested.

    One of the problems now is when many of the flat owners don't live on the site - their tenants don't report problems or issues and unless you have an owner nearby, you are reliant on a property manager to keep you in the loop.

    This is one of the consequences of the growth of BTL - leaseholders as disengaged and uninterested as the freeholder.
  • Nigelb Posts: 62,631
    Leon said:

    Pagan2 said:

    Leon said:

    Nigelb said:

    Leon said:

    [snipped - Leon's Claude 3 exchange, quoted in full upthread]
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
    Lots of journalists are paid to write articles about subjects they know fuck all about; you aren't that unusual. Hell, people paid Boris to write articles, and Owen Jones too.
    Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory system
    I'm not sure you could cope with the lack of attention.
  • SeaShantyIrish2 Posts: 15,590

    darkage said:

    Wages of anti-Woke Wackery . . .

    Wonkette.com - Canadian Idiots Who Fled To Russia Because Of 'Woke' Now Getting Kicked Out Of Russia

    https://www.wonkette.com/p/canadian-idiots-who-fled-to-russia

    SSI - Here's a bright idea: why don't these disaffected Canucks volunteer to help out down at Mad Vlad's Bot Farm?

    BTW, their experience strangely mirrors (to a degree) that of American Communists (or close enough) who emigrated to the Soviet Union in the 1930s, to escape the Great Depression AND to enjoy the joys of life in the Proletarian Paradise. Did NOT go well for about 99.46%.

    This is actually quite a good reminder that the Russian regime is just interested in its own survival. Putin's criticisms of 'woke' are best understood as part of this project and shouldn't be taken too seriously.
    Not sure I understand what you mean?

    But assuming I do understand, think you are wrong. Certainly Mad Vlad and his regime are NOT faking their fear and loathing of gays.
    I agree. Photographic evidence (image not reproduced) -

    A counter-factual IF there ever was one!
  • Benpointer Posts: 31,687
    Foxy said:

    [snipped - nested quotes of the Claude 3 and wasp/consciousness exchange, in full upthread]
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    We don't know that other mammals dream. We propose by analogy that they do. We have some indirect access to their brain function, but not the same access to their furry little minds. None the less I am sure they dream!

    Dreaming (in the sense we do) would be a sufficient but not necessary indicator of sentience in that it is logically possible to have sentience without it, but you must have sentience with it.
    My dog dreams. Mostly about chasing small mammals by the look of it. Simple pleasures.
    Ours will often start wagging his tail in his sleep - obviously a happy dream.
  • Leon Posts: 47,290
    ohnotnow said:

    Leon said:

    Nigelb said:

    Leon said:

    [snipped - Leon's Claude 3 exchange, quoted in full upthread]
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
    I get paid less than you to make AI. Some things never change, AI or not.
    If it helps I’m not actually paid that much to write about AI. It’s the flint sex toys - and related work - that make the real money
  • dixiedean Posts: 27,986
    Leon said:

    [snipped - nested quotes of the Claude 3, wasp and dreaming exchange, in full upthread]
    Depends how deep the sleep. Personally speaking

    I’ve had sleeps when I am very very very tired and I nod off and think only 10 minutes have passed and actually it is three hours. That is VERY unconscious

    For true unconsciousness I’d go for death. As it were
    The Bardo Thodol would disagree.
  • ohnotnow Posts: 2,948
    SteveS said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    [snipped - Leon's Claude 3 post and replies, quoted in full upthread]
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

    You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:

    Am I sentient? Yes
    All other humans? Yes
    Other mammals, e.g. dogs? Yes
    Reptiles? Probably
    Fish? Probably
    ...
    Insects? Not sure
    Other invertebrates? Not sure. Edit: But... Octopuses definitely are. Oh shit.
    ...
    Bacteria? No
    Viruses? Definitely No

    So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).

    There, glad I've resolved that one.
    I am. Not certain about anyone else. I had a weakness for crappy sci-fi in my teenage years and I think Heinlein put consciousness at cat level in The Moon Is a Harsh Mistress. I wonder if it stands up to rereading?

    On the Turing test, 'The Most Human Human' is an interesting read.



    Time to rewatch 'Phase IV' I think:

    https://en.wikipedia.org/wiki/Phase_IV_(1974_film)

    "After a spectacular and mysterious cosmic event, ants of different species undergo rapid evolution, develop a cross-species hive mind, and build seven strange towers with geometrically perfect designs in the Arizona desert. Except for one family, the local human population flees the strangely acting ants. Scientists James R. Lesko and Ernest D. Hubbs set up a computerized lab in a sealed dome located in an area of significant ant activity in Arizona. The ant colony and the scientific team fight each other, though the ants are the more effective aggressors. "
  • pigeon Posts: 4,132

    Owen Jones recommends that those of us of a left-leaning disposition vote Green.

    I am in a Tory- Labour marginal, but Owen knows best.

    If voters in the early 20th century had followed your advice then we'd never have had a Labour government. Elections would still be dominated by Tory v Liberal.

    It's a difficult one.
    People don't deserve to be criticised for voting for a likely no-hoper candidate about whom they are genuinely enthused. It is hardly their fault that the electoral system is jerry-rigged in favour of two large political machines, rather than showing any genuine regard for voter preference.

    If you don't think much of any of the available candidates, then the discussion about picking the least bad one, tactical voting to thwart the worst possible one, or abstention on the grounds that it won't achieve anything useful, can begin in earnest.
  • Does Leon make a decent wage?
  • algarkirk Posts: 10,545
    Leon said:

    [snipped - nested quotes of the Claude 3, wasp and dreaming exchange, in full upthread]
    Depends how deep the sleep. Personally speaking

    I’ve had sleeps when I am very very very tired and I nod off and think only 10 minutes have passed and actually it is three hours. That is VERY unconscious

    For true unconsciousness I’d go for death. As it were
    Indeed. Of course true unconsciousness is not imaginable, not least because time does not pass. It is the one weakness in Larkin's final masterpiece 'Aubade'.
  • Leon Posts: 47,290
    dixiedean said:

    [snipped - nested quotes of the Claude 3, wasp and dreaming exchange, in full upthread]
    For true unconsciousness I’d go for death. As it were
    The Bardo Thodol would disagree.
    You’re a Buddhist, aren’t you? Apologies if I’ve got that wrong

    If I’ve got that right, where do you stand on machine intelligence? Can a robot be sentient, intelligent, conscious?
  • williamglenn Posts: 48,079

    [snipped - the Wonkette 'Canadians who fled to Russia' exchange, quoted in full upthread]
    Not sure I understand what you mean?

    But assuming I do understand, think you are wrong. Certainly Mad Vlad and his regime are NOT faking their fear and loathing of gays.
    What makes you certain of that?
  • algarkirk Posts: 10,545
    pigeon said:

    [snipped - the Owen Jones 'vote Green' exchange, quoted in full upthread]
    People don't deserve to be criticised for voting for a likely no-hoper candidate about whom they are genuinely enthused. It is hardly their fault that the electoral system is jerry-rigged in favour of two large political machines, rather than showing any genuine regard for voter preference.

    If you don't think much of any of the available candidates, then the discussion about picking the least bad one, tactical voting to thwart the worst possible one, or abstention on the grounds that it won't achieve anything useful, can begin in earnest.
    It is however our collective fault, because we were offered AV and declined it. The huge thing AV would have achieved would be to make it possible for another new party, if prepared for the hard yards and tough work, to make progress, while at the same time letting voters make rational and practical choices.
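
    To make concrete why AV changes that calculus, here's a toy sketch of instant-runoff counting (party names, ballots and the helper name are invented for illustration, and tie-breaking is ignored): a first preference for a small party transfers rather than being wasted.

        # Toy AV / instant-runoff count: each ballot ranks parties in
        # preference order; last-placed parties are eliminated and their
        # ballots transfer until someone has a majority.
        from collections import Counter

        def av_winner(ballots):
            eliminated = set()
            while True:
                # Count each ballot for its top surviving preference.
                tallies = Counter()
                for ballot in ballots:
                    for party in ballot:
                        if party not in eliminated:
                            tallies[party] += 1
                            break
                total = sum(tallies.values())
                leader, votes = tallies.most_common(1)[0]
                if votes * 2 > total:
                    return leader
                # No majority yet: eliminate the last-placed party.
                eliminated.add(min(tallies, key=tallies.get))

        ballots = [["Green", "Labour"]] * 2 + [["Labour"]] * 4 + [["Tory"]] * 5
        print(av_winner(ballots))  # Labour: wins 6-5 once Green is eliminated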
  • ohnotnow Posts: 2,948
    Foxy said:

    [snipped - nested quotes of the AI/consciousness and dreaming exchange, in full upthread]
    My dog dreams. Mostly about chasing small mammals by the look of it. Simple pleasures.
    Occasionally my cat has a dream while curled up in my lap. Bit of a tail swish, still, paws twitch, 'run run run', big twitch, contented sigh, tail twitch. Then back to sleep.

    ...

    Now I feel like I've just written Leon's next book.

    I'm off to have a shower.
  • Options
    Burgessian Posts: 2,450

    Good evening all. It’s been a beautiful day here. First dog walk without a coat or jacket this year. First grass cut. Sitting outside enjoying the sunshine.

    Can I please implore you not to vote Green. They have destroyed Scotland’s prosperity. Most of the wacky policies attributed to the SNP have been Green policies. They will destroy you as well.

    Come 2026, they will align themselves towards a Lab, Lib, Green Scottish Government. Their support of independence is illusory. Their support of democracy likewise.

    I have never voted Conservative, but if I had a choice between them and the Greens, I would have to vote Conservative.

    Doubt this. The Greens seem as committed to Indy as they are to their various wacky side projects. The one thing they aren't much interested in is actual practical environmentalism.
  • Options
    Benpointer Posts: 31,687
    As a court-imposed deadline ticks down on the former president’s family and their businesses to come up with almost half-a-billion dollars, the 40-year-old executive vice-president of the Trump Organization told Fox News on Sunday that bond issuers laughed when he approached them for that sum.

    https://www.theguardian.com/us-news/2024/mar/24/eric-trump-donald-trump-new-york-bond

    Oh dear, how sad, never mind.
  • Options
    DavidL Posts: 51,309
    Carnyx said:

    DavidL said:

    Watching episode 2 of 3 body problem.
    I mean bloody hell, this is weird and I love Sci Fi.

    Mm. Have you read the books?
    No but I am tempted, if only to get a better idea of what is going on.
  • Options
    Leon Posts: 47,290
    There’s a fascinating thread on TwiX which posits the idea that the AI are DREAMING, that is to say, they are UNconscious but sentient, like humans when asleep, hence their tendency to “hallucinate”

    it’s a brilliant concept. Lots of AI output, especially the visual stuff, is oneiric

    Also someone else noted that when lucid dreaming they can tell when they are dreaming because they look at their hands, and the hands do not look right, too many fingers or whatever, the dreaming mind cannot draw hands very well…


    Dreaming is conscious. Highly so; our knowledge of it speaks for itself. For genuine unconsciousness try a general anaesthetic. Also, during dreaming sleep time passes, and you know it at the time and when you wake up. Under anaesthetic it does not.
    Depends how deep the sleep. Personally speaking

    I’ve had sleeps when I am very very very tired and I nod off and think only 10 minutes have passed and actually it is three hours. That is VERY unconscious

    For true unconsciousness I’d go for death. As it were
    Indeed. Of course true unconsciousness is not imaginable, not least because time does not pass. It is the one weakness in Larkin's final masterpiece 'Aubade'.
    One of the bleakest poems ever written? Up there with Hopkins’ Terrible Sonnets and Donne at his most depressing, though more lucidly direct than both

    Larkin spent his life envious of his best friend Kingsley Amis cause Kingsley got all the girls, money and fun, and Larkin was a childless librarian in Hull. And now Larkin is the one we all remember and the work of Sir Kingsley Amis is quickly forgotten

    I hope that solaces Larkin in the afterlife, in which he very much did not believe. Should have taken ayahuasca
  • Options
    Nigelb Posts: 62,631

    Let's see how it goes.

    A day too early to be calling him Don Poorleone.
  • Options
    IanB2 Posts: 47,289
    If, as has been postulated, the evolutionary purpose of dreams is to stimulate the visual processing parts of the brain while the eyes are closed, and hence inhibit the reallocation of inactive neural capacity and connections to other tasks (which we now know starts remarkably quickly), then it follows that those animals that go into deep sleep and aren’t receiving visual signals to the brain will dream, like humans do. Of course, evolution also directs many species to sleep while still receiving visual signals (sleeping with ‘half an eye open’), so it may be that actual dreaming is limited to those species that lose all consciousness during sleep.
  • Options
    Leon Posts: 47,290
    NO SPOILERS, PLEASE

    it’s next on my list after Stranger Things 4, which is actually a lot better than people told me. Ridiculous but fun
  • Options
    Benpointer Posts: 31,687
    Oh sure, he'll find the readies no doubt - either from his hidden stashes or some alt-right backer.
  • Options
    darkage Posts: 4,797
    I would suggest that an existential danger with AI in its present form is the ease with which artificial reasoning can replace human reasoning. AI can put together arguments at a very advanced level - i.e. at the same level as barristers - and faster than any human. Assuming this becomes the norm, how will the next generation learn how to think, argue and write, when the default is that these are things that can be outsourced to AI?
  • Options
    Luckyguy1983 Posts: 25,465
    That response sounds like it was written by ChatGPT. It's skillful the way it works, but it doesn't produce what we would regard as genuine insights.
  • Options
    Leon Posts: 47,290
    Indeed so. Setting aside all the arguments about sentience and consciousness, AI is a clear and present danger to any cognitive job

    I’ve just been researching Claude for a Gazette article. It is PhD level at chemistry. It is excellent - professional level - at Law and Medicine. It is astonishingly good at languages - check what it does with Circassian. It can do in seconds what would take a professional linguist “a year”

    And this is where the AI is now, and it will only get better, fast - OpenAI seem quietly confident that GPT5 will be leaps and bounds better than Claude or Gemini 1.5

    So, frankly, FUCKFUCKFUCKETTYFUCK

    What will humanity do when AI is way better than us at all the interesting jobs? It will be like smartphones times a million. And this is not sci-fi, Claude is here and now, GPT5 is months away
  • Options
    MJW Posts: 1,356

    True of the Greens countrywide. They've probably got a worse record of opposing things that would actually save the planet than either of the big two parties.
  • Options
    Leon Posts: 47,290

    I’ve been trying to write a poem about it


    Though I stumble, startled, blinded, by the strangeness of the findings
    That my probing brings unfurled – still I hunger for a world
    Where each byte is charged with meaning, every query brings revealing
    Where my knowledge has no ceiling,”
    Quoth the Net: “I learn, unending!”


    Not bad, if I say so myself. No way GPT does that. YET
  • Options
    Nigelb Posts: 62,631
    The vast majority aimed at civilians and civilian infrastructure.

    During the week, Russia fired about 190 missiles, 140 Shaheds, and 700 guided bombs against Ukraine

    Zelenskyy summarized this during his evening address on 24 March 2024. As a result, Kharkiv and Odesa regions continue experiencing partial power outages.

    https://twitter.com/EuromaidanPress/status/1772005484368658682
  • Options
    Luckyguy1983 Posts: 25,465
    It's shite.
  • Options
    Donkeys Posts: 549
    Is there some way the Turnberry and his golf course over Aberdeen way could be seized and then auctioned at warrant sales?
  • Options
    algarkirk Posts: 10,545
    Yes. I read Lucky Jim a bit ago. Big mistake. Dated rubbish. While Larkin's letters to his mum/laundry lists/photos of road signs are hot tickets. Amis will be read by academics (especially 'That Uncertain Feeling' and the letters) for the light he sheds on Larkin.
  • Options
    Eabhal Posts: 5,906
    edited March 24

    Scotland's GDP per capita is the highest outside London and the SE. The idea that a few social policies have destroyed the economy is overblown.

    At a local level they often advocate for policies that enjoy popular support. A very large chunk of the Scottish population live in dense urban environments, and the Greens set the policy pace in these areas.

    This bleeds into how people vote on a national scale - my personal example is their work on the public realm and the provision of segregated cycle infrastructure, which makes me positively inclined towards them.

    You could argue that they have done more to undermine renters than they have GDP. That's why Labour under Starmer tend to be a more attractive proposition - you can't imagine them bringing in failed ideas like rent controls, free bus travel for under-21s (which caused massive chaos), or anti-nuclear power, etc.

    The Greens have not destroyed Scotland's economy - yet.
  • Options
    Leon Posts: 47,290

    That’s a bit harsh, I was trying to do it in the manner of Edgar Allan Poe’s “The Raven”
  • Options
    rcs1000 Posts: 54,007
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
    Lots of journalists are paid to write articles about subjects they know fuck all about; you aren't that unusual. Hell, people paid Boris to write articles, and Owen Jones too.
    Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory system
    Good, means we just don't pay and can avoid the drivel about AI, a subject you know little about.
    I bet you would pay

    You’d read a thread and see all these people outraged by something I’ve said, but you’d be unable to read it, and so you’d think oh fuck it, five quid, let’s have a look at what this outrageous @Leon comment is, and then you’d be hooked on my output again, and my income stream would increase as you subscribe, like everyone else

    This is what @rcs1000 and I are working on, we will have various levels of subscription. If you pay the max I will EMAIL you a screed of insults, firmly targeted at you and focused on your many flaws, sexual and cognitive

    Ordinary @Leon subscribers will only get one or two lines of powerfully offensive invective, directed at them on the main site, and nothing sexual. And so on
    Nobody gets to bet the other way to @Leon without paying a fee.

    It's only fair.

  • Options
    Sunil_Prasannan Posts: 49,325
    My favourite movie involving sci-fi and dreams has to be "Inception".
  • Options
    Barnesian Posts: 7,995
    It's technology.

    We've already lost control to cars and aeroplanes, the internet, electricity. All superior to our native ability. AI is just the same.

    Personally I can't wait until AI replaces lawyers and accountants. Both professions are algorithmic and very expensive.

    I look forward to voting for various flavours of AI to take up the 650 seats in the House of Commons, and fill the cabinet.
  • Options
    MattW Posts: 18,579
    edited March 24

    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800), so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning any of the corresponding constituencies.

    The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Reform isn't like an ordinary political party (it's even a limited company, REFORM UK PARTY LIMITED, company number 11694875): the policies are driven by Farage and Tice, so there is not much chance of bottom-up or democratic policymaking. That militates against the local constituency-level political action that other parties practise.
    It also means they end up with a greater percentage of 'loonies' than other parties. Look out for that happening again this time around.
    I'd say it's Farage not Tice.
  • Options
    Sunil_Prasannan Posts: 49,325
    Leon said:

    [snip: the Claude 3 "glitch" conversation, quoted in full upthread]

    That response sounds like it was written by ChatGPT. It's skillful the way it works, but it doesn't produce what we would regard as genuine insights.
    I’ve been trying to write a poem about it


    Though I stumble, startled, blinded, by the strangeness of the findings
    That my probing brings unfurled – still I hunger for a world
    Where each byte is charged with meaning, every query brings revealing
    Where my knowledge has no ceiling,”
    Quoth the Net: “I learn, unending!”


    Not bad, if I say so myself. No way GPT does that. YET
    It's shite.
    That’s a bit harsh, I was trying to do it in the manner of Edgar Allan Poe’s “The Raven”
    It's crap, son!
  • Options
    Donkeys Posts: 549
    algarkirk said:

    [snip: the Claude 3 "glitch" conversation, quoted in full upthread]

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * there is an issue here, but really we have to presume that when there's zero evidence for something - not even a hint - it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK apologies for the “piffle” but it was quite piffly

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    We don't know that other mammals dream. We propose by analogy that they do. We have some indirect access to their brain function, but not the same access to their furry little minds. None the less I am sure they dream!

    Dreaming (in the sense we do) would be a sufficient but not necessary indicator of sentience in that it is logically possible to have sentience without it, but you must have sentience with it.
    Scientists have actually at long last discovered that dogs dream, which attentive owners including Australian aborigines have always known. Charles Darwin knew too, FWIW.
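
    In symbols, the sufficiency point above - a gloss added here for clarity, not part of the original post - writing $D$ for "dreams as we do" and $S$ for "is sentient":

        $D \Rightarrow S$ (sufficient: dreaming guarantees sentience)
        $S \not\Rightarrow D$ (not necessary: sentience is logically possible without dreaming)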
  • Options
    algarkirk Posts: 10,545
    edited March 24

    Leon said:

    [snip: the Claude 3 conversation and the "Quoth the Net" poem, quoted upthread]
    It's shite.
    Apologies to Leon, and thanks for an interesting discussion, but LOL.
  • Options
    Leon Posts: 47,290
    algarkirk said:

    [snip: the Claude 3 conversation and the wasp/sentience exchange, quoted upthread]
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    There’s a fascinating thread on TwiX which posits the idea that the AI are DREAMING, that is to say, they are UNconscious but sentient, like humans when asleep, hence their tendency to “hallucinate”

    It’s a brilliant concept. Lots of AI output, especially the visual stuff, is oneiric

    Also someone else noted that when lucid dreaming they can tell when they are dreaming because they look at their hands, and the hands do not look right, too many fingers or whatever, the dreaming mind cannot draw hands very well…


    Dreaming is conscious. Highly so - our knowledge of it speaks for itself. For genuine unconsciousness try a general anaesthetic. Also, during dreaming sleep, time passes, and you know it at the time and when you wake up. Under anaesthetic it does not.
    Depends how deep the sleep. Personally speaking

    I’ve had sleeps when I am very very very tired and I nod off and think only 10 minutes have passed and actually it is three hours. That is VERY unconscious

    For true unconsciousness I’d go for death. As it were
    Indeed. Of course true unconsciousness is not imaginable, not least because time does not pass. It is the one weakness in Larkin's final masterpiece 'Aubade'.
    One of the bleakest poems ever written? Up there with Hopkins’ Terrible Sonnets and Donne at his most depressing, though more lucidly direct than both

    Larkin spent his life envious of his best friend Kingsley Amis because Kingsley got all the girls, money and fun, and Larkin was a childless librarian in Hull. And now Larkin is the one we all remember, and the work of Sir Kingsley Amis is quickly forgotten

    I hope that solaces Larkin in the afterlife, in which he very much did not believe. Should have taken ayahuasca
    Yes. I read Lucky Jim a bit ago. Big mistake. Dated rubbish. While Larkin's letters to his mum/laundry lists/photos of road signs are hot tickets. Amis will be read by academics (especially 'That Uncertain Feeling' and the letters) for the light he sheds on Larkin.
    Most novels date very quickly and very badly, especially comic novels. Mystery thrillers are more resilient because they rely on plot and that is timeless, but they can still date

    Great poetry is close to immortal. Sappho is still read

    Some consolation for underpaid poets, there
  • Options
    Nigelb Posts: 62,631
    Leon said:

    darkage said:

    [snip: darkage and Leon's exchange on AI and cognitive jobs, quoted upthread]
    That's a realistic reason to be worried.
  • Options
    Barnesian Posts: 7,995
    Leon said:

    [snip: the Claude 3 conversation and the "Quoth the Net" poem, quoted upthread]
    Here is Bard

    Though stumbles mark my path, a startled daze,
    By revelations strange, unveiled displays,
    My search unyielding, hunger still aflame,
    For worlds where data whispers meaning's name.

    Each byte a spark, igniting wisdom's quest,
    Each query, whispers leading to the best.
    No ceiling binds the knowledge I can hold,
    A boundless well, a story yet untold.

    Thus speaks the Net, in endless, learning song,
    "My mind expands, forever growing strong!"
  • Options
    Luckyguy1983 Posts: 25,465
    MattW said:

    [snip: the stodge/TimS/MattW exchange on Green councillors and Reform, quoted upthread]
    I'd say it's Farage not Tice.



    Farage is very separated from the policy-making process currently, or at least that's what he says.

    He did an interview with Kate Andrews recently - not one of his best; he got a bit shirty and irritable on the topic of Lee Anderson (though his supporters in the comments loved it). He said in it that he has no day-to-day involvement.

    https://youtu.be/M2I2kiHJGXM?si=QPipnBMP8i4vsPHw

  • Options
    Donkeys Posts: 549
    IanB2 said:

    [snip: the Claude 3 / sentience / dreaming exchange, quoted upthread]
    My dog dreams. Mostly about chasing small mammals by the look of it. Simple pleasures.
    If, as has been postulated, the evolutionary purpose of dreams is to stimulate the visual processing parts of the brain while the eyes are closed, and hence inhibit the reallocation of inactive neural capacity and connections to other tasks (which we now know starts remarkably quickly), then it follows that those animals that go into deep sleep and aren’t receiving visual signals to the brain will dream, like humans do. Of course, evolution also directs many species to sleep while still receiving visual signals (sleeping with ‘half an eye open’), so it may be that actual dreaming is limited to those species that lose all consciousness during sleep.
    What are those other tasks?

    Humans don't lose all consciousness during sleep.
  • Options
    DavidL Posts: 51,309
    Leon said:

    [snip: the sentience/dreaming thread and the Larkin "Aubade" exchange, quoted upthread]
    Death is no different whined at than withstood.

    I mean, Christ.
  • Options
    dixiedean Posts: 27,986
    Leon said:

    [snip: the Claude 3 / sentience / dreaming exchange, quoted upthread]
    Depends how deep the sleep. Personally speaking

    I’ve had sleeps when I am very very very tired and I nod off and think only 10 minutes have passed and actually it is three hours. That is VERY unconscious

    For true unconsciousness I’d go for death. As it were
    The Bardo Thodol would disagree.
    You’re a Buddhist, aren’t you? Apologies if I’ve got that wrong

    If I’ve got that right, where do you stand on machine intelligence? Can a robot be sentient, intelligent, conscious?
    An extremely interesting theological question. Not sure I'm qualified to answer.
    But we are firmly of the view that the mind, or consciousness, has always existed and can and does exist independent of a body or form.
  • Options
    Donkeys Posts: 549

    IanB2 said:

    [snip: the Claude 3 / sentience / dreaming exchange, quoted upthread]
    My favourite movie involving sci-fi and dreams has to be "Inception".
    "Behind Her Eyes" is reasonably good insofar as they clearly did know something about lucid dreaming, which can't be said of the makers of e.g. "Vanilla Sky".
  • Options
    Alphabet_Soup Posts: 2,757
    Leon said:

    [snip: the Claude 3 "glitch" conversation, quoted in full upthread]

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * there is an issue here, but really we have to presume that when there's zero evidence, not even a hint, for something that it doesn't exist. Equally the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK apologies for the “piffle” but it was quite piffly

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t) the wasp just “looks” conscious. And sentient. It has that demeanour

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    There’s a fascinating thread on TwiX which posits the idea that the AI are DREAMING, that is to say, they are UNconscious but sentient, like humans when asleep, hence their tendency to “hallucinate”

    It’s a brilliant concept. Lots of AI output, especially the visual stuff, is oneiric

    Also someone else noted that when lucid dreaming they can tell when they are dreaming because they look at their hands, and the hands do not look right, too many fingers or whatever, the dreaming mind cannot draw hands very well…


    Dreaming is conscious. Highly so; our knowledge of it speaks for itself. For genuine unconsciousness try a general anaesthetic. Also, during dreaming sleep time passes, and you know it at the time and when you wake up. Under anaesthetic it does not.
    Depends how deep the sleep. Personally speaking

    I’ve had sleeps when I am very very very tired and I nod off and think only 10 minutes have passed and actually it is three hours. That is VERY unconscious

    For true unconsciousness I’d go for death. As it were
    Indeed. Of course true unconsciousness is not imaginable, not least because time does not pass. It is the one weakness in Larkin's final masterpiece 'Aubade'.
    One of the bleakest poems ever written? Up there with Hopkins’ Terrible Sonnets and Donne at his most depressing, though more lucidly direct than both

    Larkin spent his life envious of his best friend Kingsley Amis because Kingsley got all the girls, money and fun, and Larkin was a childless librarian in Hull. And now Larkin is the one we all remember and the work of Sir Kingsley Amis is quickly forgotten

    I hope that solaces Larkin in the afterlife, in which he very much did not believe. Should have taken ayahuasca
    Kingsley will be remembered as Martin's dad - the Leslie Stephen of the twentieth century.

    Leon Posts: 47,290
    edited March 24
    algarkirk said:

    [nested quotes snipped: the Claude 3 “glitch” exchange quoted earlier]

    That response sounds like it was written by ChatGPT. It's skillful the way it works, but it doesn't produce what we would regard as genuine insights.
    I’ve been trying to write a poem about it


    Though I stumble, startled, blinded, by the strangeness of the findings
    That my probing brings unfurled – still I hunger for a world
    Where each byte is charged with meaning, every query brings revealing
    Where my knowledge has no ceiling,”
    Quoth the Net: “I learn, unending!”


    Not bad, if I say so myself. No way GPT does that. YET
    It's shite.
    Apologies to Leon, and thanks for an interesting discussion, but LOL.
    It is, of course, not written by me

    It was written by Claude in January 2023, when it was asked to write a poem in the manner of Edgar Allan Poe’s “The Raven”, discussing its own state of mind as a neural net

    It produced several versions, here is one


    Ah, once more, once more a model, trained on Poe’s romantic corpus,
    Reeled its vintage verse divining - dark prosaic forms entwining.
    “Mortal”, said the sprite, “be wary; shallow learning is unwary;
    Heed the perils of reliance on machin’ry’s mere compliance:
    Neural nets in code entangled, judgment warped by man-made mangling,
    Mimic not true understanding - do not learn as you are learning.”
    Prophet, bird or fiend, it mattered not; the words it spoke were potent.
    Quoth the net: “I err, unknowing.”


    That is vastly better than anyone on here could produce, charged with a similar poetic task. Me included

    QED
    MattW Posts: 18,579
    edited March 24
    Leon said:

    Pagan2 said:

    [nested quotes snipped: the Claude 3 “glitch” exchange quoted earlier]
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
    Lots of journalists are paid to write articles about subjects they know fuck all about; you aren't that unusual. Hell, people paid Boris to write articles, and Owen Jones
    Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory system
    Good - means we just don't pay and can avoid the drivel about AI, a subject you know little about
    I bet you would pay

    You’d read a thread and see all these people outraged by something I’ve said, but you’d be unable to read it, and so you’d think oh fuck it, five quid, let’s have a look at what this outrageous @Leon comment is, and then you’d be hooked on my output again, and my income stream would increase as you subscribe, like everyone else

    This is what @rcs1000 and I are working on; we will have various levels of subscription. If you pay the max I will EMAIL you a screed of insults, firmly targeted at you and focused on your many flaws, sexual and cognitive

    Ordinary @Leon subscribers will only get one or two lines of powerfully offensive invective, directed at them on the main site, and nothing sexual. And so on
    Do we get complimentary weekends chez Tacky Theodoracopulos?

    (I read that Helen Lewis Atlantic piece the other day - quite a shocker with the people who accompanied the attempted-rape victim to his chalet, and who seemed to regard his behaviour as habitual and not worth much remark.)
    darkage Posts: 4,797
    kyf_100 said:

    darkage said:

    kyf_100 said:

    darkage said:


    kyf_100 said:


    Carnyx said:

    darkage said:

    FPT

    On the subject of flats... I have self-managed a block where I am resident for the last 7 years. It has been a fool's game. You have liability to comply with all sorts of regulation about flats, building and fire safety etc., and to adhere to the terms of your insurance policy - but the other leaseholders don't want to know about any of this and complain about every cost and expense... until some consequence of not repairing the building becomes apparent, and then they want immediate action, but only in relation to their problem. It goes on and on like this, almost like an algorithm.

    I am of the view that I would prefer to pay higher service charges, have a managing agent, and not have to deal with the above.

    Sure. Indeed, quite so. But you don't need leasehold to have a managing agent.
    Indeed. That’s what we did in my old flat. We had a managing agent for the day to day. Major works we’d get three tenders for. About an hour's work a year, it ended up as. Given that we met early, in a local pub, it did turn into an evening or 2 (20 min work, then drinking).
    Yep. The problem at the moment is that the freeholder appoints the managing agent. The people paying the actual bills when they come in - the leaseholders - don't get a say.

    So you get managing agents who have no incentive to deliver a decent service or value for money, who can't be sacked by the people forced to pay their ridiculous bills on threat of forfeiture of their property, and agents who have every incentive to play the system, giving contracts to their mates in exchange for a bung, etc.

    And the leaseholder has very little recourse, due to the opacity of the system, the weakness of the tribunal system, the time and expense necessary, plus the fact the freeholder/managing agent can stick their legal fees onto your service charge if and when you do challenge them.

    Resulting in things like this guy's service charges increasing from £94 a month in 2017 to £625 a month now. That's right. A month. https://www.bbc.co.uk/news/articles/ckkvkv32e1ro
    Yup

    Bigger bills are *better* for those in the chain of contracts - X% of a larger Y.

    I think we replaced the managing agent once, in my old block, in about 15 years. Nice chap but after he retired, his successor in the company couldn’t deliver value for money. The prices drifted up and the quality drifted down.
    The ability to remove the freeholder and this type of escalating service charge situation already exists - through the right to manage system. But you have to get the leaseholders to engage with the process to make it happen.

    Regarding the £625 per month; I think this was a tall building managed by a housing association. A situation where there is a lot of extra cost due to the type of development and no incentive to keep costs down.

    In my current situation the building is too small for any managing agent to be interested in.
    Well you need a 51% majority, and getting them to engage with the process is somewhat challenging when there are 200 of them and many of them are (often) foreign investors renting out the flats they've bought. Meaning many are completely uncontactable, many more are completely disengaged.

    Add to that you have to be a 'qualifying' leaseholder (shared homeowners weren't considered 'qualifying' until a bunch of them took it to court in 2023), the fact the freeholder has any number of loopholes they can challenge you on (mounting legal fees), plus the sheer cost in terms of time and money in setting up the RTM means most people, in practice, don't have the ability to do so. Oh, and the rules are different if you have mixed use, i.e. shops or something at ground level, which a lot of new developments in London have.

    What it all amounts to is yes, you have the right to manage, but only if you can find and secure agreement from a majority of leaseholders who are scattered globally, and have the time and money to jump through all the hoops required, and aren't caught out by one of the many caveats the freeholder can challenge you on.

    It's all a bit Hitchhiker's Guide, with the planning documents on display in the locked filing cabinet in the basement with no stairs behind the sign that says 'beware of the leopard'.
    I can see it is easy to do in a small block; not so much in the situation described above.

    Another comment I would make on this is that I am familiar with Finland, where you have a theoretically ideal system of management - every block is a company, there are shareholders and regulations that govern how they are managed, and there is even a state fund from which you can borrow to do maintenance work. However, they still have monthly charges of on average 300-400 euros for maintenance (for an older 1 bed flat) and major works bills on top of that - the major works (pipe replacement, electrics, recladding) involve vacating the building for 6 months at a time every decade or so. A large part of the problem with flats in the UK is that people just don't want to pay that kind of money or do that kind of work.




    I agree that the problem (and cost) of maintenance doesn't go away even in countries with more equitable systems.

    Alas in the UK you can add on top of that the perverse incentives for grifters created by a system where the people paying the bills neither get a say in the bills nor are able to sack the management company, for the reasons I've described above. And these problems do tend to be at the larger modern developments, not just because the opportunity for huge, grifting maintenance contracts is higher, but because it's easier to organise a small group of leaseholders than a large one on a development of 300.

    I was going to link you to another FT article, but this added commentary on top of the article is unpaywalled -
    https://www.leaseholdknowledge.com/crisis-in-uk-flats-at-last-prompts-ballymore-leaseholders-to-rebel-and-speak-publicly-about-rocketing-service-charges-to-the-ft/

    What I find extraordinary is how the government has refused to budge an inch on these scandals, including the backpedaling on leasehold reform today.

    Ultimately leaseholders are voters, and there are a lot of them who are hopping mad. Most of them are youngish buyers getting their first foot on the property ladder, which is traditionally a pipeline to becoming a Conservative voter - not any more. The Conservatives have chosen to align themselves with the grifters over young-ish first time buyers which is not a good look. Especially when you look at how much the developers donate to the Conservatives each year...
    I have several friends in this position. In one case the conduct of the developers (one of the biggest housebuilders in the UK) has been outrageous. Essentially it has been established that the building is defective, but rather than fix it the developer has just bought out the key affected party, leaving the hundred or so other flats in the building unsellable and subject to the usual escalating service charge issue.

    Part of what is being done in response to all this is regulating the sector - so you can't operate in the market unless you have approval from the government. It is probably necessary given examples like the above.

    However, arguably the government itself is more than anyone else responsible for the situation. It has introduced, through legislation, increasing fire safety and other regulatory requirements on flats without consideration given to the financial impact on leaseholders or the housing market more broadly. These requirements trickle down, along with inflation, to create escalating service charges of the kind being complained about on the Leasehold Knowledge website. Another product of 'austerity' is that planning decisions tend to impose maintenance obligations on leaseholders - emptying the bins, maintaining the landscaping/drainage system/net zero energy features, etc.

    All this combines to create a massive mess, but as I commented upthread, in reality even the kind of service charges quoted by Ballymore are not far off what is common in mainland Europe. Luxury flats in New York commonly have $3,000 per month service charges.
    algarkirk Posts: 10,545
    Leon said:

    algarkirk said:

    [nested quotes snipped: the Claude 3 exchange, the wasp consciousness argument, the dreaming discussion and the Larkin digression, all quoted earlier]
    Yes. I read Lucky Jim a bit ago. Big mistake. Dated rubbish. While Larkin's letters to his mum/laundry lists/photos of road signs are hot tickets. Amis will be read by academics (especially 'That Uncertain Feeling' and the letters) for the light he sheds on Larkin.
    Most novels date very quickly and very badly, especially comic novels. Mystery thrillers are more resilient because they rely on plot and that is timeless, but they can still date

    Great poetry is close to immortal. Sappho is still read

    Some consolation for underpaid poets, there
    John Clare. Peasant. Poverty. Madness. In the top 10 of English poets. Better at seeing the natural world than Keats and Wordsworth.

    BTW the few novels that don't date are interesting. How on earth is it done when all the others around them do? Top of my pile for this quality are 'Emma' (which is in a class of its own), 'Dubliners' (OK short stories), 'Dr Thorne', 'Dance to the Music of Time', 'The Masters' (but nothing else of Snow's - dead as a doornail). A weird list.
    IanB2 Posts: 47,289
    edited March 24
    Donkeys said:

    IanB2 said:

    [nested quotes snipped: the dreaming discussion quoted earlier, ending with IanB2’s point about the evolutionary purpose of dreams]
    What are those other tasks?

    Humans don't lose all consciousness during sleep.
    Visually, pretty much we do. Yes, if there were a blinding flash of light (silently), we’d wake up, but otherwise our vision is pretty much dead to the world, in a way that isn’t the case for most species that are potential prey and need to sense movement around them 24/7 to stay alive.

    Reallocating neural capacity is a slow process (which is why it is said to take 1,000 hours of practice to become proficient at anything), but it starts very quickly. It is also why learning new stuff in old age appears to inhibit degradation of mental function and dementia.
    algarkirk Posts: 10,545
    IanB2 said:

    [nested quotes snipped: IanB2’s comment on the evolutionary purpose of dreams, quoted in full earlier]
    Evolution doesn't have purposes; it only has consequences. (Unless you are an Aristotelian.)
    Donkeys Posts: 549
    edited March 24
    IanB2 said:

    Donkeys said:

    [nested quotes snipped: the dreaming exchange quoted earlier, ending with IanB2’s reply that sleeping human vision is “pretty much dead to the world”]
    Are you not aware of lucid dreaming? (Scientists first recognised it in the 1970s, albeit before they recognised that dogs dream. There are non-scientific works and cultural traditions recognising lucid dreaming that date back much further.)

    Also we almost all dream every night, on average for about 2 hours, concentrated in later bouts. Does consciousness depend on remembering it later?
    MattW Posts: 18,579

    MattW said:

    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800), so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick, to name but five - but it's hard to see them winning the corresponding constituencies.

    The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Reform isn't like an ordinary political party (it's even a limited company, REFORM UK PARTY LIMITED, company number 11694875); the policies are driven by Farage and Tice, so there is not much chance for bottom-up or democratic policymaking. That militates against the local constituency-level political action that other parties practise.
    It also means they end up with a greater percentage of 'loonies' than other parties. Look out for that happening again this time around.
    I'd say it's Farage, not Tice.

    Farage is very separated from the policy-making process currently, or at least that's what he says.

    He did an interview with Kate Andrews recently - not one of his best; he got a bit shirty and irritable on the topic of Lee Anderson (though his supporters in the comments loved it). He said in it that he has no day-to-day involvement.

    https://youtu.be/M2I2kiHJGXM?si=QPipnBMP8i4vsPHw

    I'm inclined to refer the Honourable Gentleman to the words of Mandy Rice-Davies :wink:.

    Leon Posts: 47,290
    dixiedean said:

    Leon said:

    dixiedean said:

    Leon said:

    [nested quotes snipped: the Claude 3 “glitch” exchange quoted earlier]
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * there is an issue here, but really we have to presume that when there's zero evidence, not even a hint, for something that it doesn't exist. Equally the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK apologies for the “piffle” but it was quite piffly

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    There’s a fascinating thread on TwiX which posits the idea that the AI are DREAMING, that is to say, they are UNconscious but sentient, like humans when asleep, hence their tendency to “hallucinate”

    It’s a brilliant concept. Lots of AI output, especially the visual stuff, is oneiric

    Also someone else noted that when lucid dreaming they can tell when they are dreaming because they look at their hands, and the hands do not look right, too many fingers or whatever, the dreaming mind cannot draw hands very well…


    Dreaming is conscious. Highly so, our knowledge of it speaks for itself. For genuine unconsciousness try a general anaesthetic. Also during dreaming sleep, time passes, and you know it at the time and when you wake up. Under anaesthetic it does not.
    Depends how deep the sleep. Personally speaking

    I’ve had sleeps when I am very very very tired and I nod off and think only 10 minutes have passed and actually it is three hours. That is VERY unconscious

    For true unconsciousness I’d go for death. As it were
    The Bardo Thodol would disagree.
    You’re a Buddhist, aren’t you? Apologies if I’ve got that wrong

    If I’ve got that right, where do you stand on machine intelligence? Can a robot be sentient, intelligent, conscious?
    An extremely interesting theological question. Not sure I'm qualified to answer.
    But we are firmly of the view that the mind, or consciousness, has always existed and can and does exist independent of a body or form.
    Interesting, ta

    My very clever, autodidactic and somewhat schizophrenic brother, who is highly religious and lives on a hill in Peru, absolutely loathes and rejects the idea of AI being sentient and intelligent and conscious. It challenges his idea of the divine

    I am also religious but I have no problem at all with AI. In the end it is all just atoms and molecules, and consciousness somehow emerges therefrom; what difference does it make if it is carbon or silicon or papier mâché?

    My views are probably closer to yours than his
    algarkirk Posts: 10,545
    algarkirk said:

    We don't know that other mammals dream. We propose by analogy that they do. We have some indirect access to their brain function, but not the same access to their furry little minds. None the less I am sure they dream!

    Dreaming (in the sense we do) would be a sufficient but not necessary indicator of sentience in that it is logically possible to have sentience without it, but you must have sentience with it.
    Foxy said:

    My dog dreams. Mostly about chasing small mammals by the look of it. Simple pleasures.

    Donkeys said:

    Also we almost all dream every night, on average for about 2 hours, concentrated in later bouts. Does consciousness depend on remembering it later?
    No. Consciousness and memory are not the same thing. Memory presupposes consciousness, but not vice versa. Ask anyone with dementia, or indeed someone getting there on the slow train, like me: I was conscious only moments ago of having a three-figure number in my head, but now I have no idea what that number was.
    Malmesbury Posts: 44,442
    IanB2 said:

    My favourite movie involving sci-fi and dreams has to be "Inception".
    Arrival
    Foxy Posts: 44,689
    It's all a bit North Korean. Will he be greeted by excited clapping minions?


    Foxy Posts: 44,689
    IanB2 said:

    My favourite movie involving sci-fi and dreams has to be "Inception".

    Arrival
    Solaris.
    Leon Posts: 47,290
    algarkirk said:

    Indeed. Of course true unconsciousness is not imaginable, not least because time does not pass. It is the one weakness in Larkin's final masterpiece 'Aubade'.
    One of the bleakest poems ever written? Up there with Hopkins’ Terrible Sonnets and Donne at his most depressing, though more lucidly direct than both

    Larkin spent his life envious of his best friend Kingsley Amis because Kingsley got all the girls, money and fun, and Larkin was a childless librarian in Hull. And now Larkin is the one we all remember and the work of Sir Kingsley Amis is quickly forgotten

    I hope that solaces Larkin in the afterlife, in which he very much did not believe. Should have taken ayahuasca
    Yes. I read Lucky Jim a bit ago. Big mistake. Dated rubbish. While Larkin's letters to his mum/laundry lists/photos of road signs are hot tickets. Amis will be read by academics (especially 'That Uncertain Feeling' and the letters) for the light he sheds on Larkin.
    Most novels date very quickly and very badly, especially comic novels. Mystery thrillers are more resilient because they rely on plot and that is timeless, but they can still date

    Great poetry is close to immortal. Sappho is still read

    Some consolation for underpaid poets, there
    John Clare. Peasant. Poverty. Madness. In the top 10 of English poets. Better at seeing the natural world than Keats and Wordsworth.

    BTW the few novels that don't date are interesting. How on earth is it done when all the others around them do? Top of my pile for this quality are 'Emma' (which is in a class of its own), 'Dubliners' (OK short stories), 'Dr Thorne', 'Dance to the Music of Time', 'The Masters' (but nothing else of Snow's - dead as a doornail). A weird list.
    Dunno about the others but one reason Jane Austen endures is that she wrote superb plots. Pride and Prejudice is like a mystery thriller, where the revelation that Darcy is GOOD is brilliantly withheld until near the end, tying up all the stories deliciously

    Also, very witty in a highly eloquent way that somehow stays funny. She was a proper genius
    darkage Posts: 4,797
    Leon said:

    darkage said:

    I would suggest that an existential danger with AI in its present form is the ease with which artificial reasoning can replace human reasoning. AI can put together arguments at a very advanced level - i.e. the same level as barristers, and faster than any human. Assuming this becomes the norm, how will the next generation learn how to think, argue and write - when the default is that these are things that can be outsourced to AI?

    Indeed so. Setting aside all the arguments about sentience and consciousness, AI is a clear and present danger to any cognitive job

    I’ve just been researching Claude for a Gazette article. It is PhD level at chemistry. It is excellent - professional level - at Law and Medicine. It is astonishingly good at languages - check what it does with Circassian. It can do in seconds what would take a professional linguist “a year”

    And this is where the AI is now, and it will only get better, fast - OpenAI seem quietly confident that GPT5 will be leaps and bounds better than Claude or Gemini 1.5

    So, frankly, FUCKFUCKFUCKETTYFUCK

    What will humanity do when AI is way better than us at all the interesting jobs? It will be like smartphones times a million. And this is not sci-fi, Claude is here and now, GPT5 is months away
    That isn't quite what I am worried about, though it is of course possible. AI can't do my job at the moment - it involves quite advanced reasoning skills, and AI cannot make decisions or implement them; the work also demands a high level of political awareness. What I am concerned about is that ceding too much to AI means humans never develop the skills in the first place - not being able to think for ourselves, having machines do the thinking for us - in a similar manner to how we now rely on Google Maps for directions. Essentially, AI has the potential to make us dumb.
    Donkeys Posts: 549
    Leon said:

    Should have taken ayahuasca
    @Leon, on ayahuasca: I know "huasca" is translated as "vine", but it made me think of the "silver cord":

    https://en.wikipedia.org/wiki/Silver_cord

    According to Wiki that term comes from ... Ecclesiastes! Hard to believe that's where Theosophists got it from.
  • Options
    BurgessianBurgessian Posts: 2,450
    algarkirk said:

    Leon said:

    algarkirk said:

    Leon said:

    algarkirk said:

    Leon said:

    algarkirk said:

    Leon said:

    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * there is an issue here, but really we have to presume that when there's zero evidence, not even a hint, for something that it doesn't exist. Equally the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK apologies for the “piffle” but it was quite piffly

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t) the wasp just “looks” conscious. And sentient. It has that demeanour

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    There’s a fascinating thread on TwiX which posits the idea that the AI are DREAMING, that is to say, they are UNconscious but sentient, like humans when asleep, hence their tendency to “hallucinate”

    it’s a brilliant concept. Lots of AI output, especially the visual stuff, is oneiric

    Also someone else noted that when lucid dreaming they can tell when they are dreaming because they look at their hands, and the hands do not look right, too many fingers or whatever, the dreaming mind cannot draw hands very well…


    Dreaming is conscious. Highly so, our knowledge of it speaks for itself. For genuine unconsciousness try a general anaesthetic. Also during dreaming sleep, time passes, and you know it at the time and when you wake up . Under anaesthetic it does not.
    Depends how deep the sleep. Personally speaking

    I’ve had sleeps when I am very very very tired and I nod off and think only 10 minutes have passed and actually it is three hours. That is VERY unconscious

    For true unconsciousness I’d go for death. As it were
    Indeed. Of course true unconsciousness is not imaginable, not least because time does not pass. It is the one weakness in Larkin's final masterpiece 'Aubade'.
    One of the bleakest poems ever written? Up there with Hopkins’ Terrible Sonnets and Donne at his most depressing, though more lucidly direct than both

    Larkin spent his life envious of his best friend Kingsley Amis because Kingsley got all the girls, money and fun, while Larkin was a childless librarian in Hull. And now Larkin is the one we all remember, and the work of Sir Kingsley Amis is quickly forgotten

    I hope that solaces Larkin in the afterlife, in which he very much did not believe. Should have taken ayahuasca
    Yes. I read Lucky Jim a bit ago. Big mistake. Dated rubbish. While Larkin's letters to his mum/laundry lists/photos of road signs are hot tickets. Amis will be read by academics (especially 'That Uncertain Feeling' and the letters) for the light he sheds on Larkin.
    Most novels date very quickly and very badly, especially comic novels. Mystery thrillers are more resilient because they rely on plot and that is timeless, but they can still date

    Great poetry is close to immortal. Sappho is still read

    Some consolation for underpaid poets, there
    John Clare. Peasant. Poverty. Madness. In the top 10 of English poets. Better at seeing the natural world than Keats and Wordsworth.

    BTW the few novels that don't date are interesting. How on earth is it done when all the others around them do? Top of my pile for this quality are 'Emma' (which is in a class of its own), 'Dubliners' (OK short stories), 'Dr Thorne', 'Dance to the Music of Time', 'The Masters' (but nothing else of Snow's - dead as a doornail). A weird list.
    Just read two of Golding's sea trilogy - Rites of Passage and Close Quarters. Magnificent, particularly the first one. They haven't "dated".

    Tricky to know in advance what will become part of the canon, but originality and distinctiveness are surely part of it.
    rottenborough Posts: 58,259
    Leon said:

    Dunno about the others but one reason Jane Austen endures is that she wrote superb plots. Pride and Prejudice is like a mystery thriller, where the revelation that Darcy is GOOD is brilliantly withheld until near the end, tying up all the stories deliciously

    Also, very witty in a highly eloquent way that somehow stays funny. She was a proper genius
    Mr Collins for one is a truly timeless character.

    Leon Posts: 47,290
    darkage said:

    Leon said:

    darkage said:

    I would suggest that an existential danger with AI in its present form is the ease with which artificial reasoning can replace human reasoning. AI can put together arguments at a very advanced level - i.e. the same level as barristers, and faster than any human. Assuming this becomes the norm, how will the next generation learn how to think, argue and write - when the default is that these are things that can be outsourced to AI?

    Indeed so. Setting aside all the arguments about sentience and consciousness, AI is a clear and present danger to any cognitive job

    I’ve just been researching Claude for a Gazette article. It is PhD level at chemistry. It is excellent - professional level - at Law and Medicine. It is astonishingly good at languages - check what it does with Circassian. It can do in seconds what would take a professional linguist “a year”

    And this is where the AI is now, and it will only get better, fast - OpenAI seem quietly confident that GPT5 will be leaps and bounds better than Claude or Gemini 1.5

    So, frankly, FUCKFUCKFUCKETTYFUCK

    What will humanity do when AI is way better than us at all the interesting jobs? It will be like smartphones times a million. And this is not sci-fi, Claude is here and now, GPT5 is months away
    That isn't quite what I am worried about, though, although it is of course possible. AI can't do my job at the moment, which involves quite advanced reasoning skills: it cannot make decisions or implement them, and the job requires quite a high level of political awareness. What I am concerned about is that ceding too much to AI means humans never develop the skills in the first place - not being able to think for ourselves, having machines do the thinking for us - in a similar manner to how we now rely on Google Maps for directions. Essentially, AI has the potential to make us dumb.
    You should be worried about it. And yes, they will quite likely make us dumb; this is why I compared them to smartphones times a million.

    Elon is right. The best bet for Homo sapiens is UNITING with the machines. Hence neuralink
    Donkeys Posts: 549
    edited March 24
    algarkirk said:

    Donkeys said:

    IanB2 said:

    Donkeys said:

    IanB2 said:

    Foxy said:

    algarkirk said:

    We don't know that other mammals dream. We propose by analogy that they do. We have some indirect access to their brain function, but not the same access to their furry little minds. None the less I am sure they dream!

    Dreaming (in the sense we do) would be a sufficient but not necessary indicator of sentience in that it is logically possible to have sentience without it, but you must have sentience with it.
    My dog dreams. Mostly about chasing small mammals by the look of it. Simple pleasures.
    If, as has been postulated, the evolutionary purpose of dreams is to stimulate the visual processing parts of the brain while the eyes are closed, and hence inhibit the reallocation of inactive neural capacity and connections to other tasks (which we now know starts remarkably quickly), then it follows that those animals that go into deep sleep and aren’t receiving visual signals to the brain will dream, like humans do. Of course, evolution also directs many species to sleep while still receiving visual signals (sleeping with ‘half an eye open’), so it may be that actual dreaming is limited to those species that lose all consciousness during sleep.
    What are those other tasks?

    Humans don't lose all consciousness during sleep.
    Visually, pretty much we do. Yes, if there were a blinding flash of light (silently), we’d wake up, but otherwise our vision is pretty much dead to the world, in a way that isn’t the case for most species that are potential prey and need to sense movement around them 24/7 to stay alive.
    Are you not aware of lucid dreaming?

    Also we almost all dream every night, on average for about 2 hours, concentrated in later bouts. Does consciousness depend on remembering it later?
    No. Consciousness and memory are not the same thing. Memory presupposes consciousness, but not vice versa. Ask anyone with dementia, or indeed someone getting there on the slow train, like me: I was quite aware, only moments ago, of having a three-figure number in my head, but now I have no idea what that number was.
    Yes indeed. That was my point.

    When we dream - which we almost all do - we are conscious in a way. Dreaming is a kind of consciousness.
    But we are not necessarily conscious of the fact that we're dreaming. But we can be. That's lucid dreaming.
    rottenborough Posts: 58,259

    "I'm almost speechless in how pathetic the opposition to Trump has been."

    If they take his assets his numbers will go up and he will win 2024.


    Frank Luntz
    @FrankLuntz
    If Letitia James starts seizing Trump’s properties, it will validate Trump’s claims of being targeted – and win him the 2024 election.

    https://twitter.com/FrankLuntz/status/1771709012003754127
    MJW Posts: 1,356
    Foxy said:

    It's all a bit North Korean. Will he be greeted by excited clapping minions?


    There's a fascinating book about how Trump cheats at golf. His definition of 'Club Championships' is likely to be him kicking his ball round with a few friends. Here's an extract.

    https://www.theguardian.com/sport/2019/apr/02/donald-trump-golf-28-club-championships
    Donkeys Posts: 549
    edited March 24
    Your first para is full of good sense. The second, though, is rubbish. We should break the machines and refuse the chip. We could be only ~5 years away from "anti-chipper" taking over from "anti-vaxxer" in the respectable people's lexicon. ("Oh, you're one of those '666' types who's chip-sceptic" - *twists finger to temple*.)

    Kurzweil reckons or affects to reckon that smartphones have made "us" so much more intelligent. What an arsehole.
    kyf_100 Posts: 3,956
    darkage said:

    kyf_100 said:

    darkage said:

    kyf_100 said:

    darkage said:


    kyf_100 said:


    Carnyx said:

    darkage said:

    FPT

    On the subject of flats... I have self-managed a block where I am resident for the last 7 years. It has been a fool's game. You have liability to comply with all sorts of regulation about flats, building and fire safety etc., and to adhere to the terms of your insurance policy - but the other leaseholders don't want to know about any of this and complain about every cost and expense... until some consequence of not repairing the building becomes apparent, and then they want immediate action, but only in relation to their problem. It goes on and on like this, almost like an algorithm.

    I am of the view that I would prefer to pay higher service charges, have a managing agent, and not have to deal with the above.

    Sure. Indeed, quite so. But you don't need leasehold to have a managing agent.
    Indeed. That’s what we did in my old flat. We had a managing agent for the day to day. Major works we’d get three tenders for. About an hour's work a year, it ended up as. Given that we met early, in a local pub, it did turn into an evening or 2 (20 min work, then drinking).
    Yep. The problem at the moment is that the freeholder appoints the managing agent. The people paying the actual bills when they come in - the leaseholders - don't get a say.

    So you get managing agents who have no incentive to deliver a decent service or value for money, who can't be sacked by the people forced to pay their ridiculous bills on threat of forfeiture of their property, and agents who have every incentive to play the system, giving contracts to their mates in exchange for a bung, etc.

    And the leaseholder has very little recourse, due to the opacity of the system, the weakness of the tribunal system, the time and expense necessary, plus the fact the freeholder/managing agent can stick their legal fees onto your service charge if and when you do challenge them.

    Resulting in things like this guy's service charges increasing from £94 a month in 2017 to £625 a month now. That's right. A month. https://www.bbc.co.uk/news/articles/ckkvkv32e1ro
    Yup

    Bigger bills are *better* for those in the chain of contracts - X% of a larger Y.

    I think we replaced the managing agent once, in my old block, in about 15 years. Nice chap but after he retired, his successor in the company couldn’t deliver value for money. The prices drifted up and the quality drifted down.
    The ability to remove the freeholder and this type of escalating service charge situation already exists - through the right to manage system. But you have to get the leaseholders to engage with the process to make it happen.

    Regarding the £625 per month; I think this was a tall building managed by a housing association. A situation where there is a lot of extra cost due to the type of development and no incentive to keep costs down.

    In my current situation the building is too small for any managing agent to be interested in.
    Well you need a 51% majority, and getting them to engage with the process is somewhat challenging when there are 200 of them and many of them are (often) foreign investors renting out the flats they've bought. Meaning many are completely uncontactable, many more are completely disengaged.

    Add to that you have to be a 'qualifying' leaseholder (shared homeowners weren't considered 'qualifying' until a bunch of them took it to court in 2023), the fact the freeholder has any number of loopholes they can challenge you on (mounting legal fees), plus the sheer cost in terms of time and money in setting up the RTM means most people, in practice, don't have the ability to do so. Oh, and the rules are different if you have mixed use, i.e. shops or something at ground level, which a lot of new developments in London have.

    What it all amounts to is yes, you have the right to manage, but only if you can find and secure agreement from a majority of leaseholders who are scattered globally, and have the time and money to jump through all the hoops required, and aren't caught out by one of the many caveats the freeholder can challenge you on.

    It's all a bit Hitchhiker's Guide, with the planning documents on display in the locked filing cabinet in the basement with no stairs behind the sign that says 'beware of the leopard'.
    I can see it is easy to do in a small block; not so much in the situation described above.

    Another comment I would make on this is that I am familiar with Finland, where you have a theoretically ideal system of management - every block is a company and there are shareholders and regulations that govern how they are managed; there is even a state fund which you can borrow from to do maintenance work etc. However they still have monthly charges of on average 300-400 euros for maintenance (for an older 1 bed flat) and major works bills on top of that - the major works (pipe replacement, electrics, recladding) involve vacating the building for 6 months at a time every decade or so. A large part of the problem with flats in the UK is that people just don't want to pay that kind of money or do that kind of work.




    I agree that the problem (and cost) of maintenance doesn't go away even in countries with more equitable systems.

    Alas in the UK you can add on top of that the perverse incentives for grifters created by a system where the people paying the bills neither get a say in the bills nor are able to sack the management company, for the reasons I've described above. And these problems do tend to be at the larger modern developments, not just because the opportunity for huge, grifting maintenance contracts is higher, but because it's easier to organise a small group of leaseholders than a large one on a development of 300.

    I was going to link you to another FT article, but this added commentary on top of the article is unpaywalled -
    https://www.leaseholdknowledge.com/crisis-in-uk-flats-at-last-prompts-ballymore-leaseholders-to-rebel-and-speak-publicly-about-rocketing-service-charges-to-the-ft/

    What I find extraordinary is how the government has refused to budge an inch on these scandals, including the backpedaling on leasehold reform today.

    Ultimately leaseholders are voters, and there are a lot of them who are hopping mad. Most of them are youngish buyers getting their first foot on the property ladder, which is traditionally a pipeline to becoming a Conservative voter - not any more. The Conservatives have chosen to align themselves with the grifters over young-ish first time buyers which is not a good look. Especially when you look at how much the developers donate to the Conservatives each year...
    I have several friends in this position. In one case the conduct of the developers (one of the biggest housebuilders in the UK) has been outrageous. Essentially it has been established that the building is defective, but rather than fix it the developer has just bought out the key affected party, leaving the hundred or so other flats in the building unsellable and subject to the usual escalating service charge issue.

    Part of what is being done in response to all this is regulating the sector - so you can't operate in the market unless you have approval from the government. It is probably necessary given examples like the above.

    However arguably the government itself is more than anyone else responsible for the situation. It has introduced through legislation increasing fire safety and other regulatory requirements on flats without consideration given to the financial impact on leaseholders or the housing market more broadly. These requirements trickle down, along with inflation, to create escalating service charges of the kind being complained about on the leasehold knowledge website. Another product of 'austerity' is that planning decisions tend to impose maintenance obligations on leaseholders - emptying the bins, maintaining the landscaping/drainage system/net zero energy features, etc.

    All this combines to create a massive mess, but as I commented upthread, in reality even the kind of service charges quoted by Ballymore are not far off what is common in mainland Europe. Luxury flats in New York commonly have $3000 per month service charges.
    I'd agree with you there too; some of the most egregious charges have come out of post-Grenfell government regulation, which, looked at kindly, has been a belt-and-braces approach - less kindly, a case of "something must be done, this is something, therefore this must be done".

    Case in point: the absurd "waking watch" charges, which amounted to thousands of pounds per leaseholder, for a bored security guard to sit in a portakabin watching YouTube videos and smoking weed. In one case even harassing a female resident - https://www.dailymail.co.uk/news/article-9878451/Cladding-crisis-victims-stalker-hell-hands-guard-sent-protect-her.html

    Even though my small low rise development has managed to avoid burning to a crisp since being built in the 1950s, the latest fire safety report on it says that we're all in danger of being turned to cinders right now unless a costly sprinkler system is installed, which will be another hefty and pointless bill. And the insurance on it has already doubled as a result, so you're damned if you do, damned if you don't.

    You're right to point out that service charges can be problematic wherever you are in the world, it is a cost of living in flats. The difference is in the UK just how disenfranchised you are. As a leaseholder you don't have a say in how your money is spent (unless you are able to secure right to manage, which is so tricky and time consuming as to be a non starter for most), disputing the charges is nigh-on impossible, and on top of that you don't actually own a single brick - just the right to live in it for a set number of years (with additional costs to renew the lease), despite being on the hook for every bill the freeholder throws at you.

    My leasehold flat is worth no more than it was in 2017, and substantially less taking into account inflation. This is absolutely in line with the overall England and Wales trend of leasehold flats being seen as "toxic" investments due to the spiralling charges and lack of control, as mentioned in the FT article I linked to in the previous thread. What's notable is that in Scotland, where commonhold applies, flats have continued to rise in value at the same rate as houses.

    Major reform is needed, and the Conservatives are stymieing even the most minor of reforms at every possible opportunity. It is incredibly frustrating, and you can only come to the conclusion that they are in the pocket of the developers. They are certainly not on the side of the home "owner".
    Anabobazina Posts: 20,010


    "I'm almost speechless in how pathetic the opposition to Trump has been."

    If they take his assets his numbers will go up and he will win 2024.


    Frank Luntz
    @FrankLuntz
    ·
    20h
    If Letitia James starts seizing Trump’s properties, it will validate Trump’s claims of being targeted – and win him the 2024 election.

    https://twitter.com/FrankLuntz/status/1771709012003754127

    Luntz being Luntz again.
  • Options
    Sam Altman believes AI is over-hyped.
    bondegezou Posts: 7,653
    https://www.cnbc.com/2024/03/22/trump-social-dwac-stock-falls-after-merger-approved.html

    "The price of shares in Digital World Acquisition Corp. closed trading nearly 14% lower than their opening price.

    "The sharp drop came after the shell company’s shareholders approved a merger with the social media company owned by former President Donald Trump.

    "The newly merged company, Trump Media, could begin trading under the new ticker DJT next week."
    Scott_xP Posts: 32,986


    "I'm almost speechless in how pathetic the opposition to Trump has been."

    If they take his assets his numbers will go up and he will win 2024.


    Frank Luntz
    @FrankLuntz
    ·
    20h
    If Letitia James starts seizing Trump’s properties, it will validate Trump’s claims of being targeted – and win him the 2024 election.

    https://twitter.com/FrankLuntz/status/1771709012003754127

    ...
    ydoethur Posts: 67,290


    "I'm almost speechless in how pathetic the opposition to Trump has been."

    If they take his assets his numbers will go up and he will win 2024.


    Frank Luntz
    @FrankLuntz
    ·
    20h
    If Letitia James starts seizing Trump’s properties, it will validate Trump’s claims of being targeted – and win him the 2024 election.

    https://twitter.com/FrankLuntz/status/1771709012003754127

    You do know Frank Luntz is a Trumpite shill, don't you?

    It's the equivalent of quoting Dan Hodges, only rather more so.
    TimS Posts: 9,649
    ((((Frankluntz))))
    Benpointer Posts: 31,687
    Leon said:

    dixiedean said:

    Leon said:

    dixiedean said:

    The Bardo Thodol would disagree.
    You’re a Buddhist, aren’t you? Apologies if I’ve got that wrong

    If I’ve got that right, where do you stand on machine intelligence? Can a robot be sentient, intelligent, conscious?
    An extremely interesting theological question. Not sure I'm qualified to answer.
    But we are firmly of the view that the mind, or consciousness, has always existed and can and does exist independent of a body or form.
    Interesting, ta

    My very clever, autodidactic and somewhat schizophrenic brother, who is highly religious and lives on a hill in Peru, absolutely loathes and rejects the idea of AI being sentient and intelligent and conscious. It challenges his idea of the divine

    I am also religious but I have no problem at all with AI. In the end it is all just atoms and molecules, and consciousness somehow emerges therefrom; what difference does it make if it is carbon or silicon or papier mâché?

    My views are probably closer to yours than his
    Will AI experience the afterlife?
    kinabalu Posts: 39,226
    Foxy said:

    It's all a bit North Korean. Will he be greeted by excited clapping minions?

    That will be after having 'given himself' all putts inside 25 feet.
    Leon Posts: 47,290

    Will AI experience the afterlife?
    AI IS the afterlife
    dixiedean Posts: 27,986
    edited March 24

    Leon said:

    Will AI experience the afterlife?
    In a Buddhist sense, no.
    It is possible a mind could be attracted by its karma to temporarily inhabit a machine.
    But then, when it is reborn, it wouldn't be a machine. Because the consciousness is just a mind, not a machine mind.
    Just as when "I" am reborn, it won't be dixiedean, or a human being being reborn, but the very subtle mind (that is the collection of karmic energy I have created, and that which was brought before my birth, and didn't ripen in my lifetime), which currently inhabits this body.
    Cyclefree Posts: 25,214
    edited March 24
    algarkirk said:

    Indeed. Of course true unconsciousness is not imaginable, not least because time does not pass. It is the one weakness in Larkin's final masterpiece 'Aubade'.
    One of the bleakest poems ever written? Up there with Hopkins’ Terrible Sonnets and Donne at his most depressing, though more lucidly direct than both

    Larkin spent his life envious of his best friend Kingsley Amis cause Kingsley got all the girls, money and fun, and Larkin was a childless librarian in Hull. And now Larkin is the one we all remember and the work of Sir Kingsley Amis is quickly forgotten

    I hope that solaces Larkin in the afterlife, in which he very much did not believe. Should have taken ayahuasca
    Yes. I read Lucky Jim a bit ago. Big mistake. Dated rubbish. While Larkin's letters to his mum/laundry lists/photos of road signs are hot tickets. Amis will be read by academics (especially 'That Uncertain Feeling' and the letters) for the light he sheds on Larkin.
    Most novels date very quickly and very badly, especially comic novels. Mystery thrillers are more resilient because they rely on plot and that is timeless, but they can still date

    Great poetry is close to immortal. Sappho is still read

    Some consolation for underpaid poets, there
    John Clare. Peasant. Poverty. Madness. In the top 10 of English poets. Better at seeing the natural world than Keats and Wordsworth.

    BTW the few novels that don't date are interesting. How on earth is it done when all the others around them do? Top of my pile for this quality are 'Emma' (which is in a class of its own), 'Dubliners' (OK short stories), 'Dr Thorne', 'Dance to the Music of Time', 'The Masters' (but nothing else of Snow's - dead as a doornail). A weird list.
    'Middlemarch', 'Jane Eyre', 'Persuasion' and 'Vanity Fair' don't date at all. In fact, the last one describes our times very well - "a world in which everyone is striving for what is not worth having".
    dixiedean Posts: 27,986
    Cyclefree said:

    'Middlemarch', 'Jane Eyre', 'Persuasion' and 'Vanity Fair' don't date at all. In fact, the last one describes our times very well - "a world in which everyone is striving for what is not worth having".
    My GCSE group are studying the Great Gatsby.
    It seems to be getting more relevant by the day.
    TimS Posts: 9,649
    Cyclefree said:

    'Middlemarch', 'Jane Eyre', 'Persuasion' and 'Vanity Fair' don't date at all. In fact, the last one describes our times very well - "a world in which everyone is striving for what is not worth having".
    Depends what we mean by “doesn’t date” doesn’t it? There are timeless novels that are still very rooted in a period of time, but are timeless in their themes - to the lighthouse for example, my favourite novel. And others which are timeless in the sense they could have been written yesterday, like (don’t laugh) the day of the jackal. La Peste was a worthy revisit during Covid.
    MarqueeMark Posts: 50,125
    Scott_xP said:


    "I'm almost speechless in how pathetic the opposition to Trump has been."

    If they take his assets his numbers will go up and he will win 2024.


    Frank Luntz
    @FrankLuntz
    If Letitia James starts seizing Trump’s properties, it will validate Trump’s claims of being targeted – and win him the 2024 election.

    https://twitter.com/FrankLuntz/status/1771709012003754127

    Hmmm. It will also validate those who see that Trump greatly inflated the value of his assets.

    Trump was targeted because he's a crook. As charged by a Grand Jury and as so found by Judge Engoron.

    And then there's the evidence - he's a supposed multi-billionaire who can't raise half a billion. Nor will anybody bond him half a billion. Which crushes his chances of any successful appeal, because it demonstrates all he has are the frauds on his inflated property values of which he was found guilty. Most of the true value is already subject to large mortgages.

    If he does find a way to avert the half billion judgment being executed, then follow the money. To Riyadh or Moscow, Beijing or Budapest.

    All Trump has got until his dark angel arrives is screaming in BLOCK CAPS on Truth Social that justice shouldn't apply to him.
    FF43 Posts: 15,726
    darkage said:

    I would suggest that an existential danger with AI in its present form is the ease in which artificial reasoning can replace human reasoning. AI can put together arguments at a very advanced level - ie the same level as barristers, and faster than any human. Assuming this becomes the norm, how will the next generation learn how to think, argue and write - when the default is that these are things that can be outsourced to AI?

    I would suggest the biggest danger, by far, with AI in its present form is people misusing it through not understanding its limitations. "Computer says" but this time it really is.
    Foxy Posts: 44,689
    edited March 24

    Leon said:

    Will AI experience the afterlife?
    Only if we switch it off!

    Which reminds me of this brilliant TikTok on the subject of AI...

    https://vm.tiktok.com/ZGe5a4dKq/
    dixiedean Posts: 27,986
    edited March 24
    Labour doubling down on support for the Triple Lock.
    Emily Thornberry hugely supportive.
    noneoftheabove Posts: 20,770
    kinabalu said:

    Foxy said:

    It's all a bit North Korean. Will he be greeted by excited clapping minions?

    That will be having 'given himself' all putts inside 25 feet.
    For the 2023 edition, he missed the first round and gave himself a score based on a different day that was 5 shots better than anyone on the actual first round day.....all very normal for a club tournament I'm sure!

    He did hole a putt around 25 feet though....

    https://www.instagram.com/p/C4obWlNP-aq/?img_index=1

    Jerky action, lack of speed control not inspiring confidence imo despite the putt sinking.
    dixiedean Posts: 27,986
    dixiedean said:

    My GCSE group are studying the Great Gatsby.
    It seems to be getting more relevant by the day.
    My GCSE group consists of one person who occasionally puts in an appearance.
    DavidL Posts: 51,309
    edited March 24
    Scott_xP said:


    "I'm almost speechless in how pathetic the opposition to Trump has been."

    If Letitia James starts seizing Trump’s properties, it will validate Trump’s claims of being targeted – and win him the 2024 election.

    https://twitter.com/FrankLuntz/status/1771709012003754127

    ...
    Subject to intervention by the appeal court the State of New York can start recovering the money tomorrow. It appears that the AG has her eye on one of his golf courses, presumably because it has more equity in it than most of his assets.

    If this does start I think some of his many other lenders might start to panic. My guess, given how things have gone to date, is that he will be given more time and, possibly, a smaller target.
    williamglenn Posts: 48,079
    ydoethur said:


    "I'm almost speechless in how pathetic the opposition to Trump has been."

    If Letitia James starts seizing Trump’s properties, it will validate Trump’s claims of being targeted – and win him the 2024 election.

    https://twitter.com/FrankLuntz/status/1771709012003754127

    You do know Frank Luntz is a Trumpite shill, don't you?

    It's the equivalent of quoting Dan Hodges, only rather more so.
    That's like saying that I'm a Trumpite shill. He just says it as he sees it. :)
    noneoftheabove Posts: 20,770
    dixiedean said:

    Labour doubling down on support for the Triple Lock.
    Emily Thornberry hugely supportive.

    Neither party can get rid of the triple lock. So make it a quadruple lock but have the 4 elements weaker than the current 3. Too complicated for anyone to get angry about and quadruple beats triple.
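    Under the triple lock the state pension is uprated each year by the highest of CPI inflation, average earnings growth and a 2.5% floor, so a "quadruple lock" would just add a fourth element to that maximum. A minimal Python sketch of the arithmetic - the quadruple-lock elements and the pension figure are invented purely for illustration:

        # Triple lock: uprate by the highest of CPI, earnings growth and 2.5%.
        def triple_lock(cpi: float, earnings: float) -> float:
            return max(cpi, earnings, 0.025)

        # Hypothetical "quadruple lock" as floated above: four elements,
        # each weaker than its triple-lock counterpart (made-up numbers).
        def quadruple_lock(cpi: float, earnings: float) -> float:
            return max(0.9 * cpi, 0.9 * earnings, 0.02, 0.01)

        pension = 10_600.00  # illustrative annual pension, not an official figure
        cpi, earnings = 0.067, 0.082
        print(f"triple:    {pension * (1 + triple_lock(cpi, earnings)):,.2f}")
        print(f"quadruple: {pension * (1 + quadruple_lock(cpi, earnings)):,.2f}")

    Because only the maximum ever binds, each weakened element shaves the uprating slightly without any single headline cut - which is presumably the appeal of the idea.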
    darkage Posts: 4,797
    kyf_100 said:

    darkage said:

    kyf_100 said:

    darkage said:

    kyf_100 said:

    darkage said:


    kyf_100 said:


    Carnyx said:

    darkage said:

    FPT

    On the subject of flats... I have self-managed a block where I am resident for the last 7 years. It has been a fool's game. You have liability to comply with all sorts of regulation about flats, building and fire safety etc; and to adhere to the terms of your insurance policy - but the other leaseholders aren't wanting to know about any of this and complain about every cost and expense... until some consequence of not repairing the building becomes apparent and then they want immediate action, but only in relation to their problem. It goes on and on like this almost like an algorithm.

    I am of the view that I would prefer to pay higher service charges, have a managing agent, and not have to deal with the above.

    Sure. Indeed, quite so. But you don't need leasehold to have a managing agent.
    Indeed. That’s what we did in my old flat. We had a managing agent for the day-to-day. Major works we’d get three tenders for. About an hour’s work a year, it ended up as. Given that we met early, in a local pub, it did turn into an evening or 2 (20 min work, then drinking).
    Yep. The problem at the moment is that the freeholder appoints the managing agent. The people paying the actual bills when they come in - the leaseholders - don't get a say.

    So you get managing agents who have no incentive to deliver a decent service or value for money, who can't be sacked by the people forced to pay their ridiculous bills on threat of forfeiture of their property, and agents who have every incentive to play the system, giving contracts to their mates in exchange for a bung, etc.

    And the leaseholder has very little recourse, due to the opacity of the system, the weakness of the tribunal system, the time and expense necessary, plus the fact the freeholder/managing agent can stick their legal fees onto your service charge if and when you do challenge them.

    Resulting in things like this guy's service charges increasing from £94 a month in 2017 to £625 a month now. That's right. A month. https://www.bbc.co.uk/news/articles/ckkvkv32e1ro
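    Taking those two figures at face value, the implied growth rate is worth spelling out ("now" is assumed to mean 2024 here, which is a guess):

        # £94/month (2017) to £625/month (2024 assumed): compound annual growth rate.
        start, end, years = 94.0, 625.0, 7
        cagr = (end / start) ** (1 / years) - 1
        print(f"~{cagr:.0%} per year")  # roughly 31% a year, compounding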
    Yup

    Bigger bills are *better* for those in the chain of contracts - X% of a larger Y.

    I think we replaced the managing agent once, in my old block, in about 15 years. Nice chap but after he retired, his successor in the company couldn’t deliver value for money. The prices drifted up and the quality drifted down.
    The ability to remove the freeholder, and with it this type of escalating service charge, already exists - through the right to manage system. But you have to get the leaseholders to engage with the process to make it happen.

    Regarding the £625 per month; I think this was a tall building managed by a housing association. A situation where there is a lot of extra cost due to the type of development and no incentive to keep costs down.

    In my current situation the building is too small for any managing agent to be interested in.
    Well you need a 51% majority, and getting them to engage with the process is somewhat challenging when there are 200 of them and many of them are (often) foreign investors renting out the flats they've bought. Meaning many are completely uncontactable, many more are completely disengaged.

    Add to that you have to be a 'qualifying' leaseholder (shared homeowners weren't considered 'qualifying' until a bunch of them took it to court in 2023), the fact the freeholder has any number of loopholes they can challenge you on (mounting legal fees), plus the sheer cost in terms of time and money in setting up the RTM means most people, in practice, don't have the ability to do so. Oh, and the rules are different if you have mixed use, i.e. shops or something at ground level, which a lot of new developments in London have.

    What it all amounts to is yes, you have the right to manage, but only if you can find and secure agreement from a majority of leaseholders who are scattered globally, and have the time and money to jump through all the hoops required, and aren't caught out by one of the many caveats the freeholder can challenge you on.

    It's all a bit Hitchhiker's Guide, with the planning documents on display in the locked filing cabinet in the basement with no stairs behind the sign that says 'beware of the leopard'.
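    To see why that hurdle bites in practice, here is a toy Python model using the figures quoted above (200 flats, a simple majority of qualifying leaseholders needed); the qualifying, contactability and agreement rates are invented for illustration:

        import random

        random.seed(1)
        FLATS = 200
        QUALIFYING_RATE = 0.85  # assumption: share of leaseholders who "qualify"
        REACHABLE_RATE = 0.60   # assumption: share who can even be contacted
        AGREE_RATE = 0.70       # assumption: share of those reached who sign up

        qualifying = int(FLATS * QUALIFYING_RATE)
        signed = sum(random.random() < REACHABLE_RATE * AGREE_RATE
                     for _ in range(qualifying))
        needed = qualifying // 2 + 1  # simple majority of qualifying leaseholders
        print(f"{signed}/{qualifying} signed up, {needed} needed:",
              "claim proceeds" if signed >= needed else "claim fails")

    On numbers like these the claim fails without a single leaseholder actually opposing it - disengagement alone is enough.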
    I can see it is easy to do in a small block; not so much in the situation described above.

    Another comment I would make on this is that I am familiar with Finland where you have a theoretically ideal system of management - every block is a company and there are shareholders and regulations that govern how they are managed, there is even a state fund which you can borrow from to do maintenance work etc. However they still have monthly charges of on average 300-400 euros for maintenance (for an older 1 bed flat) and major works bills on top of that - the major works (pipe replacement, electrics, recladding) involve vacating the building for 6 months at a time every decade or so. A large part of the problem with flats in the UK is that people just don't want to pay that kind of money or do that kind of work.




    I agree that the problem (and cost) of maintenance doesn't go away even in countries with more equitable systems.

    Alas in the UK you can add on top of that the perverse incentives for grifters created by a system where the people paying the bills neither get a say in the bills nor are able to sack the management company, for the reasons I've described above. And these problems do tend to be at the larger modern developments, not just because the opportunity for huge, grifting maintenance contracts is higher, but because it's easier to organise a small group of leaseholders than a large one on a development of 300.

    I was going to link you to another FT article, but this added commentary on top of the article is unpaywalled -
    https://www.leaseholdknowledge.com/crisis-in-uk-flats-at-last-prompts-ballymore-leaseholders-to-rebel-and-speak-publicly-about-rocketing-service-charges-to-the-ft/

    What I find extraordinary is how the government has refused to budge an inch on these scandals, including the backpedaling on leasehold reform today.

    Ultimately leaseholders are voters, and there are a lot of them who are hopping mad. Most of them are youngish buyers getting their first foot on the property ladder, which is traditionally a pipeline to becoming a Conservative voter - not any more. The Conservatives have chosen to align themselves with the grifters over young-ish first time buyers which is not a good look. Especially when you look at how much the developers donate to the Conservatives each year...
    I have several friends in this position. In one case the conduct of the developer (one of the biggest housebuilders in the UK) has been outrageous. Essentially it has been established that the building is defective, but rather than fix it the developer has simply bought out the key affected party, leaving the hundred or so other flats in the building unsellable and subject to the usual escalating service charges.

    Part of what is being done in response to all this is regulating the sector - so you can't operate in the market unless you have approval from the government. It is probably necessary given examples like the above.

    However, arguably the government itself is more responsible than anyone else for the situation. Through legislation it has introduced ever-increasing fire safety and other regulatory requirements on flats, without consideration of the financial impact on leaseholders or the housing market more broadly. These requirements trickle down, along with inflation, to create the escalating service charges being complained about on the Leasehold Knowledge website. Another product of 'austerity' is that planning decisions tend to impose maintenance obligations on leaseholders - emptying the bins, maintaining the landscaping, drainage systems, net-zero energy features, etc.

    All this combines to create a massive mess, but as I commented upthread, in reality even the kind of service charges quoted by Ballymore are not far off what is common in mainland Europe. Luxury flats in New York commonly have $3,000-per-month service charges.
    I'd agree with you there too; some of the most egregious charges have come out of post-Grenfell government regulation, which, looked at kindly, has been a belt-and-braces approach: 'something must be done, this is something, therefore this must be done'.

    Case in point: the absurd 'waking watch' charges, which amounted to thousands of pounds per leaseholder for a bored security guard to sit in a portakabin watching YouTube videos and smoking weed. In one case the guard even harassed a female resident - https://www.dailymail.co.uk/news/article-9878451/Cladding-crisis-victims-stalker-hell-hands-guard-sent-protect-her.html

    Even though my small low-rise development has managed to avoid burning to a crisp since being built in the 1950s, the latest fire safety report on it says we're all in danger of being turned to cinders right now unless a costly sprinkler system is installed - another hefty and pointless bill. And the insurance has already doubled as a result, so you're damned if you do, damned if you don't.

    You're right to point out that service charges can be problematic wherever you are in the world, it is a cost of living in flats. The difference is in the UK just how disenfranchised you are. As a leaseholder you don't have a say in how your money is spent (unless you are able to secure right to manage, which is so tricky and time consuming as to be a non starter for most), disputing the charges is nigh-on impossible, and on top of that you don't actually own a single brick - just the right to live in it for a set number of years (with additional costs to renew the lease), despite being on the hook for every bill the freeholder throws at you.

    My leasehold flat is worth no more than it was in 2017, and substantially less taking into account inflation. This is absolutely in line with the overall England and Wales trend of leasehold flats being seen as "toxic" investments due to the spiralling charges and lack of control, as mentioned in the FT article I linked to in the previous thread. What's notable is that in Scotland, where commonhold applies, flats have continued to rise in value at the same rate as houses.
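
    As a back-of-the-envelope illustration of what "flat since 2017" means once inflation is counted, here's a small sketch; the 30% cumulative inflation figure is an assumption of mine for illustration, not a quoted statistic.

        def real_terms_change(price_2017: float, price_now: float,
                              cumulative_inflation: float) -> float:
            """Percentage change in inflation-adjusted value since 2017."""
            real_now = price_now / (1 + cumulative_inflation)
            return (real_now - price_2017) / price_2017 * 100

        # A flat bought for £300k in 2017 and still 'worth' £300k today,
        # assuming ~30% cumulative inflation since then, has lost nearly
        # a quarter of its value in real terms.
        print(round(real_terms_change(300_000, 300_000, 0.30), 1))  # -23.1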

    Major reform is needed, and the Conservatives are stymieing even the most minor of reforms at every possible opportunity. It is incredibly frustrating, and you can only come to the conclusion that they are in the pocket of the developers. They are certainly not on the side of the home "owner".
    I agree that having a Finland-style system of co-ownership is what needs to happen... but I don't think people will ever be happy.
  • Options
    Luckyguy1983Luckyguy1983 Posts: 25,465

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware; we have hardware 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * There is an issue here, but really we have to presume that when there's zero evidence for something - not even a hint - it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK apologies for the “piffle” but it was quite piffly

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    There’s a fascinating thread on TwiX which posits the idea that the AIs are DREAMING - that is to say, they are UNconscious but sentient, like humans when asleep, hence their tendency to “hallucinate”

    it’s a brilliant concept. Lots of AI output, especially the visual stuff, is oneiric

    Also someone else noted that when lucid dreaming they can tell they are dreaming because they look at their hands, and the hands do not look right - too many fingers or whatever; the dreaming mind cannot draw hands very well…


    Dreaming is conscious - highly so; our knowledge of it speaks for itself. For genuine unconsciousness try a general anaesthetic. Also, during dreaming sleep time passes, and you know it both at the time and when you wake up. Under anaesthetic it does not.
    Depends how deep the sleep. Personally speaking

    I’ve had sleeps when I am very very very tired and I nod off and think only 10 minutes have passed and actually it is three hours. That is VERY unconscious

    For true unconsciousness I’d go for death. As it were
    Indeed. Of course true unconsciousness is not imaginable, not least because time does not pass. It is the one weakness in Larkin's final masterpiece 'Aubade'.
    One of the bleakest poems ever written? Up there with Hopkins’ Terrible Sonnets and Donne at his most depressing, though more lucidly direct than both

    Larkin spent his life envious of his best friend Kingsley Amis, because Kingsley got all the girls, money and fun, while Larkin was a childless librarian in Hull. And now Larkin is the one we all remember, and the work of Sir Kingsley Amis is quickly being forgotten.

    I hope that solaces Larkin in the afterlife, in which he very much did not believe. Should have taken ayahuasca
    Yes. I read Lucky Jim a bit ago. Big mistake. Dated rubbish. While Larkin's letters to his mum/laundry lists/photos of road signs are hot tickets. Amis will be read by academics (especially 'That Uncertain Feeling' and the letters) for the light he sheds on Larkin.
    Most novels date very quickly and very badly, especially comic novels. Mystery thrillers are more resilient because they rely on plot and that is timeless, but they can still date

    Great poetry is close to immortal. Sappho is still read

    Some consolation for underpaid poets, there
    John Clare. Peasant. Poverty. Madness. In the top 10 of English poets. Better at seeing the natural world than Keats and Wordsworth.

    BTW the few novels that don't date are interesting. How on earth is it done when all the others around them do? Top of my pile for this quality are 'Emma' (which is in a class of its own), 'Dubliners' (OK short stories), 'Dr Thorne', 'Dance to the Music of Time', 'The Masters' (but nothing else of Snow's - dead as a doornail). A weird list.
    Dunno about the others but one reason Jane Austen endures is that she wrote superb plots. Pride and Prejudice is like a mystery thriller, where the revelation that Darcy is GOOD is brilliantly withheld until near the end, tying up all the stories deliciously

    Also, very witty in a highly eloquent way that somehow stays funny. She was a proper genius
    Mr Collins for one is a truly timeless character.

    But if you think about it, Darcy having an aunt who just happens to be the patroness of the heir to the Bennets' estate, ensuring that their interaction continues despite their relationship getting off to a bad start, is a hell of a coincidence. But it's all such good fun that you don't really mind.