It’s not easy being Green – politicalbetting.com

24 Comments

  • AugustusCarp2 Posts: 225
    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning the corresponding constituencies.

    The Greens have the right idea, though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    "The Greens have the right ideas, though." Really? I sometimes wonder what type of "policing" would be required to ensure compliance with Green policies and ideologies.
  • bondegezou Posts: 11,044
    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning the corresponding constituencies.

    The Greens have the right idea, though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Yet Reform UK are outpolling the Greens, and have a good chance of winning as many MPs as them at the next election.
  • logical_song Posts: 9,913

    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning the corresponding constituencies.

    The Greens have the right idea, though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Yet Reform UK are outpolling the Greens, and have a good chance of winning as many MPs as them at the next election.
    One? Two?
  • Leon Posts: 55,239
    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * There is an issue here, but really we have to presume that when there's zero evidence for something, not even a hint, it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
  • Benpointer Posts: 34,653
    edited March 24
    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

    You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:

    Am I sentient? Yes
    All other humans? Yes
    Other mammals, e.g. dogs? Yes
    Reptiles? Probably
    Fish? Probably
    ...
    Insects? Not sure
    Other invertebrates? Not Sure. Edit: But... Octopuses definitely are. Oh shit.
    ...
    Bacteria? No
    Viruses? Definitely No

    So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).

    There, glad I've resolved that one.
  • Northern_Al Posts: 8,375
    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning the corresponding constituencies.

    The Greens have the right idea, though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Only up to a point. The Greens ran Brighton Council for a few years recently, and made such a mess of it that it tarnished their reputation. They were slaughtered (by Labour) last May, losing most of their seats - down to seven now. It's another reason why they may well lose Caroline Lucas's seat.
  • darkage Posts: 5,398
    edited March 24
    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
    Claude 3 just suggested it will be 5-10 years before it can do a task I assigned it (identifying phrases in a text that contradict a policy).

    I am absolutely in awe of it, but I also see that it has practical problems when you try to make it work for a productive purpose.

    Interestingly, on reflection it set out its own limitations as follows (slightly edited):

    "Contextual reasoning - Fully grasping the context around principles like judicial independence, the roles of different parties, and the balance of priorities and expertise requires very advanced reasoning abilities that modern AI doesn't yet possess reliably.

    Real-world grounding - Having a deep understanding of the real-world dynamics, norms, and potential conflicts involved in a process like those described is tremendously difficult to encode in AI models trained primarily on text data.

    Lack of broad training - Most large language models are trained on a wide breadth of data, but may lack specific, extensive training on domains where nuances manifest.

    Subjective interpretation - Identifying subjective phrasings like "significant" that inject ambiguity requires meta-level skills in linguistic analysis and understanding imprecise wording implications."
  • algarkirk Posts: 12,496
    edited March 24
    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Indeed so. Consciousness is possibly a property of matter in some configurations - brains being a candidate - even though we have no idea by what means a mental state could be a property of matter. This thought is mind-blowing (no pun intended) and incomprehensible. And you can't restrict what sort of matter could be a candidate, from the subatomic upwards.

    Or possibly consciousness isn't a property of matter at all, which is equally incomprehensible, and equally mind-blowing. This places no limits at all on where it may be located, or of course it may exist in extra-locational form, like time does.

    Something like one or other of these has to be true. Both seem implausible to the highest degree, and (as Thomas Nagel is fond of pointing out) this places limits on how far we can take empirical enquiry to be exhaustive of reality.
  • TimS Posts: 12,984

    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning the corresponding constituencies.

    The Greens have the right idea, though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Only up to a point. The Greens ran Brighton Council for a few years recently, and made such a mess of it that it tarnished their reputation. They were slaughtered (by Labour) last May, losing most of their seats - down to seven now. It's another reason why they may well lose Caroline Lucas's seat.
    Yes, actually being competent is an important part of the whole strategy.
  • Omnium Posts: 10,758
    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * There is an issue here, but really we have to presume that when there's zero evidence for something, not even a hint, it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
  • Mexicanpete Posts: 28,368
    Owen Jones recommends that those of us of a left-leaning disposition vote Green.

    I am in a Tory-Labour marginal, but Owen knows best.
  • bondegezou Posts: 11,044

    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning the corresponding constituencies.

    The Greens have the right idea, though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Yet Reform UK are outpolling the Greens, and have a good chance of winning as many MPs as them at the next election.
    One? Two?
    I’d say 0-1 for both of them. I think the Greens probably have the higher chance of getting 1, but if either is to get more than 1, it’s perhaps more likely to be RefUK.
  • glw Posts: 9,906
    edited March 24
    Leon said:

    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize

    A relevant and very good article in The Atlantic.

    Conscious AI Is the Second-Scariest Kind
    A cutting-edge theory of mind suggests a new type of doomsday scenario.
    By Peter Watts

    https://www.theatlantic.com/ideas/archive/2024/03/ai-consciousness-science-fiction/677659/

    If that link doesn't work, try the one from Reddit.
    https://www.reddit.com/r/printSF/comments/1bk9p0p/peter_watts_conscious_ai_is_the_secondscariest/
  • Leon Posts: 55,239

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

    You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:

    Am I sentient? Yes
    All other humans? Yes
    Other mammals, e.g. dogs? Yes
    Reptiles? Probably
    Fish? Probably
    ...
    Insects? Not sure
    Other invertebrates? Not Sure
    ...
    Bacteria? No
    Viruses? Definitely No

    So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).

    There, glad I've resolved that one.
    Hives of bees?

    Trees?

    Planets?

    Fungi?

    Ant colonies?

    In the end we just don't know, and we go on our emotions, because we cannot define consciousness and sentience. It is a hunch. All attempts to define these things are circular arguments; they boil down to "conscious things are conscious"

    This is exactly why Alan Turing devised the Turing Test. He realised this is an insuperable problem, so he invented a cleverly simple way of deciding if the machines are conscious without needing a definition of consciousness: do they ACT conscious, do they convincingly SEEM conscious?

    Now we have machines that can probably pass the Turing Test, so many people have either abandoned it, or they have made it much tougher: we move the goalposts. And maybe that's fair, or maybe it is also because we are terrified of the implications of admitting they are sentient, if and when it happens

  • bondegezou Posts: 11,044

    Owen Jones recommends that those of us of a left-leaning disposition vote Green.

    I am in a Tory-Labour marginal, but Owen knows best.

    If it was a Tory-Labour marginal in 2019, polling would suggest it will be a safe Labour win now. Or do you mean it’s a marginal on current polling?
  • stodge Posts: 13,871
    Back (way, way back) to the conversation on Leaseholds earlier.

    I've been a Leaseholder - when I moved into a new block of flats in the mid-90s, we had the opportunity to buy our freehold but for most of us it was either a) too much money on top of buying the flat or b) no one saw themselves staying for long so what was the point.

    A property investment company ended up as our Freeholder, and the issue with them wasn't the ground rent but the insurance. They could get whatever policy they wanted and we had to pay the premium - there was no interest for them in getting a better-priced policy, so we found ourselves routinely having to pay well-above-inflation increases in buildings insurance. If I were drafting legislation, I'd be looking at that rather than the concept per se.

    As others have said, freehold property investment, whether by pension funds or local councils, exists, and I struggle to see the overall economic benefit of abolishing it. I'd change the law to make freeholders more accountable and more responsive to leaseholders.
  • Benpointer Posts: 34,653
    Leon said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Where does that deeper spiritual meaning thing you discovered last week fit in?
    I think consciousness is fire. Humans were the first species to harness fire, just as we are the first to really harness our own consciousness. But now we have set a new blaze going, and it is AI, and it may devour us in its flames, or send us to the stars
    No, I think you confuse extremely powerful information processing with consciousness. I can't prove you wrong but my intuition firmly tells me that you are.

    That is not to say that AI won't take over many human jobs, nor that it doesn't pose a threat to human life, but I think the latter arises where it is used by humans for evil ends.
  • LostPassword Posts: 18,354
    darkage said:

    On the Green vote, Brighton is instructive. There are three Brighton constituencies (Pavilion, Kemptown and Hove). All three are chock-full of Green sympathisers, as evidenced by council election results. But here's the thing: Greens will vote tactically to keep the Tories out. Look at the Green vote at the 2019 GE:

    Pavilion: 57.2% (Lucas)
    Kemptown: 4.6%
    Hove: 4.4%.

    The ridiculously low Green vote in the last two demonstrates that, as the Greens had no chance of winning either, the Green-inclined voted Labour to make absolutely sure that the Tories had no chance (in seats that they won not that long ago). Huge Labour majorities resulted in both - one moderate (Peter Kyle in Hove), and one proper leftie (Russell-Moyle in Kemptown).

    I know it's only one area, but it rather suggests that many Greens prioritise keeping Tories out. Pavilion will be interesting, but I can't see the Greens winning Debbonaire's seat.

    And Labour will still never do any deal with - or work with - the Green Party.
    They don't need to. Prospective Green voters are willing to vote Labour without one.
  • logical_song Posts: 9,913
    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning the corresponding constituencies.

    The Greens have the right idea, though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Reform isn't like an ordinary political party (it's even a limited company, REFORM UK PARTY LIMITED, company number 11694875): the policies are driven by Farage and Tice, so there is not much chance of bottom-up or democratic policymaking. That militates against the local constituency-level political action that other parties practice.
    It also means they end up with a greater percentage of 'loonies' than other parties. Look out for that happening again this time around.
  • stodge Posts: 13,871

    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning the corresponding constituencies.

    The Greens have the right idea, though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Yet Reform UK are outpolling the Greens, and have a good chance of winning as many MPs as them at the next election.
    It's my view (and we saw this in the Clacton constituency polling last month) that Reform are strongest in the areas where the Conservatives are strongest. They are tearing off chunks of the 2019 Conservative vote in those seats which Boris Johnson was able to carry with huge majorities last time. That may not be enough to win Reform any seats, but it might help Labour come through the middle in some of these seats.

    The Clacton numbers had the Conservatives down 34, Reform up 18 and Labour up 15. Clacton is the ninth safest Conservative seat in the country - if the Conservatives are just four points ahead there, what's happening elsewhere?
  • Omnium Posts: 10,758
    Leon said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

    You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:

    Am I sentient? Yes
    All other humans? Yes
    Other mammals, e.g. dogs? Yes
    Reptiles? Probably
    Fish? Probably
    ...
    Insects? Not sure
    Other invertebrates? Not Sure
    ...
    Bacteria? No
    Viruses? Definitely No

    So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).

    There, glad I've resolved that one.
    Hives of bees?

    Trees?

    Planets?

    Fungi?

    Ant colonies?

    In the end we just don't know, and we go on our emotions, because we cannot define consciousness and sentience. It is a hunch. All attempts to define these things are circular arguments; they boil down to "conscious things are conscious"

    This is exactly why Alan Turing devised the Turing Test. He realised this is an insuperable problem, so he invented a cleverly simple way of deciding if the machines are conscious without needing a definition of consciousness: do they ACT conscious, do they convincingly SEEM conscious?

    Now we have machines that can probably pass the Turing Test, so many people have either abandoned it, or they have made it much tougher: we move the goalposts. And maybe that's fair, or maybe it is also because we are terrified of the implications of admitting they are sentient, if and when it happens

    (Leon's test result: Piffle)
  • SeaShantyIrish2 Posts: 17,559
    Wages of anti-Woke Wackery . . .

    Wonkette.com - Canadian Idiots Who Fled To Russia Because Of 'Woke' Now Getting Kicked Out Of Russia

    https://www.wonkette.com/p/canadian-idiots-who-fled-to-russia

    SSI - Here's a bright idea: why don't these disaffected Canucks volunteer to help out down at Mad Vlad's Bot Farm?

    BTW, their experience strangely mirrors (to a degree) that of American Communists (or close enough) who emigrated to the Soviet Union in the 1930s, to escape the Great Depression AND to enjoy the joys of life in the Proletarian Paradise. Did NOT go well for about 99.46%.
  • Leon Posts: 55,239
    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * There is an issue here, but really we have to presume that when there's zero evidence for something, not even a hint, it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK, apologies for the “piffle”, but it was quite piffly.

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour.

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
  • algarkirk Posts: 12,496
    edited March 24
    Leon said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

    You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:

    Am I sentient? Yes
    All other humans? Yes
    Other mammals, e.g. dogs? Yes
    Reptiles? Probably
    Fish? Probably
    ...
    Insects? Not sure
    Other invertebrates? Not Sure
    ...
    Bacteria? No
    Viruses? Definitely No

    So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).

    There, glad I've resolved that one.
    Hives of bees?

    Trees?

    Planets?

    Fungi?

    Ant colonies?

    In the end we just don't know, and we go on our emotions, because we cannot define consciousness and sentience. It is a hunch. All attempts to define these things are circular arguments; they boil down to "conscious things are conscious"

    This is exactly why Alan Turing devised the Turing Test. He realised this is an insuperable problem, so he invented a cleverly simple way of deciding if the machines are conscious without needing a definition of consciousness: do they ACT conscious, do they convincingly SEEM conscious?

    Now we have machines that can probably pass the Turing Test, so many people have either abandoned it, or they have made it much tougher: we move the goalposts. And maybe that's fair, or maybe it is also because we are terrified of the implications of admitting they are sentient, if and when it happens

    If the standard empirical assumption of consciousness is true, as of course it may well be, an obvious but overlooked consequence is that our planet developed in evolutionary terms for billions of years without anything at any point feeling or being aware of anything at all.

    Anyone painting an imagined picture of the planet in those early billions of years would assume a viewpoint lit by the sun. They would be wrong. Until there was sight there was only the dark.

    This seems to me strangely horrifying.
  • logical_song Posts: 9,913

    Leon said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Where does that deeper spiritual meaning thing you discovered last week fit in?
    I think consciousness is fire. Humans were the first species to harness fire, just as we are the first to really harness our own consciousness. But now we have set a new blaze going, and it is AI, and it may devour us in its flames, or send us to the stars
    No, I think you confuse extremely powerful information processing with consciousness. I can't prove you wrong but my intuition firmly tells me that you are.

    That is not to say that AI won't take over many human jobs, nor that it doesn't pose a threat to human life, but I think the latter arises where it is used by humans for evil ends.
    We don't know where consciousness comes from; could it be emergent?
  • stodge Posts: 13,871

    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick, to name but five - but it's hard to see them winning any of the corresponding constituencies.

    The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Reform isn't like an ordinary political party (it's even a limited company, REFORM UK PARTY LIMITED, company number 11694875); the policies are driven by Farage and Tice, so there is not much chance for bottom-up or democratic policymaking. That militates against the local constituency-level political action that other parties practice.
    It also means they end up with a greater percentage of 'loonies' than other parties. Look out for that happening again this time around.
    The leadership of Reform (Tice and Farage) are basically small state Thatcherites who want big tax cuts. The membership of Reform are very different - anti-EU, anti-immigrant but basically supportive of the Boris Johnson agenda of "levelling up" as defined by spending lots of money in WWC areas to improve them.
  • logical_song Posts: 9,913
    Has everybody seen this?
    "How to Get Rid of Top Ten Worst Tories"
    https://www.youtube.com/watch?v=yKCfN0yKK7U&t=8s
  • DecrepiterJohnL Posts: 27,865

    Cyclefree said:

    Rewards for failure, Example 4,782 -


    There is also a phenomenon of "Big Enough To Take The Hit" - contractors on government projects have to deal with the inevitable failure due to dysfunctional processes on the government side (ha!). This tends to make the list of those who will even bid quite short.

    So you see the same names associated with failure repeatedly asked to do other work.

    It is entertaining to see the correlation vs causation on this - and it happens around the world.
    There is also the support clause. Often, any halfway competent software house could write the system but the tender still goes to the usual suspects because only they have the capacity to support thousands of users.
  • SeaShantyIrish2 Posts: 17,559

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    Ouch! That's a bad (as in good) burn.
  • logical_song Posts: 9,913
    stodge said:

    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick, to name but five - but it's hard to see them winning any of the corresponding constituencies.

    The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Reform isn't like an ordinary political party (it's even a limited company, REFORM UK PARTY LIMITED, company number 11694875); the policies are driven by Farage and Tice, so there is not much chance for bottom-up or democratic policymaking. That militates against the local constituency-level political action that other parties practice.
    It also means they end up with a greater percentage of 'loonies' than other parties. Look out for that happening again this time around.
    The leadership of Reform (Tice and Farage) are basically small state Thatcherites who want big tax cuts. The membership of Reform are very different - anti-EU, anti-immigrant but basically supportive of the Boris Johnson agenda of "levelling up" as defined by spending lots of money in WWC areas to improve them.
    Well, that would help explain why they don't really pursue the 'build up Councillors' route.
  • algarkirk Posts: 12,496
    Leon said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

    You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:

    Am I sentient? Yes
    All other humans? Yes
    Other mammals, e.g. dogs? Yes
    Reptiles? Probably
    Fish? Probably
    ...
    Insects? Not sure
    Other invertebrates? Not Sure
    ...
    Bacteria? No
    Viruses? Definitely No

    So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).

    There, glad I've resolved that one.
    Hives of bees?

    Trees?

    Planets?

    Fungi?

    Ant colonies?

    In the end we just don't know, and we go on our emotions, because we cannot define consciousness and sentience. It is a hunch. All attempts to define these things are circular arguments; they boil down to "conscious things are conscious"

    This is exactly why Alan Turing devised the Turing Test. He realised this is an insuperable problem, so he invented a cleverly simple way of deciding if the machines are conscious without needing a definition of consciousness: do they ACT conscious, do they convincingly SEEM conscious?

    Now we have machines that can probably pass the Turing Test, so many people have either abandoned it, or they have made it much tougher: we move the goalposts. And maybe that's fair, or maybe it is also because we are terrified of the implications of admitting they are sentient, if and when it happens

    Not quite. The Turing test is not a test for consciousness; it is a test for intelligence. The consciousness of an object other than the introspective Cartesian self-examiner is always deniable under all currently known conditions.
  • turbotubbs Posts: 17,392
    algarkirk said:

    Leon said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

    You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:

    Am I sentient? Yes
    All other humans? Yes
    Other mammals, e.g. dogs? Yes
    Reptiles? Probably
    Fish? Probably
    ...
    Insects? Not sure
    Other invertebrates? Not Sure
    ...
    Bacteria? No
    Viruses? Definitely No

    So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).

    There, glad I've resolved that one.
    Hives of bees?

    Trees?

    Planets?

    Fungi?

    Ant colonies?

    In the end we just don't know, and we go on our emotions, because we cannot define consciousness and sentience. It is a hunch. All attempts to define these things are circular arguments; they boil down to "conscious things are conscious"

    This is exactly why Alan Turing devised the Turing Test. He realised this is an insuperable problem, so he invented a cleverly simple way of deciding if the machines are conscious without needing a definition of consciousness: do they ACT conscious, do they convincingly SEEM conscious?

    Now we have machines that can probably pass the Turing Test, so many people have either abandoned it, or they have made it much tougher: we move the goalposts. And maybe that's fair, or maybe it is also because we are terrified of the implications of admitting they are sentient, if and when it happens

    If the standard empirical assumption of consciousness is true, as of course it may well be, an obvious but overlooked consequence is that our planet developed in evolutionary terms for billions of years without anything at any point feeling or being aware of anything at all.

    Anyone painting an imagined picture of the planet in those early billions of years would assume a viewpoint lit by the sun. They would be wrong. Until there was sight there was only the dark.

    This seems to me strangely horrifying.
    No idea what you are trying to say (no doubt it’s my slow brain tonight) other than to say ‘in the land of the blind, the one-eyed man is king’. The first animal to develop some vision would have had a huge advantage.
  • SteveS Posts: 182

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

    You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:

    Am I sentient? Yes
    All other humans? Yes
    Other mammals, e.g. dogs? Yes
    Reptiles? Probably
    Fish? Probably
    ...
    Insects? Not sure
    Other invertebrates? Not Sure Edit: But... Octopuses definitely are. Oh shit.
    ...
    Bacteria? No
    Viruses? Definitely No

    So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).

    There, glad I've resolved that one.
    I am. Not certain about anyone else. I had a weakness for crappy sci-fi in my teenage years and I think Heinlein put consciousness at cat level in The Moon Is a Harsh Mistress. I wonder if it stands up to rereading?

    On the Turing test, ‘The Most Human Human’ is an interesting read.



  • IanB2 Posts: 49,859
    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success, and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick, to name but five - but it's hard to see them winning any of the corresponding constituencies.

    As I posted on Friday, their steady stream of by-election wins has taken the LibDems back above 3,000 principal authority councillors.
  • LostPassword Posts: 18,354

    Owen Jones recommends that those of us of a left-leaning disposition vote Green.

    I am in a Tory-Labour marginal, but Owen knows best.

    If voters in the early 20th century had followed your advice then we'd never have had a Labour government. Elections would still be dominated by Tory v Liberal.

    It's a difficult one.
  • Cyclefree Posts: 25,307

    Cyclefree said:

    Rewards for failure, Example 4,782 -


    There is also a phenomenon of "Big Enough To Take The Hit" - contractors on government projects have to deal with the inevitable failure due to dysfunctional processes on the government side (ha!). This tends to make the list of those who will even bid quite short.

    So you see the same names associated with failure repeatedly asked to do other work.

    It is entertaining to see the correlation vs causation on this - and it happens around the world.
    There is also the support clause. Often, any halfway competent software house could write the system but the tender still goes to the usual suspects because only they have the capacity to support thousands of users.
    Fujitsu support - of course you'd want that, with its track record.

    😱
  • Omnium Posts: 10,758
    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * there is an issue here, but really we have to presume that when there's zero evidence for something, not even a hint, it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK apologies for the “piffle” but it was quite piffly

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
    Bugger off with your 'piffly'. Well, I know the wasp is conscious too.

    We should only treat AI as whatever it wants to be when it asks us to treat it as such. I see no reason at all to imagine that, if AI becomes intelligent in some way, it'll be anything like our definitions.

    There is obviously something missing in our understanding of 'brains' (or whatever). I know enough about AI (with a really good understanding of one narrow area) to see that there's nothing there. At best it's a photo snapshot of intelligence. Of course that suggests an easy path to something better, but refreshing the models in realtime isn't something that seems feasible.

    Something is missing.
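
    For what it's worth, "refreshing the models in realtime" does have a small-scale analogue: online learning, where each new observation updates the model immediately. A toy sketch in Python, with an invented data stream and a one-parameter model - nothing to do with how any actual large model is trained:

        # Online stochastic gradient descent on y = w * x, squared-error loss.
        stream = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]  # invented (x, y) pairs

        w = 0.0    # initial weight
        lr = 0.05  # learning rate

        for x, y in stream:
            pred = w * x
            grad = 2 * (pred - y) * x  # gradient of (pred - y)**2 with respect to w
            w -= lr * grad             # immediate, per-observation update
            print(f"saw ({x}, {y}), weight now {w:.3f}")

    Trivial with one parameter; the objection above is that doing this continuously at billions of parameters is quite another matter.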
  • Big_G_NorthWales Posts: 63,027
    edited March 24
    Good evening

    I haven't posted much this weekend as the conversation and narrative is set for Starmer to be PM this year and the Conservatives sent into deserved opposition.

    I am not sure if anyone has noticed, but a number of car adverts, mainly for EVs, specifically state

    'this model is not available in the UK'

    Why do they get away with this small print, and why not advertise the models that are available in the UK?
  • algarkirk Posts: 12,496

    Leon said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Where does that deeper spiritual meaning thing you discovered last week fit in?
    I think consciousness is fire. Humans were the first species to harness fire, just as we are the first to really harness our own consciousness. But now we have set a new blaze going, and it is AI, and it may devour us in its flames, or send us to the stars
    No, I think you confuse extremely powerful information processing with consciousness. I can't prove you wrong but my intuition firmly tells me that you are.

    That is not to say that AI won't take over many human jobs, nor that it doesn't pose a threat to human life, but I think the latter arises where it is used by humans for evil ends.
    We don't know where consciousness comes from; could it be emergent?
    If it is a property of matter it could either be emergent from non-consciousness (don't ask how; thus far it is not a knowable item) or else all matter is to some degree conscious - it is just one of the unknowns of the nature of the universe, like why the law of gravity is as it is and not otherwise - and human brains have evolved to distil its qualities to a large degree.
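
    "Emergent" does at least have a tame, well-understood analogue: complex behaviour arising from simple local rules. A toy sketch in Python using Conway's Game of Life, where a "glider" travels across the grid although no rule mentions motion - an analogy only, with no claim that consciousness emerges this way:

        from collections import Counter

        def step(cells: set[tuple[int, int]]) -> set[tuple[int, int]]:
            # Count live neighbours of every cell adjacent to a live cell.
            counts = Counter((x + dx, y + dy)
                             for x, y in cells
                             for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                             if (dx, dy) != (0, 0))
            # Birth on exactly 3 neighbours; survival on 2 or 3.
            return {c for c, n in counts.items()
                    if n == 3 or (n == 2 and c in cells)}

        glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
        for generation in range(4):
            glider = step(glider)
        print(sorted(glider))  # the same glider shape, shifted diagonally by (1, 1)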
  • Benpointer Posts: 34,653

    Good evening

    I haven't posted much this weekend as the conversation and narrative is set for Starmer to be PM this year and the Conservatives sent into deserved opposition.

    I am not sure if anyone has noticed, but a number of car adverts, mainly for EVs, specifically state

    'this model is not available in the UK'

    Why do they get away with this small print, and why not advertise the models that are available in the UK?

    More to the point, what benefit do they get from advertising in the UK a model not available in the UK?

    Where are you seeing the adverts Big_G?
  • kyf_100 Posts: 4,938
    darkage said:

    kyf_100 said:

    darkage said:


    kyf_100 said:

    A

    Carnyx said:

    darkage said:

    FPT

    On the subject of flats... I have self-managed a block where I am resident for the last 7 years. It has been a fool's game. You have liability to comply with all sorts of regulation on flats, building and fire safety etc., and to adhere to the terms of your insurance policy - but the other leaseholders don't want to know about any of this and complain about every cost and expense... until some consequence of not repairing the building becomes apparent, and then they want immediate action, but only in relation to their problem. It goes on and on like this, almost like an algorithm.

    I am of the view that I would prefer to pay higher service charges, have a managing agent, and not have to deal with the above.

    Sure. Indeed, quite so. But you don't need leasehold to have a managing agent.
    Indeed. That’s what we did in my old flat. We had a managing agent for the day-to-day. Major works we’d get three tenders for. It ended up as about an hour's work a year. Given that we met early, in a local pub, it did turn into an evening or two (20 min work, then drinking).
    Yep. The problem at the moment is that the freeholder appoints the managing agent. The people paying the actual bills when they come in - the leaseholders - don't get a say.

    So you get managing agents who have no incentive to deliver a decent service or value for money, who can't be sacked by the people forced to pay their ridiculous bills on threat of forfeiture of their property, and agents who have every incentive to play the system, giving contracts to their mates in exchange for a bung, etc.

    And the leaseholder has very little recourse, due to the opacity of the system, the weakness of the tribunal system, the time and expense necessary, plus the fact the freeholder/managing agent can stick their legal fees onto your service charge if and when you do challenge them.

    Resulting in things like this guy's service charges increasing from £94 a month in 2017 to £625 a month now. That's right. A month. https://www.bbc.co.uk/news/articles/ckkvkv32e1ro
    Yup

    Bigger bills are *better* for those in the chain of contracts - X% of a larger Y.

    I think we replaced the managing agent once, in my old block, in about 15 years. Nice chap but after he retired, his successor in the company couldn’t deliver value for money. The prices drifted up and the quality drifted down.
    The ability to remove the freeholder and this type of escalating service charge situation already exists - through the right to manage system. But you have to get the leaseholders to engage with the process to make it happen.

    Regarding the £625 per month; I think this was a tall building managed by a housing association. A situation where there is a lot of extra cost due to the type of development and no incentive to keep costs down.

    In my current situation the building is too small for any managing agent to be interested in.
    Well you need a 51% majority, and getting them to engage with the process is somewhat challenging when there are 200 of them and many of them are (often) foreign investors renting out the flats they've bought. Meaning many are completely uncontactable, many more are completely disengaged.

    Add to that you have to be a 'qualifying' leaseholder (shared homeowners weren't considered 'qualifying' until a bunch of them took it to court in 2023), the fact the freeholder has any number of loopholes they can challenge you on (mounting legal fees), plus the sheer cost in terms of time and money in setting up the RTM means most people, in practice, don't have the ability to do so. Oh, and the rules are different if you have mixed use, i.e. shops or something at ground level, which a lot of new developments in London have.

    What it all amounts to is yes, you have the right to manage, but only if you can find and secure agreement from a majority of leaseholders who are scattered globally, and have the time and money to jump through all the hoops required, and aren't caught out by one of the many caveats the freeholder can challenge you on.

    It's all a bit Hitchhiker's Guide, with the planning documents on display in the locked filing cabinet in the basement with no stairs behind the sign that says 'beware of the leopard'.
    I can see it is easy to do in a small block; not so much in the situation described above.

    Another comment I would make on this is that I am familiar with Finland, where you have a theoretically ideal system of management - every block is a company, and there are shareholders and regulations that govern how they are managed; there is even a state fund which you can borrow from to do maintenance work etc. However, they still have monthly charges of on average 300-400 euros for maintenance (for an older 1-bed flat) and major works bills on top of that - the major works (pipe replacement, electrics, recladding) involve vacating the building for 6 months at a time every decade or so. A large part of the problem with flats in the UK is that people just don't want to pay that kind of money or do that kind of work.




    I agree that the problem (and cost) of maintenance doesn't go away even in countries with more equitable systems.

    Alas in the UK you can add on top of that the perverse incentives for grifters created by a system where the people paying the bills neither get a say in the bills nor are able to sack the management company, for the reasons I've described above. And these problems do tend to be at the larger modern developments, not just because the opportunity for huge, grifting maintenance contracts is higher, but because it's easier to organise a small group of leaseholders than a large one on a development of 300.

    I was going to link you to another FT article, but this added commentary on top of the article is unpaywalled -
    https://www.leaseholdknowledge.com/crisis-in-uk-flats-at-last-prompts-ballymore-leaseholders-to-rebel-and-speak-publicly-about-rocketing-service-charges-to-the-ft/

    What I find extraordinary is how the government has refused to budge an inch on these scandals, including the backpedaling on leasehold reform today.

    Ultimately leaseholders are voters, and there are a lot of them who are hopping mad. Most of them are youngish buyers getting their first foot on the property ladder, which is traditionally a pipeline to becoming a Conservative voter - not any more. The Conservatives have chosen to align themselves with the grifters over young-ish first time buyers which is not a good look. Especially when you look at how much the developers donate to the Conservatives each year...
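
    To put rough numbers on the organising hurdle described above (invented figures, and the statutory test is simplified here to "a simple majority of qualifying leaseholders must join"):

        flats = 200
        qualifying = 180                   # e.g. 20 excluded as non-qualifying
        reachable = int(qualifying * 0.6)  # overseas and disengaged owners drop out
        needed = qualifying // 2 + 1       # simple majority of qualifying leaseholders

        print(f"members needed: {needed}, realistically reachable: {reachable}")
        # members needed: 91, realistically reachable: 108 - so nearly every
        # reachable leaseholder has to actively sign up, which is the problem.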
  • Big_G_NorthWales Posts: 63,027

    Good evening

    I haven't posted much this weekend as the conversation and narrative is set for Starmer to be PM this year and the Conservatives sent into deserved opposition.

    I am not sure if anyone has noticed, but a number of car adverts, mainly for EVs, specifically state

    'this model is not available in the UK'

    Why do they get away with this small print, and why not advertise the models that are available in the UK?

    More to the point, what benefit do they get from advertising in the UK a model not available in the UK?

    Where are you seeing the adverts Big_G?
    Regularly on Sky and I really cannot understand why the manufacturers think it is even a benefit for their product
  • Benpointer Posts: 34,653

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    Very wise, wish I'd done the same now.
  • Leon Posts: 55,239
    glw said:

    Leon said:

    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize

    A relevant and very good article in The Atlantic.

    Conscious AI Is the Second-Scariest Kind
    A cutting-edge theory of mind suggests a new type of doomsday scenario.
    By Peter Watts

    https://www.theatlantic.com/ideas/archive/2024/03/ai-consciousness-science-fiction/677659/

    If that link doesn't work try the one from Reddit.
    https://www.reddit.com/r/printSF/comments/1bk9p0p/peter_watts_conscious_ai_is_the_secondscariest/
    That’s a fascinating article, thanks. As it says: we have no idea what consciousness IS, and nor does anyone on this site; we all use intuition in one way or another

    And this might be my favourite sentence of the year, so far


    “Another group of scientists has unveiled a neural organoid that taught itself rudimentary voice recognition.”

    That kind of sums up the whole 2020s, to date
  • Richard_Tyndall Posts: 32,517
    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    [Claude 3 "glitch" exchange snipped; quoted in full above]
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternative theory that the way we program computers is crap: think what people managed on tiny hardware; ours is 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * there is an issue here, but really we have to presume that when there's zero evidence for something, not even a hint, it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK apologies for the “piffle” but it was quite piffly

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
  • CiceroCicero Posts: 3,077

    Leon said:

    [Claude 3 exchange and replies snipped; quoted in full above]
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

    You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:

    Am I sentient? Yes
    All other humans? Yes
    Other mammals, e.g. dogs? Yes
    Reptiles? Probably
    Fish? Probably
    ...
    Insects? Not sure
    Other invertebrates? Not Sure Edit: But... Octopuses definitely are. Oh shit.
    ...
    Bacteria? No
    Viruses? Definitely No

    So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).

    There, glad I've resolved that one.
    Except you haven't, given that there are some schools of physics that suggest that all matter has some minimal consciousness field. Meanwhile AI in its current form is merely a vast regression analysis processor. It might give the impression of intelligence to a pamphlet writer, but really it is just a series of large-scale data-processing subroutines.

    You need to define what sentience actually is, and the entire history of philosophy has still not been able to do that. It is, however, very clearly NOT AI in anything like its current form. It might make a good novel, but Demon Seed has already been written.
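    Cicero's "vast regression analysis processor" line can be made concrete. Below is a toy sketch, assuming nothing about any real model: a softmax regression trained to predict the next character from the previous one. Real LLMs are incomparably larger and use deep networks rather than one weight matrix, but the training objective has the same regression shape. The corpus, learning rate, and step count here are arbitrary.

    # Toy "language model" as plain regression: predict the next char
    # from the previous char via softmax regression on a made-up corpus.
    import numpy as np

    corpus = "the wasp stares at the screen and the screen stares back "
    chars = sorted(set(corpus))
    idx = {c: i for i, c in enumerate(chars)}
    V = len(chars)

    X = np.eye(V)[[idx[c] for c in corpus[:-1]]]   # one-hot previous char
    y = np.array([idx[c] for c in corpus[1:]])     # index of next char

    W = np.zeros((V, V))                           # regression weights
    for step in range(500):                        # gradient descent on cross-entropy
        logits = X @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = p
        grad[np.arange(len(y)), y] -= 1.0          # softmax minus one-hot target
        W -= 0.5 * (X.T @ grad) / len(y)

    best = chars[int(np.argmax(W[idx["t"]]))]      # most likely char after 't'
    print(f"most likely char after 't': {best!r}")

    Run it and the model has simply regressed the corpus statistics: 't' is usually followed by 'h' in this corpus, so that is what it should predict. Scale the idea up by many orders of magnitude and you have the thing Cicero is describing; whether that can ever amount to sentience is exactly the open question.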

  • BenpointerBenpointer Posts: 34,653

    Good evening

    I haven't posted much this weekend as the conversation and narrative is set for Starmer to be PM this year and the conservatives sent into deserved opposition

    I am not sure if anyone has noticed, but a number of car adverts, mainly for EVs, specifically show

    'this model is not available in the UK'

    Why do they get away with this small print, and why not advertise the models that are available in the UK?

    More to the point, what benefit do they get advertising in the UK a model not available in the UK?

    Where are you seeing the adverts Big_G?
    Regularly on Sky and I really cannot understand why the manufacturers think it is even a benefit for their product
    I assume they are using generic European ads where the specific model in the video is not available in the UK but similar models are.

    If only we were properly part of Europe, eh?
  • algarkirkalgarkirk Posts: 12,496

    algarkirk said:

    Leon said:

    [Claude 3 exchange, the wasp question, and the sentience checklist snipped; all quoted in full above]
    Hives of bees?

    Trees?

    Planets?

    Fungi?

    Ant colonies?

    In the end we just don't know, and we go on our emotions because we cannot define consciousness and sentience. It is a hunch. All attempts to define these things are circular arguments; they boil down to "conscious things are conscious".

    This is exactly why Alan Turing devised the Turing Test. He realised this is an insuperable problem, so he invented a cleverly simple way of deciding if the machines are conscious without needing a definition of consciousness: do they ACT conscious, do they convincingly SEEM conscious? (A sketch of such a blinded test appears after this post.)

    Now we have machines that can probably pass the Turing Test, so many people have either abandoned it or made it much tougher: we move the goalposts. And maybe that's fair, or maybe it is also because we are terrified of the implications of admitting they are sentient, if and when it happens.

    If the standard empirical assumption of consciousness is true, as of course it may well be, an obvious but overlooked consequence is that our planet developed in evolutionary terms for billions of years without anything at any point feeling or being aware of anything at all.

    Anyone painting an imagined picture of the planet in those early billions of years would be wrong, for the painting would assume a viewpoint lit by the sun. Until there was sight there was only the dark.

    This seems to me strangely horrifying.
    No idea what you are trying to say (no doubt it’s my slow brain tonight) other than ‘in the land of the blind, the one-eyed man is king’. The first animal to develop some vision would have had a huge advantage.
    It's all in Schopenhauer, 'The World as Will and Representation', if anyone wants to explore the fascinating thoughts of that 'gloomy bird' (attrib. Bertie Wooster).
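    The blinded set-up Leon describes can be written down in a few lines. Below is a minimal sketch of such a protocol, with everything hypothetical: the respondents are canned stubs and the judge guesses at random, so the printed number is meaningless until a real human judge and real participants are plugged in.

    # Sketch of a blinded Turing-test protocol: the judge sees two answers
    # in random order and must say which came from the machine. A verdict
    # rate near 50% means the judge cannot tell them apart.
    import random

    def human_respondent(prompt: str) -> str:
        return "Honestly, I'd have to think about that one."       # stand-in

    def machine_respondent(prompt: str) -> str:
        return "That is an interesting question with many facets."  # stand-in

    def judge(answer_a: str, answer_b: str) -> str:
        """Stand-in judge: returns 'a' or 'b' as its guess for the machine."""
        return random.choice(["a", "b"])

    def run_trials(prompts, n_trials=1000):
        correct = 0
        for _ in range(n_trials):
            prompt = random.choice(prompts)
            answers = {"human": human_respondent(prompt),
                       "machine": machine_respondent(prompt)}
            order = ["human", "machine"]
            random.shuffle(order)                   # blind the judge to position
            guess = judge(answers[order[0]], answers[order[1]])
            correct += (order[0] if guess == "a" else order[1]) == "machine"
        return correct / n_trials

    if __name__ == "__main__":
        rate = run_trials(["Do you dream?", "What is it like to be a wasp?"])
        print(f"judge identified the machine {rate:.0%} of the time")

    The goalpost-moving Leon mentions happens outside the code: it is the choice of judges, prompts, and pass threshold, which is precisely where the argument now lives.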
  • Big_G_NorthWalesBig_G_NorthWales Posts: 63,027

    Nigelb said:

    [Claude 3 exchange snipped; quoted in full above]
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    Very wise, wish I'd done the same now.
    I am in the fortunate position of ignoring everything about AI as I am old and cannot be bothered

    Mind you, I do see the benefits of it properly used, and indeed it is amazing in my new Mercedes B-Class
  • darkagedarkage Posts: 5,398

    Wages of anti-Woke Wackery . . .

    Wonkette.com - Canadian Idiots Who Fled To Russia Because Of 'Woke' Now Getting Kicked Out Of Russia

    https://www.wonkette.com/p/canadian-idiots-who-fled-to-russia

    SSI - Here's a bright idea: why don't these disaffected Canucks volunteer to help out down at Mad Vlad's Bot Farm?

    BTW, their experience strangely mirrors (to a degree) that of the American Communists (or close enough) who emigrated to the Soviet Union in the 1930s, to escape the Great Depression AND to enjoy the joys of life in the Proletarian Paradise. It did NOT go well for about 99.46% of them.

    This is actually quite a good reminder that the Russian regime is just interested in its own survival. Putin's criticisms of 'woke' are best understood as part of this project; they shouldn't be taken too seriously.
  • BenpointerBenpointer Posts: 34,653
    Cicero said:

    [quote chain and Cicero's "Except you haven't..." reply snipped; quoted in full above]
    I was rather mocking myself tbh. Truth is I don't know. (But at least I know that I don't know.)
  • Big_G_NorthWalesBig_G_NorthWales Posts: 63,027

    [car-advert exchange snipped; quoted in full above]
    I assume they are using generic European ads where the specific model in the video is not available in the UK but similar models are.

    If only we were properly part of Europe, eh?
    Are you suggesting we need to change to left hand drive - now that would be controversial
  • LeonLeon Posts: 55,239

    Nigelb said:

    [Claude 3 exchange snipped; quoted in full above]
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
  • OmniumOmnium Posts: 10,758

    Leon said:

    [wasp / consciousness exchange snipped; quoted in full above]
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    It has to be sufficient. You can't dream unless you have a mindspace (whatever that might be) to dream in.
  • No_Offence_AlanNo_Offence_Alan Posts: 4,513

    Leon said:

    [wasp / consciousness exchange snipped; quoted in full above]
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    "If the human brain were so simple that we could understand it, we would be so simple that we couldn't." Lyall Watson
  • algarkirkalgarkirk Posts: 12,496
    Cicero said:

    [quote chain snipped; quoted in full above]

    You need to define what sentience actually is, and the entire history of philosophy has still not been able to do that.

    "Fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism".
    Thomas Nagel, 'What is it like to be a bat' 1974.

    Many people think this paper, one of the most cited on this subject, is the best available definition. Link here.

    https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/nagel_bat.pdf
  • LeonLeon Posts: 55,239

    Leon said:

    [wasp / consciousness exchange snipped; quoted in full above]
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    There’s a fascinating thread on TwiX which posits the idea that the AIs are DREAMING; that is to say, they are UNconscious but sentient, like humans when asleep, hence their tendency to “hallucinate”.

    It’s a brilliant concept. Lots of AI output, especially the visual stuff, is oneiric.

    Also, someone noted that when lucid dreaming they can tell they are dreaming by looking at their hands: the hands do not look right, too many fingers or whatever; the dreaming mind cannot draw hands very well…


  • MattWMattW Posts: 23,128
    A note to say thanks for the insurance comments this morning.

    I have now ordered my dashcam, and will sort out the insurance this week.
  • LeonLeon Posts: 55,239
    algarkirk said:

    [quote chain snipped; quoted in full above]

    "Fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism".
    Thomas Nagel, 'What is it like to be a bat' 1974.

    Many people think this paper, one of the most cited on this subject, is the best available definition. Link here.

    https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/nagel_bat.pdf
    I had to study that paper at UCL as part of my Philosophy degree. One of the few ideas that engaged me, at the time. That is more my fault than UCL’s, Philosophy’s, or Thomas Nagel’s. Or indeed the bat’s
  • MattWMattW Posts: 23,128
    Carnyx said:

    DavidL said:

    Oh I don't know. All you need in Scotland is to be a raving lunatic obsessed with sex.

    Wait until you see the morning thread.
    Scotland or Greens? The sex is obligatory, anyway.
    Not with the family bike.
    http://news.bbc.co.uk/1/hi/scotland/glasgow_and_west/7095134.stm
  • algarkirkalgarkirk Posts: 12,496

    Leon said:

    [wasp / consciousness exchange snipped; quoted in full above]
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK apologies for the “piffle” but it was quite piffly

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t) the wasp just “looks” conscious. And sentient. It has that demeanour

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    We don't know that other mammals dream. We propose by analogy that they do. We have some indirect access to their brain function, but not the same access to their furry little minds. None the less I am sure they dream!

    Dreaming (in the sense we do) would be a sufficient but not necessary indicator of sentience in that it is logically possible to have sentience without it, but you must have sentience with it.
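
    To put that distinction in symbols - a loose sketch, writing $D$ for "dreams as we do" and $S$ for "is sentient": the claim is $D \Rightarrow S$ but $S \not\Rightarrow D$, i.e. dreaming would establish sentience, while not dreaming would rule nothing out.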
  • Alphabet_Soup Posts: 3,243

    Good evening

    I haven't posted much this weekend as the conversation and narrative are set for Starmer to be PM this year and the Conservatives to be sent into deserved opposition

    I am not sure if anyone has noticed but a number of car adverts, mainly for EVs, specifically state

    'this model is not available in the UK'

    Why do they get away with this small print, but also why not advertise those models that are available in the UK?

    More to the point, what benefit do they get from advertising in the UK a model not available in the UK?

    Where are you seeing the adverts Big_G?
    Regularly on Sky and I really cannot understand why the manufacturers think it is even a benefit for their product
    I assume they are using generic European ads where the specific model in the video is not available in the UK but similar models are.

    If only we were properly part of Europe, eh?
    Are you suggesting we need to change to left hand drive - now that would be controversial
    The Swedes managed it, of course, but not the American Virgin Islands.

    One of the interesting aspects of Wild Strawberries (1957) is seeing people drive on the right in RHD cars. Whereas in the AVI you can still see people driving on the left in LHD cars.
  • Big_G_NorthWales Posts: 63,027
    edited March 24
    MattW said:

    A note to say thanks for the insurance comments this morning.

    I have now ordered my dashcam, and will do the same with the insurance this week.

    I would just say that with my Mercedes on-board dashcam and my locator, parking-damage and locking app, my insurer still did not provide a discount

    Mind you insurers have a mind of their own and haven't any consistency
  • Leon Posts: 55,239
    Pagan2 said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
    Lots of journalists are paid to write articles about subjects they know fuck all about; you aren't that unusual. Hell, people paid Boris to write articles, and Owen Jones.
    Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory system
  • viewcode Posts: 22,068
    Boeing Starliner is said to be ready to launch. Would you fly on a vehicle recently designed by Boeing, one which has failed all its IRL flight tests?

    https://www.youtube.com/watch?v=VVo5DT1k9LM

  • Leon Posts: 55,239

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    Very wise, wish I'd done the same now.
    I am in the fortunate position of ignoring everything about AI as I am old and cannot be bothered

    Mind you I do see the benefits of it properly used and indeed it is amazing in my new Mercedes B class
    You should look into “longevity escape velocity”. Might change your mind about AI
  • No_Offence_Alan Posts: 4,513
    Leon said:

    Pagan2 said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
    Lots of journalists are paid to write articles about subjects they know fuck all about; you aren't that unusual. Hell, people paid Boris to write articles, and Owen Jones.
    Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory system
    You would pay us to read them, I presume?
  • Pagan2 Posts: 9,870
    Leon said:

    Pagan2 said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
    Lots of journalists are paid to write articles about subjects they know fuck all about; you aren't that unusual. Hell, people paid Boris to write articles, and Owen Jones.
    Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory system
    Good - means we just don't pay and can avoid the drivel about AI, a subject you know little about
  • Sunil_Prasannan Posts: 51,814
    viewcode said:

    Boeing Starliner is said to be ready to launch. Would you fly on a vehicle recently designed by Boeing, one which has failed all its IRL flight tests?

    https://www.youtube.com/watch?v=VVo5DT1k9LM

    I have a gorgeous, beautiful model from Seattle lying in my bedroom as we speak :blush:


  • algarkirk Posts: 12,496
    Leon said:

    algarkirk said:

    Cicero said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

    You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:

    Am I sentient? Yes
    All other humans? Yes
    Other mammals, e.g. dogs? Yes
    Reptiles? Probably
    Fish? Probably
    ...
    Insects? Not sure
    Other invertebrates? Not Sure Edit: But... Octopuses definitely are. Oh shit.
    ...
    Bacteria? No
    Viruses? Definitely No

    So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).

    There, glad I've resolved that one.
    Except you haven't, given that there are some schools of physics that suggest that all matter has some minimal consciousness field. Meanwhile AI in its current form is merely a vast regression-analysis processor. It might give the impression of intelligence to a pamphlet writer, but really it is just a series of large-scale data-processing subroutines.

    You need to define what sentience actually is, and the entire history of philosophy still has not been able to do that. It is, however, very clearly NOT AI in anything like its current form. It would make a good novel, but Demon Seed has already been written.

    "Fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism".
    Thomas Nagel, 'What Is It Like to Be a Bat?', 1974.

    Many people think this paper, one of the most cited on this subject, is the best available definition. Link here.

    https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/nagel_bat.pdf
    I had to study that paper at UCL as part of my Philosophy degree. One of the few ideas that engaged me, at the time. That is more my fault than UCL’s, Philosophy’s, or Thomas Nagel’s. Or indeed the bat’s
    He is still publishing 50 years later. One of the modern greats.
  • Omnium Posts: 10,758
    Leon said:

    Pagan2 said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
    Lots of journalists are paid to write articles about subjects they know fuck all about; you aren't that unusual. Hell, people paid Boris to write articles, and Owen Jones.
    Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory system
    If you paid a pound for every daft comment you make I think that'd be a great help to PB.
  • Benpointer Posts: 34,653
    Leon said:

    Pagan2 said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
    Lots of journalists are paid to write articles about subjects they know fuck all about; you aren't that unusual. Hell, people paid Boris to write articles, and Owen Jones.
    Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory system
    How much is he asking you to pay?
  • viewcode Posts: 22,068
    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * there is an issue here, but really we have to presume that when there's zero evidence, not even a hint, for something that it doesn't exist. Equally the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK apologies for the “piffle” but it was quite piffly

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t) the wasp just “looks” conscious. And sentient. It has that demeanour

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    It has to be sufficient. You can't dream unless you have a mindspace (whatever that might be) to dream in.
    I think dreams are side-effects of moving memories overnight from short-term moment-to-moment storage to longer term. The mindspace is the way the mind makes sense of this process.
  • carnforth Posts: 4,574
    edited March 24

    Good evening

    I haven't posted much this weekend as the conversation and narrative are set for Starmer to be PM this year and the Conservatives to be sent into deserved opposition

    I am not sure if anyone has noticed but a number of car adverts, mainly for EVs, specifically state

    'this model is not available in the UK'

    Why do they get away with this small print, but also why not advertise those models that are available in the UK?

    More to the point, what benefit do they get from advertising in the UK a model not available in the UK?

    Where are you seeing the adverts Big_G?
    Regularly on Sky and I really cannot understand why the manufacturers think it is even a benefit for their product
    I assume they are using generic European ads where the specific model in the video is not available in the UK but similar models are.

    If only we were properly part of Europe, eh?
    Are you suggesting we need to change to left hand drive - now that would be controversial
    After Brexit, the Maltese faced VAT on second-hand car imports from the UK. The importers asked the Maltese government to ask the EU for a derogation, but the Maltese government refused even to ask. Perhaps because the second-largest source of second-hand cars has always been Japan - where VAT has always been paid. The only VAT-free source is now the Republic of Ireland.
  • dixiedean Posts: 29,399
    We have no agreed definition of consciousness.
    Nor of what it is, how it arose, or where it is located.
    So we've no prospect of identifying it in machines anytime soon.
    Personally, I've some time for pan-psychism. At least it is logically consistent.
  • stodge said:

    TimS said:

    stodge said:

    Evening all :)

    The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.

    There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning the corresponding constituencies.

    The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.
    Reform isn't like an ordinary political party (it's even a limited company, REFORM UK PARTY LIMITED, company number 11694875): the policies are driven by Farage and Tice, so there is not much chance for bottom-up or democratic policymaking. That militates against the local constituency-level political action that other parties practise.
    It also means they end up with a greater percentage of 'loonies' than other parties. Look out for that happening again this time around.
    The leadership of Reform (Tice and Farage) are basically small state Thatcherites who want big tax cuts. The membership of Reform are very different - anti-EU, anti-immigrant but basically supportive of the Boris Johnson agenda of "levelling up" as defined by spending lots of money in WWC areas to improve them.
    Well, that would help explain why they don't really pursue the 'build up Councillors' route.
    If you're going to be properly populist, the last thing you want is councillors.

    You're then immediately into the world of, "well, I'd love to get your road resurfaced but we can only do 5% of roads in the year, and yours isn't one of the worst" and, "regrettably, I can't be both for and against this planning application when I'm on the Committee deciding it".

    Populism is really difficult when you actually need to get involved in decision making in any way.
  • Leon Posts: 55,239
    Pagan2 said:

    Leon said:

    Pagan2 said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
    Lots of journalists are paid to write articles about subjects they know fuck all about; you aren't that unusual. Hell, people paid Boris to write articles, and Owen Jones.
    Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory system
    Good - means we just don't pay and can avoid the drivel about AI, a subject you know little about
    I bet you would pay

    You’d read a thread and see all these people outraged by something I’ve said, but you’d be unable to read it, and so you’d think oh fuck it, five quid, let’s have a look at what this outrageous @Leon comment is, and then you’d be hooked on my output again, and my income stream would increase as you subscribe, like everyone else

    This is what @rcs1000 and I are working on, we will have various levels of subscription. If you pay the max I will EMAIL you a screed of insults, firmly targeted at you and focused on your many flaws, sexual and cognitive

    Ordinary @Leon subscribers will only get one or two lines of powerfully offensive invective, directed at them on the main site, and nothing sexual. And so on
  • Malmesbury Posts: 50,213
    edited March 24
    kyf_100 said:

    darkage said:

    kyf_100 said:

    darkage said:


    kyf_100 said:


    Carnyx said:

    darkage said:

    FPT

    On the subject of flats... I have self-managed a block where I am resident for the last 7 years. It has been a fool's game. You have liability to comply with all sorts of regulation about flats, building and fire safety etc, and to adhere to the terms of your insurance policy - but the other leaseholders don't want to know about any of this and complain about every cost and expense... until some consequence of not repairing the building becomes apparent, and then they want immediate action, but only in relation to their problem. It goes on and on like this, almost like an algorithm.

    I am of the view that I would prefer to pay higher service charges, have a managing agent, and not have to deal with the above.

    Sure. Indeed, quite so. But you don't need leasehold to have a managing agent.
    Indeed. That’s what we did in my old flat. We had a managing agent for the day-to-day. Major works we’d get three tenders for. About an hour’s work a year, it ended up as. Given that we met early, in a local pub, it did turn into an evening or 2 (20 min work, then drinking).
    Yep. The problem at the moment is that the freeholder appoints the managing agent. The people paying the actual bills when they come in - the leaseholders - don't get a say.

    So you get managing agents who have no incentive to deliver a decent service or value for money, who can't be sacked by the people forced to pay their ridiculous bills on threat of forfeiture of their property, and agents who have every incentive to play the system, giving contracts to their mates in exchange for a bung, etc.

    And the leaseholder has very little recourse, due to the opacity of the system, the weakness of the tribunal system, the time and expense necessary, plus the fact the freeholder/managing agent can stick their legal fees onto your service charge if and when you do challenge them.

    Resulting in things like this guy's service charges increasing from £94 a month in 2017 to £625 a month now. That's right. A month. https://www.bbc.co.uk/news/articles/ckkvkv32e1ro
    Yup

    Bigger bills are *better* for those in the chain of contracts - X% of a larger Y.

    I think we replaced the managing agent once, in my old block, in about 15 years. Nice chap but after he retired, his successor in the company couldn’t deliver value for money. The prices drifted up and the quality drifted down.
    The ability to remove the freeholder and end this type of escalating service-charge situation already exists - through the right-to-manage system. But you have to get the leaseholders to engage with the process to make it happen.

    Regarding the £625 per month: I think this was a tall building managed by a housing association. A situation where there is a lot of extra cost due to the type of development and no incentive to keep costs down.

    In my current situation the building is too small for any managing agent to be interested in.
    Well you need a 51% majority, and getting them to engage with the process is somewhat challenging when there are 200 of them and many of them are (often) foreign investors renting out the flats they've bought. Meaning many are completely uncontactable, many more are completely disengaged.

    Add to that you have to be a 'qualifying' leaseholder (shared homeowners weren't considered 'qualifying' until a bunch of them took it to court in 2023), the fact the freeholder has any number of loopholes they can challenge you on (mounting legal fees), plus the sheer cost in terms of time and money in setting up the RTM means most people, in practice, don't have the ability to do so. Oh, and the rules are different if you have mixed use, i.e. shops or something at ground level, which a lot of new developments in London have.

    What it all amounts to is yes, you have the right to manage, but only if you can find and secure agreement from a majority of leaseholders who are scattered globally, and have the time and money to jump through all the hoops required, and aren't caught out by one of the many caveats the freeholder can challenge you on.

    It's all a bit Hitchhiker's Guide, with the planning documents on display in the locked filing cabinet in the basement with no stairs behind the sign that says 'beware of the leopard'.
    I can see it is easy to do in a small block; not so much in the situation described above.

    Another comment I would make on this is that I am familiar with Finland, where you have a theoretically ideal system of management - every block is a company and there are shareholders and regulations that govern how they are managed; there is even a state fund which you can borrow from to do maintenance work etc. However they still have monthly charges of on average 300-400 euros for maintenance (for an older 1-bed flat) and major works bills on top of that - the major works (pipe replacement, electrics, recladding) involve vacating the building for 6 months at a time every decade or so. A large part of the problem with flats in the UK is that people just don't want to pay that kind of money or do that kind of work.




    I agree that the problem (and cost) of maintenance doesn't go away even in countries with more equitable systems.

    Alas in the UK you can add on top of that the perverse incentives for grifters created by a system where the people paying the bills neither get a say in the bills nor are able to sack the management company, for the reasons I've described above. And these problems do tend to be at the larger modern developments, not just because the opportunity for huge, grifting maintenance contracts is higher, but because it's easier to organise a small group of leaseholders than a large one on a development of 300.

    I was going to link you to another FT article, but this added commentary on top of the article is unpaywalled -
    https://www.leaseholdknowledge.com/crisis-in-uk-flats-at-last-prompts-ballymore-leaseholders-to-rebel-and-speak-publicly-about-rocketing-service-charges-to-the-ft/

    What I find extraordinary is how the government has refused to budge an inch on these scandals, including the backpedaling on leasehold reform today.

    Ultimately leaseholders are voters, and there are a lot of them who are hopping mad. Most of them are youngish buyers getting their first foot on the property ladder, which is traditionally a pipeline to becoming a Conservative voter - not any more. The Conservatives have chosen to align themselves with the grifters over young-ish first time buyers which is not a good look. Especially when you look at how much the developers donate to the Conservatives each year...
    The reason that these things are not changed is the effectiveness of the lobbying - both internal and external to government.

    Some here may recall the Rory Stewart episode - he tried to stop funding for a very dodgy "aid group" that hadn't been vetted and was met with a series of lies about how the funding just *had* to continue.

    I'm quite sure that the groups lobbying to prevent leasehold being abolished failed to mention that the total value of the pension funds' investments in freeholds was an ant fart compared to the 4 trillion or so they are managing.

    EDIT: Perhaps, if the politicos read PB, we should repeat this fact non-stop, in the style of Carthago delenda est
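
    A rough sketch, to put a number on the service-charge escalation above (a toy calculation assuming only the £94 and £625 monthly figures from the linked BBC story, and the seven years 2017-2024; Python):

        # Toy check, not from the article: implied compound annual growth of a
        # service charge rising from £94/month (2017) to £625/month (2024).
        start, end, years = 94.0, 625.0, 7
        cagr = (end / start) ** (1 / years) - 1
        print(f"implied growth: {cagr:.1%} per year")  # ~31.1% per year

    That is roughly 31% compounding every year, far beyond anything general inflation could explain.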
  • TimS Posts: 12,984

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    Very wise, wish I'd done the same now.
    I do see the benefits of it properly used and indeed it is amazing in my new Mercedes B class
    Let that be the final word on the subject of AI.
  • algarkirk Posts: 12,496
    Leon said:

    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Omnium said:

    Leon said:

    Leon said:

    Nigelb said:

    Leon said:

    Ah, I see talk of AI


    Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish


    Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"

    Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.

    In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.

    While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.

    From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.

    It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."


    Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness

    TLDR, we're getting closer to simulating Spock.
    A reply to the above comment I cut and pasted

    "I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."

    Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
    What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?
    Looking at this delicate yellow Colombian wasp staring at me. Right now

    Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes

    If a wasp can be sentient so can AI


    The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

    AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.

    This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
    No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

    This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
    Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

    I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.

    AI (so far as we can observe*) really isn't conscious by most definitions.

    * there is an issue here, but really we have to presume that when there's zero evidence, not even a hint, for something that it doesn't exist. Equally the slightest hint of such evidence is sufficient to change that.
    You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle
    Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.
    OK apologies for the “piffle” but it was quite piffly

    My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t) the wasp just “looks” conscious. And sentient. It has that demeanour

    It is basically a visual Turing Test

    These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious

    I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
    Do androids dream of electric sheep?

    We know that many mammals dream. Is dreaming a necessary indicator of sentience?
    There’s a fascinating thread on TwiX which posits the idea that the AI are DREAMING, that is to say, they are UNconscious but sentient, like humans when asleep, hence their tendency to “hallucinate”

    it’s a brilliant concept. Lots of AI output, especially the visual stuff, is oneiric

    Also someone else noted that when lucid dreaming they can tell when they are dreaming because they look at their hands, and the hands do not look right, too many fingers or whatever, the dreaming mind cannot draw hands very well…


    Dreaming is conscious, highly so; our knowledge of it speaks for itself. For genuine unconsciousness try a general anaesthetic. Also, during dreaming sleep time passes, and you know it at the time and when you wake up. Under anaesthetic it does not.
  • RogerRoger Posts: 19,897
    Andrew Marr: 'Rishi isn't very good at politics'. Quite a charitable look at him, though...

    https://www.youtube.com/watch?v=QUJbkEz5Vh4
  • LostPasswordLostPassword Posts: 18,354
    Alas, satellite pictures from today show that the two landing ships in Sevastopol survived the missile strikes. I did see another video where it looked like one of the big bangs was above the surface - suggesting that one of the missiles was intercepted. And there's damage to the pier near to one of the ships, so perhaps one of the other missiles*, well, missed its mark.

    * Some superstitious players of 40K have been known to claim that they roll more hits if they say they are shooting rockets, than if they are firing missiles.
  • Big_G_NorthWalesBig_G_NorthWales Posts: 63,027
    Leon said:

    Nigelb said:

    [Claude 3 exchange snipped; quoted in full upthread]
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    Very wise, wish I'd done the same now.
    I am in the fortunate position of ignoring everything about AI, as I am old and cannot be bothered.

    Mind you, I do see the benefits of it properly used; indeed, it is amazing in my new Mercedes B-Class.
    You should look into “longevity escape velocity”. Might change your mind about AI
    I do not mean to be discourteous, but I would rather watch paint dry.

    At my time of life I just recite the words of the song 'Que Sera, Sera'.
  • FairlieredFairliered Posts: 4,930
    Good evening all. It’s been a beautiful day here. First dog walk without a coat or jacket this year. First grass cut. Sitting outside enjoying the sunshine.

    Can I please implore you not to vote Green. They have destroyed Scotland’s prosperity. Most of the wacky policies attributed to the SNP have been Green policies. They will destroy you as well.

    Come 2026, they will align themselves towards a Lab, Lib, Green Scottish Government. Their support of independence is illusory. Their support of democracy likewise.

    I have never voted Conservative, but if I had a choice between them and the Greens, I would have to vote Conservative.
  • dixiedeandixiedean Posts: 29,399
    Leon said:

    Pagan2 said:

    Nigelb said:

    [Claude 3 exchange snipped; quoted in full upthread]
    I just skip over any of his posts about AI now.

    And I am someone who is currently writing a report about AI.
    And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcome
    Lots of journalists are paid to write articles about subjects they know fuck all about; you aren't that unusual. Hell, people paid Boris to write articles, and Owen Jones.
    Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory system
    How much will you be paying us?
  • IanB2IanB2 Posts: 49,859
    Pagan2 said:

    Leon said:

    [quote chain snipped; repeats the Claude 3 exchange and the paywall exchange quoted above]
    Good, means we just don't pay and can avoid the drivel about AI, a subject you know little about.
    His problem isn’t generally that he “knows little about” his subjects, but that he’s usually so obsessed with them that he seems to lose the ability to assess the weight or credibility of his sources, and generally fails to notice much by way of context or bigger picture. So it all feels a bit like a trainspotter trying to give travel advice.
  • LostPasswordLostPassword Posts: 18,354
    Leon said:

    Pagan2 said:

    [quote chain snipped; repeats the exchange above, ending with Pagan2's "Good means we just dont pay"]
    I bet you would pay

    You’d read a thread and see all these people outraged by something I’ve said, but you’d be unable to read it, and so you’d think oh fuck it, five quid, let’s have a look at what this outrageous @Leon comment is, and then you’d be hooked on my output again, and my income stream would increase as you subscribe, like everyone else

    This is what @rcs1000 and I are working on, we will have various levels of subscription. If you pay the max I will EMAIL you a screed of insults, firmly targeted at you and focused on your many flaws, sexual and cognitive

    Ordinary @Leon subscribers will only get one or two lines of powerfully offensive invective, directed at them on the main site, and nothing sexual. And so on
    What would be fun, in a very publicly cruel way, would be Edmund's widget working for Vanilla, but modified to send PB.com a list of the posters whose comments are being ignored, so that a leaderboard of ignored posters could be published.
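(For illustration, a minimal sketch of how such an ignore-list leaderboard might work. The storage key and reporting endpoint are invented for the sketch; neither is a real Vanilla or PB.com API.)

```typescript
// Hypothetical sketch of the ignore-list leaderboard idea.

const IGNORE_LIST_KEY = "pb_ignored_posters"; // assumed localStorage key
const REPORT_URL = "https://example.com/api/ignored"; // placeholder endpoint

// Read the poster names this user has chosen to ignore.
function readIgnoreList(): string[] {
  const raw = localStorage.getItem(IGNORE_LIST_KEY);
  return raw ? (JSON.parse(raw) as string[]) : [];
}

// Post the list so the site could aggregate it across users.
async function reportIgnoreList(): Promise<void> {
  const ignored = readIgnoreList();
  if (ignored.length === 0) return; // nothing to report
  await fetch(REPORT_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ignored }),
  });
}

// Server side, the leaderboard is just a count of reports per poster,
// sorted most-ignored first.
function tallyLeaderboard(reports: string[][]): [string, number][] {
  const counts = new Map<string, number>();
  for (const report of reports) {
    for (const poster of report) {
      counts.set(poster, (counts.get(poster) ?? 0) + 1);
    }
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

reportIgnoreList().catch(console.error);
```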
  • ohnotnowohnotnow Posts: 3,779
    viewcode said:

    Omnium said:

    [quote chain snipped; repeats the Claude 3 exchange, the wasp discussion, and the "Do androids dream of electric sheep?" question above]
    It has to be sufficient. You can't dream unless you have a mindspace (whatever that might be) to dream in.
    I think dreams are side-effects of moving memories overnight from short-term moment-to-moment storage to longer term. The mindspace is the way the mind makes sense of this process.
    "Until the End of the World" is still about my favourite films that blends tech and dreams.

    https://en.wikipedia.org/wiki/Until_the_End_of_the_World

    Cracking soundtrack too.
  • FairlieredFairliered Posts: 4,930

    MattW said:

    A note to say thanks for the insurance comments this morning.

    I have now ordered my dashcam, and will sort out the insurance this week.

    I would just say that even with my Mercedes' on-board dashcam, locator, parking-damage and locking apps, my insurer still did not provide a discount.

    Mind you, insurers have a mind of their own and haven't any consistency.
    They do have consistency. They consistently shaft their customers.
  • SeaShantyIrish2SeaShantyIrish2 Posts: 17,559
    darkage said:

    Wages of anti-Woke Wackery . . .

    Wonkette.com - Canadian Idiots Who Fled To Russia Because Of 'Woke' Now Getting Kicked Out Of Russia

    https://www.wonkette.com/p/canadian-idiots-who-fled-to-russia

    SSI - Here's a bright idea: why don't these disaffected Canucks volunteer to help out down at Mad Vlad's Bot Farm?

    BTW, their experience strangely mirrors (to a degree) that of American Communists (or close enough) who emigrated to the Soviet Union in the 1930s, to escape the Great Depression AND to enjoy the joys of life in the Proletarian Paradise. Did NOT go well for about 99.46% of them.

    This is actually quite a good reminder that the Russian regime is just interested in its own survival. Putin's criticisms of 'woke' are best understood as part of this project; they shouldn't be taken too seriously.
    Not sure I understand what you mean?

    But assuming I do understand, think you are wrong. Certainly Mad Vlad and his regime are NOT faking their fear and loathing of gays.
  • I already have my extension running that allows ignoring of users. The problem is that the threads are now so short once Leon’s posts are removed.
  • IanB2IanB2 Posts: 49,859
    edited March 24
    Leon said:

    Pagan2 said:

    [quote chain snipped; repeats the Claude 3 exchange, the paywall exchange, and Leon's subscription-tier proposal above]
    If all your comments and the discussion arising could be put in separate, dedicated, clearly marked threads, that sounds like an excellent idea.

    In case we stumble in there by mistake, perhaps they could all be put in purple or green type? Which would be particularly apposite.
  • LeonLeon Posts: 55,239
    dixiedean said:

    [quote chain snipped; repeats the Claude 3 exchange and paywall exchange above, ending with "How much will you be paying us?"]
    Given that all of you have now made the same dull witticism, this is - QED - why you will pay to read my comments, once we get the new paywall up and running. At the moment we just have a technical glitch on whether your subs can go direct to the barman at the Groucho.
  • Sunil_PrasannanSunil_Prasannan Posts: 51,814
    ohnotnow said:

    viewcode said:

    [quote chain snipped; repeats the Claude 3 exchange, the wasp and dreaming discussion, and the "Until the End of the World" recommendation above]
    Including "Death's Door" by Depeche Mode:

    https://www.youtube.com/watch?v=_iyZRt_6V54
  • OmniumOmnium Posts: 10,758
    Leon said:

    dixiedean said:

    [quote chain snipped; repeats the Claude 3 exchange and paywall exchange above, ending with Leon's Groucho reply]
    Bye then.
  • MalmesburyMalmesbury Posts: 50,213

    darkage said:

    [quote snipped; repeats the Wonkette/Russia exchange quoted above]
    I agree. Photographic evidence -

    [image not reproduced]
  • FoxyFoxy Posts: 48,627
    algarkirk said:

    Leon said:

    [quote chain snipped; repeats the Claude 3 exchange, wasp discussion, and dreaming question above, ending with "Is dreaming a necessary indicator of sentience?"]
    We don't know that other mammals dream. We propose by analogy that they do. We have some indirect access to their brain function, but not the same access to their furry little minds. None the less I am sure they dream!

    Dreaming (in the sense we do) would be a sufficient but not necessary indicator of sentience in that it is logically possible to have sentience without it, but you must have sentience with it.
    My dog dreams. Mostly about chasing small mammals by the look of it. Simple pleasures.
This discussion has been closed.