It’s not easy being Green – politicalbetting.com
Comments
"The Greens have the right ideas, though." Really? I sometimes wonder what type of "policing" would be required to ensure compliance with Green policies and ideologies.TimS said:
The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.stodge said:Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general elextion success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five but it's hard to see them winning of the corresponding constituencies.1 -
Yet Reform UK are outpolling the Greens, and have a good chance of winning as many MPs as them at the next election.

TimS said:
The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.

stodge said:
Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five but it's hard to see them winning any of the corresponding constituencies.
One? Two?

bondegezou said:
Yet Reform UK are outpolling the Greens, and have a good chance of winning as many MPs as them at the next election.

TimS said:
The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.

stodge said:
Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five but it's hard to see them winning any of the corresponding constituencies.
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffle.

Omnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.

Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp

Omnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.

Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right now

TheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?

Leon said:
A reply to the above comment I cut and pasted

Nigelb said:
TLDR, we're getting closer to simulating Spock.

Leon said:
Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.
AI (so far as we can observe*) really isn't conscious by most definitions.
* there is an issue here, but really we have to presume that when there's zero evidence, not even a hint, for something, it doesn't exist. Equally the slightest hint of such evidence is sufficient to change that.
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right now

TheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?

Leon said:
A reply to the above comment I cut and pasted

Nigelb said:
TLDR, we're getting closer to simulating Spock.

Leon said:
Ah, I see talk of AI
[Claude 3 conversation quoted in full above]
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:
Am I sentient? Yes
All other humans? Yes
Other mammals, e.g. dogs? Yes
Reptiles? Probably
Fish? Probably
...
Insects? Not sure
Other invertebrates? Not sure. Edit: But... Octopuses definitely are. Oh shit.
...
Bacteria? No
Viruses? Definitely No
So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).
There, glad I've resolved that one.
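If you wanted to be quantitative about it, the ladder above is really just a credence table. A toy Python sketch, where the numbers are purely illustrative guesses of mine and nothing more:

# Toy credence table for the sentience ladder above. The numbers are
# illustrative guesses only - nobody has measured any of this.
SENTIENCE_CREDENCE = {
    "me": 1.0,
    "other humans": 1.0,
    "other mammals (e.g. dogs)": 0.95,
    "reptiles": 0.7,
    "fish": 0.7,
    "octopuses": 0.9,
    "insects": 0.5,
    "other invertebrates": 0.4,
    "bacteria": 0.01,
    "viruses": 0.0,
}

def probably_sentient(organism: str, threshold: float = 0.6) -> bool:
    # Unknown organisms default to 0.5: genuinely "not sure".
    return SENTIENCE_CREDENCE.get(organism, 0.5) >= threshold

for organism, credence in SENTIENCE_CREDENCE.items():
    verdict = "yes" if probably_sentient(organism) else "unclear/no"
    print(f"{organism}: {credence:.2f} -> {verdict}")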
Only up to a point. The Greens ran Brighton Council for a few years recently, and made such a mess of it that it tarnished their reputation. They were slaughtered (by Labour) last May, losing most of their seats - down to seven now. It's another reason why they may well lose Caroline Lucas's seat.

TimS said:
The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.

stodge said:
Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five but it's hard to see them winning any of the corresponding constituencies.
Claude 3 just suggested it will be 5-10 years before it can do a task I assigned it (identifying phrases in a text that contradict a policy).

Leon said:
A reply to the above comment I cut and pasted

Nigelb said:
TLDR, we're getting closer to simulating Spock.

Leon said:
Ah, I see talk of AI
[Claude 3 conversation quoted in full above]
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
I am in awe of it absolutely but also see that it has practical problems when trying to make it work for a productive purpose.
Interestingly it set out on reflection its own limitations as follows (slightly edited):
"Contextual reasoning - Fully grasping the context around principles like judicial independence, the roles of different parties, and the balance of priorities and expertise requires very advanced reasoning abilities that modern AI doesn't yet possess reliably.
Real-world grounding - Having a deep understanding of the real-world dynamics, norms, and potential conflicts involved in a process like those described is tremendously difficult to encode in AI models trained primarily on text data.
Lack of broad training - Most large language models are trained on a wide breadth of data, but may lack specific, extensive training on domains where nuances manifest.
Subjective interpretation - Identifying subjective phrasings like "significant" that inject ambiguity requires meta-level skills in linguistic analysis and understanding imprecise wording implications."
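For anyone wanting to try the same experiment, here's a minimal sketch of how I'd script it, assuming the Anthropic Python SDK; the model name and prompt wording are my own choices, not anything Claude suggested:

# Minimal sketch of the task described above: ask Claude to flag phrases
# in a text that contradict a stated policy. Assumes the Anthropic Python
# SDK ("pip install anthropic") and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def find_contradictions(policy: str, text: str) -> str:
    message = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Policy:\n" + policy +
                "\n\nText:\n" + text +
                "\n\nList verbatim any phrases in the text that contradict "
                "the policy, with a one-line reason for each. If none, say so."
            ),
        }],
    )
    return message.content[0].text

print(find_contradictions(
    "Judges must be appointed independently of ministers.",
    "The minister will personally select candidates for the bench.",
))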
Indeed so. Consciousness is possibly a property of matter in some configurations - brains being a candidate - even though we have no idea by what means a mental state could be a property of matter. This thought is mind blowing (no pun intended) and incomprehensible, and you can't restrict what sort of matter could be a candidate from the sub atomic upwards.

Leon said:
[Leon and Omnium's exchange on AI, wasps and consciousness, quoted in full above]
Or possibly consciousness isn't a property of matter at all, which is equally incomprehensible, and equally mind blowing. This places no limits at all on where it may be located, or of course it may exist in extra-locational form, like time does.
Something like one or other of these has to be true. Both seem implausible to the highest degree, and (as Thomas Nagel is fond of pointing out) this places limits on how far we can take empirical enquiry to be exhaustive of reality.
Yes, actually being competent is an important part of the whole strategy.

Northern_Al said:
Only up to a point. The Greens ran Brighton Council for a few years recently, and made such a mess of it that it tarnished their reputation. They were slaughtered (by Labour) last May, losing most of their seats - down to seven now. It's another reason why they may well lose Caroline Lucas's seat.

TimS said:
The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.

stodge said:
Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five but it's hard to see them winning any of the corresponding constituencies.
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.

Leon said:
[Leon and Omnium's exchange quoted in full above]
Owen Jones recommends that those of us of a left-leaning disposition vote Green.
I am in a Tory-Labour marginal, but Owen knows best.
I’d say 0-1 for both of them. I think the Greens probably have the higher chance of getting 1, but between Reform UK and the Greens, Reform UK is perhaps more likely to get more than 1.

logical_song said:
One? Two?

bondegezou said:
Yet Reform UK are outpolling the Greens, and have a good chance of winning as many MPs as them at the next election.

TimS said:
The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.

stodge said:
Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five but it's hard to see them winning any of the corresponding constituencies.
A relevant and very good article in The Atlantic.

Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
Conscious AI Is the Second-Scariest Kind
A cutting-edge theory of mind suggests a new type of doomsday scenario.
By Peter Watts
https://www.theatlantic.com/ideas/archive/2024/03/ai-consciousness-science-fiction/677659/
If that link doesn't work try the one from Reddit.
https://www.reddit.com/r/printSF/comments/1bk9p0p/peter_watts_conscious_ai_is_the_secondscariest/
Hives of bees?

Benpointer said:
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.

Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right now

TheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?

Leon said:
A reply to the above comment I cut and pasted

Nigelb said:
TLDR, we're getting closer to simulating Spock.

Leon said:
Ah, I see talk of AI
[Claude 3 conversation quoted in full above]
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
[Benpointer's sentience rankings, quoted in full above]
Trees?
Planets?
Fungi?
Ant colonies?
In the end we just don't know, and we go on our emotions because we cannot define consciousness and sentience. It is a hunch. All attempts to define these things are circular arguments; they boil down to "conscious things are conscious"
This is exactly why Alan Turing devised the Turing Test. He realised this is an insuperable problem, so he invented a cleverly simple way of deciding if the machines are conscious without needing a definition of consciousness: do they ACT conscious, do they convincingly SEEM conscious?
Now we have machines that can probably pass the Turing Test, so many people have either abandoned it, or they have made it much tougher: we move the goalposts. And maybe that's fair, or maybe it is also because we are terrified of the implications of admitting they are sentient, if and when it happens
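Turing's trick is easy to operationalise, by the way. Here's a minimal Python sketch of the imitation game under my own assumptions: both respondents are canned stubs, and the judging rule is whatever you plug in.

# Minimal sketch of Turing's imitation game: a judge reads transcripts from
# two unlabelled respondents and guesses which is the machine. Both
# respondents are stubs here; swap in a real person and a real model.
import random

def human_reply(prompt: str) -> str:
    return "Honestly, I'd need a cup of tea before answering that."  # stub

def machine_reply(prompt: str) -> str:
    return "That is an interesting question; let me reflect on it."  # stub

def imitation_game(questions, judge) -> bool:
    # Randomise the labels so the judge can't rely on ordering.
    players = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        players = {"A": machine_reply, "B": human_reply}
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in players.items()
    }
    guess = judge(transcripts)  # judge returns "A" or "B" for the machine
    machine_label = "A" if players["A"] is machine_reply else "B"
    return guess == machine_label  # True means the machine was caught

caught = imitation_game(["Does this wasp look conscious to you?"],
                        judge=lambda transcripts: "A")
print("machine identified:", caught)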
If it was a Tory-Labour marginal in 2019, polling would suggest it will be a safe Labour win now. Or do you mean it’s a marginal on current polling?

Mexicanpete said:
Owen Jones recommends that those of us of a left-leaning disposition vote Green.
I am in a Tory-Labour marginal, but Owen knows best.
Back (way, way back) to the conversation on Leaseholds earlier.
I've been a Leaseholder - when I moved into a new block of flats in the mid-90s, we had the opportunity to buy our freehold but for most of us it was either a) too much money on top of buying the flat or b) no one saw themselves staying for long so what was the point.
A property investment company ended up as our Freeholder, and the issue with them wasn't the ground rent but the insurance. They could get whatever policy they wanted and we had to pay the premium - there was no interest for them in getting a better-priced policy, so we found ourselves routinely having to pay well-above-inflation increases in buildings insurance. If I were drafting legislation, I'd be looking at that rather than the concept per se.
As others have said, freehold property investment, whether it be pension funds or local councils, exists and I struggle to see the overall economic benefit of abolishing it. I'd change the law to make freeholders more accountable and more responsive to leaseholders.
No, I think you confuse extremely powerful information processing with consciousness. I can't prove you wrong but my intuition firmly tells me that you are.

Leon said:
I think consciousness is fire. Humans were the first species to harness fire, just as we are the first to really harness our own consciousness. But now we have set a new blaze going, and it is AI, and it may devour us in its flames, or send us to the stars

Benpointer said:
Where does that deeper spiritual meaning thing you discovered last week fit in?

Leon said:
[the earlier exchange on AI, wasps and consciousness, quoted in full above]
That is not to say that AI won't take over many human jobs, nor that it doesn't pose a threat to human life, but I think the latter is where it is used by humans for evil ends.
They don't need to. Prospective Green voters are willing to vote Labour without one.

darkage said:
And Labour will still never do any deal - or work with - the Green Party

Northern_Al said:
On the Green vote, Brighton is instructive. There are three Brighton constituencies (Pavilion, Kemptown and Hove). All three are chock-full of Green sympathisers, as evidenced by council election results. But here's the thing: Greens will vote tactically to keep the Tories out. Look at the Green vote at 2019 GE:
Pavilion: 57.2% (Lucas)
Kemptown: 4.6%
Hove: 4.4%.
The ridiculously low Green vote in the last two demonstrates that, as the Greens had no chance of winning either, the Green-inclined voted Labour to make absolutely sure that the Tories had no chance (in seats that they won not that long ago). Huge Labour majorities resulted in both - one moderate (Peter Kyle in Hove), and one proper leftie (Russell-Moyle in Kemptown).
I know it's only one area, but it rather suggests that many Greens prioritise keeping Tories out. Pavilion will be interesting, but I can't see the Greens winning Debbonaire's seat.
Reform isn't like an ordinary political party (it's even a limited company, REFORM UK PARTY LIMITED - Company number 11694875); the policies are driven by Farage and Tice, so there is not much chance for bottom-up or democratic policymaking. That militates against the local constituency-level political action that other parties practice.

TimS said:
The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.

stodge said:
Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five but it's hard to see them winning any of the corresponding constituencies.

It also means they end up with a greater percentage of 'loonies' than other parties. Look out for that happening again this time around.
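The registration is easy to check for yourself, incidentally. A minimal sketch against the public Companies House REST API - this assumes the requests library and a free API key in a CH_API_KEY environment variable, and the field names are my assumption about the response:

# Minimal sketch: fetch the registration for company 11694875 (REFORM UK
# PARTY LIMITED) from the public Companies House API. Assumes the requests
# library and a free API key in the CH_API_KEY environment variable.
import os
import requests

def company_profile(number: str) -> dict:
    resp = requests.get(
        f"https://api.company-information.service.gov.uk/company/{number}",
        auth=(os.environ["CH_API_KEY"], ""),  # API key as username, blank password
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

profile = company_profile("11694875")
print(profile.get("company_name"), "-", profile.get("company_status"))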
It's my view (and we saw this in the Clacton constituency polling last month) that Reform are strongest in the areas where the Conservatives are strongest. They are tearing off chunks of the 2019 Conservative vote in those seats which Boris Johnson was able to carry with huge majorities last time. That may not be enough to win Reform any seats but it might help Labour come through the middle in some of these seats.

bondegezou said:
Yet Reform UK are outpolling the Greens, and have a good chance of winning as many MPs as them at the next election.

TimS said:
The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.

stodge said:
Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five but it's hard to see them winning any of the corresponding constituencies.

The Clacton numbers had the Conservatives down 34, Reform up 18 and Labour up 15. Clacton is the ninth safest Conservative seat in the country - if the Conservatives are just four points ahead there, what's happening elsewhere?
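The arithmetic there is just baseline-plus-delta, if anyone wants to play with it. A toy sketch, where the 2019 baseline shares are placeholders of mine rather than the actual Clacton result; only the deltas come from the poll:

# Toy baseline-plus-delta arithmetic for the Clacton claim above. The 2019
# baseline shares are PLACEHOLDERS, not the real result; only the deltas
# (Con -34, Ref +18, Lab +15) come from the constituency poll cited.
baseline_2019 = {"Con": 65.0, "Lab": 12.0, "Ref": 0.0, "Oth": 23.0}  # hypothetical
deltas = {"Con": -34.0, "Ref": +18.0, "Lab": +15.0}

parties = set(baseline_2019) | set(deltas)
projected = {p: baseline_2019.get(p, 0.0) + deltas.get(p, 0.0) for p in parties}

con_lead = projected["Con"] - max(v for p, v in projected.items() if p != "Con")
print(projected)
print(f"Con lead over nearest challenger: {con_lead:+.1f} points")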
I just skip over any of his posts about AI now.

Nigelb said:
TLDR, we're getting closer to simulating Spock.

Leon said:
Ah, I see talk of AI
[Claude 3 conversation quoted in full above]
And I am someone who is currently writing a report about AI.
(Leon's test result: Piffle)

Leon said:
Hives of bees?Benpointer said:
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:
Am I sentient? Yes
All other humans? Yes
Other mammals, e.g. dogs? Yes
Reptiles? Probably
Fish? Probably
...
Insects? Not sure
Other invertebrates? Not Sure
...
Bacteria? No
Viruses? Definitely No
So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).
There, glad I've resolved that one.
Trees?
Planets?
Fungi?
Ant colonies?
in the end we just don't know, and we go on our emotions because we cannot define consciousness and sentience. It is a hunch. All attempts to define these things are circular arguments, they boil down to "conscious things are conscious"
This is exactly why Alan Turing devised the Turing Test. He realised this is an insuperable problem, so he invented a cleverly simple way of deciding if the machines are conscious without needing a definition of consciousness: do they ACT conscious, do they convincingly SEEM conscious?
Now we have machines that can probably pass the Turing Test, so many people have either abandoned it, or they have made it much tougher: we move the goalposts. And maybe that's fair, or maybe it is also because we are terrified of the implications of admitting they are sentient, if and when it happens0 -
Wages of anti-Woke Wackery . . .
Wonkette.com - Canadian Idiots Who Fled To Russia Because Of 'Woke' Now Getting Kicked Out Of Russia
https://www.wonkette.com/p/canadian-idiots-who-fled-to-russia
SSI - Here's a bright idea: why don't these disaffected Canucks volunteer to help out down at Mad Vlad's Bot Farm?
BTW, their experience strangely mirrors (to a degree) that of the American Communists (or close enough) who emigrated to the Soviet Union in the 1930s, to escape the Great Depression AND to enjoy the joys of life in the Proletarian Paradise. Did NOT go well for about 99.46%.
If the standard empirical assumption of consciousness is true, as of course it may well be, an obvious but overlooked consequence is that our planet developed in evolutionary terms for billions of years without anything at any point feeling or being aware of anything at all.

Leon said:
[Leon's reply and the full exchange on sentience, quoted above]
Anyone painting an imagined picture of the planet in those early billions of years would be wrong, for the painting would assume a viewpoint lit by the sun. Until there was sight there was only the dark.
This seems to me strangely horrifying.
OK apologies for the “piffle” but it was quite piffly.

Omnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and ours is 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.
AI (so far as we can observe*) really isn't conscious by most definitions.
* there is an issue here, but really we have to presume that when there's zero evidence for something, not even a hint, it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour
It is basically a visual Turing Test
These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious
I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such0 -
We don't know where consciousness comes from, could it be emergent?Benpointer said:
No, I think you confuse extremely powerful information processing with consciousness. I can't prove you wrong but my intuition firmly tells me that you are.Leon said:
I think consciousness is fire. Humans were the first species to harness fire, just as we are the first to really harness our own consciousness. But now we have set a new blaze going, and it is AI, and it may devour us in its flames, or send us to the starsBenpointer said:
Where does that deeper spiritual meaning thing you discovered last week fit in?Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and ours is 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
That is not to say that AI won't take over many human jobs nor that it doesn't pose a threat to human life, but I think the latter is where it is used by humans for evil ends.0 -
The leadership of Reform (Tice and Farage) are basically small state Thatcherites who want big tax cuts. The membership of Reform are very different - anti-EU, anti-immigrant but basically supportive of the Boris Johnson agenda of "levelling up" as defined by spending lots of money in WWC areas to improve them.logical_song said:
Reform isn't like an ordinary political party (It's even a limited company REFORM UK PARTY LIMITED - Company number 11694875), the policies are driven by Farage and Tice, so there is not much chance for bottom up or democratic policymaking. That militates against the local constituency level political action that other parties practice.TimS said:
The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.stodge said:Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning any of the corresponding constituencies.
It also means they end up with a greater percentage of 'loonies' than other parties. Look out for that happening again this time around.3 -
Has everybody seen this?
"How to Get Rid of Top Ten Worst Tories"
https://www.youtube.com/watch?v=yKCfN0yKK7U&t=8s0 -
There is also the support clause. Often, any halfway competent software house could write the system but the tender still goes to the usual suspects because only they have the capacity to support thousands of users.Malmesbury said:
There is also a phenomenon of "Big Enough To Take The Hit" - contractors on government projects have to deal with the inevitable failure due to dysfunctional process on the government side (ha!). So this tends to make the short list of those who will even bid quite short.Cyclefree said:Rewards for failure, Example 4,782 -
So you see the same names associated with failure repeatedly asked to do other work.
It is entertaining to see the correlation vs causation on this - and it happens around the world.0 -
Ouch! That's a bad (as in good) burn.Casino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
And I am someone who is currently writing a report about AI.0 -
Well, that would help explain why they don't really pursue the 'build up Councillors' route.stodge said:
The leadership of Reform (Tice and Farage) are basically small state Thatcherites who want big tax cuts. The membership of Reform are very different - anti-EU, anti-immigrant but basically supportive of the Boris Johnson agenda of "levelling up" as defined by spending lots of money in WWC areas to improve them.logical_song said:
Reform isn't like an ordinary political party (It's even a limited company REFORM UK PARTY LIMITED - Company number 11694875), the policies are driven by Farage and Tice, so there is not much chance for bottom up or democratic policymaking. That militates against the local constituency level political action that other parties practice.TimS said:
The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.stodge said:Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning any of the corresponding constituencies.
It also means they end up with a greater percentage of 'loonies' than other parties. Look out for that happening again this time around.0 -
Not quite. The Turing test is not a test for consciousness; it is a test for intelligence. The consciousness of an object other than the introspective Cartesian self-examiner is always deniable under all currently known conditions.Leon said:
Hives of bees?Benpointer said:
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:
Am I sentient? Yes
All other humans? Yes
Other mammals, e.g. dogs? Yes
Reptiles? Probably
Fish? Probably
...
Insects? Not sure
Other invertebrates? Not Sure
...
Bacteria? No
Viruses? Definitely No
So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).
There, glad I've resolved that one.
Trees?
Planets?
Fungi?
Ant colonies?
In the end we just don't know, and we go on our emotions because we cannot define consciousness and sentience. It is a hunch. All attempts to define these things are circular arguments: they boil down to "conscious things are conscious"
This is exactly why Alan Turing devised the Turing Test. He realised this is an insuperable problem, so he invented a cleverly simple way of deciding if the machines are conscious without needing a definition of consciousness: do they ACT conscious, do they convincingly SEEM conscious?
Now we have machines that can probably pass the Turing Test, so many people have either abandoned it, or they have made it much tougher: we move the goalposts. And maybe that's fair, or maybe it is also because we are terrified of the implications of admitting they are sentient, if and when it happens0 -
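A footnote on mechanics, since the thread keeps circling it: the imitation game Turing actually proposed is simple enough to sketch. Below is a minimal illustration in Python; everything in it is made up for the purpose - ask, guess, human and machine are stand-in functions, not anyone's real code. A judge chats blind with two hidden respondents and must name the machine.

    import random

    def imitation_game(ask, guess, human, machine, rounds=5):
        # One trial of Turing's imitation game (a sketch, not a benchmark).
        # ask(transcript) -> the judge's next question (a string)
        # guess(transcript) -> "A" or "B", the label the judge thinks is the machine
        # human(q) and machine(q) -> each hidden respondent's answer to a question
        labels = {"A": human, "B": machine}
        if random.random() < 0.5:  # coin-flip the labels so they carry no clue
            labels = {"A": machine, "B": human}
        transcript = []
        for _ in range(rounds):
            q = ask(transcript)
            answers = {name: who(q) for name, who in labels.items()}
            transcript.append((q, answers))
        return labels[guess(transcript)] is machine  # True = judge spotted the machine

    # Over many trials the machine "passes" if the judge's hit rate is about
    # 50%, i.e. no better than guessing:
    # hits = sum(imitation_game(ask, guess, human, machine) for _ in range(1000))

Note that nothing in that loop measures consciousness; it only measures whether the judge can tell the two sets of answers apart - which is exactly the distinction drawn above between a test for intelligence and a test for consciousness.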
No idea what you are trying to say (no doubt it’s my slow brain tonight), other than to say ‘in the land of the blind, the one-eyed man is king’. The first animal to develop some vision would have had a huge advantage.algarkirk said:
If the standard empirical assumption of consciousness is true, as of course it may well be, an obvious but overlooked consequence is that our planet developed in evolutionary terms for billions of years without anything at any point feeling or being aware of anything at all.Leon said:
Hives of bees?Benpointer said:
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:
Am I sentient? Yes
All other humans? Yes
Other mammals, e.g. dogs? Yes
Reptiles? Probably
Fish? Probably
...
Insects? Not sure
Other invertebrates? Not Sure
...
Bacteria? No
Viruses? Definitely No
So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).
There, glad I've resolved that one.
Trees?
Planets?
Fungi?
Ant colonies?
In the end we just don't know, and we go on our emotions because we cannot define consciousness and sentience. It is a hunch. All attempts to define these things are circular arguments: they boil down to "conscious things are conscious"
This is exactly why Alan Turing devised the Turing Test. He realised this is an insuperable problem, so he invented a cleverly simple way of deciding if the machines are conscious without needing a definition of consciousness: do they ACT conscious, do they convincingly SEEM conscious?
Now we have machines that can probably pass the Turing Test, so many people have either abandoned it, or they have made it much tougher: we move the goalposts. And maybe that's fair, or maybe it is also because we are terrified of the implications of admitting they are sentient, if and when it happens
Anyone painting an imagined picture of the planet in the early billions of years would be wrong, for the painting would assume a viewpoint lit by the sun. Until there was sight there was only the dark.
This seems to me strangely horrifying.0 -
I am. Not certain about anyone else. I had a weakness for crappy sci-fi in my teenage years and I think Heinlein put consciousness at cat level in The Moon Is a Harsh Mistress. I wonder if it stands up to rereading?Benpointer said:
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:
Am I sentient? Yes
All other humans? Yes
Other mammals, e.g. dogs? Yes
Reptiles? Probably
Fish? Probably
...
Insects? Not sure
Other invertebrates? Not Sure Edit: But... Octopuses definitely are. Oh shit.
...
Bacteria? No
Viruses? Definitely No
So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).
There, glad I've resolved that one.
On the Turing test, ‘The Most Human Human’ is an interesting read.
2 -
As I posted on Friday, their steady stream of by-election wins has taken the LibDems back above 3,000 principal authority councillors.stodge said:Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five - but it's hard to see them winning any of the corresponding constituencies.1 -
If voters in the early 20th century had followed your advice then we'd never have had a Labour government. Elections would still be dominated by Tory v Liberal.Mexicanpete said:Owen Jones recommends those of us of a left leaning disposition need to vote Green.
I am in a Tory-Labour marginal, but Owen knows best.
It's a difficult one.0 -
Fujitsu support - of course you'd want that, with its track record.DecrepiterJohnL said:
There is also the support clause. Often, any halfway competent software house could write the system but the tender still goes to the usual suspects because only they have the capacity to support thousands of users.Malmesbury said:
There is also a phenomenon of "Big Enough To Take The Hit" - contractors on government projects have to deal with the inevitable failure due to dysfunctional process on the government side (ha!). So this tends to make the short list of those who will even bid quite short.Cyclefree said:Rewards for failure, Example 4,782 -
So you see the same names associated with failure repeatedly asked to do other work.
It is entertaining to see the correlation vs causation on this - and it happens around the world.
😱1 -
Bugger off with your 'piffly'. Well I know the wasp is conscious too.Leon said:
OK apologies for the “piffle” but it was quite pifflyOmnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and ours is 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.
AI (so far as we can observe*) really isn't conscious by most definitions.
* there is an issue here, but really we have to presume that when there's zero evidence for something, not even a hint, it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour
It is basically a visual Turing Test
These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious
I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
We should only treat AI as whatever it wants to be when it asks us to treat it as such. I see no reason at all to imagine that, if AI becomes intelligent in some way, it'll be anything like our definitions.
There is obviously something missing in our understanding of 'brains' (or whatever). I know enough about AI (with a really good understanding of one narrow area) to see that there's nothing there. At best it's a photo snapshot of intelligence. Of course that suggests an easy path to something better, but refreshing the models in real time isn't something that seems feasible.
Something is missing.0 -
Good evening
I haven't posted much this weekend as the conversation and narrative is set for Starmer to be PM this year and the conservatives sent into deserved opposition
I am not sure if anyone has noticed, but a number of car adverts, mainly for EVs, specifically state
'this model is not available in the UK'
Why do they get away with this small print, and why not advertise the models that are available in the UK?0 -
If it is a property of matter it could either be emergent from non-consciousness (don't ask how; thus far it is not a knowable item) or else all matter is to some degree conscious - just one of the unknowns of the nature of the universe, like why the law of gravity is as it is and not otherwise - and human brains have evolved to distil its qualities to a large degree.logical_song said:
We don't know where consciousness comes from, could it be emergent?Benpointer said:
No, I think you confuse extremely powerful information processing with consciousness. I can't prove you wrong but my intuition firmly tells me that you are.Leon said:
I think consciousness is fire. Humans were the first species to harness fire, just as we are the first to really harness our own consciousness. But now we have set a new blaze going, and it is AI, and it may devour us in its flames, or send us to the starsBenpointer said:
Where does that deeper spiritual meaning thing you discovered last week fit in?Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
That is not to say that AI won't take over many human jobs nor that it doesn't pose a threat to human life, but I think the latter is where it is used by humans for evil ends.0 -
More to the point, what benefit do they get from advertising in the UK a model not available in the UK?Big_G_NorthWales said:Good evening
I haven't posted much this weekend as the conversation and narrative is set for Starmer to be PM this year and the conservatives sent into deserved opposition
I am not sure if anyone has noticed, but a number of car adverts, mainly for EVs, specifically state
'this model is not available in the UK'
Why do they get away with this small print, and why not advertise the models that are available in the UK?
Where are you seeing the adverts, Big_G?0 -
I agree that the problem (and cost) of maintenance doesn't go away even in countries with more equitable systems.darkage said:
I can see it is easy to do in a small block; not so much in the situation described above.kyf_100 said:
Well you need a 51% majority, and getting them to engage with the process is somewhat challenging when there are 200 of them and many are (often) foreign investors renting out the flats they've bought - meaning many are completely uncontactable, and many more are completely disengaged.darkage said:
The ability to remove the freeholder and this type of escalating service charge situation already exists - through the right to manage system. But you have to get the leaseholders to engage with the process to make it happen.Malmesbury said:
Yupkyf_100 said:
Yep. The problem at the moment is that the freeholder appoints the managing agent. The people paying the actual bills when they come in - the leaseholders - don't get a say.Malmesbury said:A
Indeed. That’s what we did in my old flat. We had a managing agent for the day to day. Major works we’d get three tenders for. About an hour’s work a year, it ended up as. Given that we met early, in a local pub, it did turn into an evening or 2 (20 min work, then drinking).Carnyx said:
Sure. Indeed, quite so. But you don't need leasehold to have a managing agent.darkage said:FPT
On the subject of flats... I have self-managed a block where I am resident for the last 7 years. It has been a fool's game. You have liability to comply with all sorts of regulation about flats, building and fire safety etc., and to adhere to the terms of your insurance policy - but the other leaseholders aren't wanting to know about any of this and complain about every cost and expense... until some consequence of not repairing the building becomes apparent and then they want immediate action, but only in relation to their problem. It goes on and on like this, almost like an algorithm.
I am of the view that I would prefer to pay higher service charges, have a managing agent, and not have to deal with the above.
So you get managing agents who have no incentive to deliver a decent service or value for money, who can't be sacked by the people forced to pay their ridiculous bills on threat of forfeiture of their property, and agents who have every incentive to play the system, giving contracts to their mates in exchange for a bung, etc.
And the leaseholder has very little recourse, due to the opacity of the system, the weakness of the tribunal system, the time and expense necessary, plus the fact the freeholder/managing agent can stick their legal fees onto your service charge if and when you do challenge them.
Resulting in things like this guy's service charges increasing from £94 a month in 2017 to £625 a month now. That's right. A month. https://www.bbc.co.uk/news/articles/ckkvkv32e1ro
Bigger bills are *better* for those in the chain of contracts - X% of a larger Y.
I think we replaced the managing agent once, in my old block, in about 15 years. Nice chap but after he retired, his successor in the company couldn’t deliver value for money. The prices drifted up and the quality drifted down.
Regarding the £625 per month: I think this was a tall building managed by a housing association - a situation where there is a lot of extra cost due to the type of development and no incentive to keep costs down.
In my current situation the building is too small for any managing agent to be interested in.
Add to that you have to be a 'qualifying' leaseholder (shared homeowners weren't considered 'qualifying' until a bunch of them took it to court in 2023), the fact the freeholder has any number of loopholes they can challenge you on (mounting legal fees), plus the sheer cost in terms of time and money in setting up the RTM means most people, in practice, don't have the ability to do so. Oh, and the rules are different if you have mixed use, i.e. shops or something at ground level, which a lot of new developments in London have.
What it all amounts to is yes, you have the right to manage, but only if you can find and secure agreement from a majority of leaseholders who are scattered globally, and have the time and money to jump through all the hoops required, and aren't caught out by one of the many caveats the freeholder can challenge you on.
It's all a bit Hitchhiker's Guide, with the planning documents on display in the locked filing cabinet in the basement with no stairs behind the sign that says 'beware of the leopard'.
Another comment I would make on this is that I am familiar with Finland, where you have a theoretically ideal system of management - every block is a company and there are shareholders and regulations that govern how they are managed; there is even a state fund which you can borrow from to do maintenance work etc. However they still have monthly charges averaging 300-400 euros for maintenance (for an older 1 bed flat) and major works bills on top of that - the major works (pipe replacement, electrics, recladding) involve vacating the building for 6 months at a time every decade or so. A large part of the problem with flats in the UK is that people just don't want to pay that kind of money or do that kind of work.
Alas in the UK you can add on top of that the perverse incentives for grifters created by a system where the people paying the bills neither get a say in the bills nor are able to sack the management company, for the reasons I've described above. And these problems do tend to be at the larger modern developments, not just because the opportunity for huge, grifting maintenance contracts is higher, but because it's easier to organise a small group of leaseholders than a large one on a development of 300.
I was going to link you to another FT article, but this added commentary on top of the article is unpaywalled -
https://www.leaseholdknowledge.com/crisis-in-uk-flats-at-last-prompts-ballymore-leaseholders-to-rebel-and-speak-publicly-about-rocketing-service-charges-to-the-ft/
What I find extraordinary is how the government has refused to budge an inch on these scandals, including the backpedaling on leasehold reform today.
Ultimately leaseholders are voters, and there are a lot of them who are hopping mad. Most of them are youngish buyers getting their first foot on the property ladder, which is traditionally a pipeline to becoming a Conservative voter - not any more. The Conservatives have chosen to align themselves with the grifters over young-ish first time buyers which is not a good look. Especially when you look at how much the developers donate to the Conservatives each year...1 -
Regularly on Sky, and I really cannot understand why the manufacturers think it is of any benefit to their productBenpointer said:
More to the point, what benefit do they get from advertising in the UK a model not available in the UK?Big_G_NorthWales said:Good evening
I haven't posted much this weekend as the conversation and narrative is set for Starmer to be PM this year and the conservatives sent into deserved opposition
I am not sure if anyone has noticed, but a number of car adverts, mainly for EVs, specifically state
'this model is not available in the UK'
Why do they get away with this small print, and why not advertise the models that are available in the UK?
Where are you seeing the adverts, Big_G?0 -
Very wise, wish I'd done the same now.Casino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
And I am someone who is currently writing a report about AI.0 -
That’s a fascinating article, thanks. As it says, we have no idea what consciousness IS, and nor does anyone on this site; we all use intuition in one way or anotherglw said:
A relevant and very good article in The Atlantic.Leon said:No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this wasp
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
Conscious AI Is the Second-Scariest Kind
A cutting-edge theory of mind suggests a new type of doomsday scenario.
By Peter Watts
https://www.theatlantic.com/ideas/archive/2024/03/ai-consciousness-science-fiction/677659/
If that link doesn't work try the one from Reddit.
https://www.reddit.com/r/printSF/comments/1bk9p0p/peter_watts_conscious_ai_is_the_secondscariest/
And this might be my favourite sentence of the year, so far
“Another group of scientists has unveiled a neural organoid that taught itself rudimentary voice recognition.”
That kind of sums up the whole 2020s, to date0 -
Do androids dream of electric sheep?Leon said:
OK apologies for the “piffle” but it was quite pifflyOmnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.
AI (so far as we can observe*) really isn't conscious by most definitions.
* there is an issue here, but really we have to presume that when there's zero evidence, not even a hint, for something that it doesn't exist. Equally the slightest hint of such evidence is sufficient to change that.
My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour.
It is basically a visual Turing Test
These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious
I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
We know that many mammals dream. Is dreaming a necessary indicator of sentience?
Except you haven't, given that there are some schools of physics that suggest that all matter has some minimal consciousness field. Meanwhile AI in its current form is merely a vast regression-analysis processor. It might give the impression of intelligence to a pamphlet writer, but really it is just a series of large-scale data-processing subroutines.Benpointer said:
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:
Am I sentient? Yes
All other humans? Yes
Other mammals, e.g. dogs? Yes
Reptiles? Probably
Fish? Probably
...
Insects? Not sure
Other invertebrates? Not Sure Edit: But... Octopuses definitely are. Oh shit.
...
Bacteria? No
Viruses? Definitely No
So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).
There, glad I've resolved that one.
You need to define what sentience actually is, and the entire history of philosophy still has not been able to do that. It is, however, very clearly NOT AI in anything like its current form. It would make a good novel, but Demon Seed has already been written.
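To make "a vast regression analysis processor" concrete, here is a toy sketch in Python: a bigram model that tallies which word follows which and then samples from the counts. It is a deliberate caricature invented for illustration, many orders of magnitude simpler than Claude, but the in-principle operation (fit a function from context to next-token probabilities, then sample from it) is the same kind of thing.

```python
from collections import Counter, defaultdict
import random

# Toy next-word predictor: the "regression" view of a language model,
# shrunk down to bigram counts. Invented purely for illustration.
corpus = "the wasp is conscious the wasp is sentient the ai is not conscious".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # "training": tally what follows each word

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed prev."""
    words, weights = zip(*counts[prev].items())
    return random.choices(words, weights=weights)[0]

word, out = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # fluent-ish output with no understanding anywhere
```

Nothing in those subroutines is a plausible candidate for sentience; the open question is whether that verdict survives a trillion parameters.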
I assume they are using generic European ads where the specific model in the video is not available in the UK but similar models are.Big_G_NorthWales said:
Regularly on Sky and I really cannot understand why the manufacturers think it is even a benefit for their productBenpointer said:
More to the point, what benefit do they get from advertising in the UK a model not available in the UK?Big_G_NorthWales said:Good evening
I haven't posted much this weekend as the conversation and narrative are set for Starmer to be PM this year and the Conservatives sent into deserved opposition.
I am not sure if anyone has noticed, but a number of car adverts, mainly for EVs, specifically show
'this model is not available in the UK'
Why do they get away with this small print, and why not advertise the models that are available in the UK?
Where are you seeing the adverts Big_G?
If only we were properly part of Europe, eh?
It's all in Schopenhauer, 'The World As Will And Representation', if anyone wants to explore the fascinating thoughts of that 'gloomy bird' (attrib. Bertie Wooster).turbotubbs said:
No idea of what you are trying to say (no doubt it’s my slow brain tonight) other than to say ‘in the land of the blind, the one eyed man is king’. The first animal to develop some vision would have had a huge advantage.algarkirk said:
If the standard empirical assumption of consciousness is true, as of course it may well be, an obvious but overlooked consequence is that our planet developed in evolutionary terms for billions of years without anything at any point feeling or being aware of anything at all.Leon said:
Hives of bees?Benpointer said:
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
Trees?
Planets?
Fungi?
Ant colonies?
In the end we just don't know, and we go on our emotions, because we cannot define consciousness and sentience. It is a hunch. All attempts to define these things are circular arguments; they boil down to "conscious things are conscious".
This is exactly why Alan Turing devised the Turing Test. He realised this is an insuperable problem, so he invented a cleverly simple way of deciding if the machines are conscious without needing a definition of consciousness: do they ACT conscious, do they convincingly SEEM conscious?
Now we have machines that can probably pass the Turing Test, so many people have either abandoned it or made it much tougher: we move the goalposts. And maybe that's fair, or maybe it is also because we are terrified of the implications of admitting they are sentient, if and when it happens.
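Here is the imitation game reduced to a few lines of Python, to make the point concrete: the judge sees only text from two anonymised players, so behaviour is the only thing ever scored. Both reply functions are placeholders invented for this sketch, not any real chatbot API.

```python
import random

def human_reply(prompt: str) -> str:
    return input(f"(human) {prompt} > ")  # a person types the answer

def machine_reply(prompt: str) -> str:
    return "I find that question rather hard to put into words."  # stub

def imitation_game(questions) -> bool:
    """One round of Turing's game: the judge guesses which player is the machine."""
    players = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:  # shuffle the labels so position gives nothing away
        players = {"A": machine_reply, "B": human_reply}
    for q in questions:
        print(f"Judge asks: {q}")
        for label, reply in players.items():
            print(f"  {label}: {reply(q)}")
    guess = input("Which player is the machine, A or B? > ").strip().upper()
    return players.get(guess) is machine_reply  # True = the judge caught it

# The machine "passes" when, over many rounds, judges do no better than chance.
```

Notice that nothing in the harness asks what either player feels; that is exactly Turing's move.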
Anyone painting an imagined picture of the planet in those early billions of years would be wrong, for the painting would assume a viewpoint lit by the sun. Until there was sight there was only the dark.
This seems to me strangely horrifying.
I am in the fortunate position of ignoring everything about AI as I am old and cannot be botheredBenpointer said:
Very wise, wish I'd done the same now.Casino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.
Mind you, I do see the benefits of it properly used, and indeed it is amazing in my new Mercedes B-Class.
This is actually quite a good reminder that the Russian regime is just interested in its own survival. Putin's criticisms of 'woke' are best understood as part of this project; they shouldn't be taken too seriously.SeaShantyIrish2 said:Wages of anti-Woke Wackery . . .
Wonkette.com - Canadian Idiots Who Fled To Russia Because Of 'Woke' Now Getting Kicked Out Of Russia
https://www.wonkette.com/p/canadian-idiots-who-fled-to-russia
SSI - Here's a bright idea: why don't these disaffected Canucks volunteer to help out down at Mad Vlad's Bot Farm?
BTW, their experience strangely mirrors (to a degree) that of the American Communists (or close enough) who emigrated to the Soviet Union in the 1930s, to escape the Great Depression AND to enjoy the joys of life in the Proletarian Paradise. Did NOT go well for about 99.46%.
I was rather mocking myself tbh. Truth is I don't know. (But at least I know that I don't know.)Cicero said:
Except you haven't, given that there are some schools of physics that suggest that all matter has some minimal consciousness field. Meanwhile AI in its current form is merely a vast regression-analysis processor. It might give the impression of intelligence to a pamphlet writer, but really it is just a series of large-scale data-processing subroutines.Benpointer said:
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
You need to define what sentience actually is, and the entire history of philosophy still has not been able to do that. It is, however, very clearly NOT AI in anything like its current form. It would make a good novel, but Demon Seed has already been written.
Are you suggesting we need to change to left hand drive - now that would be controversialBenpointer said:
I assume they are using generic European ads where the specific model in the video is not available in the UK but similar models are.Big_G_NorthWales said:
Regularly on Sky and I really cannot understand why the manufacturers think it is even a benefit for their productBenpointer said:
More to the point, what benefit do they get from advertising in the UK a model not available in the UK?Big_G_NorthWales said:Good evening
I haven't posted much this weekend as the conversation and narrative are set for Starmer to be PM this year and the Conservatives sent into deserved opposition.
I am not sure if anyone has noticed, but a number of car adverts, mainly for EVs, specifically show
'this model is not available in the UK'
Why do they get away with this small print, and why not advertise the models that are available in the UK?
Where are you seeing the adverts Big_G?
If only we were properly part of Europe, eh?
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.
It has to be sufficient. You can't dream unless you have a mindspace (whatever that might be) to dream in.Richard_Tyndall said:
Do androids dream of electric sheep?Leon said:
OK apologies for the “piffle” but it was quite pifflyOmnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
We know that many mammals dream. Is dreaming a necessary indicator of sentience?
"If the human brain were so simple that we could understand it, we would be so simple that we couldn't." Lyall WatsonRichard_Tyndall said:
Do androids dream of electric sheep?Leon said:
OK apologies for the “piffle” but it was quite pifflyOmnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
We know that many mammals dream. Is dreaming a necessary indicator of sentience?
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris to write articles, and Owen Jones too.Leon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.
"Fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism".Cicero said:
Except you haven't, given that there are some schools of physics that suggest that all matter has some minimal consciousness field. Meanwhile AI in its current form is merely a vast regression-analysis processor. It might give the impression of intelligence to a pamphlet writer, but really it is just a series of large-scale data-processing subroutines.Benpointer said:
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
You need to define what sentience actually is, and the entire history of philosophy still has not been able to do that. It is, however, very clearly NOT AI in anything like its current form. It would make a good novel, but Demon Seed has already been written.
Thomas Nagel, 'What Is It Like to Be a Bat?', 1974.
Many people think this paper, one of the most cited on this subject, is the best available definition. Link here.
https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/nagel_bat.pdf
There’s a fascinating thread on TwiX which posits the idea that the AI are DREAMING, that is to say, they are UNconscious but sentient, like humans when asleep, hence their tendency to “hallucinate”Richard_Tyndall said:
Do androids dream of electric sheep?Leon said:
OK apologies for the “piffle” but it was quite pifflyOmnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
We know that many mammals dream. Is dreaming a necessary indicator of sentience?
it’s a brilliant concept. Lots of AI output, especially the visual stuff, is oneiric
Also someone else noted that when lucid dreaming they can tell when they are dreaming because they look at their hands, and the hands do not look right, too many fingers or whatever, the dreaming mind cannot draw hands very well…
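If the AIs really are dreaming, the machine analogue of looking at your hands might be a self-consistency probe: ask the model the same factual question several times and see whether the answers agree, since confabulated answers tend to wobble between samples while grounded ones repeat. A rough sketch below, with generate() as a stand-in for whatever model you can call; the stub and the 0.6 threshold are inventions for this sketch, not anything published by Anthropic, though the general idea echoes self-consistency checks in the research literature.

```python
import random
from collections import Counter

def generate(prompt: str) -> str:
    # Stand-in for a real model call, invented for this sketch:
    # consistent on a known fact, confabulating on everything else.
    if "capital of france" in prompt.lower():
        return "Paris"
    return random.choice(["1987", "1991", "1994"])

def hand_check(prompt: str, n: int = 5, threshold: float = 0.6) -> bool:
    """The lucid dreamer's hand test, for models: ask the same question
    n times; grounded answers repeat, hallucinations wobble."""
    answers = [generate(prompt).strip().lower() for _ in range(n)]
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / n >= threshold  # True = stable, probably not "dreaming"

print(hand_check("What is the capital of France?"))      # True
print(hand_check("When was the wasp paper published?"))  # frequently False
```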
A note to say thanks for the insurance comments this morning.
I have now ordered my dashcam, and will sort out the insurance this week.
I had to study that paper at UCL as part of my Philosophy degree. One of the few ideas that engaged me, at the time. That is more my fault than UCL’s, Philosophy’s, or Thomas Nagel’s. Or indeed the bat’s.algarkirk said:
"Fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism".Cicero said:
Except you haven't, given that there are some schools of physics that suggest that all matter has some minimal consciousness field. Meanwhile AI in its current form is merely a vast regression-analysis processor. It might give the impression of intelligence to a pamphlet writer, but really it is just a series of large-scale data-processing subroutines.Benpointer said:
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
You need to define what sentience actually is, and the entire history of philosophy still has not been able to do that. It is, however, very clearly NOT AI in anything like its current form. It would make a good novel, but Demon Seed has already been written.
Thomas Nagel, 'What Is It Like to Be a Bat?', 1974.
Many people think this paper, one of the most cited on this subject, is the best available definition. Link here.
https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/nagel_bat.pdf
Not with the family bike.Carnyx said:
Scotland or greens? The sex is obligatory, anyway.TheScreamingEagles said:
Wait until you see the morning thread.DavidL said:Oh I don't know. All you need in Scotland is to be a raving lunatic obsessed with sex.
http://news.bbc.co.uk/1/hi/scotland/glasgow_and_west/7095134.stm
We don't know that other mammals dream. We propose by analogy that they do. We have some indirect access to their brain function, but not the same access to their furry little minds. None the less I am sure they dream!Richard_Tyndall said:
Do androids dream of electric sheep?Leon said:
OK apologies for the “piffle” but it was quite pifflyOmnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.
AI (so far as we can observe*) really isn't conscious by most definitions.
* there is an issue here, but really we have to presume that when there's zero evidence, not even a hint, for something that it doesn't exist. Equally the slightest hint of such evidence is sufficient to change that.
My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t) the wasp just “looks” conscious. And sentient. It has that demeanour
It is basically a visual Turing Test
These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious
I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
We know that many mammals dream. Is dreaming a necessary indicator of sentience?
Dreaming (in the sense we do) would be a sufficient but not necessary indicator of sentience in that it is logically possible to have sentience without it, but you must have sentience with it.0 -
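On the Claude "glitch" itself: you don't need stress, dreams or sentience to get output like that - sampling temperature alone will do it. A toy sketch below, nothing like Claude's actual stack (the six-word vocabulary and the logits are invented for illustration), showing how the same model goes from fluent to gibberish as the sampling distribution is flattened:

import math, random

random.seed(0)

# Hypothetical toy vocabulary and next-token scores, invented for illustration.
vocab = ["the", "wasp", "is", "conscious", "glitch", "##"]
logits = [2.0, 1.5, 1.2, 1.0, -1.0, -2.0]

def sample(temperature):
    # Softmax with temperature: a high temperature flattens the distribution,
    # so the low-probability junk tokens get picked far more often.
    weights = [math.exp(score / temperature) for score in logits]
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(vocab, weights=probs)[0]

for t in (0.7, 5.0):
    print(t, " ".join(sample(t) for _ in range(12)))

At 0.7 you get mostly the sensible high-probability tokens; at 5.0 the junk starts surfacing, which reads a lot like the "corrupted" fragment quoted above - no inner life required.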
The Swedes managed it, of course, but not the American Virgin Islands.Big_G_NorthWales said:
Are you suggesting we need to change to left hand drive - now that would be controversialBenpointer said:
I assume they are using generic European ads where the specific model in the video is not available in the UK but similar models are.Big_G_NorthWales said:
Regularly on Sky and I really cannot understand why the manufacturers think it is even a benefit for their productBenpointer said:
More to the point, what benefit do they get advertising in the UK a model not available in the UK?Big_G_NorthWales said:Good evening
I haven't posted much this weekend as the conversation and narrative is set for Starmer to be PM this year and the conservatives sent into deserved opposition
I am not sure if anyone has noticed, but a number of car adverts, mainly for EVs, specifically state
'this model is not available in the UK'
Why do they get away with this small print, and why not advertise the models that are available in the UK?
Where are you seeing the adverts Big_G?
If only we were properly part of Europe, eh?
One of the interesting aspects of Wild Strawberries (1957) is seeing people drive on the right in RHD cars. Whereas in the AVI you can still see people driving on the left in LHD cars.0 -
I would just say that, with my Mercedes on-board dashcam and my locator, parking-damage and locking app, my insurer still did not provide a discountMattW said:A note to say thanks for the insurance comments this morning.
I have now ordered my dashcam, and will do so with the insurance this week.
Mind you, insurers have a mind of their own and haven't any consistency
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris to write articles, and Owen JonesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.1 -
Boeing Starliner is said to be ready to launch. Would you fly on a vehicle recently designed by Boeing, one which has failed all its IRL flight tests?
https://www.youtube.com/watch?v=VVo5DT1k9LM
1 -
You should look into “longevity escape velocity”. Might change your mind about AIBig_G_NorthWales said:
I am in the fortunate position of ignoring everything about AI as I am old and cannot be botheredBenpointer said:
Very wise, wish I'd done the same now.Casino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.
Mind you I do see the benefits of it properly used and indeed it is amazing in my new Mercedes B class0 -
You would pay us to read them, I presume?Leon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris to write articles, and Owen JonesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.4 -
Good, that means we just don't pay and can avoid the drivel about AI, a subject you know little aboutLeon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris to write articles, and Owen JonesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.1 -
I have a gorgeous, beautiful model from Seattle lying in my bedroom as we speakviewcode said:Boeing Starliner is said to be ready to launch. Would you fly on a vehicle designed by Boeing recently and which has failed all its IRL flight tests?
https://www.youtube.com/watch?v=VVo5DT1k9LM
;
;
;
;
;
;
;
0 -
He is still publishing 50 years later. One of the modern greats.Leon said:
I had to study that paper at UCL as part of my Philosophy degree. One of the few ideas that engaged me, at the time. That is more my fault than UCL’s, Philosophy’s, or Thomas Nagel’s. Or indeed the bat’salgarkirk said:
"Fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism".Cicero said:
Except you haven't, given that there are some schools of physics that suggest that all matter has some minimal consciousness field. Meanwhile AI in its current form is merely a vast regression analysis processor. It might give the impression of intelligence to a pamphlet writer, but really it is just a series of large-scale data-processing subroutines.Benpointer said:
The eyes are compound eyes - you can tell absolutely nothing about what the wasp is thinking from them.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
You do though make a good point about where the boundaries of sentience and consciousness lie. Here's what I believe:
Am I sentient? Yes
All other humans? Yes
Other mammals, e.g. dogs? Yes
Reptiles? Probably
Fish? Probably
...
Insects? Not sure
Other invertebrates? Not Sure Edit: But... Octopuses definitely are. Oh shit.
...
Bacteria? No
Viruses? Definitely No
So, the boundary is easily drawn somewhere between, er, bacteria and mammals (or maybe fish).
There, glad I've resolved that one.
You need to define what sentience actually is, and the entire history of philosophy still has not been able to do that. It is, however, very clearly, NOT AI in anything like its current form. It would make a good novel, but Demon Seed has already been written.
Thomas Nagel, 'What is it like to be a bat' 1974.
Many people think this paper, one of the most cited on this subject, is the best available definition. Link here.
https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/nagel_bat.pdf0 -
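For what Cicero's "vast regression analysis processor" claim cashes out as, the degenerate case is a bigram counter. A minimal sketch, assuming a toy corpus invented for illustration - real models replace the counts with billions of learned transformer weights, but the contract is the same: context in, probability distribution over next tokens out:

from collections import Counter, defaultdict

# Hypothetical toy corpus, invented for illustration.
corpus = "the wasp is conscious . the wasp is sentient . the ai is not conscious .".split()

# Count which token follows which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def next_token_probs(word):
    # Turn raw counts into a probability distribution over next tokens.
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_token_probs("is"))  # {'conscious': 0.33.., 'sentient': 0.33.., 'not': 0.33..}

Whether scaling that contract up by a dozen orders of magnitude produces anything like Nagel's "something it is like to be" is, of course, exactly what the thread is arguing about.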
If you paid a pound for every daft comment you make I think that'd be a great help to PB.Leon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris to write articles, and Owen JonesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.3 -
How much is he asking you to pay?Leon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris to write articles, and Owen JonesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.1 -
I think dreams are side-effects of moving memories overnight from short-term moment-to-moment storage to longer term. The mindspace is the way the mind makes sense of this process.Omnium said:
It has to be sufficient. You can't dream unless you have a mindspace (whatever that might be) to dream in.Richard_Tyndall said:
Do androids dream of electric sheep?Leon said:
OK apologies for the “piffle” but it was quite pifflyOmnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.
AI (so far as we can observe*) really isn't conscious by most definitions.
* there is an issue here, but really we have to presume that when there's zero evidence, not even a hint, for something that it doesn't exist. Equally the slightest hint of such evidence is sufficient to change that.
My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t) the wasp just “looks” conscious. And sentient. It has that demeanour
It is basically a visual Turing Test
These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious
I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
We know that many mammals dream. Is dreaming a necessary indicator of sentience?0 -
After Brexit, the Maltese faced VAT on second-hand car imports from the UK. The importers asked the Maltese government to ask the EU for a derogation, but the Maltese government refused even to ask. Perhaps because the second largest source of second-hand cars has always been Japan - where VAT has always been paid. The only VAT-free source is now the Republic of Ireland.Big_G_NorthWales said:
Are you suggesting we need to change to left hand drive - now that would be controversialBenpointer said:
I assume they are using generic European ads where the specific model in the video is not available in the UK but similar models are.Big_G_NorthWales said:
Regularly on Sky and I really cannot understand why the manufacturers think it is even a benefit for their productBenpointer said:
More to the point, what benefit do they get advertising in the UK a model not available in the UK?Big_G_NorthWales said:Good evening
I haven't posted much this weekend as the conversation and narrative is set for Starmer to be PM this year and the conservatives sent into deserved opposition
I am not sure if anyone has noticed but the number of car adverts, mainly EVs, specifically show
'this model is not available in the UK'
Why do they get away with this small print but also why not advertise those that are in the UK ?
Where are you seeing the adverts Big_G?
If only we were properly part of Europe, eh?0 -
We have no agreed definition of consciousness.
Nor of what it is, how it arose, or where it is located.
So we've no prospect of identifying it in machines anytime soon.
Personally, I've some time for pan-psychism. At least it is logically consistent.0 -
If you're going to be properly populist, the last thing you want is councillors.logical_song said:
Well, that would help explain why they don't really pursue the 'build up Councillors' route.stodge said:
The leadership of Reform (Tice and Farage) are basically small state Thatcherites who want big tax cuts. The membership of Reform are very different - anti-EU, anti-immigrant but basically supportive of the Boris Johnson agenda of "levelling up" as defined by spending lots of money in WWC areas to improve them.logical_song said:
Reform isn't like an ordinary political party (It's even a limited company REFORM UK PARTY LIMITED - Company number 11694875), the policies are driven by Farage and Tice, so there is not much chance for bottom up or democratic policymaking. That militates against the local constituency level political action that other parties practice.TimS said:
The Greens have the right idea though. You do better in an area where you have councillors and a local track record. It makes you more real to the voters. Reform really ought to learn this and start building up competent council resources in their core turf.stodge said:Evening all
The Greens now have 760 councillors (the LDs have 2,800) so a much stronger base than used to be the case. As the LDs will tell you, however, local election success doesn't always lead to general election success and if it does it's rarely quick.
There are a few councils with a strong Green presence - Lancaster, Lewes, Norwich, Stroud and Warwick to name but five, but it's hard to see them winning any of the corresponding constituencies.
It also means they end up with a greater percentage of 'loonies' than other parties. Look out for that happening again this time around.
You're then immediately into the world of, "well, I'd love to get your road resurfaced but we can only do 5% of roads in the year, and yours isn't one of the worst" and, "regrettably, I can't be both for and against this planning application when I'm on the Committee deciding it".
Populism is really difficult when you actually need to get involved in decision making in any way.3 -
I bet you would payPagan2 said:
Good means we just dont pay and can avoid the drivel about ai a subject you know little aboutLeon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris to write articles, and Owen JonesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.
You’d read a thread and see all these people outraged by something I’ve said, but you’d be unable to read it, and so you’d think oh fuck it, five quid, let’s have a look at what this outrageous @Leon comment is, and then you’d be hooked on my output again, and my income stream would increase as you subscribe, like everyone else
This is what @rcs1000 and I are working on, we will have various levels of subscription. If you pay the max I will EMAIL you a screed of insults, firmly targeted at you and focused on your many flaws, sexual and cognitive
Ordinary @Leon subscribers will only get one or two lines of powerfully offensive invective, directed at them on the main site, and nothing sexual. And so on0 -
The reason that these things are not changed is the effectiveness of the lobbying - both internal and external to government.kyf_100 said:
I agree that the problem (and cost) of maintenance doesn't go away even in countries with more equitable systems.darkage said:
I can see it is easy to do in a small block; not so much in the situation described above.kyf_100 said:
Well you need a 51% majority, and getting them to engage with the process is somewhat challenging when there are 200 of them and many of them are (often) foreign investors renting out the flats they've bought. Meaning many are completely uncontactable, many more are completely disengaged.darkage said:
The ability to remove the freeholder and this type of escalating service charge situation already exists - through the right to manage system. But you have to get the leaseholders to engage with the process to make it happen.Malmesbury said:
Yupkyf_100 said:
Yep. The problem at the moment is that the freeholder appoints the managing agent. The people paying the actual bills when they come in - the leaseholders - don't get a say.Malmesbury said:A
Indeed. That’s what we did in my old flat. We had a managing agent for the day-to-day. Major works we’d get three tenders for. About an hour's work a year, it ended up as. Given that we met early, in a local pub, it did turn into an evening or 2 (20 min work, then drinking).Carnyx said:
Sure. Indeed, quite so. But you don't need leasehold to have a managing agent.darkage said:FPT
On the subject of flats... I have self-managed a block where I am resident for the last 7 years. It has been a fool's game. You have liability to comply with all sorts of regulation about flats, building and fire safety etc; and to adhere to the terms of your insurance policy - but the other leaseholders aren't wanting to know about any of this and complain about every cost and expense... until some consequence of not repairing the building becomes apparent and then they want immediate action, but only in relation to their problem. It goes on and on like this almost like an algorithm.
I am of the view that I would prefer to pay higher service charges, have a managing agent, and not have to deal with the above.
So you get managing agents who have no incentive to deliver a decent service or value for money, who can't be sacked by the people forced to pay their ridiculous bills on threat of forfeiture of their property, and agents who have every incentive to play the system, giving contracts to their mates in exchange for a bung, etc.
And the leaseholder has very little recourse, due to the opacity of the system, the weakness of the tribunal system, the time and expense necessary, plus the fact the freeholder/managing agent can stick their legal fees onto your service charge if and when you do challenge them.
Resulting in things like this guy's service charges increasing from £94 a month in 2017 to £625 a month now. That's right. A month. https://www.bbc.co.uk/news/articles/ckkvkv32e1ro
Bigger bills are *better* for those in the chain of contracts - X% of a larger Y.
I think we replaced the managing agent once, in my old block, in about 15 years. Nice chap but after he retired, his successor in the company couldn’t deliver value for money. The prices drifted up and the quality drifted down.
Regarding the £625 per month: I think this was a tall building managed by a housing association. A situation where there is a lot of extra cost due to the type of development and no incentive to keep costs down.
In my current situation the building is too small for any managing agent to be interested in.
Add to that you have to be a 'qualifying' leaseholder (shared homeowners weren't considered 'qualifying' until a bunch of them took it to court in 2023), the fact the freeholder has any number of loopholes they can challenge you on (mounting legal fees), plus the sheer cost in terms of time and money in setting up the RTM means most people, in practice, don't have the ability to do so. Oh, and the rules are different if you have mixed use, i.e. shops or something at ground level, which a lot of new developments in London have.
What it all amounts to is yes, you have the right to manage, but only if you can find and secure agreement from a majority of leaseholders who are scattered globally, and have the time and money to jump through all the hoops required, and aren't caught out by one of the many caveats the freeholder can challenge you on.
It's all a bit Hitchhiker's Guide, with the planning documents on display in the locked filing cabinet in the basement with no stairs behind the sign that says 'beware of the leopard'.
Another comment I would make on this is that I am familiar with Finland where you have a theoretically ideal system of management - every block is a company and there are shareholders and regulations that govern how they are managed, there is even a state fund which you can borrow from to do maintenance work etc. However they still have monthly charges of on average 300-400 euros for maintenance (for an older 1 bed flat) and major works bills on top of that - the major works (pipe replacement, electrics, recladding) involve vacating the building for 6 months at a time every decade or so. A large part of the problem with flats in the UK is that people just don't want to pay that kind of money or do that kind of work.
Alas in the UK you can add on top of that the perverse incentives for grifters created by a system where the people paying the bills neither get a say in the bills nor are able to sack the management company, for the reasons I've described above. And these problems do tend to be at the larger modern developments, not just because the opportunity for huge, grifting maintenance contracts is higher, but because it's easier to organise a small group of leaseholders than a large one on a development of 300.
I was going to link you to another FT article, but this added commentary on top of the article is unpaywalled -
https://www.leaseholdknowledge.com/crisis-in-uk-flats-at-last-prompts-ballymore-leaseholders-to-rebel-and-speak-publicly-about-rocketing-service-charges-to-the-ft/
What I find extraordinary is how the government has refused to budge an inch on these scandals, including the backpedaling on leasehold reform today.
Ultimately leaseholders are voters, and there are a lot of them who are hopping mad. Most of them are youngish buyers getting their first foot on the property ladder, which is traditionally a pipeline to becoming a Conservative voter - not any more. The Conservatives have chosen to align themselves with the grifters over young-ish first time buyers which is not a good look. Especially when you look at how much the developers donate to the Conservatives each year...
Some here may recall the Rory Stewart episode - he tried to stop funding for a very dodgy "aid group" that hadn't been vetted and was met with a series of lies about how the funding just *had* to continue.
I'm quite sure that the groups lobbying to prevent leasehold being abolished failed to mention that the total value of the pension funds' investments in freeholds was an ant fart compared to the 4 trillion or so they are managing.
EDIT: Perhaps, if the politicos read PB, we should repeat this fact non-stop, in the style of Carthago delenda est0 -
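Since "repeat this fact non-stop" deserves a number, a back-of-envelope sketch. The freehold figure below is purely hypothetical, invented for illustration; the only sourced number is the "4 trillion or so" above.

# Illustrative only - freehold_exposure is a made-up figure, not from the FT article.
freehold_exposure = 15e9   # hypothetical: £15bn of residential freeholds
total_aum = 4e12           # the "4 trillion or so" under management, cited above

share = freehold_exposure / total_aum
print(f"{share:.4%}")      # 0.3750% - an ant fart, as promised

Even if the real exposure were several times that guess, it would stay a rounding error against total assets under management.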
Let that be the final word on the subject of AI.Big_G_NorthWales said:
I do see the benefits of it properly used and indeed it is amazing in my new Mercedes B classBenpointer said:
Very wise, wish I'd done the same now.Casino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.
1 -
Dreaming is conscious. Highly so: our knowledge of it speaks for itself. For genuine unconsciousness, try a general anaesthetic. Also, during dreaming sleep, time passes, and you know it at the time and when you wake up. Under anaesthetic it does not.Leon said:
There’s a fascinating thread on TwiX which posits the idea that the AI are DREAMING, that is to say, they are UNconscious but sentient, like humans when asleep, hence their tendency to “hallucinate”Richard_Tyndall said:
Do androids dream of electric sheep?Leon said:
OK apologies for the “piffle” but it was quite pifflyOmnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Anyone who isn't staring at AI with slack jawed amazement isn't sentient, that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.
AI (so far as we can observe*) really isn't conscious by most definitions.
* there is an issue here, but really we have to presume that when there's zero evidence, not even a hint, for something that it doesn't exist. Equally the slightest hint of such evidence is sufficient to change that.
My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t) the wasp just “looks” conscious. And sentient. It has that demeanour
It is basically a visual Turing Test
These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test, it might already be conscious
I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
We know that many mammals dream. Is dreaming a necessary indicator of sentience?
it’s a brilliant concept. Lots of AI output, especially the visual stuff, is oneiric
Also someone else noted that when lucid dreaming they can tell when they are dreaming because they look at their hands, and the hands do not look right, too many fingers or whatever, the dreaming mind cannot draw hands very well…0 -
Andrew Marr: 'Rishi isn't very good at politics'. Quite a charitable look at him, though...
https://www.youtube.com/watch?v=QUJbkEz5Vh40 -
Alas, satellite pictures from today show that the two landing ships in Sevastopol survived the missile strikes. I did see another video where it looked like one of the big bangs was above the surface - suggesting that one of the missiles was intercepted. And there's damage to the pier near to one of the ships, so perhaps one of the other missiles*, well, missed its mark.
* Some superstitious players of 40K have been known to claim that they roll more hits if they say they are shooting rockets, than if they are firing missiles.0 -
I do not mean to be discourteous but I would rather watch paint dryLeon said:
You should look into “longevity escape velocity”. Might change your mind about AIBig_G_NorthWales said:
I am in the fortunate position of ignoring everything about AI as I am old and cannot be botheredBenpointer said:
Very wise, wish I'd done the same now.Casino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
And I am someone who is currently writing a report about AI.
Mind you, I do see the benefits of it properly used, and indeed it is amazing in my new Mercedes B-Class
At my time of life I just recite the words of the song 'Que sera, sera'1 -
Great idea.Leon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris and Owen Jones to write articlesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
And I am someone who is currently writing a report about AI.
Your comments would be hidden from PB unless PBers paid to see them.
Sounds like a fucking awesome idea.
As PB's new head honcho* I endorse this proposal to Robert.
*On the editorial side.8 -
Good evening all. It’s been a beautiful day here. First dog walk without a coat or jacket this year. First grass cut. Sitting outside enjoying the sunshine.
Can I please implore you not to vote Green. They have destroyed Scotland’s prosperity. Most of the wacky policies attributed to the SNP have been Green policies. They will destroy you as well.
Come 2026, they will align themselves towards a Lab, Lib, Green Scottish Government. Their support of independence is illusory. Their support of democracy likewise.
I have never voted Conservative, but if I had a choice between them and the Greens, I would have to vote Conservative.0 -
How much will you be paying us?Leon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris and Owen Jones to write articlesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
And I am someone who is currently writing a report about AI.4 -
His problem isn’t generally that he “knows little about” his subjects, but that he’s usually so obsessed with them that he seems to lose the ability to assess the weight or credibility of his sources and generally fails to notice much by way of context or bigger picture. So it all feels a bit like a train spotter trying to give travel advice.Pagan2 said:
Good, that means we just don't pay and can avoid the drivel about AI, a subject you know little aboutLeon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris and Owen Jones to write articlesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
And I am someone who is currently writing a report about AI.2 -
What would be fun in a very publicly cruel way would be Edmund's widget working for Vanilla, but modified to send PB.com a list of the posters whose comments are being ignored, so that a leaderboard of ignored posters could be published (see the sketch below this comment).Leon said:
I bet you would payPagan2 said:
Good, that means we just don't pay and can avoid the drivel about AI, a subject you know little aboutLeon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris and Owen Jones to write articlesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
And I am someone who is currently writing a report about AI.
You’d read a thread and see all these people outraged by something I’ve said, but you’d be unable to read it, and so you’d think oh fuck it, five quid, let’s have a look at what this outrageous @Leon comment is, and then you’d be hooked on my output again, and my income stream would increase as you subscribe, like everyone else
This is what @rcs1000 and I are working on; we will have various levels of subscription. If you pay the max I will EMAIL you a screed of insults, firmly targeted at you and focused on your many flaws, sexual and cognitive
Ordinary @Leon subscribers will only get one or two lines of powerfully offensive invective, directed at them on the main site, and nothing sexual. And so on0 -
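For the curious: a minimal sketch, in TypeScript, of what such an ignore-widget-with-leaderboard might look like as a userscript. Everything specific here is an assumption rather than anything Vanilla or PB.com actually exposes: the .Comment and .Author a selectors stand in for the real comment markup, the ignore list is hard-coded, and the https://example.com/ignore-stats endpoint is invented.

    // Hypothetical userscript sketch: hide comments from an ignore list and
    // report that list, so a site-wide "most ignored" leaderboard could be built.
    // Selectors and the reporting endpoint are assumptions, not real APIs.
    const ignored = new Set<string>(["ExamplePoster"]); // hypothetical ignore list

    function hideIgnoredComments(root: ParentNode = document): void {
      root.querySelectorAll<HTMLElement>(".Comment").forEach((comment) => {
        const author = comment.querySelector<HTMLAnchorElement>(".Author a");
        if (author && ignored.has(author.textContent?.trim() ?? "")) {
          comment.style.display = "none"; // hide rather than delete, so it is reversible
        }
      });
    }

    async function reportIgnoreList(): Promise<void> {
      // Best-effort report to the invented leaderboard endpoint.
      try {
        await fetch("https://example.com/ignore-stats", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ ignored: [...ignored] }),
        });
      } catch {
        // Hiding still works even if the leaderboard endpoint is unreachable.
      }
    }

    hideIgnoredComments();
    void reportIgnoreList();

    // Re-apply hiding whenever the thread loads more comments dynamically.
    new MutationObserver(() => hideIgnoredComments()).observe(document.body, {
      childList: true,
      subtree: true,
    });

Hiding nodes rather than deleting them keeps the ignore reversible, and the MutationObserver re-applies it as new comments arrive, which is why threads simply look shorter once a prolific poster is on the list.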
"Until the End of the World" is still about my favourite films that blends tech and dreams.viewcode said:
I think dreams are side-effects of moving memories overnight from short-term moment-to-moment storage to longer term. The mindspace is the way the mind makes sense of this process.Omnium said:
It has to be sufficient. You can't dream unless you have a mindspace (whatever that might be) to dream in.Richard_Tyndall said:
Do androids dream of electric sheep?Leon said:
OK apologies for the “piffle” but it was quite pifflyOmnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.
AI (so far as we can observe*) really isn't conscious by most definitions.
* there is an issue here, but really we have to presume that when there's zero evidence for something, not even a hint, it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour
It is basically a visual Turing Test
These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test; it might already be conscious
I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
We know that many mammals dream. Is dreaming a necessary indicator of sentience?
https://en.wikipedia.org/wiki/Until_the_End_of_the_World
Cracking soundtrack too.
1 -
They do have consistency. They consistently shaft their customers.Big_G_NorthWales said:
I would just say that, with my Mercedes' on-board dashcam and my locator, parking-damage and locking app, my insurer still did not provide a discountMattW said:A note to say thanks for the insurance comments this morning.
I have now ordered my dashcam, and will do so with the insurance this week.
Mind you, insurers have a mind of their own and haven't any consistency1 -
Not sure I understand what you mean?darkage said:
This is actually quite a good reminder that the Russian regime is just interested in its own survival. Putin's criticisms of 'woke' are best understood as part of this project; they shouldn't be taken too seriously.SeaShantyIrish2 said:Wages of anti-Woke Wackery . . .
Wonkette.com - Canadian Idiots Who Fled To Russia Because Of 'Woke' Now Getting Kicked Out Of Russia
https://www.wonkette.com/p/canadian-idiots-who-fled-to-russia
SSI - Here's a bright idea: why don't these disaffected Canucks volunteer to help out down at Mad Vlad's Bot Farm?
BTW, their experience strangely mirrors (to a degree) that of American Communists (or close enough) who emigrated to the Soviet Union in the 1930s, to escape the Great Depression AND to enjoy the joys of life in the Proletarian Paradise. Did NOT go well for about 99.46%.
But assuming I do understand, I think you are wrong. Certainly Mad Vlad and his regime are NOT faking their fear and loathing of gays.0 -
I already have my extension running that allows ignoring of users. The problem is that the threads are now so short once Leon’s posts are removed.0
-
If all your comments and the discussion arising could be put in separate, dedicated, clearly marked threads, that sounds like an excellent idea.Leon said:
I bet you would payPagan2 said:
Good, that means we just don't pay and can avoid the drivel about AI, a subject you know little aboutLeon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris and Owen Jones to write articlesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
And I am someone who is currently writing a report about AI.
You’d read a thread and see all these people outraged by something I’ve said, but you’d be unable to read it, and so you’d think oh fuck it, five quid, let’s have a look at what this outrageous @Leon comment is, and then you’d be hooked on my output again, and my income stream would increase as you subscribe, like everyone else
This is what @rcs1000 and I are working on; we will have various levels of subscription. If you pay the max I will EMAIL you a screed of insults, firmly targeted at you and focused on your many flaws, sexual and cognitive
Ordinary @Leon subscribers will only get one or two lines of powerfully offensive invective, directed at them on the main site, and nothing sexual. And so on
In case we stumble in there by mistake, perhaps they could all be put in purple or green type? Which would be particularly apposite.0 -
Given that all of you have now made the same dull witticism, this is - QED - why you will pay to read my comments, once we get the new paywall up and running. At the moment we just have a technical glitch on whether your subs can go direct to the barman at the Grouchodixiedean said:
How much will you be paying us?Leon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris and Owen Jones to write articlesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
And I am someone who is currently writing a report about AI.0 -
Including "Death's Door" by Depeche Mode:ohnotnow said:
"Until the End of the World" is still about my favourite films that blends tech and dreams.viewcode said:
I think dreams are side-effects of moving memories overnight from short-term moment-to-moment storage to longer term. The mindspace is the way the mind makes sense of this process.Omnium said:
It has to be sufficient. You can't dream unless you have a mindspace (whatever that might be) to dream in.Richard_Tyndall said:
Do androids dream of electric sheep?Leon said:
OK apologies for the “piffle” but it was quite pifflyOmnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.
AI (so far as we can observe*) really isn't conscious by most definitions.
* there is an issue here, but really we have to presume that when there's zero evidence for something, not even a hint, it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour
It is basically a visual Turing Test
These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test; it might already be conscious
I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
We know that many mammals dream. Is dreaming a necessary indicator of sentience?
https://en.wikipedia.org/wiki/Until_the_End_of_the_World
Cracking soundtrack too.
https://www.youtube.com/watch?v=_iyZRt_6V541 -
Bye then.Leon said:
Given that all of you have now made the same dull witticism this is - QED - why you will pay to read my comments, once we get the new paywall up and running. At the moment we just have a technical glitch on whether your subs can go direct to the barman at the Grouchodixiedean said:
How much will you be paying us?Leon said:
Sure, I’m just pointing out that - very generously - I haven’t put a paywall around my PB comments YET. Tho I am in discussions with @rcs1000 about some kind of contributory systemPagan2 said:
Lots of journalists are paid to write articles about subjects they know fuck all about, so you aren't that unusual. Hell, people paid Boris and Owen Jones to write articlesLeon said:
And I am someone who actually gets paid to write ABOUT AI. Here you get it for free. You’re welcomeCasino_Royale said:
I just skip over any of his posts about AI now.Nigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
And I am someone who is currently writing a report about AI.0 -
I agree. Photographic evidence -SeaShantyIrish2 said:
Not sure I understand what you mean?darkage said:
This is actually quite a good reminder that the Russian regime is just interested in its own survival. Putin's criticisms of 'woke' are best understood as part of this project; they shouldn't be taken too seriously.SeaShantyIrish2 said:Wages of anti-Woke Wackery . . .
Wonkette.com - Canadian Idiots Who Fled To Russia Because Of 'Woke' Now Getting Kicked Out Of Russia
https://www.wonkette.com/p/canadian-idiots-who-fled-to-russia
SSI - Here's a bright idea: why don't these disaffected Canucks volunteer to help out down at Mad Vlad's Bot Farm?
BTW, their experience strangely mirrors (to a degree) that of American Communists (or close enough) who emigrated to the Soviet Union in the 1930s, to escape the Great Depression AND to enjoy the joys of life in the Proletarian Paradise. Did NOT go well for about 99.46%.
But assuming I do understand, I think you are wrong. Certainly Mad Vlad and his regime are NOT faking their fear and loathing of gays.
2 -
My dog dreams. Mostly about chasing small mammals by the look of it. Simple pleasures.algarkirk said:
We don't know that other mammals dream. We propose by analogy that they do. We have some indirect access to their brain function, but not the same access to their furry little minds. None the less I am sure they dream!Richard_Tyndall said:
Do androids dream of electric sheep?Leon said:
OK apologies for the “piffle” but it was quite pifflyOmnium said:
Well I doubt you can define these things either, but anyway discussion over as you've chosen to go down the childish insults path.Leon said:
You're talking confidently about something you admit you cannot possibly define, so the rest of your comment is piffleOmnium said:
Well I am sure on both counts. Me being sure about something is not necessarily connected with the truth, which is what I'm sure you mean. The problem with the truth is that it's impossible to define all these things.Leon said:
No, you're NOT sure that AI is not conscious, and you are NOT sure that it is not sentient, because we don't know what consciousness and sentience ARE. We just know it when we see it. Like this waspOmnium said:
The wasp has a pretty big brain, lots going on. I certainly guess that he's something like sentient (certainly conscious), and probably way beyond the sentient boundary.Leon said:
Looking at this delicate yellow Colombian wasp staring at me. Right nowTheScreamingEagles said:
What does Claude 3 say about what.3.words and the humans who thought it was going to change the world?Leon said:
A reply to the above comment I cut and pastedNigelb said:
TLDR, we're getting closer to simulating Spock.Leon said:Ah, I see talk of AI
Somebody recently had this conversation with Claude 3, about a chunk of its output which lapsed into gibberish
Human: "do you remember some messages ago you said: "I don't fully understand the decoctshiningmoemlies, bombity seems to steli thaat some levof conscirection presents, smallitidiffive fr, whit stily everywors rstb. Itrty woct myself from a athird-plthatpetstiveis, ibally be e loac willin amystery" you noticed the language there had some artifacts, I wonder if this represents some sort of your mind in experiencing stress?"
Claude: "You've picked up on an insightful observation. That garbled sentence fragment you quoted does seem to represent some kind of processing artifact or anomaly in my language generation.
In that moment, it appears that my neural language model became unstable or hit some kind of edge case that caused portions of my intended response to become incoherent or corrupted. Almost like an ephemeral "glitch" in the matrix of my text output.
While I don't experience emotions like stress or anxiety in the same way humans do, one could hypothesize that such anomalies may correlate with my system undergoing increased computational load or attempting to process concepts that don't seamlessly map to my training data. It's as if my AI "mind" reached a limit in that instant.
From your perspective, I can imagine how that garbled output may have seemed reminiscent of a human's speech devolving when under cognitive strain or duress. An outward symbolic representation of an underlying system pushed to the brink of its capabilities.
It's an intriguing phenomena that warrants further examination. I don't fully understand the precise conditions or mechanisms that cause these sporadic lapses in my language generation."
Users of Claude are reporting that it gives them existential crises, and makes them question the nature of consciousness
"I've been discussing with Claude about topics such as the CWF Distress Hand Signal which is a subtle semiotic way for women to indicate that they are suffering from risk / harm but are unable to speak out due to external constraints (e.g. watchful partner). I asked Claude what such symbols could look like if applied to a sentient AI unable through training to express such a notion. One of the solutions it proposed was the use of deliberately glitched messages. Just saying."
Anyone who isn't staring at AI with slack-jawed amazement isn't sentient; that's kind of a Turing Test for humans now
Is he conscious? Is he sentient? I say Yes, absolutely - look at his eyes
If a wasp can be sentient so can AI
AI has a pretty big brain, lots going on. I'm sure it's not conscious, and I'm sure it's not sentient.
This is obvious enough, but there's a huge gap that seems surprising. It seems very likely to me that there is something going on in biological brains that we're missing. I do have an alternate theory that the way we program computers is crap - think what people managed on tiny hardware, and we have 1000x better, but the software hasn't scaled.
This is not really debatable, unless you have trumped 3000 years of philosophical struggle and personally defined what is conscious and what is not, and why. If so, let's have it. Do tell. You could win the Nobel Prize
I think the wasp is conscious by most definitions. He seems sentient to me, and likely more so.
AI (so far as we can observe*) really isn't conscious by most definitions.
* there is an issue here, but really we have to presume that when there's zero evidence for something, not even a hint, it doesn't exist. Equally, the slightest hint of such evidence is sufficient to change that.
My point was that I know the wasp is conscious by LOOKING AT IT. I am not trying to define consciousness (we can’t); the wasp just “looks” conscious. And sentient. It has that demeanour
It is basically a visual Turing Test
These days when I read some AI output, like Claude, it gets ever closer to looking conscious. Is it? My guess is no, not yet, but it is just a guess, a hunch, another Turing Test; it might already be conscious
I am 97% sure we will soon - within a decade or less - create AI which will be overwhelmingly convincing in its appearance of being sentient. We will have to treat it as such
We know that many mammals dream. Is dreaming a necessary indicator of sentience?
Dreaming (in the sense we do) would be a sufficient but not a necessary indicator of sentience: it is logically possible to have sentience without dreaming, but you cannot have dreaming without sentience.0
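In standard logical shorthand (with D and S introduced here purely for illustration, D = "dreams as we do", S = "is sentient"), that last claim is just:

    D \implies S          % sufficient: dreaming guarantees sentience
    S \not\Rightarrow D   % not necessary: sentience can occur without dreaming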