When will AI be clever enough to reliably and regularly make large, risk-free profits betting on horses? Who will be the first major bookmaker to go bust because of it?
This is actually a sound point. I’ve thought about it before
AI will make a better bettor than any human. It will learn all possible knowledge about horses, conditions, riders, odds. It will be able to make bets orders of magnitude cleverer than any human
That ends bookmaking as a business
Not really. The bookmakers will use AI to set odds, and they will still have more information than the punting AIs, because they will know what bets are being placed, and the point of bookmaking is to make a profitable book - i.e. to make a profit regardless of the outcome. And *also* people will still want to place their own bets, without using an AI, because part of the appeal is to feel that you are clever enough to pick the winner.
Probably bookmaking will become more profitable with learning algorithms, rather than less.
Edit: This is going to go down as your worst take on "AI" ever, by the way. You didn't realise bookmakers could also use AI? Honestly?
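To illustrate what "making a profitable book" means, here is a minimal Python sketch - the horses, decimal odds and stake figures are all invented - showing that once the money staked is spread in line with the odds, the bookmaker's payout (and so its profit) is the same whichever horse wins:

```python
# Minimal sketch of a balanced book. Horses, decimal odds and total stake are invented.
decimal_odds = {"Horse A": 1.8, "Horse B": 3.0, "Horse C": 5.0}

# Implied probabilities from the odds; they sum to more than 1, and the excess
# is the bookmaker's margin (the "overround").
implied = {h: 1.0 / o for h, o in decimal_odds.items()}
overround = sum(implied.values())
print(f"Overround: {overround:.3f}")  # ~1.089 here, i.e. roughly a 9% margin

# If the money staked arrives in proportion to the implied probabilities,
# the payout is identical whichever horse wins, and the bookmaker keeps the rest.
total_staked = 10_000.0
stakes = {h: total_staked * implied[h] / overround for h in decimal_odds}

for winner in decimal_odds:
    payout = stakes[winner] * decimal_odds[winner]  # winning stakes paid back at the odds
    print(f"{winner} wins: payout {payout:,.2f}, bookmaker keeps {total_staked - payout:,.2f}")
```

In practice the bets never arrive perfectly balanced, which is why bookmakers move their prices - but the principle is that the margin, not the prediction, is the business.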
https://x.com/iapolls2022/status/1767747527707615306 "williamglenn" > Decision Desk HQ projects Donald Trump wins the Washington Republican Primary and has won enough delegates to secure the Republican Nomination for President
"Leon" > Let’s just pray he wins the presidency. Or the West is finished
I think Leon was rooting for Biden there.
Was that before or after he took all the drugs in Colombia?
“Would you take £15,000 from a racist just so that you can fly around in a helicopter? Multi-millionaire Rishi Sunak would”
The Tories really need to stop referring to the comments as being "alleged" or "unverified". The donor hasn't denied using the words, and has apologised for his rudeness.
So why does the PM insist on quibbling over this?
I realise it probably comes from his "well, actually" tetchiness, but in this case it just makes him look insincere.
He always looks insincere.
The reason for the quibbling may relate to how the following twine together before the weekend:
* "I think [Diane Abbott] should be shot" * Parliamentary Liaison and Investigations Team police investigation * Sunak spox: remorse should be accepted * new definition of "extremism", expected from Michael Gove tomorrow
Honestly. If Elon is right this is it. The end of the world as we know it
How can we talk about anything else? This dwarfs anything else. This changes everything. It’s terrifying and spectacular. It’s the arrival of alien intelligence - vastly superior to ours. The world will be unrecognisable within a decade - IF he is right
And Elon is quite a bright man, who knows rather a lot about this stuff
Brace brace brace brace brace
Fuck me
Because we remember Musk's prediction record on self driving Teslas!
What a signal to short tech stocks. The Leon has spoken.
Indeed. He’s not always right. But he was a founder of OpenAI
AND he’s not the only one saying this
There is one upside to the fast-approaching singularity. It’s going to be interesting. It’s probably going to be the single most interesting thing in the history of humankind - and we are lucky enough to be here and to witness it, and with some warning beforehand
If I was contemplating suicide (I’m not, I’m on a bus in Colombia looking forward to the next town) I’d stay the blade, knowing this news. Why top yourself today when the next few years are going to be incredibly compelling and might kill all humanity anyway?
The most ridiculous comment from Sunak was trying to equate it to Rayner’s ‘Tory scum’ comment.
‘I think you are scum’ is not at all comparable to ‘You should be shot’.
Bet Sunak is bloody seething about Hunt's wild promise to get rid of NI.
Labour can hammer this all the way until Jan 2025.
The irony of this is that it’s a good policy: NI & income tax should be merged.
The fact that neither party can advocate for this without people going nuts is the real problem.
It isn't. NI should be for the state pension and contributory JSA; we don't need even more welfare dependency.
Has something gone wrong with the Matrix? @HYUFD is opposing Tory party policy.
No, you have it the wrong way round: the current Cabinet is opposing party policy as interpreted by HYUFD. Which is that the pensioners and the bequests to their children must be protected. Probably a rational analysis of the situation in Epping from his point of view. Though it comes close to being a fetish (in two meanings of the word, though not the third).
Support for contributory welfare is a conservative principle, not just dependence on UC, as is support for inherited wealth not just letting the taxman take all your estate when you die
(x − 1,000,000) × 0.4 = x seems to be your mathematics. Which equation, as any fule kno, is wrong.
Here's the problem. "Elon Musk" on twitter isn't Elon. It's his AGI model. And he likes to do iterative testing, with failure being part of innovation.
So whilst he does spectacular stuff - with Tesla and SpaceX, and the Boring Company test-digging what could be a revolution in tunnelling at Giga Texas, and yes the AI stuff - everything else he tweets is foaming-dog-fever alt-right lunacy, with increasing levels of bat-shittedness.
Remember how the Microsoft AI bot went on Twitter to learn and rapidly went Nazi. The "Elon Musk" AI has done the same, but instead of being switched off is being allowed to burrow further and further down the lunacy rabbit hole.
I wish he would make it stop. Because all of the good that he does is undone by "Elon Musk" on X.
Trump is ‘Unhinged’ But We Love Him, Say Kremlin Mouthpieces https://cepa.org/article/trump-is-unhinged-but-we-love-him-say-kremlin-mouthpieces/ ...Kalashnikov stated: “I am a fan of Trump, based on the interests of my country.” He expressed gratitude for the four-year reprieve he said Trump’s presidency provided for Russia, allowing the country to prepare for its expanded invasion of Ukraine. This is not unusual — lawmakers and foreign policy experts frequently gush about their affinity for Trump as “the destroyer of America.” ..
Leon has impressive fiscal form. He eulogised Kwarteng's budget, and then the bond markets crashed.
I thought Algakirk was trolling. Bettors only take money from other bettors. The problem wouldn't be Patrick Veitch getting his hands on a program. It would be everyone and his auntie getting such access.
Meanwhile in the programmers' club debating society, tonight's question is what if everyone could buy a suitcase nuke at the corner shop.
As for predictive ability, why leave it at the racetrack? Why not political betting, prediction markets generally, or the financial markets?
"Smarter than any single human" is silly talk. Overall smartness can't be measured on a single scale. I don't listen to anything Kurzweil says. He's the guy who said smartphones are making their users more intelligent.
Linger awhile and catch the show
Compelling! Have you considered volunteering for Samaritans?
There are many millions of people prepared to bet at suboptimal odds. And there are quite a few small companies who have been (ahem) "working for the bookmakers" over the last 20 years by developing very good machine learning models. Ask Tony Bloom. The idea that these companies are not already heavily involved in incorporating AI is naive. The good bookmakers will survive because of AI.
"... test-digging what could be a revolution in tunnelling at Giga Texas"
Ha ha ha ha
Yet another fool who has bought into Muskmania.
We should add Hyperloop to the list. I remember some on here got very damp over that con.
You sound like Timothy Leary in the early days of the internet: "turn on, boot up, jack off in". Elon Musk is a known shroomhead. And anyway it's the chip implant stuff that's really worrying.
No. Because both sides will have AI and neither side will know if the other has an AI good enough to guarantee winning. For bookmaking to work you need human fallibility on both sides - and mutual trust
It’s like the way smartphones destroyed pub quizzes - but times a billion, because there’s no way of eliminating the phones
We could talk about what.three.words
what.three.words could well surprise on the upside.
A monolith, by definition, is made of stone. It cannot be made of steel.
Only if you're being unnecessarily pedantic.
'Monolith' already has one other long established alternative meaning, so it's not unreasonable that a third has evolved to describe an object of that size and form.
You're a wordsmith, so you know that.
Also, the Greeks had no steel.
No, but to be even more pedantic they'd call the thing on Hay Bluff a stele - στήλη
μονόλιθος in Ancient Greek means 'made of one stone' and Herodotus at least uses it in the sense of a single stone, without reference to its shape. His example is of a shrine made of one stone, with an internal chamber hollowed out.
What if an all-powerful AI decided that going Nazi was the right way to organise our societies - that Hitler was correct? Or, on the same theme, that dictatorship was the best form of governance? What then.
If, if, if.
What if AI can't happen? What happens if Leon et al are getting excited over an Eliza with more data? What if people believe systems that are not intelligent actually are intelligent, and then follow whatever stupidity they say?
How many people follow a SatNav down a one-way street (or down the wrong sliproad)?
Current AI is a really, really nice "Travesty Generator". Which is not intelligence. No one has explained a path from that to actual AI.
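For anyone who hasn't met the term: a "travesty generator" in the classic sense is just a Markov chain that imitates its source text without understanding any of it. A minimal sketch (the sample sentence is made up):

```python
import random
from collections import defaultdict

# Word-level Markov chain: learn which words follow which, then babble.
text = ("the arrival of alien intelligence vastly superior to ours "
        "the world will be unrecognisable within a decade").split()

follows = defaultdict(list)
for a, b in zip(text, text[1:]):
    follows[a].append(b)          # record every observed successor

word = random.choice(text)
output = [word]
for _ in range(15):
    successors = follows.get(word)
    if not successors:            # dead end: the last word had no observed successor
        break
    word = random.choice(successors)
    output.append(word)

print(" ".join(output))           # fluent-looking nonsense, with no model of meaning
```

Whether today's large language models are "just" a vastly scaled-up version of this is, of course, exactly the point being argued.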
I agree. Garbage in, garbage out. Also I don't believe humans have the intelligence to utilize AI effectively. Of course tech companies will hype AI - they get more money from investors if they do.
Yes, it's poor politics. They've already taken the hit, so the sensible thing to do is to concede and move on.
They can probably get away with not giving the money back, especially if they were to follow FF43's suggested strategy of announcing that any further donations would be blocked pending an investigation.
What they can't get away with is trying to score political points. Quibbling, non-apologies, whataboutery, debating whether it's actually racist or misogynist to "want to hate black women" - at this stage it's just keeping the story alive. There's nothing to be gained from it.
Go on then. Define intelligence and sentience and consciousness
- do what no philosopher has been able to do for 3000 years
- then define why it is only limited to humans and to certain organisms (what organisms tho? Dogs? Ants? Viruses? Trees? Bee colonies? Parrots? Why them)
- then define how you will know that machines do NOT have this
Off you go. Best of luck
What we do know is that intelligence isn't copy and pasting bits of stuff that are "a bit like what you want" together.
Didn't you know? @Leon with the self-proclaimed massive IQ stated that the AI companies had all the money they needed, and didn't want any more investment.
Which makes them the first tech companies ever to say that...
"What if AI can't happen?" is an intelligent question to ask, but it is a question, not an answer.
Still less are "What if people are fooled into thinking computer systems are intelligent when they aren't", or "SatNav makes mistakes" answers to the question.
Really the question boils down to "Is there anything to prevent an artificial brain from emulating or surpassing a human brain?". Some people certainly advance religious or philosophical arguments to that effect, but it seems to me that they all hinge on the idea that either there is something essentially non-physical involved in intelligence, or else there is some function of our biological brains that is incapable of being replicated computationally. Both those ideas seem to go pretty much against the scientific mainstream.
I think the proposition that AI can't happen needs something a lot stronger than the kind of arguments people come up with while perched on bar-stools.
It’s a good challenge but not currently possible to fulfil.
I suspect we are missing several layers of understanding about how the brain works (see also genetics - how is instinct encoded?). This is not to promote weird conspiracy theories just to recognise that there is no reason to think we are anywhere near knowing all the unknowns.
We should remember too that most scientific and technological progress is achieved at least in part through experimentation as much as pure thought. How adept AI will be at experimenting remains to be seen.
Bookmakers guarantee winning by making a book, not by accurately predicting the winner. Your reasoning is based on a flawed understanding of bookmaking.
No, we don’t know even that
We don’t even know if humans have free will. Nor do we know if we are merely players in a simulation
Nor do we know if WE are just stochastic parrots, autocomplete machines of reflexes driven by our genes
So we don’t know any of this and you in particular don’t know shit, sorry
Unknown unknowns. The idea of television, for example, went pretty much against the scientific mainstream 150 years ago.
AP (via Seattle Times) - Robert F. Kennedy Jr. is considering Aaron Rodgers or Jesse Ventura for a 2024 running mate
Robert F. Kennedy Jr. is having conversations with vice presidential candidates as he gets closer to announcing his running mate for his independent presidential bid.
Kennedy told The New York Times that NFL quarterback Aaron Rodgers and former Minnesota Gov. Jesse Ventura are at the top of his list. Stefanie Spear, a campaign spokesperson, confirmed the Times report and said there are other names on Kennedy’s short list.
Kennedy, a scion of one of the nation’s most prominent political families, has focused on getting access to the ballot, an expensive and time-consuming process that he has said will require him to collect more than a million signatures in a state-by-state effort.
Many states require independent candidates to name a running mate before they can seek access to the ballot, a factor driving the early push for Kennedy to make a pick. Major party candidates generally don’t pick vice presidential nominees until closer to their summer conventions. . . .
Rodgers, the longtime Green Bay Packers quarterback who now plays for the New York Jets, shares Kennedy’s distrust of vaccine mandates and, like Kennedy, is a fixture on anti-establishment podcasts. Ventura, a former professional wrestler, shocked observers when he won the race for Minnesota governor as an independent candidate in 1998.
SSI - Beyond national celebrity (a bit dimmed by time in the case of Ventura), either of these possible VP picks MIGHT give RFKjr a very wee (in more ways than one?) boost in the battleground state of Wisconsin.
Ventura having been governor of neighboring Minnesota, and Rodgers long-time quarterback for the Green Bay Packers.
OR in case of AR, perhaps not . . .
Forbes - Why Aaron Rodgers Was Never Beloved Like Other Green Bay Packers Greats
While Packer Nation mourned [previous QB Brett] Favre’s departure, few tears were shed when the Rodgers’ trade became official. In fact there was more celebrating than sorrow.
Talk radio. Social media. Fan polls.
They’ve all had largely the same message for Rodgers in recent days: “Don’t let the door hit you on the way out.” . . .
Goertzel thinks large language models are the wrong tree to bark up (if I remember correctly, previously he said the wrong tree in the wrong forest on the wrong Continent). But that's not at all the same as thinking "AI can't happen".
US considering a ban on TikTok. Apparently they are not happy with all the pro-Palestine propaganda on the app.
The Future of TikTok in the US and UK: What You Need to Know
The US is considering a ban on TikTok while the UK may be following a similar path. But what does this mean for users and how is TikTok responding? Read on to find out.
#TikTok #Ban #US #UK #TechNews 📱
The US House of Representatives has passed a bill that could lead to the app being banned if its Chinese owner does not sell. What does this bill mean for TikTok and its users?
#TikTokBan #USBill #TechRegulation
If enforced, the bill would require the Chinese company ByteDance to sell its stake in the US version of TikTok, effectively resulting in a ban. But what is the impact of this decision and who would buy the US version of the app?
The study of linguistics, and the brain machinery behind it, provides strong evidence that we aren't autocomplete machines.
Sorry to be blunt, but really saying "Unknown unknowns" is still less of an argument for anything than the ones I already mentioned.
I expect AI to happen, but:
1. The bullshit generator large language models that we have are not a stepping stone towards AI, useful though they might be in a number of applications.
2. There's a lot of evidence that the brain is more complicated than simply a large number of neurons applying general-purpose computation to create "intelligence", so trying to replicate it is a non-obvious problem.
The conceptually simplest way would probably be to create a really good simulation of the problems our brains evolved to solve, and then let artificial evolution have at it, but that isn't trivial to set up.
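For the avoidance of doubt about what "let artificial evolution have at it" means mechanically, here is a minimal sketch of a genetic algorithm in Python. The "problem" (matching a hidden bit-string) and all the parameters are placeholders for illustration, nothing like the simulation being described:

```python
import random

# Toy stand-in for "the problems our brains evolved to solve":
# fitness is just how many bits of a genome match a hidden target.
TARGET = [random.randint(0, 1) for _ in range(32)]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=100, generations=200):
    population = [[random.randint(0, 1) for _ in range(32)] for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen, population[0]          # solved the toy problem
        parents = population[: pop_size // 5]  # keep the fittest fifth
        population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                      for _ in range(pop_size)]
    population.sort(key=fitness, reverse=True)
    return generations, population[0]

if __name__ == "__main__":
    gen, best = evolve()
    print(f"best fitness {fitness(best)}/{len(TARGET)} after {gen} generations")
```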
Current AI is a really, really nice "Travesty Generator". Which is not intelligence. No one has explained a path from that to actual AI.
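"Travesty generator" is the old name for an order-k Markov chain over text: it re-emits characters whose local statistics match a source text, producing output that is "a bit like" the original with no model of meaning behind it. A toy sketch in Python (the source file name is a placeholder):

```python
import random
from collections import defaultdict

def build_model(text, k=4):
    """Map each k-character context to the characters that follow it in the source."""
    model = defaultdict(list)
    for i in range(len(text) - k):
        model[text[i:i + k]].append(text[i + k])
    return model

def travesty(text, k=4, length=200):
    model = build_model(text, k)
    context = text[:k]
    out = context
    for _ in range(length):
        followers = model.get(context)
        if not followers:               # dead end: restart from the beginning
            context = text[:k]
            continue
        nxt = random.choice(followers)  # sampled in proportion to observed frequency
        out += nxt
        context = out[-k:]
    return out

if __name__ == "__main__":
    corpus = open("some_source_text.txt").read()  # placeholder corpus file
    print(travesty(corpus))
```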
Go on then. Define intelligence and sentience and consciousness - do what no philosopher has managed in 3,000 years - then explain why it is limited only to humans and to certain organisms (which organisms, though? Dogs? Ants? Viruses? Trees? Bee colonies? Parrots? Why them?) - then explain how you will know that machines do NOT have it.
Off you go. Best of luck
What we do know is that intelligence isn't copy and pasting bits of stuff that are "a bit like what you want" together.
No, we don’t know even that
We don’t even know if humans have free will. Nor do we know if we are merely players in a simulation
Nor do we know if WE are just stochastic parrots, autocomplete machines of reflexes driven by our genes
So we don’t know any of this and you in particular don’t know shit, sorry
We do have free will - the Greek philosophers showed that 2,400 years ago.
No need to apologise for bluntness - I made my point badly.
What I am trying to say is that we don't know what we don't know. That may be seen as a cop-out, but unless you think we have learnt all there is to know on this subject, it's undeniably true that we may yet learn things that do, for example, fall into the category of something non-physical being involved in intelligence.
I'm not convinced the Greek philosophers necessarily have the last word on the subject, though. We've learned a bit about neuroscience since then. My money, FWIW, is on not.
From the part of your comment where you say "I expect AI to happen", I conclude that you aren't disagreeing with me when I say that the proposition that it can't happen has not been adequately supported.
I might suggest you are being a bit woolly in your terms, with "Current AI" and "actual AI". We have had forms of AI for... well, centuries or millennia, depending on what you mean, but about 80 years in a modern sense of the term. We've had everyday, practical uses of AI for over 40 years.
But I presume by "Current AI", you mean the recent explosion in generative AI methods, and particularly the use of large language models. LLMs are an exciting tech that is going to have a lot of practical uses. The hype from the cargo cult commentators should be ignored, but this is important tech.
Do LLMs and other generative AI get us any closer to artificial general intelligence (AGI), something that thinks like a person and what I presume you mean by "actual AI"? I think you're right that there is a very big gap between LLMs and AGI. Fancier, bigger LLMs are not going to turn into AGI and spontaneously generate self-awareness. But that doesn't mean that they might not be a part of the puzzle that gets you to AGI. LLMs do, already, prod our understanding of "real" intelligence and how we use language. They do suggest that a Chomskyian universal grammar is unnecessary and that statistical models of language acquisition are more viable than we thought.
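As a rough illustration of what a "statistical model of language acquisition" means at its very crudest - counting which words follow which, with no built-in grammar at all - here is a toy bigram learner in Python. The corpus is invented; real work in this area uses child-directed speech corpora and far richer models:

```python
from collections import Counter, defaultdict

# Invented stand-in for child-directed speech; real studies use corpora such as CHILDES.
corpus = [
    "the dog sees the ball",
    "the dog wants the ball",
    "mummy sees the dog",
    "the ball is red",
]

# Count word-to-word transitions (a bigram model): pure statistics, no innate grammar.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w1, w2 in zip(words, words[1:]):
        follows[w1][w2] += 1

def next_word_probs(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))   # {'dog': 0.5, 'ball': 0.5} from this toy corpus
```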
Probably what I'm saying is that according to the best current understanding there is no reason to think that computational AI isn't possible.
I believe that intelligence is an emergent property of physical processes, so I think that the assertion that it cannot be reproduced artificially is self-evident tosh, but your comment made me think of more interesting aspects.
Unfortunately you seem to be focused on point-scoring.
It was once thought that the saxophone couldn't exist. There were court cases about it. Yet I think the consensus now is that it not only can, but does.
I think that's a bit of a red herring. By that, I mean debates over whether an artificial brain can emulate or surpass a human brain. I don't see any reason why that shouldn't be possible, but that's all very hypothetical. What matters is what the current technology, in particular large language models, represent in terms of that quest. I think the question of whether the cargo cult are "getting excited over an Eliza with more data" is very apt.
A statistical model of language acquisition is viable for a supercomputer working through the entire corpus of the internet, not for a young child learning language from interactions in meatspace over the course of a few years.
That’s fair enough.
Trying to think of a good analogy. How about: prior to Newton, there was no reason to think that perpetual motion wasn't possible? Or maybe: prior to Columbus, there was no reason for Europeans to think that the Americas existed (the Vikings aside).
What definition of AI are you using?
As I said, emulating a human brain. I can't see that a better test has really been proposed, other than that an AI should be indistinguishable from a human intelligence.
AP (via Seattle Times) - Robert F. Kennedy Jr. is considering Aaron Rodgers or Jesse Ventura for a 2024 running mate
Robert F. Kennedy Jr. is having conversations with vice presidential candidates as he gets closer to announcing his running mate for his independent presidential bid.
Kennedy told The New York Times that NFL quarterback Aaron Rodgers and former Minnesota Gov. Jesse Ventura are at the top of his list. Stefanie Spear, a campaign spokesperson, confirmed the Times report and said there are other names on Kennedy’s short list.
Kennedy, a scion of one of the nation’s most prominent political families, has focused on getting access to the ballot, an expensive and time-consuming process that he has said will require him to collect more than a million signatures in a state-by-state effort.
Many states require independent candidates to name a running mate before they can seek access to the ballot, a factor driving the early push for Kennedy to make a pick. Major party candidates generally don’t pick vice presidential nominees until closer to their summer conventions. . . .
Rodgers, the longtime Green Bay Packers quarterback who now plays for the New York Jets, shares Kennedy’s distrust of vaccine mandates and, like Kennedy, is a fixture on anti-establishment podcasts. Ventura, a former professional wrestler, shocked observers when he won the race for Minnesota governor as an independent candidate in 1998.
SSI - Beyond national celebrity (a bit dimmed by time in the case of Ventura), either of these possible VP picks MIGHT give RFKjr a very wee (in more ways than one?) boost in the battleground state of Wisconsin.
Ventura having been gov of neighboring Minnesota, and Rodgers long-time quarterback for the Green Bay Packers.
OR in case of AR, perhaps not . . .
Forbes - Why Aaron Rodgers Was Never Beloved Like Other Green Bay Packers Greats
While Packer Nation mourned [previous QB Brett] Favre’s departure, few tears were shed when the Rodgers trade became official. In fact there was more celebrating than sorrow.
Talk radio. Social media. Fan polls.
They’ve all had largely the same message for Rodgers in recent days: “Don’t let the door hit you on the way out.” . . .
Arthur Koestler (The Ghost in the Machine, 1967), Steven Rose (The Conscious Brain, 1976) and Roger Penrose (The Emperor's New Mind, 1989) all argued convincingly that there was far more to the human brain than meets the eye and its function cannot be explained or replicated by simulation. But still people try.
Yes, that is the key difference between an LLM and a young child. That's why I only went with "suggest" rather than "prove". However, research on miniature LLMs and other lines of enquiry do point to statistical models of language acquisition as being viable for a young child. And there are, of course, many other problems with Chomsky's UG.
Goertzel thinks large language models are the wrong tree to bark up (if I remember correctly, previously he said the wrong tree in the wrong forest on the wrong Continent). But that's not at all the same as thinking "AI can't happen".
Indeed - "AI can't happen" requires literal magic. See Roger Penrose. You might as well proclaim a need for souls.
I upset Penrose, long ago, when I asked him why a machine couldn't do his quantum stuff.
A computer scientist told me in 2004 that mass streaming of TV was impossible and would remain so as there could never be enough bandwidth. Now a middle-sized cheese at Google.
That's fair enough as a definition of something. It is worth noting that the term "AI" as used in the academic and computing literature is used to mean something much simpler. Different definitions are useful in different discussions: just want to avoid any terminological confusion.
Wait: you think Chomsky could be wrong about something?
Fine. Whether you call it clarity or point-scoring is up to you.
Pretty much what I meant.
And solving particular tasks is not evidence of AI. For centuries people thought that a machine that could play chess would have to be intelligent.
Then this played chess. Badly, but in 1K of space!
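Which rather makes the point: game playing is narrow, mechanical search, not understanding. A minimal minimax sketch for a toy game (Nim rather than chess, and nothing to do with that 1K program) shows just how mechanical:

```python
# Minimax for Nim: 7 sticks, take 1-3 per turn, whoever takes the last stick
# wins. Brute-force search of the whole game tree - perfect play on one
# narrow task, with no intelligence about anything else.
def minimax(sticks, my_turn):
    if sticks == 0:
        # No sticks left: the previous player took the last one and won.
        return (-1 if my_turn else 1), None
    best_score, best_move = None, None
    for take in (1, 2, 3):
        if take > sticks:
            break
        score, _ = minimax(sticks - take, not my_turn)
        if best_score is None or (my_turn and score > best_score) or (not my_turn and score < best_score):
            best_score, best_move = score, take
    return best_score, best_move

score, move = minimax(7, True)
print(f"best first move: take {move} (outcome with perfect play: {score})")
```

Scale the same search idea up with pruning and evaluation functions and you get a chess engine; it still tells you nothing about general intelligence.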
I think that's a bit of a red herring. By that, I mean debates over whether an artificial brain can emulate or surpass a human brain. I don't see any reason why that shouldn't be possible, but that's all very hypothetical. What matters is what the current technology, in particular large language models, represent in terms of that quest. I think the question of whether the cargo cult are "getting excited over an Eliza with more data" is very apt.
Whether you think it's a red herring or not, it was a response to the question "What if AI can't happen?"
Unknown unknowns. The idea of television, for example, went pretty much against the scientific mainstream 150 years ago.
A computer scientist told me in 2004 that mass streaming of TV was impossible and would remain so as there could never be enough bandwidth. Now a middle-sized cheese at Google.
What was the quote? "Something is only possible after a middle-aged scientist has denied it is possible"?
In the case of streaming, an extraordinary number of "experts" were unaware that
1) To stream a film *instantly*, you only need a network bitrate marginally higher than the playback bitrate.
2) The planned, years in advance, increase in capacities of the networks in various countries.
Expertise didn't apparently extend to reading published papers on the subject.
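The arithmetic behind point 1 fits in a few lines. The bitrates below are illustrative guesses, not figures from any of those papers:

```python
# Back-of-envelope for point 1: if the network delivers each second of video
# slightly faster than it is watched, only a small start-up buffer is needed,
# because the download finishes before the viewer does.
playback_mbps = 5.0    # bitrate the player consumes, e.g. a decent HD stream
network_mbps = 5.5     # bitrate the network delivers - only marginally higher
film_minutes = 120

download_minutes = film_minutes * playback_mbps / network_mbps
print(f"a {film_minutes}-minute film finishes downloading after "
      f"{download_minutes:.0f} minutes of viewing, so playback never catches up")
# Assumes constant rates and no hiccups; in practice a few seconds of buffer
# absorbs the jitter, which is why streams can start almost instantly.
```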
Probably bookmaking will become more profitable with learning algorithms, rather than less.
Edit: This is going to go down as your worst take on "AI" ever, by the way. You didn't realise bookmakers could also use AI? Honestly?
Which isn’t that complicated at all, albeit it might be difficult for a humourless accountant (ret’d)
The reason for the quibbling may relate to how the following twine together before the weekend:
* "I think [Diane Abbott] should be shot"
* Parliamentary Liaison and Investigations Team police investigation
* Sunak spox: remorse should be accepted
* new definition of "extremism", expected from Michael Gove tomorrow
How can we talk about anything else? This dwarfs anything else. This changes everything. It’s terrifying and spectacular. It’s the arrival of alien intelligence - vastly superior to ours. The world will be unrecognisable within a decade - IF he is right
And Elon is quite a bright man, who knows rather a lot about this stuff
Brace brace brace brace brace
Fuck me
ETA: Ah yes, as already noted by NigelB
AND he’s not the only one saying this
There is one upside to the fast-approaching singularity. It’s going to be interesting. It’s probably going to be the single most interesting thing in the history of humankind - and we are lucky enough to be here and to witness it, and with some warning beforehand
If I was contemplating suicide (I’m not, I’m on a bus in Colombia looking forward to the next town) I’d stay the blade, knowing this news. Why top yourself today when the next few years are going to be incredibly compelling and might kill all humanity anyway?
Linger awhile and catch the show
..😊
‘I think you are scum’ is not at all comparable to ‘You should be shot’.
So whilst he does spectacular stuff with Tesla and SpaceX, and the Boring Company is test-digging what could be a revolution in tunnelling at Giga Texas, and yes the AI stuff, everything else he tweets is foaming-dog-fever alt-right lunacy, with increasing levels of bat-shittedness.
Remember how the Microsoft AI bot went on Twitter to learn and rapidly went Nazi. The "Elon Musk" AI has done the same, but instead of being switched off is being allowed to burrow further and further down the lunacy rabbit hole.
I wish he would make it stop. Because all of the good that he does is undone by "Elon Musk" on X.
https://cepa.org/article/trump-is-unhinged-but-we-love-him-say-kremlin-mouthpieces/
...Kalashnikov stated: “I am a fan of Trump, based on the interests of my country.” He expressed gratitude for the four-year reprieve he said Trump’s presidency provided for Russia, allowing the country to prepare for its expanded invasion of Ukraine. This is not unusual — lawmakers and foreign policy experts frequently gush about their affinity for Trump as “the destroyer of America.” ..
Is this like The 39 Steps, where I get sucked into something without a clue what is going on? Or like Mr Bean?
Meanwhile in the programmers' club debating society, tonight's question is what if everyone could buy a suitcase nuke at the corner shop.
As for predictive ability, why leave it at the racetrack? Why not political betting, prediction markets generally, or the financial markets?
"Smarter than any single human" is silly talk. Overall smartness can't be measured on a single scale. I don't listen to anything Kurzweil says. He's the guy who said smartphones are making their users more intelligent.
Ha ha ha ha
Yet another fool who has bought into Muskmania.
We should add Hyperloop to the list. I remember some on here got very damp over that con.
Elon Musk is a known shroomhead.
And anyway it's the chip implant stuff that's really worrying.
It’s like the way smartphones destroyed pub quizzes - but times a billion, because there’s no way of eliminating the phones
https://www.youtube.com/watch?v=rvcgmVtm8Ko
Does this constitute financial advice ?
μονόλιθος in Ancient Greek means 'made of one stone' and Herodotus at least uses it in the sense of a single stone, without reference to its shape. His example is of a shrine made of one stone, with an internal chamber hollowed out.
https://scaife.perseus.org/reader/urn:cts:greekLit:tlg0016.tlg001.perseus-grc2:2.175.3?q=μονόλιθος&qk=lemma&right=perseus-eng2.
Monolith as a pillar-like thing is a new-fangled modern meaning. And I've seen it used in the context of eg Perspex.
https://londonist.com/london/secret/see-the-original-monolith-from-2001-a-space-odyssey
They can probably get away with not giving the money back, especially if they were to follow FF43's suggested strategy of announcing that any further donations would be blocked pending an investigation.
What they can't get away with is trying to score political points. Quibbling, non-apologies, whataboutery, debating whether it's actually racist or misogynist to "want to hate black women" - at this stage it's just keeping the story alive. There's nothing to be gained from it.
and
https://www.youtube.com/watch?v=HlpX3AGLR8w
Well, you never know. There was everyone saying this lark was all straightforward.
History books remember the winners, not what helped them.
Off you go. Best of luck
Which makes them the first tech companies ever to say that...
Still less are "What if people are fooled into thinking computer systems are intelligent when they aren't", or "SatNav makes mistakes" answers to the question.
Really the question boils down to "Is there anything to prevent an artificial brain from emulating or surpassing a human brain?". Some people certainly advance religious or philosophical arguments to that effect, but it seems to me that they all hinge on the idea that either there is something essentially non-physical involved in intelligence, or else there is some function of our biological brains that is incapable of being replicated computationally. Both those ideas seem to go pretty much against the scientific mainstream.
I think the proposition that AI can't happen needs something a lot stronger than the kind of arguments people come up with while perched on bar-stools.
I suspect we are missing several layers of understanding about how the brain works (see also genetics - how is instinct encoded?). This is not to promote weird conspiracy theories, just to recognise that there is no reason to think we are anywhere near knowing all the unknowns.
We should remember too that most scientific and technological progress is achieved through experimentation at least as much as through pure thought. How adept AI will be at experimenting remains to be seen.
We don’t even know if humans have free will. Nor do we know if we are merely players in a simulation
Nor do we know if WE are just stochastic parrots, autocomplete machines of reflexes driven by our genes
So we don’t know any of this and you in particular don’t know shit, sorry
Robert F. Kennedy Jr. is having conversations with vice presidential candidates as he gets closer to announcing his running mate for his independent presidential bid.
Kennedy told The New York Times that NFL quarterback Aaron Rodgers and former Minnesota Gov. Jesse Ventura are at the top of his list. Stefanie Spear, a campaign spokesperson, confirmed the Times report and said there are other names on Kennedy’s short list.
Kennedy, a scion of one of the nation’s most prominent political families, has focused on getting access to the ballot, an expensive and time-consuming process that he has said will require him to collect more than a million signatures in a state-by-state effort.
Many states require independent candidates to name a running mate before they can seek access to the ballot, a factor driving the early push for Kennedy to make a pick. Major party candidates generally don’t pick vice presidential nominees until closer to their summer conventions. . . .
Rodgers, the longtime Green Bay Packers quarterback who now plays for the New York Jets, shares Kennedy’s distrust of vaccine mandates and, like Kennedy, is a fixture on anti-establishment podcasts. Ventura, a former professional wrestler, shocked observers when he won the race for Minnesota governor as an independent candidate in 1998.
SSI - Beyond national celebrity (a bit dimmed by time in the case of Ventura), either of these possible VP picks MIGHT give RFKjr a very wee (in more ways than one?) boost in the battleground state of Wisconsin.
Ventura having been gov of neighboring Minnesota, and Rodgers long-time quarterback for the Green Bay Packers.
OR in case of AR, perhaps not . . .
Forbes - Why Aaron Rodgers Was Never Beloved Like Other Green Bay Packers Greats
While Packer Nation mourned [previous QB Brett] Favre's departure, few tears were shed when the Rodgers trade became official. In fact there was more celebrating than sorrow.
Talk radio. Social media. Fan polls.
They’ve all had largely the same message for Rodgers in recent days: “Don’t let the door hit you on the way out.” . . .
https://www.forbes.com/sites/robreischel/2023/04/24/why-aaron-rodgers-was-never-beloved-like-other-green-bay-packers-greats/?sh=6878e6464425
https://www.iflscience.com/top-computer-scientist-thinks-super-intelligent-ai-could-be-here-by-2029-73280
Goertzel thinks large language models are the wrong tree to bark up (if I remember correctly, previously he said the wrong tree in the wrong forest on the wrong Continent). But that's not at all the same as thinking "AI can't happen".
The Future of TikTok in the US and UK: What You Need to Know
The US is considering a ban on TikTok while the UK may be following a similar path. But what does this mean for users and how is TikTok responding? Read on to find out.
#TikTok #Ban #US #UK #TechNews 📱
The US House of Representatives has passed a bill that could lead to the app being banned if its Chinese owner does not sell. What does this bill mean for TikTok and its users?
#TikTokBan #USBill #TechRegulation
If enforced, the bill would require the Chinese company ByteDance to sell its stake in the US version of TikTok, effectively resulting in a ban. But what is the impact of this decision and who would buy the US version of the app?
#ByteDance #US #TikTokSale
https://x.com/DoglinsNFT/status/1767941517342056720?s=20
1. The bullshit generator large language models that we have are not a stepping stone towards AI, useful though they might be in a number of applications.
2. There's a lot of evidence that the brain is more complicated than simply a large number of neurons applying general-purpose computation to create "intelligence", so trying to replicate it is a non-obvious problem.
The conceptually simplest way would probably be to create a really good simulation of the problems our brains evolved to solve, and then let artificial evolution have at it, but that isn't trivial to set up.
What I am trying to say is we don't know what we don't know. Now, that may be seen as a cop-out, but unless you think we have learnt all there is to know on this subject, it's undeniably true that we may yet learn things that do, for example, fall into the category of something non-physical being involved in intelligence.
But I presume by "Current AI", you mean the recent explosion in generative AI methods, and particularly the use of large language models. LLMs are an exciting tech that is going to have a lot of practical uses. The hype from the cargo cult commentators should be ignored, but this is important tech.
Do LLMs and other generative AI get us any closer to artificial general intelligence (AGI), something that thinks like a person and what I presume you mean by "actual AI"? I think you're right that there is a very big gap between LLMs and AGI. Fancier, bigger LLMs are not going to turn into AGI and spontaneously generate self-awareness. But that doesn't mean that they might not be a part of the puzzle that gets you to AGI. LLMs do, already, prod our understanding of "real" intelligence and how we use language. They do suggest that a Chomskyian universal grammar is unnecessary and that statistical models of language acquisition are more viable than we thought.
Unfortunately you seem to be focused on point-scoring.
Yet I think the consensus now is that it not only can, but does.
Trying to think of a good analogy. How about: prior to Newton there was no reason to think that perpetual motion wasn't possible? Or maybe: prior to Columbus there was no reason for Europeans to think that the Americas existed (you'll have to excuse me the Vikings).
That makes it... unlikely... that Aaron Rodgers would be able to devote much time to being RFK's VP.
You are what you read.
I upset Penrose, long ago, when I asked him why a machine couldn't do his quantum stuff.
https://x.com/uklabour/status/1767888379172049075?s=61&t=c6bcp0cjChLfQN5Tc8A_6g
And solving particular tasks is not evidence of AI. For centuries people thought that a machine that could play chess would have to be intelligent.
Then this played chess. Badly, but in 1K of space!
In the case of streaming, an extraordinary number of "experts" were unaware that
1) To stream a film *instantly*, you only need a network bitrate marginally higher than the playback bitrate.
2) The planned, years in advance, increase in capacities of the networks in various countries.
Expertise didn't apparently extend to reading published papers on the subject.