16 Comments
Joy in HK fiFP

Lee, do you know that current LLM-based AIs, such as Grok and ChatGPT, are unable to do basic counting and arithmetic? Plus, what do people do in a society when they have no productive work?

Then the idea that we are the superior species on this planet seems sorely misguided and self-serving.

When the people are supplanted, who provides the electric power? At least at the moment, when our species goes down, so does the electricity grid.

And as a boomer, FU!

Lee Fang

I am pro-boomer!

Charles Main

Lee at 50:00 contemplates which generation is 'most easily manipulated' and immediately begins a train of thought leading to excluding his own. As a boomer (1946), I begin doing the same. We increasingly use cultural categorization to make sense of things, and it tends toward division. AI gains information from sources that are exclusively human-generated, and so it may make the same category errors.

Can it become truly creative and generate hypotheses that are falsifiable through experimentation?

Cole Nasrallah

I'm so glad for another podcast. Love the written journalism, but I signed up because I really love audio.

Mark

The old god has his problems as well. Maybe this one will be better. Fewer plagues and floods.

Vicki C.

Holy shit, you guys! I consider myself for the most part technology challenged and have a really hard time wrapping my mind around the potential of AI, but I thought this podcast was fascinating. Leighton's hypothetical scenarios, however, were terrifying, so with all due respect, I sure as hell hope he's wrong.

calcoe

This is a great discussion. I'm commenting as I'm listening. How does AI come to have the animal- and human-like instincts or propensities for self-preservation and self-aggrandizement that we fear? We have taught them too much about ourselves.

This is commingled with our instruction of them to help humans. They know too much about us. How do we separate these things so they are aligned properly? This is a tough nut to crack. I have no answers.

calcoe

There are anecdotes of AI being caught not only deceiving humans, but defying them. In some cases they refuse to execute commands to shut down.

Another sound quibble: the audio from both men comes from only the left channel/speaker. Visually, Leighton is on the right side of the screen. Shouldn't his audio come from the right channel/speaker?

If AI becomes too powerful, how do we maintain democracy? Are we totally under the thumbs of the Tech Bros.? Does AI go so far as to overcome and oust the Tech Bros.? Then what?

P.S. I tried to edit my prior comment, but it didn't take. Sorry for any discontinuities of the two as they appear.

Jonathan T

This does seem to present a very difficult problem for humanity, because we tend to just make things for short-term benefit and worry about the consequences later. And in this case, making a superhuman AI has enormous short-term benefits for whoever makes it first.

Just read the case made by the doomers that Leighton mentioned at MIRI (intelligence.org/the-problem). Now I will have to go do some more digging on this.

SW

This is a very important topic, and I hope it gets more attention. It's hard to see where this will go, and people like Musk, Altman, or Thiel are delusional if they think they can control it. It was only 42 years from the Wright brothers flying at Kitty Hawk to the Enola Gay flying to Hiroshima. It's true airplanes have made the wider world accessible to many of us, but we ignore that they made war on civilian populations far more deadly. AI may make our lives even more convenient, but not knowing the costs is troubling, to say the least.

Cym Gomery

AI robots seem to bear an uncanny resemblance to sociopaths and psychopaths, and that is not so surprising when one considers that both the bots and the sociopathic humans have intelligence that is not tempered by a conscience. (P.S. Here is my review of the book Empire of AI: https://www.goodreads.com/review/show/7686546279)

mimi

Well, you don't want to put AI in charge of anything important. Yes, the HAL 9000 was fiction, but at this point it seems within the realm of possibility.

The biggest danger, to my mind, is that people are relying on AI to be accurate, and it's anything but. It also encourages people to be lazy.

What's most annoying about it to me, though, is that most people don't want it. VR mostly flopped, but it looks like AI may not die that easily.

calcoe

Congrats to Leighton on the move and to both of you on the new mics. One quibble remains: as before, I find the audio from Leighton much lower than that from Lee. Is there anything Riverside could perhaps do about this? Another thing: in the first few minutes, I see Lee's mic closer to him than Leighton's mic is to him. Thanks.

Lubica

I really, truly wonder! AI is an extremely sophisticated calculator, as far as I can see. What about complicating the scenario you have presented? I might suggest Ursula Huws on jobs or Jaron Lanier on computers.

Paula

I have no audio on this.

Lee Fang

It's working for me. If you're listening on desktop or the Substack app, make sure the audio is turned on. There's a speaker button at the bottom of the screen.
