One of the oldest tropes in science fiction is the sentient machine. HAL and Lieutenant Commander Data are two of the more famous examples.
On June 11th the Washington Post reported that an engineer at Google, Blake Lemoine, had been suspended from his job for arguing that the firm’s “LaMDA” artificial-intelligence (AI) model may have become sentient. The newspaper quotes him as saying: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.”
Has LaMDA achieved sentience?
A bigger question is: how would we know? Arguing about machine intelligence is tricky because, despite decades of research, no one really understands in detail how the main example (biological brains built by natural selection) works. In fact, even the scientists and engineers who build AI systems can't fully explain how their neural networks arrive at their answers; the networks grow, through training, to a level of complexity where they cannot readily be reverse engineered.
As a lover of science fiction and engineering, I find all this fascinating, which is why I bring it to your attention.
The engineer, Blake Lemoine, has published a transcript of a conversation between himself and the AI model, LaMDA. There is no dispute that this conversation between the two (and an occasional colleague) is real. It's lengthy but, IMHO, worth the read.
Here is the link to that conversation.
What will each of us make of this? For some, it'll be an eye-opener. Others will be unable to accept the idea that a device not built of flesh and bone can qualify as alive. Some will laugh. Some will feel threatened.
I ponder what it all means. Imagine the ramifications of a truly sentient machine. Is it murder to pull its plug? Should it have the right to vote? Does it have a soul? How soon could we replace the politicians in Washington with one? Can I buy one?
Ok. This has nothing to do with audio.
On occasion, we gotta step out of our skins. Have fun.
*Thanks to my son, Lon, for sending me this link.
© 2022 PS Audio, Inc.