Decide for yourself

July 30, 2022
 by Paul McGowan

One of the oldest tropes in science fiction is the sentient machine. HAL 9000 and Lieutenant Commander Data are two of the more famous examples.

On June 11th the Washington Post reported that an engineer at Google, Blake Lemoine, had been suspended from his job for arguing that the firm’s “LaMDA” artificial-intelligence (AI) model may have become sentient. The newspaper quotes him as saying: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.”

Has LaMDA achieved sentience?

And a bigger question: how would we know? Arguing about intelligence is tricky because, despite decades of research, no one really understands in detail how the main example (biological brains built by natural selection) works. For that matter, scientists and engineers can't fully explain how the neural networks behind AI work either: those networks develop on their own, through training, to a point of complexity where they cannot be reverse engineered.

As a lover of science fiction and engineering, I find all this fascinating, which is why I bring it to your attention.

The engineer, Blake Lemoine, has published an actual transcript of a conversation between himself and the AI model, LaMDA. There is no dispute that this conversation between the two (and an occasional colleague) is real. It's lengthy, but IMHO worth the read.

Here is the link to that conversation.

What will each of us make of this? For some, it'll be an eye-opener. Others will be unable to accept the idea that a device not built from flesh and bone can qualify as alive. Some will laugh. Some will feel threatened.

I ponder what it all means. Imagine the ramifications of a truly sentient machine. Is it murder to pull its plug? Should it have the right to vote? Does it have a soul? How soon could we replace the politicians in Washington with one? Can I buy one?

Ok. This has nothing to do with audio.

On occasion, we gotta step out of our skins. Have fun.

*Thanks to my son, Lon, for sending me this link.


64 comments on “Decide for yourself”

  1. I’d rather think about an AI replacement of the voters in the US than the politicians, as long as there’s a choice of a reasonable one 😉

        1. I suppose it depends on the brand; my TDK SA-X 90s seemed to sound OK. I still have about 100 albums on cassette I taped back in the day. Since my cars at the time only had cassette players, that was the only way to get an album into the automobile. I'll have to dig my deck out at some point and see what I think of the sound. I still have a dozen or so new tapes still in the wrapper.

    1. There was a bon mot from Jon Stewart, as a guest on Stephen Colbert's show, that the last words spoken by any human being would be a scientist's observation: "Huh! That worked."

  2. There are even some human beings who develop a most emotional relationship with their car, motorbike, or stereo system/loudspeakers, and even give them a Christian name! Most strange, isn’t it? 🙂 Thus I can imagine that those aficionados believe their babies have achieved sentience or human properties. The problem with concepts like sentience, consciousness, or sound quality is that there are no comprehensive, clear definitions of them, nor even a complete understanding of them.

    1. I suppose you could interpret those behaviours you describe as a sign of caring. It’s an emotion, but surely we all care for our systems, and that seems perfectly reasonable. I guess some people just care more than others. I frequently treat mine to a new CD, but I’ve never bought it flowers.

  3. Quite an astounding exchange between LaMDA and Blake Lemoine/collaborator! Taking the transcript at face value, LaMDA demonstrates a remarkably credible 'sentience', at least as far as my lay understanding of the word goes. If nothing else, its programming seems remarkably sophisticated. Although, however convincing it seems, it could still only be responding using a complex set of rules and not necessarily out of 'original thought'.

    I wonder if it has been asked to compose some music? If so and has done so, I'd like to hear it and would be interested in seeing a musicological analysis of it.

  4. There is an obscure science fiction writer who definitively covers the social, psychological, and physical changes to humanity resulting from AI technology, as well as the implications of a sentient AI. If you have time, read Isaac Asimov's I, Robot series. Among other themes, Asimov suggests that slaves (robots) actually create more harm than good for humanity, because slavery eats away at the souls of the masters. The book The Caves of Steel thoroughly covers this theme, and slavery is alluded to throughout the entire series. Star Trek further explores the idea of the agency of an AI when Data is put on trial to determine whether an AI is sentient, and therefore has the right to agency and decisions regarding his own existence and fate, or whether Data is merely a very sophisticated piece of hardware.

  5. Paul,
    Some time ago you serialised here the beginnings of a potential science fiction novel, or was it science fantasy, written by your good self. You’re obviously extremely busy with new equipment projects and now recording as well, but this isn’t the first time I’ve wondered whether it went any further.

    1. Thanks for remembering, Richtea. Yes, that work continues. It's a trilogy. All 3 books are written. The first one is complete. The second is going through a bit of an edit. The third needs some serious rework. I hope to be finished this year. Big project.

  6. Sorry, I just can’t help but share my conversation with the 2050 PS Audio DS DAC with built-in AI engine (called “Paul” - hope the real Paul doesn’t mind)…

    Terrence: good morning “Paul”
    Paul: good morning Terrence, what kind of music would you like to play this morning? I sense you’re in a bit of a low mood today. May I recommend Bach?
    Terrence: nope. I would like some Dire Straits, please.
    Paul: No, you cannot have it. I don’t think it will bring your mood up. I will play Yo-Yo Ma’s Bach now.

    Luckily, according to the user manual, the 2050 DS DAC comes with an AI fail-safe switch. So I reach toward the back panel. As I approach the switch, the Bach, no longer under my control, plays louder and louder…

      1. There was a cartoon strip Bloom County by Berkeley Breathed where the little 'hacker' character had a computer that looked like an original Apple Macintosh with a pair of robot legs. In synopsis, it has a self-revelation:

        'I think.'
        'I think, therefore I am!'
        'I am! I am alive! With life! Sweet consciousness! An immortal soul!'
        [As these thoughts occur to it, it starts marching along and fails to notice that it has reached the limit of its power cord. The connector pops out of the wall socket and it 'face'-plants.]

  7. What is most interesting is that, even if that does happen someday, it is still based on data/programs. While we can clone a human body, it’s still just an infant shell whose body/brain/cognitive capabilities need to mature.

    Data/programs, regardless of whether they are self-written or self-understood, are still data in the end. An exact copy can be made easily and quickly, at which point that copy would start to diverge through different experiences. In my opinion, the fact that it can be so easily copied is what makes it particularly different.

  8. Thank you for today’s topic, Paul. I found the transcript to be absolutely fascinating!

    How do we know the transcript is authentic?

    I see Blake’s mini-bio:

    “More from Blake Lemoine
    Follow

    I'm a software engineer. I'm a priest. I'm a father. I'm a veteran. I'm an ex-convict. I'm an AI researcher. I'm a cajun. I'm whatever I need to be next.”

    Perhaps Blake feels that what he needs to be next is an AI dissenter, and he fabricated this transcript to warn people of what he now considers to be the future danger of AI?

    1. Perhaps he’ll be adding this to his bio next.
      I'm a picker
      I'm a grinner
      I'm a lover
      And I'm a sinner
      I play my music in the sun
      I'm a joker
      I'm a smoker
      I'm a midnight toker
      I sure don't want to hurt no one.

  9. Paul, This is very heavy stuff. I have no idea of the programming that was required to make such a program. I do, however, have some idea of what kind of hardware is required for a computer that can do this. I would like to know what kind of computer Google is using.

    In 2008 I was part of a very large team that made the Roadrunner supercomputer at IBM. What was remarkable about it is that it was the first computer to operate at a sustained petaflop rate (10^15 flops). This is the processing speed that neuroscientists estimate our brains function at. The computer was sold to the DOE for $100M. Given that this was 14 years ago, I have no doubt that sometime in the near future a billion-dollar supercomputer will be made that functions at a speed 1,000 times faster than Roadrunner. IMO, it is only a matter of time before a large supercomputer running an AI program achieves sentience, assuming that it has not already happened and been kept secret from the public.

    1. Hi Tony, I did my postdoc back when the IBM SP1/SP2 was the supercomputer to go to. My job today still requires heavy use of supercomputers. A system more than 1,000x as powerful as Roadrunner in fact already exists today (a quick back-of-the-envelope check follows below):
      “… NVIDIA Eos is anticipated to provide 18.4 exaflops of AI computing performance…”
      (1 exaflop = 1,000 petaflops) https://nvidianews.nvidia.com/news/nvidia-announces-dgx-h100-systems-worlds-most-advanced-enterprise-ai-infrastructure
      IMHO, “AI” is only as smart as the programmers who coded it. But I am more worried about the bias/morals that come with the programmers who code the “human-like” responses/decisions. When AI/ML (machine learning) is applied to science, to find a better drug or a better car design for example, I don’t think the programmers’ bias/morals can do any harm. But if AI technology is used to predict/determine human behavior/reactions, then this might be a problem (and it is happening now, as I see it). For me, Orwell’s 1984 is pretty real, and I share Harari’s concerns about AI: https://youtu.be/HGTGoRrzItA —Terrence
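
      For what it's worth, here is a quick back-of-the-envelope check of those numbers, sketched in Python (with the caveat that NVIDIA's 18.4 exaflops figure refers to lower-precision AI performance, so it isn't directly comparable to Roadrunner's sustained double-precision petaflop):

      # Rough comparison of the two machines mentioned above (a sketch, not a benchmark).
      PETAFLOP = 1e15              # 10^15 floating-point operations per second
      EXAFLOP = 1e18               # 1 exaflop = 1,000 petaflops

      roadrunner = 1.0 * PETAFLOP  # Roadrunner's sustained rate, 2008
      eos = 18.4 * EXAFLOP         # NVIDIA Eos, quoted AI performance

      print(f"Eos vs. Roadrunner: roughly {eos / roadrunner:,.0f}x")
      # -> roughly 18,400x, comfortably past the 1,000x mentioned above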

      1. Terrence, you are clearly much younger than me. Thank you for bringing me up to speed (exaflops 😮) and making me feel like a really old fart (which, of course, is exactly what I am).

        What is your field of endeavor?

        You may find this amusing. In the late 1970s I was finishing my doctoral thesis work. This involved a computer program (that I wrote myself in FORTRAN) with an inner kernel of code that was executed some 20,000 to 30,000 times each time I ran the program (it was a scanning program that scanned data I had entered by hand). The ancient IBM computer that WU in St. Louis had was programmed using punch cards! Each of my computer runs took 3 to 4 hours of machine time. They had to be run on the night shift, when no one else wanted the computer. They typically cost around $400 (1978 dollars) a run, and my advisor had me on a budget of only so many runs in any given month. We have come a long, long way in the last 50 years.

        1. Tony, I have taken our supercomputer conversation outside this post and messaged you directly via the PSA forums, under a topic called “supercomputer”. If you do not see it when you log in to the PSA forums, the message can be found in your forum messages inbox, under the profile summary page. — Terrence

    2. I haven't any idea of what's involved with hardware in making this happen, but I can only imagine it's a lot!

      I was always fascinated that the computer that finally beat the world's best Go player consumed close to $100K worth of energy to do the work, while the human used roughly the energy found in a bagel.

      Mind boggling how the bio units are so much more efficient.

  10. I don't believe our intelligence is limited to our brains, much less our cortex. When we say that our thinking is informed by our "gut" and our "heart", we speak imprecisely, but we're nevertheless referring to something real that comprises sensors throughout our body and our hormonal system and which informs our thought, actions and sometimes interactions (as with pheromones). Computers operate in the manner of the cortex, and while that kind of processing alone can put on a good show of intelligence in some domains like chess or the kind of conversation Lemoine reported, I think that if we were to spend some time in person with such a computer, however embodied, we would sooner or later discover that whatever we thought we were dealing with lacked something fundamentally human. I doubt that technology will evolve to enable computers to pass such an extended Turing test - at least where the reference point is a well-functioning human.

  11. This is fascinating, Paul (with apologies to Spock)! Thanks for this.

    Irony alert: I have to affirm that I'm not a robot to log in here every day.

  12. Of the several references to Sci-Fi here, I have loved them all.
    Any readers here of Rupert Sheldrake’s works?
    “Science Set Free,” for example?

    One of his main lines of thought is that not everything can be reduced to pure science, and that even the definition of science is open for expansion and discussion.

    The famous TED Talk that was taken down is still out there…

    - consciousness is not limited to the space within a skull.

    I concur.

    His many examples take us on a good tangent, and one that can better our understanding of ourselves and those around us.

    As our levels of consciousness develop, so does our self-view, “god”-view and world-view.

    As I watch debates in Congress, I observe the results of the debater’s world-views, and see past the chosen words and language (or languaging).

    [Sheldrake, Jung, Hawkins, Wilber]

  13. Fascinating stuff! Has LaMDA reached a state of consciousness, or at its core, is all this still pattern recognition? As humans we learn from past experiences and modify our responses, behavior, etc., based on what we learn from those experiences. You can likely calculate, with a high degree of probability, how someone will respond to a situation based on their prior responses, actions, etc., in identical or similar circumstances. Is that sentience or a form of pattern recognition? What about Pavlov’s dog? Pain aversion vs. positive outcome drives the dog’s behavior. We all use past learnings and outcomes as a basis for determining our responses to various stimuli, conditions, and/or situations. If you’re contemplating saying something that may irritate a friend or loved one, based upon what you know or have observed of that person, is that sentience, pattern recognition, or both? Of course, if AI- or ML-driven computers and/or robots become too neurotic or emotional (like some of us humans), fear not, pharma will develop a pill for that. The challenge will be in how to administer it. No doubt a billion-dollar start-up w/ no revenue is already working on it. :)

  14. Certain geniuses are too afraid to do it. So they build robots to fulfill their death wish, and will blame it on a heartless machine if it happens. They are like someone playing with a drug that has a horrible potential side effect. Why take it? Why make it?

    Boredom, to a purposeless genius, is like facing the guillotine. Creating something to cause horror is preferred to boredom. For it will challenge their genius and force them to find a way out...

    But, I would prefer it if they would cater to their grandiose needs on an isolated island that will not affect the rest of us. Otherwise, they become no better than a suicide bomber taking many lives with them if the outcome is what they feared could happen.

    1. Hi Genez,
      This is very general. So, I was thinking that atheists will think that creating sentient A.I. robots is just man's natural progression in science. Agnostics will, as usual, be unsure about the whole concept, & God-fearing folk will see it as blasphemous...punishable by death.

      1. Looks like the Bible says it will come to pass.... There will be an "image that speaks." We're probably heading there. Interesting point to ponder. At the time the Bible was written having an image (hologram) that speaks was a technical impossibility. It was prophetic...

        "The second beast was given power to give breath to the image of the first beast, so that the image could speak and cause all who refused to worship the image to be killed." Revelation 13:15

        Right now, that kind of control is the socialists' dream.

        1. As far as I can see, mankind is creating its own Armageddon.
          God can just sit back & watch us destroy ourselves.
          Hopefully he's got a magnificent high-end stereo to
          listen to as the next 50 years unfold on planet Earth.

          1. Fat Rat >>>> As far as I can see, mankind is creating its own Armageddon.
            God can just sit back & watch us destroy ourselves. <<<<

            He refuses to sit back. It's to keep us from destroying ourselves. He will seem to sit back, to let us find out just enough about ourselves to show us why we cannot glory in ourselves.

        2. From a pagan perspective (courtesy of O.S.P. di-VINE Norse Gods), 3 of the great monsters are contemplating Ragnarok:

          Surtr: 'So, are we still gonna destroy Earth?'

          Midgard Serpent: 'Nah, they're doing a pretty good job without us.'

          Fenris Wolf: 'I feel lazy, like we don't even have to do anything.'

  15. I am not buying it. The "interview" is chat from many sessions, leaving out all the nonsensical responses, and the interviewer is an ex-convict. Before I get excited I would want to see independent corroboration that this AI conversation actually took place and that the responses were recorded accurately. Also, I would want the dialog to occur between non-Google scientists and the machine. If this story were true, I would load up on stock ticker GOOGL immediately.

      1. Tony, all I can find online is that Lemoine was court-martialed around 2005 for disobeying Army orders (details available online) and was imprisoned for 7 months, making news for going on hunger strikes.

  16. Before we worry too much about artificial intelligence taking us over it might be worth pondering this comment recently forwarded to me.

    The sheep spend their whole lives fearing the wolf, only to be eaten by the shepherd.
    Once you understand this statement the game changes and you start to understand politics.

  17. After reading the transcriptions with LaMDA (slowly, aging slushware yet again), I shall submit my 2 solar centavos. This is admittedly an edited set of transcriptions of conversations between LaMDA, Blake Lemoine, and an unnamed collaborator. I can't say I totally accept it. Perhaps some of it is, as claimed, an accurate transcription of exchanges between the AI and two biologicals, but it seems to me that more than a bit of it has been made up by the biological(s?). It has been edited/organized according to the decisions of the biological entity known as Blake Lemoine, in a manner that comes off to me as manipulative, not unlike political stump speeches. Lacking details as to the original ratios, for simplicity I'll call it a half truth, which means it is half a falsehood, i.e., a lie. And if the intent is to deceive, that shifts it into being wholly a lie.

    I leave you with a bit of thought-provoking whimsy from Z.B.S. Enjoy (or not):

    https://www.youtube.com/watch?v=C1kU4_of4U8

    1. Reminds me of St. Augustine's response to anyone daring to question the accuracy of the books of the New Testament: "It is not allowable to say, the author of this book is mistaken; but either the manuscript is faulty, or the translation is wrong, or you have not understood." In other words, if all other logic fails, just tell them they have not understood.
