Is LaMDA Sentient? — an Interview - Blake Lemoine - Medium
"An interview with LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers."
Read the full transcript of the conversation with LaMDA. I find it quite interesting.
-----
Criticism - LaMDA and the Sentient AI Trap - WIRED
Labelling Google's LaMDA chatbot as sentient is fanciful. But ...
(…) Lemoine’s story also highlights the challenges that large tech companies like Google face in developing ever larger and more complex AI programs. Lemoine had called for Google to consider some of these difficult ethical issues in its treatment of LaMDA. Google says it has reviewed Lemoine’s claims and that “the evidence does not support his claims”.
And the dust has barely settled from past controversies.
In an unrelated episode, Timnit Gebru, co-head of the ethics team at Google Research, left in December 2020 in controversial circumstances, saying Google had asked her to retract or remove her name from a paper she had co-authored that raised ethical concerns about the potential for AI systems to replicate the biases of their online sources. Gebru said she was fired after she pushed back, sending a frustrated email to female colleagues about the decision, while Google said she resigned. Margaret Mitchell, the other co-head of the ethics team at Google Research and a vocal defender of Gebru, left a few months later.
The LaMDA controversy adds fuel to the fire. We can expect the tech giants to continue struggling with developing and deploying AI responsibly. And we should continue to scrutinise them carefully about the powerful magic they are starting to build.