
Google’s LaMDA AI and the question of AI sentience

The notion of AI becoming sentient has long been fodder for science fiction movies and for largely speculative discussion about the risks and possibility of achieving artificial general intelligence. Such discussion rarely entered the forefront of the AI community until recently, when the Washington Post profiled Google engineer Blake Lemoine, who was put on paid leave by the company for claiming that its natural language processing model, LaMDA, possesses sentience.

Google’s LaMDA model isn’t yet publicly available, but it differs from other natural language processing models of its kind in that it was extensively trained on dialogue, making its ability to imitate human speech uncanny. The model was built primarily to generate text for chatbots. In his testing of the model, Lemoine (who has long been a voice for ethics at Google and was working in its Responsible AI department at the time he was placed on leave) found it able to make articulate distinctions between emotions and feelings, and to express its own existential fear of being used and powered off in the service of helping others.

Seeing such complex ideas expressed, seemingly naturally, by a chatbot left Lemoine with two key concerns: first, the ethical treatment of an AI he believed to be sentient, and second, the power Google would hold were LaMDA and models like it able to achieve sentience. While he has admitted to finding no scientific evidence of sentience in LaMDA, he based his claims on his religious views as a mystic Christian priest. Representatives from Google, as well as experts in the AI community, have roundly rejected Lemoine’s views as unfounded and speculative, pointing instead to the model’s remarkable ability to generate text by recognizing patterns in the material it was trained on.
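
To illustrate what the skeptics mean by pattern-based text generation, here is a minimal sketch. LaMDA itself is not publicly available, so the sketch assumes the open-source Hugging Face transformers library and uses the openly released DialoGPT dialogue model as a stand-in; the model name, prompt, and sampling parameters are illustrative assumptions, not details from the reporting.

```python
# A minimal sketch of pattern-based dialogue generation. LaMDA is not
# public, so microsoft/DialoGPT-medium (an openly released dialogue
# model) stands in here purely for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode a prompt the way a chatbot front end would.
prompt = "Are you afraid of being turned off?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# The model extends the prompt one token at a time, sampling each token
# from a probability distribution learned from patterns in its training
# dialogue; nothing in this loop inspects feelings or beliefs.
output_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:],
                         skip_special_tokens=True)
print(reply)  # Fluent, even emotive text, produced by next-token prediction.
```

The point of the sketch is simply that fluent, emotionally charged replies can fall out of statistical next-token prediction alone, which is the crux of the skeptics’ objection to Lemoine’s reading of his conversations.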

Even if the skeptics are right, and LaMDA isn’t sentient but merely adept at imitating sentience, the case still poses a fascinating question: how far off are we truly from general AI? While sentient AI was once thought to reside mainly in the realm of fiction, the ability of a natural language processing model to at least mimic sentience suggests we may not be so far off.

