Can a Robot Read Your Emotions?


Can a robot read your emotions? Apple, Google, Facebook and other technology companies seem to think so. They are collectively spending billions of dollars to build emotion-reading devices that can interact meaningfully (and profitably) with humans using artificial intelligence.


These companies are banking on a belief about emotions that has held sway for more than 100 years: smiles, scowls and other facial movements are worldwide expressions of certain emotions, built in from birth. But is that belief correct? Scientists have tested it across the world. They use photographs of posed faces (pouts, smiles), each accompanied by a list of emotion words (sad, surprised, happy and so on), and ask people to pick the word that best matches the face. Sometimes they tell people a story about an emotion and ask them to choose between posed faces.


Westerners choose the expected word about 85 per cent of the time. The rate is lower in eastern cultures, but overall it is enough to claim that widened eyes, wrinkled noses and other facial movements are universal expressions of emotion. The studies have been so well replicated that universal emotions seem to be bulletproof scientific fact, like the law of gravity, which would be good news for robots and their creators.


But if you tweak these emotion-matching experiments slightly, the evidence for universal expressions dissolves. Simply remove the lists of emotion words, and let subjects label each photo or sound with any emotion word they know. In these experiments, US subjects identify the expected emotion in photos less than 50 per cent of the time. For subjects in remote cultures with little western contact, the results differ even more.

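Part of the gap between the two experimental designs is a simple baseline effect: handing subjects a short word list inflates agreement even for a guesser who ignores the face entirely. A minimal simulation makes the point (the ~200-word free-label vocabulary is an assumed figure for illustration, not from the studies):

```python
import random

random.seed(42)

def chance_agreement(n_options: int, trials: int = 100_000) -> float:
    """Probability that a blind guess matches the target label
    when both are drawn uniformly from n_options labels."""
    hits = sum(
        random.randrange(n_options) == random.randrange(n_options)
        for _ in range(trials)
    )
    return hits / trials

# Forced choice: the experimenter supplies ~6 emotion words.
forced = chance_agreement(6)    # close to 1/6, i.e. about 0.17

# Free labeling: subjects draw on their whole emotion vocabulary
# (assume ~200 words for the sake of the sketch).
free = chance_agreement(200)    # close to 1/200, i.e. about 0.005

print(f"forced-choice baseline: {forced:.3f}")
print(f"free-label baseline:    {free:.3f}")
```

Blind guessing alone cannot explain an 85 per cent match rate, but it shows why scores from the two designs are not directly comparable: the word list quietly narrows the answer space.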

Overall, we found that these and other sorts of emotion-matching experiments, which have supplied the primary evidence for universal emotions, actually teach the expected answers to participants in a subtle way that escaped notice for decades — like an unintentional cheat sheet. In reality, you’re not “reading” faces and voices. The surrounding situation, which provides subtle cues, and your experiences in similar situations, are what allow you to see faces and voices as emotional.


A knitted brow may mean someone is angry, but in other contexts it means they are thinking, or squinting in bright light. Your brain processes this so quickly that the other person’s face and voice seem to speak for themselves. A hypothetical emotion-reading robot would need tremendous knowledge and context to guess someone’s emotional experiences.

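The reasoning above can be sketched as a lookup over (signal, context) pairs rather than over signals alone. Every entry here is a hypothetical illustration, not a real classifier:

```python
def infer_state(facial_movement: str, context: str) -> str:
    """Toy illustration: the same facial movement maps to different
    mental states depending on context, so no signal-only table works."""
    table = {
        ("knitted_brow", "heated_argument"): "angry",
        ("knitted_brow", "difficult_puzzle"): "thinking",
        ("knitted_brow", "bright_sunlight"): "squinting",
    }
    return table.get((facial_movement, context), "unknown")

# The same movement yields three different readings:
for ctx in ("heated_argument", "difficult_puzzle", "bright_sunlight"):
    print(ctx, "->", infer_state("knitted_brow", ctx))
```

A real system would face the same structure at vastly greater scale: the context key is open-ended, which is why the article calls the knowledge requirement "tremendous".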

So where did the idea of universal emotions come from? Most scientists point to Charles Darwin’s The Expression of the Emotions in Man and Animals (1872) for proof that facial expressions are universal products of natural selection. In fact, Darwin never made that claim. The myth was started in the 1920s by a psychologist, Floyd Allport, whose evolutionary spin job was attributed to Darwin, thus launching nearly a century of misguided beliefs.


Will robots become sophisticated enough to take away jobs that require knowledge of feelings, such as a salesperson or a nurse? I think it’s unlikely any time soon. You can probably build a robot that could learn a person’s facial movements in context over a long time. It is far more difficult to generalise across all people in all cultures, even for simple head movements. People in some cultures shake their head side to side to mean “yes” or nod to mean “no”. Pity the robot that gets those movements backwards. Pity even more the human who depends on that robot.



Nevertheless, tech companies are pursuing emotion-reading devices, despite the dubious scientific basis. There is no universal expression of any emotion for a robot to detect. Instead, variety is the norm.


The writer is author of ‘How Emotions Are Made: The Secret Life of the Brain’
