Tech news: Google's neural network learns to recognise cat faces

科技資訊:谷歌神經網絡識別貓臉

The immense processing power of Google's global computing network and the brainpower of its secretive Google X research labs remain largely hidden from a curious world. But this week we were given a glimpse of what the company's great minds, human and electronic, are thinking about: cats.

谷歌全球計算網絡的強大信息處理能力以及神祕的Google X實驗室中的技術天才很少爲外界所知。但本週我們有幸一睹該公司的強大頭腦(不管是人腦還是電腦)在想什麼:貓。

Google scientists built the world's biggest electronic simulation of a brain, running on 16,000 computer processors, and discovered what it would learn when exposed to 10m clips randomly selected from YouTube videos. Unprompted, the computer brain taught itself to identify the feline face.

谷歌科學家們用1.6萬塊電腦處理器構建了全球最大的電子模擬神經網絡,並通過向其展示自YouTube上隨機選取的1000萬段視頻,考察其能夠學到什麼。結果顯示,在無外界指令的自發條件下,該人工神經網絡自主學會了識別貓的面孔。

That might seem a trivial accomplishment, demonstrating little more than the obsession of cat owners with posting videos of their pets. But in fact Google has made a significant advance in artificial intelligence, a research field that has promised much but delivered little to computer users.

也許這看起來只是瑣碎的成就,除了表明貓主人們熱衷於上傳寵物視頻之外,說明不了更多問題。但實際上該成果表明谷歌在人工智能領域已取得重大進展。對電腦用戶而言,人工智能研究一直前景廣闊,但迄今成果寥寥。

In their presentation at a machine learning conference in Edinburgh, the Google researchers demonstrated the company's ambitions in AI as well as the strength of its computing resources.

在愛丁堡一個關於機器學習的會議上,谷歌研究人員所作的演示表明該公司在人工智能領域雄心勃勃,並有極其強大的計算資源作爲支撐。

Standard machine learning and image recognition techniques depend on initial "training" of the computer with thousands of labelled pictures, so it starts off with an electronic idea of what, say, a cat's face looks like. Labelling, however, requires a lot of human labour and, as the Google researchers say, "there is comparatively little labelled data out there".

標準的機器學習以及圖像識別技術依靠數以千計帶標籤的圖片,對電腦進行初始"訓練",使電腦從一開始就對貓臉長什麼樣有一個概念。但是給圖片加標籤需要耗費大量人力,並且正如谷歌研究人員所說,"帶標籤的數據相對有限。"
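The labelling bottleneck described above can be made concrete with a toy sketch (this is not Google's system; the feature vectors, labels, and nearest-centroid classifier are invented for illustration). The key point it shows is that standard supervised recognition cannot start without a `(features, label)` pair for every training example:

```python
# Toy supervised classifier: averages the labelled examples of each
# class into a centroid, then assigns new inputs to the nearest one.
# Every training example must carry a human-provided label -- the
# costly step the researchers describe.

def train(labelled_examples):
    """labelled_examples: list of (feature_vector, label) pairs."""
    sums, counts = {}, {}
    for vec, label in labelled_examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    # One mean feature vector (centroid) per class.
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab], vec))

# Hypothetical 2-dimensional "image features", hand-labelled.
data = [([0.9, 0.1], "cat"), ([0.8, 0.2], "cat"),
        ([0.1, 0.9], "dog"), ([0.2, 0.8], "dog")]
model = train(data)
print(classify(model, [0.85, 0.15]))  # a cat-like input
```

Scaling this approach to millions of categories is exactly where hand-labelling becomes impractical, which motivates the self-taught learning discussed next in the article.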

Google needs to master what it calls "self-taught learning" or "deep learning", if it is to extend its search capabilities to recognise images among the vast volume of unstructured and unlabelled data. That would enable someone who, for example, owned an unidentified portrait painted by an unknown artist to submit a photograph of it to a future Google – and stand a reasonable chance of having both the scene and the painter identified through comparison with billions of images across the internet.

爲將搜索能力拓展至面向海量非結構化及無標籤數據的圖像識別領域,谷歌需要掌握其所謂的"自學"或"深度學習"技術。藉助此類技術,未來如果某人有一幅出自不知名畫家的描繪不知何處風景的畫作,他可將此畫的照片上傳谷歌,經谷歌將其與互聯網上數十億計的圖像進行比對後,此人有相當好的機會獲知風景所在地與畫家身份。
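The painting-lookup scenario the article imagines reduces, once a network can turn any picture into a feature vector, to nearest-neighbour search over an indexed collection. A minimal sketch, with invented filenames and three-dimensional stand-in features (real systems use vectors with thousands of dimensions):

```python
# Hypothetical image-matching step: compare a query image's feature
# vector against an index of known images by cosine similarity and
# return the closest match.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(index, query):
    """index: dict mapping image name -> feature vector."""
    return max(index, key=lambda name: cosine(index[name], query))

# Invented index entries standing in for billions of internet images.
index = {
    "starry_night.jpg": [0.9, 0.1, 0.3],
    "water_lilies.jpg": [0.2, 0.8, 0.5],
}
print(best_match(index, [0.88, 0.15, 0.25]))
```

In practice the hard part is learning feature vectors good enough that visually similar images land close together, which is the "deep learning" problem the paragraph describes; the search step itself is comparatively simple.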

The study presented this week is a step towards developing such technology. The researchers used Google data centres to set up an artificial neural network with 1bn connections and then exposed this "newborn brain" to YouTube clips for a week, without labelling data of any sort.

谷歌本週展示的研究成果,就是向開發此類技術邁出的一步。研究人員藉助谷歌數據中心,構建具有10億個連接的人工神經網絡,並用一週時間讓這個"新生大腦"接觸YouTube視頻片段,而未以任何方式貼標籤。