Meeting the challenge of artificial intelligence with human intelligence


A lot of big claims are made about the transformative power of artificial intelligence. But it is worth listening to some of the big warnings too. Last month, Kate Crawford, principal researcher at Microsoft Research, warned that the increasing power of AI could result in a “fascist’s dream” if the technology were misused by authoritarian regimes.

“Just as we are seeing a step function increase in the speed of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” Ms Crawford told the SXSW tech conference.

The creation of vast data registries, the targeting of population groups, the abuse of predictive policing and the manipulation of political beliefs could all be enabled by AI, she said.


Ms Crawford is not alone in expressing concern about the misapplication of powerful new technologies, sometimes in unintentional ways. Sir Mark Walport, the British government’s chief scientific adviser, warned that the unthinking use of AI in areas such as medicine and the law, which involve nuanced human judgment, could produce damaging results and erode public trust in the technology.

Although AI had the potential to enhance human judgment, it also risked baking in harmful prejudices and giving them a spurious sense of objectivity. “Machine learning could internalise all the implicit biases contained within the history of sentencing or medical treatment — and externalise these through their algorithms,” he wrote in an article in Wired.
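To make that mechanism concrete, here is a minimal sketch, assuming an entirely fabricated dataset: a classifier is trained on historical “sentencing” records in which a protected attribute correlates with harsher outcomes, and it learns to reproduce that disparity. The data, the bias strength and the scenario below are all invented for illustration.

```python
# A minimal sketch (not from the article) of how a model can "internalise"
# historical bias: we fabricate sentencing records in which a protected
# attribute correlates with harsher outcomes, train a classifier on them,
# and observe that the learned model reproduces the disparity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

severity = rng.uniform(0, 1, n)   # legitimate factor: offence severity
group = rng.integers(0, 2, n)     # protected attribute (0 or 1)

# Hypothetical biased history: group 1 was sentenced harshly more often
# than severity alone would justify.
p_harsh = 0.2 + 0.5 * severity + 0.2 * group
harsh = rng.uniform(0, 1, n) < p_harsh

model = LogisticRegression().fit(np.column_stack([severity, group]), harsh)

# Identical severity, different group: the model has baked the bias in.
same_case = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_case)[:, 1])  # higher "harsh" probability for group 1
```

The point of the sketch is that nothing in the training step distinguishes a legitimate signal from an inherited prejudice; the model simply fits whatever the history contains.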

As ever, the dangers are a lot easier to identify than they are to fix. Unscrupulous regimes are never going to observe regulations constraining the use of AI. But even in functioning law-based democracies it will be tricky to frame an appropriate response. Maximising the positive contributions that AI can make while minimising its harmful consequences will be one of the toughest public policy challenges of our times.

For starters, the technology is difficult to understand and its use is often surreptitious. It is also becoming increasingly hard to find independent experts who have not been captured by the industry or otherwise conflicted.

Driven by something approaching a commercial arms race in the field, the big tech companies have been snapping up many of the smartest academic experts in AI. Much cutting-edge research is therefore in the private rather than public domain.

To their credit, some leading tech companies have acknowledged the need for transparency, albeit belatedly. There has been a flurry of initiatives to encourage more policy research and public debate about AI.

Elon Musk, founder of Tesla Motors, has helped set up OpenAI, a non-profit research company pursuing safe ways to develop AI.

Amazon, Facebook, Google DeepMind, IBM, Microsoft and Apple have also come together in Partnership on AI to initiate more public discussion about the real-world applications of the technology.

Mustafa Suleyman, co-founder of Google DeepMind and a co-chair of the Partnership, says AI can play a transformative role in addressing some of the biggest challenges of our age. But he accepts that the rate of progress in AI is outstripping our collective ability to understand and control these systems. Leading AI companies must therefore become far more innovative and proactive in holding themselves to account. To that end, the London-based company is experimenting with verifiable data audits and will soon announce the composition of an ethics board to scrutinise all the company’s activities.
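The article does not describe how DeepMind’s verifiable data audits are built. The sketch below illustrates one standard construction for a tamper-evident audit trail, an append-only log in which each entry is chained to its predecessor by a hash, so any retrospective edit to the record is detectable; the record fields are hypothetical.

```python
# A minimal sketch of one way a "verifiable data audit" can work: an
# append-only, hash-chained log of data accesses. Illustrative only,
# not DeepMind's actual system.
import hashlib
import json

def append_entry(log, record):
    """Append a record, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    log.append({"record": record, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log):
    """Recompute every hash; return False if any entry was altered."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"record": entry["record"], "prev": prev_hash},
                             sort_keys=True)
        expected = hashlib.sha256(payload.encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"who": "model-trainer", "data": "patient-records", "op": "read"})
append_entry(log, {"who": "auditor", "data": "patient-records", "op": "export"})
print(verify(log))                  # True
log[0]["record"]["op"] = "write"    # tamper with history
print(verify(log))                  # False
```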

But Mr Suleyman suggests our societies will also have to devise better frameworks for directing these technologies for the collective good. “We have to be able to control these systems so they do what we want when we want and they don’t run ahead of us,” he says in an interview for the FT Tech Tonic podcast.

Some observers say the best way to achieve that is to adapt our legal regimes to ensure that AI systems are “explainable” to the public. That sounds simple in principle, but may prove fiendishly complex in practice.

Mireille Hildebrandt, professor of law and technology at the Free University of Brussels, says one of the dangers of AI is that we become overly reliant on “mindless minds” that we do not fully comprehend. She argues that the purpose and effect of these algorithms must therefore be testable and contestable in a courtroom. “If you cannot meaningfully explain your system’s decisions then you cannot make them,” she says.
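In the simplest case, making a decision “meaningfully explainable” might mean reporting each factor’s contribution alongside the outcome. The sketch below does this for a hypothetical linear scoring model, with invented feature names and weights; real systems are far less transparent, which is precisely why the practice is so complex.

```python
# A minimal sketch of an "explainable" decision, assuming a linear model:
# each feature's contribution to the score is reported with the outcome,
# producing something that can be tested and contested. The feature names
# and weights are hypothetical.
import numpy as np

feature_names = ["prior_offences", "offence_severity", "age"]
weights = np.array([0.8, 1.2, -0.05])   # hypothetical learned weights
bias = -1.0

def explain_decision(x):
    contributions = weights * x
    score = contributions.sum() + bias
    decision = "harsh" if score > 0 else "lenient"
    report = {name: round(c, 3) for name, c in zip(feature_names, contributions)}
    return decision, score, report

decision, score, report = explain_decision(np.array([2.0, 0.7, 30.0]))
print(decision, round(score, 3))
print(report)   # per-feature contributions a court could examine
```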

We are going to need a lot more human intelligence to address the challenges of AI.