What's Wrong with Artificial Intelligence

Date: November 12, 2001


Roger’s Takeaway

In 2001, Rich Sutton argued that AI should be scalable: rather than relying on pre-built knowledge graphs, a system should maintain its overall knowledge architecture on its own. Even today (2025), AI in practice mostly explores from past data. Robotics may change this, because it would let an AI gather data from the real world directly.

Highlights

1.

The turning over of responsibility for the decision-making and organization of the AI system to the AI system itself. It has become an accepted, indeed lauded, form of success in the field to exhibit a complex system that works well primarily because of some insight the designers have had into solving a particular problem.

2.

But whatever the merits of this approach as engineering, it is not really addressing the objective of AI. For AI it is not enough merely to achieve a better system; it matters how the system was made. The reason it matters can ultimately be considered a practical one, one of scaling.

3.

I think we would all agree that, as much as possible, we would like the AI system to somehow maintain its own knowledge, thus relieving us of a major burden. But it is hard to see how this might be done; easier to simply fix the knowledge ourselves. This is where we are today.


Rich Sutton

November 12, 2001

I hold that AI has gone astray by neglecting its essential objective --- the turning over of responsibility for the decision-making and organization of the AI system to the AI system itself. It has become an accepted, indeed lauded, form of success in the field to exhibit a complex system that works well primarily because of some insight the designers have had into solving a particular problem. This is part of an anti-theoretic, or "engineering stance", that considers itself open to any way of solving a problem. But whatever the merits of this approach as engineering, it is not really addressing the objective of AI. For AI it is not enough merely to achieve a better system; it matters how the system was made. The reason it matters can ultimately be considered a practical one, one of scaling. An AI system too reliant on manual tuning, for example, will not be able to scale past what can be held in the heads of a few programmers. This, it seems to me, is essentially the situation we are in today in AI. Our AI systems are limited because we have failed to turn over responsibility for them to them.

Please forgive me for this which must seem a rather broad and vague criticism of AI. One way to proceed would be to detail the criticism with regard to more specific subfields or subparts of AI. But rather than narrowing the scope, let us first try to go the other way. Let us try to talk in general about the longer-term goals of AI which we can share and agree on. In broadest outlines, I think we all envision systems which can ultimately incorporate large amounts of world knowledge. This means knowing things like how to move around, what a bagel looks like, that people have feet, etc. And knowing these things just means that they can be combined flexibly, in a variety of combinations, to achieve whatever are the goals of the AI. If hungry, for example, perhaps the AI can combine its bagel recognizer with its movement knowledge, in some sense, so as to approach and consume the bagel. This is a cartoon view of AI -- as knowledge plus its flexible combination -- but it suffices as a good place to start. Note that it already places us beyond the goals of a pure performance system. We seek knowledge that can be used flexibly, i.e., in several different ways, and at least somewhat independently of its expected initial use.
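Sutton's "knowledge plus its flexible combination" cartoon can be made concrete with a toy sketch. Everything here (`recognize_bagel`, `move_to`, `satisfy_hunger`, the grid world) is a hypothetical illustration, not anything from the essay; the point is only that two pieces of knowledge, each written without this goal in mind, can be recombined to serve it:

```python
# Toy sketch: knowledge as small, independently usable capabilities
# that can be recombined to serve goals they were not built for.

def recognize_bagel(scene):
    """Knowledge: what a bagel looks like (here, a trivial lookup)."""
    return [obj for obj in scene if obj["kind"] == "bagel"]

def move_to(agent_pos, target_pos):
    """Knowledge: how to move around (one grid step toward a target)."""
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    return (agent_pos[0] + (dx > 0) - (dx < 0),
            agent_pos[1] + (dy > 0) - (dy < 0))

def satisfy_hunger(agent_pos, scene):
    """Flexible combination: neither capability was written for this
    goal, but together they let the agent approach a bagel."""
    bagels = recognize_bagel(scene)
    if not bagels:
        return agent_pos, False
    target = bagels[0]["pos"]
    while agent_pos != target:
        agent_pos = move_to(agent_pos, target)
    return agent_pos, True  # arrived: consume the bagel

scene = [{"kind": "lamp", "pos": (5, 5)}, {"kind": "bagel", "pos": (2, 3)}]
pos, ate = satisfy_hunger((0, 0), scene)
print(pos, ate)  # (2, 3) True
```

The design choice worth noticing is that `recognize_bagel` and `move_to` know nothing about hunger; only the combining layer does. That independence from initial use is exactly what distinguishes this from a pure performance system.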

With respect to this cartoon view of AI, my concern is simply with ensuring the correctness of the AI's knowledge. There is a lot of knowledge, and inevitably some of it will be incorrect. Who is responsible for maintaining correctness, people or the machine? I think we would all agree that, as much as possible, we would like the AI system to somehow maintain its own knowledge, thus relieving us of a major burden. But it is hard to see how this might be done; easier to simply fix the knowledge ourselves. This is where we are today.
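The essay leaves open how a system might maintain its own knowledge. One hypothetical mechanism, sketched here under my own assumptions rather than proposed in the text, is to treat each piece of knowledge as a testable prediction and let the system revise its confidence against experience, so that correctness maintenance shifts from people to the machine:

```python
# Hypothetical sketch: knowledge items as predictions the system
# checks against experience, so the machine, not a person, maintains them.

class KnowledgeItem:
    def __init__(self, claim, predict):
        self.claim = claim        # human-readable statement
        self.predict = predict    # function: situation -> predicted outcome
        self.confidence = 0.5     # belief the system maintains itself

    def update(self, situation, observed, lr=0.2):
        """Move confidence toward 1 when the prediction matches
        experience, toward 0 when experience contradicts it."""
        correct = self.predict(situation) == observed
        target = 1.0 if correct else 0.0
        self.confidence += lr * (target - self.confidence)
        return correct

# "People have feet" recast as a prediction about observed people.
item = KnowledgeItem("people have feet", predict=lambda person: "feet")

for observation in ["feet", "feet", "feet"]:
    item.update(situation=None, observed=observation)

print(round(item.confidence, 3))  # rises well above the 0.5 prior
```

Under this framing, no programmer edits the knowledge by hand; the system demotes items that stop predicting well, which is one reading of what "turning over responsibility" could mean in practice.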
