Sam Altman wrote an open letter titled "Ten Years."
The letter was posted on OpenAI's homepage, accompanied by a video.
This post is a translation; the original English text is appended at the end.
The Past Ten Years
Reflections on a decade of breakthroughs, lessons learned, and the path toward AGI that benefits all of humanity.
OpenAI has achieved far more than I ever dared to imagine. From the start, we set out to do something crazy, unlikely to succeed, and unprecedented. Starting from a deeply uncertain position, and against every reasonable assessment of our odds, with sustained hard work it now looks like we genuinely have a shot at accomplishing our mission.
Ten years ago today, we announced this effort to the world, though we didn't officially get started until a few weeks later, in early January 2016.
Ten years is a long time in one sense, but measured against how long it usually takes society to change, it is not long at all. Although daily life doesn't feel all that different from a decade ago, the space of possibilities in front of us is nothing like what it was when we were 15 nerds sitting around trying to figure out how to make progress.
Looking back at photos from the early days, I am struck first by how young everyone looks, and then by how unreasonably optimistic everyone looks, and how happy. It was a crazy, fun time: although the outside world deeply misunderstood us, we had strong conviction, a sense that the work mattered so much it was worth pushing hard even with a small chance of success, extremely talented people, and a sharp focus.
Little by little, through a few small wins (and many more losses), we built an understanding of what was happening. It was hard back then to figure out exactly what to work on, but we built a culture extraordinarily well suited to discovery. Deep learning was clearly a great technology, but developing it in the lab alone, without gaining experience operating it in the real world, didn't feel right. We did far too many things in those years to recount here (I hope someone writes a complete history someday), but we always kept one spirit: focus on the next obstacle in front of us, whether that was where the research could take us, how to get money for bigger computers, or something else. We pioneered technical approaches to making AI safe and robust in practice, and that DNA carries on to this day.
In 2017, we had several foundational results: the Dota 1v1 breakthrough, which pushed reinforcement learning to new levels of scale; the unsupervised sentiment neuron, which showed us clearly that language models really were learning semantics, not just syntax; and our result on reinforcement learning from human preferences (RLHF), which demonstrated a rudimentary path toward aligning AI with human values. At that point the innovation was far from done, but we knew we needed to scale these results up with massive amounts of compute.
We pressed on and the technology kept getting better. Three years ago we launched ChatGPT. The world started paying attention, and when we launched GPT-4, attention turned into shock: all of a sudden, AGI (artificial general intelligence) was no longer a crazy idea. The past three years have been extremely intense, full of stress and heavy responsibility. This technology has been integrated into the world at a scale and speed no technology ever has before. That required extraordinarily difficult execution, a new muscle we had to build immediately. Going from nothing to a large company in such a short time was not easy, and it meant making hundreds of decisions a week. I'm proud of how many of them the team got right, and the ones we got wrong are mostly my fault.
We have had to make entirely new kinds of decisions. For example, as we wrestled with how to make AI maximally beneficial to the world, we developed a strategy of iterative deployment: putting early versions of the technology into the real world so that people can build intuitions and so that society and the technology can co-evolve. It was quite controversial at the time, but I think it was one of the best decisions we have ever made, and it has since become the industry standard.
Ten years into OpenAI, we have an AI that does better than most people at the most difficult intellectual competitions our best people take part in.
The world is already using this technology to do extraordinary things, and we expect much more even in the next year. At the same time, the world has so far done a good job of managing the potential downsides, and we need to keep working to maintain that.
I have never been more optimistic about our research and product roadmaps, or about our overall line of sight toward our mission. I believe that within another ten years we are almost certain to build superintelligence. I expect the future to feel strange: in one sense, daily life and the things we care most about will change very little, and I'm sure we will stay far more focused on what other people are doing than on what machines are doing. In another sense, I don't think we can easily imagine today what the people of 2035 will be able to do.
I am grateful to the people and companies who trusted us and used our products to do great things. Without them, we would just be a technology in a lab. Our users and customers have in many cases placed an early, unreasonably high-conviction bet on us, and our work would not have reached this level without them.
Our mission is to ensure that AGI benefits all of humanity. There is still a lot of work ahead of us, but I am truly proud of the trajectory the team has put us on. People are already doing tremendous things with this technology today, and we know there is much more to come over the next few years.
The original English text follows.
Ten years
Reflections on a decade of breakthroughs, learnings, and the path toward AGI that benefits all of humanity.
OpenAI has achieved more than I dared to dream possible; we set out to do something crazy, unlikely, and unprecedented. From a deeply uncertain start and against all reasonable odds, with continued hard work it now looks like we have a shot to succeed at our mission.
We announced our effort to the world ten years ago today, though we didn’t officially get started for another few weeks, in early January of 2016.
Ten years is a very long time in some sense, but in terms of how long it usually takes the arc of society to bend, it is not very long at all. Although daily life doesn’t feel all that different than it did a decade ago, the possibility space in front of us all today feels very different than what it felt like when we were 15 nerds sitting around trying to figure out how to make progress.
When I look back at the photos from the early days, I am first struck by how young everyone looks. But then I’m struck by how unreasonably optimistic everyone looks, and how happy. It was a crazy fun time: although we were extremely misunderstood, we had a deeply held conviction, a sense that it mattered so much it was worth working very hard even with a small chance of success, very talented people, and a sharp focus.
Little by little, we built an understanding of what was going on as we had a few wins (and many losses). In those days it was difficult to figure out what specifically to work on, but we built an incredible culture for enabling discovery. Deep learning was clearly a great technology, but developing it without gaining experience operating it in the real world didn’t seem quite right. I’ll skip the stories of all the things we did (I hope someone will write a history of them someday) but we had a great spirit of always just figuring out the next obstacle in front of us: where the research could take us next, or how to get money for bigger computers, or whatever else. We pioneered technical work for making AI safe and robust in a practical way, and that DNA carries on to this day.
In 2017, we had several foundational results: our Dota 1v1 results, where we pushed reinforcement learning to new levels of scale. The unsupervised sentiment neuron, where we saw a language model undeniably learn semantics rather than just syntax. And we had our reinforcement learning from human preferences result, showing a rudimentary path to aligning an AI with human values. At this point, the innovation was far from done, but we knew we needed to scale up each of these results with massive computational power.
We pressed on and made the technology better, and we launched ChatGPT three years ago. The world took notice, and then much more when we launched GPT-4; all of a sudden, AGI was no longer a crazy thing to consider. These last three years have been extremely intense and full of stress and heavy responsibility; this technology has gotten integrated into the world at a scale and speed that no technology ever has before. This required extremely difficult execution that we had to immediately build a new muscle for. Going from nothing to a massive company in this period of time was not easy and required that we make hundreds of decisions a week. I’m proud of how many of those the team has gotten right, and the ones we’ve gotten wrong are mostly my fault.
We have had to make new kinds of decisions; for example, as we wrestled with the question of how to make AI maximally beneficial to the world, we developed a strategy of iterative deployment, where we successfully put early versions of the technology into the world, so that people can form intuitions and society and the technology can co-evolve. This was quite controversial at the time, but I think it has been one of our best decisions ever and become the industry standard.
Ten years into OpenAI, we have an AI that can do better than most of our smartest people at our most difficult intellectual competitions.
The world has been able to use this technology to do extraordinary things, and we expect much more extraordinary things in even the next year. The world has also done a good job so far of mitigating the potential downsides, and we need to work to keep doing that.
I have never felt more optimistic about our research and product roadmaps, and overall line of sight towards our mission. In ten more years, I believe we are almost certain to build superintelligence. I expect the future to feel weird; in some sense, daily life and the things we care most about will change very little, and I’m sure we will continue to be much more focused on what other people do than we will be on what machines do. In some other sense, the people of 2035 will be capable of doing things that I just don’t think we can easily imagine right now.
I am grateful to the people and companies who put their trust in us and use our products to do great things. Without that, we would just be a technology in a lab; our users and customers have taken what is in many cases an early and unreasonably high-conviction bet on us, and our work wouldn’t have gotten to this level without them.
Our mission is to ensure that AGI benefits all of humanity. We still have a lot of work in front of us, but I’m really proud of the trajectory the team has us on. We are seeing tremendous benefits in what people are doing with the technology already today, and we know there is much more coming over the next couple of years.