
The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore’s law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation. There were many examples of AI researchers’ belated learning of this bitter lesson, and it is instructive to review some of the most prominent.
In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess. When a simpler, search-based approach with special hardware and software proved vastly more effective, these human-knowledge-based chess researchers were not good losers. They said that “brute force” search may have won this time, but it was not a general strategy, and anyway it was not how people played chess. These researchers wanted methods based on human input to win and were disappointed when they did not.
A similar pattern of research progress was seen in computer Go, only delayed by a further 20 years. Enormous initial efforts went into avoiding search by taking advantage of human knowledge, or of the special features of the game, but all those efforts proved irrelevant, or worse, once search was applied effectively at scale. Also important was the use of learning by self play to learn a value function (as it was in many other games and even in chess, although learning did not play a big role in the 1997 program that first beat a world champion). Learning by self play, and learning in general, is like search in that it enables massive computation to be brought to bear. Search and learning are the two most important classes of techniques for utilizing massive amounts of computation in AI research. In computer Go, as in computer chess, researchers’ initial effort was directed towards utilizing human understanding (so that less search was needed) and only much later was much greater success had by embracing search and learning.
In speech recognition, there was an early competition, sponsored by DARPA, in the 1970s. Entrants included a host of special methods that took advantage of human knowledge—knowledge of words, of phonemes, of the human vocal tract, etc. On the other side were newer methods that were more statistical in nature and did much more computation, based on hidden Markov models (HMMs). Again, the statistical methods won out over the human-knowledge-based methods. This led to a major change in all of natural language processing, gradually over decades, where statistics and computation came to dominate the field. The recent rise of deep learning in speech recognition is the most recent step in this consistent direction. Deep learning methods rely even less on human knowledge, and use even more computation, together with learning on huge training sets, to produce dramatically better speech recognition systems. As in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked—they tried to put that knowledge in their systems—but it proved ultimately counterproductive, and a colossal waste of researchers’ time, when, through Moore’s law, massive computation became available and a means was found to put it to good use.
In computer vision, there has been a similar pattern. Early methods conceived of vision as searching for edges, or generalized cylinders, or in terms of SIFT features. But today all this is discarded. Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.
This is a big lesson. As a field, we still have not thoroughly learned it, as we are continuing to make the same kind of mistakes. To see this, and to effectively resist it, we have to understand the appeal of these mistakes. We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.
One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.
The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.
Repository: https://github.com/owkin/FLamby
```bash
git clone https://github.com/owkin/FLamby.git
```
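After cloning, the package is typically installed in editable mode; the exact environment setup and dataset-specific extras are documented in the FLamby README, so the commands below are only the generic version:

```bash
cd FLamby
pip install -e .  # see the FLamby README for the recommended environment and per-dataset extras
```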
Fed-TCGA-BRCA
https://owkin.github.io/FLamby/fed_tcga_brca.html
```python
import torch
```
Import several macros, datasets and metrics.
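The snippet above is truncated; besides `torch`, the quickstart pulls in the dataset-specific constants, model, loss and metric. A plausible version of those imports, assuming the names used in the FLamby documentation for `fed_tcga_brca` (verify against the page linked above):

```python
# Assumed FLamby exports for the Fed-TCGA-BRCA dataset; check the docs for the exact names
from flamby.datasets.fed_tcga_brca import (
    BATCH_SIZE,    # default batch size for this dataset
    LR,            # default learning rate
    NUM_CLIENTS,   # number of centers in the federated split
    Baseline,      # baseline model class
    BaselineLoss,  # baseline loss function
    metric,        # evaluation metric (concordance index)
)
from flamby.datasets.fed_tcga_brca import FedTcgaBrca as FedDataset
```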
```python
# Instantiation of local train set (and data loader), baseline loss function, baseline model, default optimizer
```
In this script, the `pooled` parameter is set to `False` when creating the `FedDataset` instances. This indicates that the dataset is not pooled, meaning that the data is kept separate for each client or center. Each client or center has its own local dataset, which is a common setup in federated learning to simulate real-world scenarios where data is distributed across different locations or devices.
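A minimal sketch of that instantiation, continuing from the imports above; the argument names `center`, `train` and `pooled` follow FLamby's dataset API, and plain Adam stands in for whatever default optimizer the script actually uses:

```python
from torch.utils.data import DataLoader

# Local training data for a single center (center 0), kept un-pooled
train_dataset = FedDataset(center=0, train=True, pooled=False)
train_dataloader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)

lossfunc = BaselineLoss()  # baseline loss for Fed-TCGA-BRCA (a survival task)
model = Baseline()         # baseline model
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
```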
```python
# Traditional PyTorch training loop
```
The standard training procedure.
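Continuing from the objects instantiated above, the loop itself is ordinary PyTorch; `NUM_EPOCHS` is a placeholder rather than the script's actual setting:

```python
NUM_EPOCHS = 30  # placeholder; the script uses the dataset's default number of epochs

for epoch in range(NUM_EPOCHS):
    model.train()
    for X, y in train_dataloader:
        optimizer.zero_grad()
        loss = lossfunc(model(X), y)
        loss.backward()
        optimizer.step()
```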
```python
# Evaluation
```
The evaluation metric used is `lifelines.utils.concordance_index`, and the value returned is the `c_index`.
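A sketch of how that C-index can be computed directly with `lifelines.utils.concordance_index`. Fed-TCGA-BRCA is a survival task, so each label carries an event indicator and a time-to-event; the column order assumed below is a guess, and in practice the `metric` exported by FLamby wraps this computation:

```python
import numpy as np
from lifelines.utils import concordance_index

test_dataloader = DataLoader(FedDataset(center=0, train=False, pooled=False), batch_size=BATCH_SIZE)

model.eval()
scores, times, events = [], [], []
with torch.no_grad():
    for X, y in test_dataloader:
        scores.append(model(X).numpy().ravel())
        events.append(y[:, 0].numpy())  # assumed layout: column 0 = event indicator
        times.append(y[:, 1].numpy())   # assumed layout: column 1 = time-to-event

# Higher predicted risk should correspond to shorter survival, hence the sign flip
c_index = concordance_index(
    np.concatenate(times),
    -np.concatenate(scores),
    np.concatenate(events),
)
```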
```python
import torch
```
```python
# We loop on all the clients of the distributed dataset and instantiate associated data loaders
```
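A plausible version of that loop, reusing `FedDataset`, `NUM_CLIENTS` and `BATCH_SIZE` from the imports above:

```python
from torch.utils.data import DataLoader

# One training DataLoader per center, each seeing only its own local data
training_dataloaders = [
    DataLoader(
        FedDataset(center=i, train=True, pooled=False),
        batch_size=BATCH_SIZE,
        shuffle=True,
    )
    for i in range(NUM_CLIENTS)
]
```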
```python
# Federated Learning loop
```
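FLamby provides its own strategy classes for this step; rather than guessing their exact signatures, here is a framework-agnostic FedAvg-style loop over the dataloaders above, just to make the structure concrete (equal client weights keep the sketch short, whereas true FedAvg weights clients by dataset size):

```python
import copy

def local_update(global_model, dataloader, n_epochs=1):
    """Train a copy of the global model on one client's data and return its weights."""
    local_model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(local_model.parameters(), lr=LR)
    lossfunc = BaselineLoss()
    local_model.train()
    for _ in range(n_epochs):
        for X, y in dataloader:
            opt.zero_grad()
            loss = lossfunc(local_model(X), y)
            loss.backward()
            opt.step()
    return local_model.state_dict()

global_model = Baseline()
NUM_ROUNDS = 20  # placeholder

for _ in range(NUM_ROUNDS):
    client_states = [local_update(global_model, dl) for dl in training_dataloaders]
    # FedAvg aggregation: average every parameter across clients (equal weights)
    avg_state = {
        key: torch.stack([state[key].float() for state in client_states]).mean(dim=0)
        for key in client_states[0]
    }
    global_model.load_state_dict(avg_state)
```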
Federated Learning (FL) is an emerging distributed machine learning approach that has attracted a great deal of research attention. To understand the related work systematically, it helps to follow the structured reading map below, which builds up an understanding of its principles, applications and challenges step by step.
These papers introduce the basic concepts, goals and classic algorithms of federated learning, and are the starting point for getting to know the field.
Konečný, J., et al. (2016). “Federated Learning: Strategies for Improving Communication Efficiency” arXiv
McMahan, H. B., et al. (2017). “Communication-Efficient Learning of Deep Networks from Decentralized Data” arXiv
Yang, Q., Liu, Y., Cheng, Y., Kang, Y., Chen, T., & Yu, H. (2019). “Federated Learning” ACM Transactions on Intelligent Systems and Technology (TIST)
An important goal of federated learning is to guarantee data privacy and security; research in this area provides the theoretical foundations and technical tools for doing so.
Bonawitz, K., et al. (2017). “Practical Secure Aggregation for Federated Learning on User-Held Data” arXiv
Geyer, R. C., Klein, T., & Nabi, M. (2017). “Differentially Private Federated Learning: A Client Level Perspective” arXiv
Zhao, Y., et al. (2018). “Federated Learning with Non-IID Data” arXiv
Communication and computation efficiency are key research directions in federated learning; many works try to reduce the resources consumed during model training through a variety of methods.
Li, T., et al. (2020). “Federated Optimization in Heterogeneous Networks” arXiv
Kairouz, P., et al. (2021). “Advances and Open Problems in Federated Learning” arXiv
Chen, M., et al. (2020). “Joint Learning and Communication Optimization for Federated Learning over Wireless Networks” arXiv
To better understand how federated learning is applied in practice and how the systems are architected, open-source frameworks and real implementation case studies are useful references.
Google AI. “Federated Learning for Mobile Keyboard Prediction” Blog Post
TensorFlow Federated (TFF): GitHub
Federated learning has broad applications across many industries; knowing these applications helps to appreciate its practical significance.
Rieke, N., et al. (2020). “The Future of Digital Health with Federated Learning” arXiv
Hard, A., et al. (2019). “Federated Learning for Mobile Keyboard Prediction” arXiv
Looking ahead, federated learning still faces many challenges, such as system heterogeneity and the balance between model performance and privacy protection.
Following this map, you can systematically cover the key areas of federated learning and then work your way into the solutions and research frontiers of each specific problem.
ProxiML -- Building Machine Learning Classifiers for Photonic Quantum Computing
https://dl.acm.org/doi/pdf/10.1145/3620666.3651367
Qumodes are a different way of carrying and manipulating quantum information than qubits.
This is just like binary encoding in a computer: with a total of n bits of memory, there can only be $2^n$ memory states, which follows directly from each bit being either 0 or 1. Now let us look at qubits and qumodes.
If we go to qubits, not much in this picture changes. While a qubit has infinitely many possible states, it turns out that you should look at what is called the basis of the state space, which loosely speaking means that you should find the minimal number of states in terms of which you can express every other state. For a qubit, this turns out to be two, for example the up state and the down state. To use the language from above, each qubit therefore has 2 ‘possible assignments’, and you have n of them, so by the arguments presented above, there are $2^n$ unique states. Because we are doing quantum mechanics, superpositions of these states are also allowed, but that doesn’t change the picture: the dimensionality of the system is still $2^n$.
When qubits are realized by the quantum states of single photons, the storage dimensionality is still limited to $2^n$ for $n$ qubits.
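Writing the counting argument out in symbols: the state space of $n$ qubits is the tensor product of $n$ two-dimensional spaces, so

$$\dim\left(\mathcal{H}_{n\,\text{qubits}}\right) = \dim\left((\mathbb{C}^{2})^{\otimes n}\right) = 2^{n},$$

whereas a single qumode already lives in an infinite-dimensional space spanned by the Fock states $|0\rangle, |1\rangle, |2\rangle, \dots$, which is what makes qumodes a genuinely different carrier of quantum information.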
Deep-Reinforcement-Learning-With-Python
In supervised learning, the machine learns from training data. The training data consists of a labeled pair of inputs and outputs. So, we train the model (agent) using the training data in such a way that the model can generalize its learning to new unseen data. It is called supervised learning because the training data acts as a supervisor, since it has a labeled pair of inputs and outputs, and it guides the model in learning the given task.
Quantitative response
Predict a quantitative variable from a set of features.
Categorical response
Predict a categorical variable.
Similar to supervised learning, in unsupervised learning, we train the model (agent) based on the training data. But in the case of unsupervised learning, the training data does not contain any labels; that is, it consists of only inputs and not outputs. The goal of unsupervised learning is to determine hidden patterns in the input. There is a common misconception that RL is a kind of unsupervised learning, but it is not. In unsupervised learning, the model learns the hidden structure, whereas, in RL, the model learns by maximizing the reward.
The set of all possible actions in the environment is called the action space. Thus, for this grid world environment, the action space will be [up, down, left, right]. We can categorize action spaces into two types: a discrete action space, where the set of possible actions is finite (as in the grid world), and a continuous action space, where the actions take continuous (real) values.
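A toy illustration of the distinction (hypothetical values, not taken from the book's code):

```python
# Discrete action space: a finite set of actions the agent can enumerate,
# as in the grid world environment above
discrete_action_space = ["up", "down", "left", "right"]

# Continuous action space: actions take real values inside a range and cannot be
# enumerated, e.g. a steering angle for the car racing example used later
steering_angle_range = (-30.0, 30.0)  # any real number in this interval is a valid action
```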
A policy defines the agent’s behavior in an environment. The policy tells the agent what action to perform in each state.
Over a series of iterations, the agent will learn a good policy that gives a positive reward.
The optimal policy tells the agent to perform the correct action in each state so that the agent can receive a good reward.
Deterministic Policy
A deterministic policy tells the agent to perform one particular action in a state. Thus, the deterministic policy maps each state to one particular action.
Stochastic Policy
A stochastic policy maps each state to a probability distribution over the action space.
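A small sketch contrasting the two kinds of policy, using the grid-world actions above (the states and probabilities are made up for illustration):

```python
import random

# Deterministic policy: each state maps to exactly one action
deterministic_policy = {"s0": "right", "s1": "down"}

# Stochastic policy: each state maps to a probability distribution over the action space
stochastic_policy = {
    "s0": {"up": 0.1, "down": 0.1, "left": 0.1, "right": 0.7},
    "s1": {"up": 0.2, "down": 0.6, "left": 0.1, "right": 0.1},
}

def sample_action(policy, state):
    """Draw an action from a stochastic policy for the given state."""
    dist = policy[state]
    return random.choices(list(dist.keys()), weights=list(dist.values()), k=1)[0]
```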
The agent interacts with the environment by performing some action, starting from the initial state and reaching the final state. This agent-environment interaction from the initial state to the final state is called an episode. For instance, in the car racing video game, the agent plays the game by starting from the initial state (the starting point of the race) and reaching the final state (the end point of the race). This is considered an episode. An episode is also often called a trajectory (the path taken by the agent).
The horizon is the time step up to which the agent interacts with the environment. We can classify the horizon into two types: a finite horizon, where the agent-environment interaction ends at some final time step, and an infinite horizon, where it does not end.
Return is the sum of rewards received by the agent in an episode.
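Written as a formula, for an episode (trajectory) $\tau$ with rewards $r_1, r_2, \dots, r_T$:

$$R(\tau) = r_1 + r_2 + \cdots + r_T = \sum_{t=1}^{T} r_t$$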
The value function, or the value of a state, is the expected return that the agent would obtain starting from the state $s$ and following the policy $\pi$.
The Q function, or the value of a state-action pair, is the expected return that the agent would obtain starting from the state $s$, performing the action $a$, and then following the policy $\pi$.
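Using the return $R(\tau)$ defined above, the two quantities can be written compactly as

$$V^{\pi}(s) = \mathbb{E}_{\pi}\!\left[R(\tau) \mid s_0 = s\right], \qquad Q^{\pi}(s, a) = \mathbb{E}_{\pi}\!\left[R(\tau) \mid s_0 = s,\ a_0 = a\right]$$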