Standard Model of Artificial Intelligence: “AI has focused on the study and construction of agents that do the right thing. What counts as the right thing is defined by the objective that we provide to the agent.”
The problem of achieving agreement between our true preferences and the objective we put into the machine is called the [[value alignment problem]]: the values or objectives put into the machine must be aligned with those of the human.
Roughly speaking, a problem is called [[intractable]] if the time required to solve instances of the problem grows exponentially with the size of the instances.
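To make the growth concrete, here is a rough illustration (my own example, not from the source): deciding satisfiability of a Boolean formula over n variables by brute force means trying up to 2^n truth assignments, so every added variable doubles the worst-case running time.

```python
from itertools import product
import time


def brute_force_sat(clauses, n):
    """Check a CNF formula by enumerating all 2**n assignments.

    Each clause is a list of ints in DIMACS style: +k means variable k
    is true, -k means variable k is false (variables numbered 1..n).
    """
    for assignment in product([False, True], repeat=n):
        # The formula holds if every clause has at least one true literal.
        if all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return True
    return False


# An unsatisfiable formula (x1 and not-x1) forces the worst case: all
# 2**n assignments are examined before the search gives up.
for n in (12, 16, 20):
    start = time.perf_counter()
    brute_force_sat([[1], [-1]], n)
    print(f"n={n:2d}: {2**n:>9,} assignments tried "
          f"in {time.perf_counter() - start:.3f}s")
```

Going from n = 20 to n = 40 would multiply the worst-case work by about a million (2^20 ≈ 10^6), which is the "combinatorial explosion" in miniature.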
Economists, with some exceptions, did not address the third question listed above: how to make rational decisions when payoffs from actions are not immediate but instead result from several actions taken in sequence. This topic was pursued in the field of [[operations research]]; a toy illustration follows below.
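As an illustration of that kind of problem, here is a minimal sketch (a toy example of my own, not from the source) of value iteration on a four-state Markov decision process, the formalism that grew out of Richard Bellman's dynamic-programming work in operations research. The only reward arrives at the goal state, so earlier actions must be valued for their delayed payoff.

```python
GAMMA = 0.9            # discount factor on future rewards
STATES = (0, 1, 2, 3)  # state 3 is the goal


def step(state, action):
    """Deterministic toy dynamics: 'right' moves toward the goal,
    'stay' does nothing; entering the goal pays +1, all else pays 0."""
    nxt = state + 1 if action == "right" and state < 3 else state
    reward = 1.0 if nxt == 3 and state != 3 else 0.0
    return nxt, reward


# Bellman backup: V(s) <- max over actions of r(s, a) + GAMMA * V(s')
V = {s: 0.0 for s in STATES}
for _ in range(50):
    V = {s: max(r + GAMMA * V[nxt]
                for nxt, r in (step(s, a) for a in ("right", "stay")))
         for s in STATES}

print(V)  # approximately {0: 0.81, 1: 0.9, 2: 1.0, 3: 0.0}: the eventual
          # payoff propagates backward, discounted, through the sequence.
```

A purely myopic agent sees reward 0 for every action in states 0 and 1; only the backed-up values reveal that moving right is worthwhile.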
[[brain–machine interfaces]] (Lebedev and Nicolelis, 2006): A remarkable finding from this work is that the brain is able to adjust itself to interface successfully with an external device, treating it in effect like another sensory organ or limb.
Doug Engelbart, one of the pioneers of [[human-computer interaction]], championed the idea of intelligence augmentation (IA rather than AI). He believed that computers should augment human abilities rather than automate human tasks away. [[Augmentation vs Automation]]
The Lighthill Report (1973) ended government support for AI research in the United Kingdom at all but two universities, ostensibly because the field had failed to come to grips with the “combinatorial explosion.”