Learning deep architectures for AI
Yoshua Bengio
Book · English · 2009
Published | Hanover, Mass. : NOW, cop. 2009
Extent | IX, 131 p. : ill., fig.
Notes | Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This monograph discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those that exploit unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, as building blocks for constructing deeper models such as Deep Belief Networks.
Subjects | machine learning, artificial intelligence, learning algorithms, deep architectures, computer science
Dewey |
ISBN | 9781601982940
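The notes above describe building Deep Belief Networks by stacking unsupervised single-layer models such as Restricted Boltzmann Machines. The following is a minimal sketch of that greedy layer-wise idea, assuming binary units and one-step contrastive divergence (CD-1) with NumPy; the names `RBM` and `pretrain_dbn` and all hyperparameters are illustrative assumptions, not the book's reference implementation.

```python
# Minimal sketch (not from the book): train one RBM per layer with CD-1,
# then feed its hidden activations to the next RBM, as in DBN pretraining.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary-binary Restricted Boltzmann Machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: sample hidden units given the data.
        h0_prob = self.hidden_probs(v0)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one step of Gibbs sampling (CD-1).
        v1_prob = self.visible_probs(h0)
        h1_prob = self.hidden_probs(v1_prob)
        # Approximate gradient update of the log-likelihood.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
        self.b_v += self.lr * (v0 - v1_prob).mean(axis=0)
        self.b_h += self.lr * (h0_prob - h1_prob).mean(axis=0)

def pretrain_dbn(data, layer_sizes, epochs=10):
    """Greedy layer-wise pretraining: stack RBMs bottom-up."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        # The hidden representation becomes the next layer's "data".
        x = rbm.hidden_probs(x)
    return rbms

if __name__ == "__main__":
    toy = (rng.random((200, 20)) > 0.5).astype(float)  # toy binary data
    dbn = pretrain_dbn(toy, layer_sizes=[16, 8])
    print([r.W.shape for r in dbn])  # [(20, 16), (16, 8)]
```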