Computer Vision: Models, Learning, and Inference


Simon J. D. Prince
Book · English · 2012 · Electronic books.
Published
Cambridge: Cambridge University Press, 2012.
Extent
1 online resource (582 p.)
Notes
Description based upon print version of record.

Contents (partial):
Cover
Chapter 1 Introduction: Organization of the book; Other books
Part I: Probability
Chapter 2 Introduction to probability: 2.1 Random variables; 2.2 Joint probability; 2.3 Marginalization; 2.4 Conditional probability; 2.5 Bayes' rule; 2.6 Independence; 2.7 Expectation; Discussion; Notes; Problems
Chapter 3 Common probability distributions: 3.1 Bernoulli distribution; 3.2 Beta distribution; 3.3 Categorical distribution; 3.4 Dirichlet distribution; 3.5 Univariate normal distribution; 3.6 Normal-scaled inverse gamma distribution; 3.7 Multivariate normal distribution; 3.8 Normal inverse Wishart distribution; 3.9 Conjugacy; Summary; Notes; Problems
Chapter 4 Fitting probability models: 4.1 Maximum likelihood; 4.2 Maximum a posteriori; 4.3 The Bayesian approach; 4.4 Worked example 1: Univariate normal; 4.4.1 Maximum likelihood estimation; Least squares setting; 4.4.2 Maximum a posteriori estimation; 4.4.3 The Bayesian approach; Predictive density; 4.5 Worked example 2: Categorical distribution; 4.5.1 Maximum likelihood; 4.5.2 Maximum a posteriori; 4.5.3 Bayesian approach; Predictive density; Summary; Notes; Problems
Chapter 5 The normal distribution: 5.1 Types of covariance matrix; 5.2 Decomposition of covariance; 5.3 Linear transformations of variables; 5.4 Marginal distributions; 5.5 Conditional distributions; 5.6 Product of two normals; 5.6.1 Self-conjugacy; 5.7 Change of variable; Summary; Notes; Problems
Part II: Machine learning for machine vision
Chapter 6 Learning and inference in vision: 6.1 Computer vision problems; 6.1.1 Components of the solution; 6.2 Types of model; 6.2.1 Model contingency of world on data (discriminative); 6.2.2 Model contingency of data on world (generative); Summary; 6.3 Example 1: Regression; 6.3.1 Model contingency of world on data (discriminative); 6.3.2 Model contingency of data on world (generative); 6.4 Example 2: Binary classification; 6.4.1 Model contingency of world on data (discriminative); 6.4.2 Model contingency of data on world (generative); Discussion; 6.5 Which type of model should we use?; 6.6 Applications; 6.6.1 Skin detection; 6.6.2 Background subtraction; Summary; Notes; Problems
Chapter 7 Modeling complex data densities: 7.1 Normal classification model; 7.1.1 Deficiencies of the multivariate normal model; 7.2 Hidden variables; 7.3 Expectation maximization; 7.4 Mixture of Gaussians; 7.4.1 Mixture of Gaussians as a marginalization; 7.4.2 Expectation maximization for fitting mixture models; 7.5 The t-distribution; 7.5.1 Student t-distribution as a marginalization; 7.5.2 Expectation maximization for fitting t-distributions; 7.6 Factor analysis; 7.6.1 Factor analysis as a marginalization; 7.6.2 Expectation maximization for learning factor analyzers; 7.7 Combining models; 7.8 Expectation maximization in detail; 7.8.1 Lower bound for EM algorithm; 7.8.2 The E-step; 7.8.3 The M-step; 7.9 Applications; 7.9.1 Face detection; 7.9.2 Object recognition; 7.9.3 Segmentation

A modern treatment focusing on learning and inference, with minimal prerequisites, real-world examples, and implementable algorithms.
ISBN
9781107011793
