CP7253 MACHINE LEARNING TECHNIQUES L T P C 3 0 2 4
OBJECTIVES
To understand the concepts of machine learning
To appreciate supervised and unsupervised learning and their applications
To understand the theoretical and practical aspects of Probabilistic Graphical Models
To appreciate the concepts and algorithms of reinforcement learning
To learn aspects of computational learning theory
UNIT I INTRODUCTION 8+6
Machine Learning – Machine Learning Foundations – Overview – Design of a Learning System – Types of Machine Learning – Applications. Mathematical foundations of machine learning – Random Variables and Probabilities – Probability Theory – Probability Distributions – Decision Theory – Bayes Decision Theory – Information Theory
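A minimal worked sketch of the Bayes decision rule from Decision Theory, assuming two classes with hypothetical priors and Gaussian class-conditional densities (NumPy and SciPy only; all parameter values are illustrative):

```python
# Bayes decision rule for two classes with Gaussian class-conditional densities.
# Priors and density parameters are hypothetical, chosen only for illustration.
from scipy.stats import norm

priors = {"c1": 0.6, "c2": 0.4}                      # P(C_k), assumed known
likelihoods = {"c1": norm(loc=0.0, scale=1.0),       # p(x | C_k), assumed Gaussian
               "c2": norm(loc=2.0, scale=1.5)}

def bayes_decision(x):
    """Pick the class maximising P(C_k | x), proportional to p(x | C_k) * P(C_k)."""
    scores = {c: likelihoods[c].pdf(x) * priors[c] for c in priors}
    evidence = sum(scores.values())                  # p(x), the normaliser
    posteriors = {c: s / evidence for c, s in scores.items()}
    return max(posteriors, key=posteriors.get), posteriors

label, post = bayes_decision(1.0)
print(label, post)    # class with the larger posterior at x = 1.0
```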
UNIT II SUPERVISED LEARNING 10+6
Linear Models for Regression – Linear Models for Classification – Naïve Bayes – Discriminant Functions – Probabilistic Generative Models – Probabilistic Discriminative Models – Bayesian Logistic Regression. Decision Trees – Classification Trees – Regression Trees – Pruning. Neural Networks – Feed-forward Network Functions – Back-propagation. Support Vector Machines – Ensemble Methods – Bagging – Boosting
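A minimal sketch contrasting a probabilistic generative model (Gaussian naïve Bayes) with a probabilistic discriminative model (logistic regression), assuming scikit-learn is available; the synthetic dataset and its parameters are illustrative only:

```python
# Generative vs. discriminative classifier on synthetic data (scikit-learn assumed installed).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB            # probabilistic generative model
from sklearn.linear_model import LogisticRegression   # probabilistic discriminative model

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for model in (GaussianNB(), LogisticRegression(max_iter=1000)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "test accuracy:", round(model.score(X_te, y_te), 3))
```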
UNIT III UNSUPERVISED LEARNING 8+6
Clustering – K-means – EM Algorithm – Mixtures of Gaussians. The Curse of Dimensionality – Dimensionality Reduction – Factor Analysis – Principal Component Analysis – Probabilistic PCA – Independent Component Analysis
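A minimal NumPy-only sketch of the K-means loop (assignment and update steps) on synthetic 2-D data; the number of clusters and the generating means are arbitrary illustrative choices:

```python
# Plain-NumPy K-means (Lloyd's algorithm) on synthetic 2-D data; k = 3 is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in ([0, 0], [3, 3], [0, 4])])

def kmeans(X, k=3, n_iter=50):
    centers = X[rng.choice(len(X), size=k, replace=False)]        # random initial centroids
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # point-centroid distances
        labels = d.argmin(axis=1)                                  # assignment step
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])                    # update step
    return labels, centers

labels, centers = kmeans(X)
print(centers)    # should lie near the three generating means
```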
UNIT IV PROBABILISTIC GRAPHICAL MODELS 10+6
Graphical Models – Undirected Graphical Models – Markov Random Fields – Directed Graphical Models – Bayesian Networks – Conditional Independence Properties – Inference – Learning – Generalization – Hidden Markov Models – Conditional Random Fields (CRFs)
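A minimal sketch of the forward algorithm for computing the likelihood of an observation sequence under a discrete Hidden Markov Model; the initial distribution, transition matrix, emission matrix, and observation sequence below are all hypothetical:

```python
# Forward algorithm: likelihood of an observation sequence under a discrete HMM.
import numpy as np

pi = np.array([0.6, 0.4])                 # initial state distribution
A = np.array([[0.7, 0.3],                 # A[i, j] = P(state j at t+1 | state i at t)
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],            # B[i, o] = P(observation o | state i)
              [0.1, 0.3, 0.6]])
obs = [0, 2, 1]                           # an example observation sequence

alpha = pi * B[:, obs[0]]                 # initialisation: alpha_1(i) = pi_i * b_i(o_1)
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]         # recursion: alpha_{t+1}(j) = sum_i alpha_t(i) * a_ij * b_j(o_{t+1})

print("sequence likelihood:", alpha.sum())
```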
UNIT V ADVANCED LEARNING 9+6
Sampling – Basic Sampling Methods – Monte Carlo. Reinforcement Learning – K-Armed Bandit – Elements – Model-Based Learning – Value Iteration – Policy Iteration. Temporal Difference Learning – Exploration Strategies – Deterministic and Non-deterministic Rewards and Actions. Computational Learning Theory – Mistake Bound Analysis – Sample Complexity Analysis – VC Dimension – Occam Learning – Accuracy and Confidence Boosting
TOTAL: 45 + 30 = 75 PERIODS
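A minimal value-iteration sketch on a toy two-state MDP, illustrating the Bellman backup used in model-based learning; the transition probabilities, rewards, and discount factor are made-up numbers:

```python
# Value iteration on a toy 2-state, 2-action MDP; all model numbers are hypothetical.
import numpy as np

gamma = 0.9
# P[a, s, s2] = P(next state s2 | state s, action a); R[s, a] = expected immediate reward.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],     # action 0
              [[0.5, 0.5], [0.3, 0.7]]])    # action 1
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * np.einsum('asn,n->sa', P, V)   # Bellman backup for every (s, a)
    V_new = Q.max(axis=1)                          # greedy improvement
    if np.max(np.abs(V_new - V)) < 1e-8:           # stop once values have converged
        break
    V = V_new

print("optimal values:", V, "greedy policy:", Q.argmax(axis=1))
```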
OUTCOMES:
Upon completion of this course, the student should be able to:
Design a neural network for an application of their choice
Implement probabilistic discriminative and generative algorithms for an application of their choice and analyze the results
Use a tool to implement typical clustering algorithms for different types of applications
Design and implement an HMM for a sequence model type of application
Identify applications suitable for different types of machine learning with suitable justification
REFERENCES:
1. Christopher Bishop, “Pattern Recognition and Machine Learning”, Springer, 2007.
2. Kevin P. Murphy, “Machine Learning: A Probabilistic Perspective”, MIT Press, 2012.
3. Ethem Alpaydin, “Introduction to Machine Learning”, MIT Press, Third Edition, 2014.
4. Tom Mitchell, “Machine Learning”, McGraw-Hill, 1997.
5. Trevor Hastie, Robert Tibshirani, Jerome Friedman, “The Elements of Statistical Learning”, Springer, Second Edition, 2011.
6. Stephen Marsland, “Machine Learning – An Algorithmic Perspective”, Chapman and Hall/CRC Press, Second Edition, 2014.