Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of random variables that interact with each other. These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are the basis for the state-of-the-art methods in a wide variety of applications, such as medical diagnosis, image understanding, speech recognition, natural language processing, and many, many more. They are also a foundational tool in formulating many machine learning problems.
About the Probabilistic Graphical Models Specialization
Learning Outcomes: By the end of this course, you will be able to
Compute the sufficient statistics of a data set that are necessary for learning a PGM from data
Implement both maximum likelihood and Bayesian parameter estimation for Bayesian networks
Implement maximum likelihood and MAP parameter estimation for Markov networks
Formulate a structure learning problem as a combinatorial optimization task over a space of network structures, and evaluate which scoring function is appropriate for a given situation
Utilize PGM inference algorithms in ways that support more effective parameter estimation for PGMs
Implement the Expectation Maximization (EM) algorithm for Bayesian networks
Honors-track learners will get hands-on experience implementing both EM and structure learning for tree-structured networks, and will apply them to real-world tasks
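To make the first two outcomes concrete, here is a minimal sketch (not course material) of collecting sufficient statistics and estimating a conditional probability table for a toy two-node Bayesian network A -> B with binary variables. The network, data, and function names are illustrative assumptions; setting the Dirichlet pseudo-count alpha to 0 gives the maximum likelihood estimate, while alpha > 0 gives the Bayesian posterior mean under a symmetric Beta prior.

```python
from collections import Counter

def sufficient_stats(samples):
    """Counts of A and of (A, B) pairs -- the sufficient statistics
    for the CPT P(B | A) in the toy network A -> B."""
    n_a = Counter(a for a, _ in samples)
    n_ab = Counter(samples)
    return n_a, n_ab

def estimate_cpt(samples, alpha=0.0):
    """Estimate P(B = b | A = a) for binary A and B.

    alpha = 0: maximum likelihood (relative frequencies).
    alpha > 0: posterior mean under a symmetric Beta(alpha, alpha)
    prior on each row of the CPT (Laplace smoothing when alpha = 1).
    """
    n_a, n_ab = sufficient_stats(samples)
    cpt = {}
    for a in (0, 1):
        denom = n_a[a] + 2 * alpha  # 2 values of B per row
        for b in (0, 1):
            cpt[(a, b)] = (n_ab[(a, b)] + alpha) / denom
    return cpt

# Illustrative data: seven (a, b) observations.
samples = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 0), (1, 1)]
mle = estimate_cpt(samples)             # maximum likelihood
bayes = estimate_cpt(samples, alpha=1)  # smoothed Bayesian estimate
```

With this data, the MLE gives P(B=0 | A=0) = 2/3, while the smoothed estimate pulls it toward 1/2, giving 3/5 — the usual effect of a Dirichlet prior on small counts.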