Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of interacting random variables. These representations sit at the intersection of statistics and computer science, drawing on probability theory, graph algorithms, machine learning, and more. They underpin state-of-the-art methods in a wide variety of applications, including medical diagnosis, image understanding, speech recognition, and natural language processing. They are also a foundational tool for formulating many machine learning problems.
Top reviews from Probabilistic Graphical Models 3: Learning
Very good course for learning PGMs and the concepts behind machine learning programming. However, some of the descriptions in the final exam quiz are unclear, which leads to a bit of confusion.
Great course! Very informative course videos and challenging yet rewarding programming assignments. I hope the mentors can be more helpful by responding to questions in a timely manner.
About the Probabilistic Graphical Models Specialization
Learning Outcomes: By the end of this course, you will be able to:
Compute the sufficient statistics of a data set that are necessary for learning a PGM from data
Implement both maximum likelihood and Bayesian parameter estimation for Bayesian networks
Implement maximum likelihood and MAP parameter estimation for Markov networks
Formulate a structure learning problem as a combinatorial optimization task over a space of network structures, and evaluate which scoring function is appropriate for a given situation
Utilize PGM inference algorithms in ways that support more effective parameter estimation for PGMs
Implement the Expectation Maximization (EM) algorithm for Bayesian networks
Honors track learners will get hands-on experience implementing both EM and structure learning for tree-structured networks, and applying them to real-world tasks
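To make the parameter-estimation outcomes above concrete, here is a minimal sketch (not the course's assignment code) of maximum likelihood versus Bayesian estimation of one conditional probability table in a Bayesian network. The ML estimate is built from the sufficient statistics, namely the counts of each (parent assignment, child value) pair; the Bayesian estimate adds pseudo-counts from an assumed symmetric Dirichlet prior. The variable names and data layout are illustrative assumptions.

```python
from collections import Counter

def mle_cpt(data, var, parents, card):
    """Maximum likelihood estimate of P(var | parents) from complete data.

    data: list of dicts mapping variable name -> discrete value (0..card-1)
    card: dict mapping variable name -> number of values
    Returns {parent assignment tuple: [P(var=0|pa), P(var=1|pa), ...]}.
    Only parent assignments seen in the data are included.
    """
    joint = Counter()  # sufficient statistics: counts of (pa, var value)
    marg = Counter()   # counts of each parent assignment alone
    for row in data:
        pa = tuple(row[p] for p in parents)
        joint[(pa, row[var])] += 1
        marg[pa] += 1
    return {pa: [joint[(pa, v)] / n for v in range(card[var])]
            for pa, n in marg.items()}

def bayesian_cpt(data, var, parents, card, alpha=1.0):
    """Bayesian estimate under a symmetric Dirichlet(alpha) prior
    (Laplace smoothing when alpha=1): the posterior mean adds alpha
    pseudo-counts per value, so no probability collapses to zero."""
    joint = Counter()
    marg = Counter()
    for row in data:
        pa = tuple(row[p] for p in parents)
        joint[(pa, row[var])] += 1
        marg[pa] += 1
    k = card[var]
    return {pa: [(joint[(pa, v)] + alpha) / (n + alpha * k)
                 for v in range(k)]
            for pa, n in marg.items()}

# Tiny illustrative dataset: R = rain, S = sprinkler-off, say.
data = [{'R': 1, 'S': 1}, {'R': 1, 'S': 1}, {'R': 1, 'S': 0}, {'R': 0, 'S': 0}]
mle = mle_cpt(data, 'S', ['R'], {'S': 2, 'R': 2})
bay = bayesian_cpt(data, 'S', ['R'], {'S': 2, 'R': 2}, alpha=1.0)
```

Note how the ML estimate assigns probability zero to `S=1` given `R=0` (never observed together), while the Bayesian estimate keeps it strictly positive.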
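Likewise, the EM outcome can be illustrated on the simplest interesting Bayesian network with a latent variable: a naive Bayes model with a hidden class C and binary features. This is a hedged sketch under assumed model and data choices, not the course's implementation; the E-step computes each data point's posterior responsibility for each class, and the M-step re-estimates the parameters from the resulting expected counts.

```python
import random

def em_naive_bayes(data, n_classes, n_iter=50, seed=0):
    """EM for a naive Bayes model with hidden class C and binary
    features X1..Xd. data: list of 0/1 tuples. Returns (prior over C,
    theta where theta[c][j] = P(Xj=1 | C=c))."""
    rng = random.Random(seed)
    d = len(data[0])
    prior = [1.0 / n_classes] * n_classes
    # Random (asymmetric) init so the classes can break symmetry.
    theta = [[rng.uniform(0.25, 0.75) for _ in range(d)]
             for _ in range(n_classes)]
    for _ in range(n_iter):
        # E-step: responsibility r[c] ∝ P(C=c) * prod_j P(Xj | C=c)
        resp = []
        for x in data:
            w = []
            for c in range(n_classes):
                p = prior[c]
                for j, xj in enumerate(x):
                    p *= theta[c][j] if xj else (1 - theta[c][j])
                w.append(p)
            z = sum(w)
            resp.append([wi / z for wi in w])
        # M-step: maximize expected complete-data log-likelihood
        for c in range(n_classes):
            nc = sum(r[c] for r in resp)
            prior[c] = nc / len(data)
            for j in range(d):
                num = sum(r[c] * x[j] for r, x in zip(resp, data))
                theta[c][j] = (num + 1e-6) / (nc + 2e-6)  # tiny smoothing
    return prior, theta

# Two well-separated clusters of binary vectors.
data = [(1, 1, 1)] * 5 + [(0, 0, 0)] * 5
prior, theta = em_naive_bayes(data, n_classes=2)
```

On this trivially separable data, EM recovers one class with parameters near 1 and one near 0, with roughly equal priors.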