Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of random variables that interact with each other. These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are the basis for the state-of-the-art methods in a wide variety of applications, such as medical diagnosis, image understanding, speech recognition, natural language processing, and many, many more. They are also a foundational tool in formulating many machine learning problems.
Offered by

Stanford University
The Leland Stanford Junior University, commonly referred to as Stanford University or Stanford, is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto.
Syllabus - What You Will Learn from This Course
Learning: Overview
This module presents some of the learning tasks for probabilistic graphical models that we will tackle in this course.
Review of Machine Learning Concepts from Prof. Andrew Ng's Machine Learning Class (Optional)
This module contains some basic concepts from the general framework of machine learning, taken from Professor Andrew Ng's Stanford class offered on Coursera. Many of these concepts are highly relevant to the problems we'll tackle in this course.
Parameter Estimation in Bayesian Networks
This module discusses the simplest and most basic of the learning problems in probabilistic graphical models: parameter estimation in a Bayesian network. We discuss maximum likelihood estimation and the issues with it. We then discuss Bayesian estimation and how it can ameliorate these problems.
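For concreteness, here is a minimal sketch of the two estimators this module covers (in Python; the function name, the uniform pseudo-count alpha, and the toy counts are invented for illustration and are not taken from the course materials): the maximum likelihood estimate of a tabular CPD from counts, and a Bayesian estimate that adds Dirichlet pseudo-counts to smooth away the zero probabilities that plague the MLE on sparse data.

    import numpy as np

    def estimate_cpd(counts, alpha=0.0):
        """Estimate P(X | U) from a table of counts.

        counts[u, x] = number of data cases with parent configuration u and child value x.
        alpha = 0   gives the maximum likelihood estimate  M[u, x] / M[u].
        alpha > 0   gives the Bayesian (Dirichlet posterior mean) estimate
                    (M[u, x] + alpha) / (M[u] + alpha * |Val(X)|),
        which never assigns probability zero to an unseen event.
        """
        counts = np.asarray(counts, dtype=float) + alpha
        return counts / counts.sum(axis=1, keepdims=True)

    # Toy data: a binary child with a binary parent.
    counts = [[8, 0],    # parent = 0: child = 1 never observed
              [3, 5]]    # parent = 1
    print(estimate_cpd(counts))             # MLE puts probability 0 on (parent=0, child=1)
    print(estimate_cpd(counts, alpha=1.0))  # the Bayesian estimate smooths it away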
Learning Undirected Models
In this module, we discuss the parameter estimation problem for Markov networks - undirected graphical models. This task is considerably more complex, both conceptually and computationally, than parameter estimation for Bayesian networks, due to the issues presented by the global partition function.
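The sketch below (an assumed, brute-force illustration, not the course's implementation; the edge "agreement" features, weights, and toy data are made up for the example) shows why the global partition function is the bottleneck: evaluating the log-likelihood of a tiny pairwise log-linear Markov network for even a single parameter vector requires summing the unnormalized measure over every joint assignment.

    import numpy as np
    from itertools import product

    # A tiny pairwise log-linear Markov network over binary variables, with one
    # "agreement" feature per edge.  The unnormalized measure of an assignment x
    # is exp(sum over edges (i, j) of theta_ij * 1[x_i == x_j]).

    def log_partition(n_vars, edges, theta):
        # The partition function Z(theta) sums the unnormalized measure over all
        # 2^n joint assignments -- the global quantity that couples all the
        # parameters; this brute-force version is only feasible for toy models.
        total = 0.0
        for x in product([0, 1], repeat=n_vars):
            score = sum(t for (i, j), t in zip(edges, theta) if x[i] == x[j])
            total += np.exp(score)
        return np.log(total)

    def log_likelihood(data, n_vars, edges, theta):
        log_z = log_partition(n_vars, edges, theta)   # shared by every data case
        ll = 0.0
        for x in data:
            ll += sum(t for (i, j), t in zip(edges, theta) if x[i] == x[j]) - log_z
        return ll

    edges = [(0, 1), (1, 2)]
    data = [(0, 0, 0), (1, 1, 1), (0, 0, 1)]
    print(log_likelihood(data, n_vars=3, edges=edges, theta=[0.5, 0.5]))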
Learning BN Structure
This module discusses the problem of learning the structure of Bayesian networks. We first discuss how this problem can be formulated as an optimization problem over a space of graph structures, and how to score different structures so as to trade off fit to the data against model complexity. We then talk about how the optimization problem can be solved: exactly in a few cases, approximately in most others.
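As an illustration of the fit-versus-complexity trade-off, the sketch below scores a single family (a variable together with a candidate parent set) using the BIC score, one standard penalized-likelihood score; the function name, the toy data, and the choice of BIC are assumptions for the example, not the course's grading code. A full structure learner would sum such family scores over a candidate graph and search the space of acyclic graphs, for example by greedy edge additions, deletions, and reversals.

    import numpy as np
    from collections import Counter

    def family_bic(data, child, parents, card):
        """BIC contribution of one family: the log-likelihood of `child` given
        `parents`, minus (log M / 2) times the number of free parameters.
        `data` is a list of tuples of discrete values; `card[i]` is the number
        of values variable i can take."""
        M = len(data)
        joint = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
        parent_counts = Counter(tuple(row[p] for p in parents) for row in data)
        loglik = sum(c * np.log(c / parent_counts[u]) for (u, _), c in joint.items())
        n_params = (card[child] - 1) * int(np.prod([card[p] for p in parents]))
        return loglik - 0.5 * np.log(M) * n_params

    # Toy comparison: is variable 1 a better parent for variable 2 than no parent at all?
    data = [(0, 0, 0), (0, 0, 0), (1, 1, 1), (1, 1, 1), (0, 1, 1)]
    card = [2, 2, 2]
    print(family_bic(data, child=2, parents=[], card=card),
          family_bic(data, child=2, parents=[1], card=card))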
Learning BNs with Incomplete Data
In this module, we discuss the problem of learning models in cases where some of the variables in some of the data cases are not fully observed. We discuss why this situation is considerably more complex than the fully observable case. We then present the Expectation Maximization (EM) algorithm, which is used in a wide variety of problems.
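To show the shape of the algorithm, here is a small EM sketch for a toy mixture of independent Bernoulli variables with a single hidden class variable (the model, the data, and the smoothing constant are assumptions chosen for illustration, not the course's assignment code): the E-step computes the posterior responsibility of each hidden class for each data case under the current parameters, and the M-step re-estimates the parameters from those expected counts.

    import numpy as np

    def em_bernoulli_mixture(X, k=2, iters=50, seed=0):
        """EM for a mixture of independent Bernoulli variables with a hidden class.
        X is an (M, n) binary array; returns mixing weights pi and per-class means mu."""
        rng = np.random.default_rng(seed)
        M, n = X.shape
        pi = np.full(k, 1.0 / k)
        mu = rng.uniform(0.25, 0.75, size=(k, n))   # random start; EM only finds a local optimum
        for _ in range(iters):
            # E-step: posterior responsibility of each hidden class for each data case.
            log_p = np.log(pi) + X @ np.log(mu).T + (1 - X) @ np.log(1 - mu).T
            log_p -= log_p.max(axis=1, keepdims=True)
            resp = np.exp(log_p)
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: maximum likelihood re-estimation from the expected counts.
            Nk = resp.sum(axis=0)
            pi = Nk / M
            mu = (resp.T @ X + 1e-6) / (Nk[:, None] + 2e-6)   # tiny smoothing keeps mu inside (0, 1)
        return pi, mu

    X = np.array([[1, 1, 0], [1, 1, 1], [0, 0, 1], [0, 0, 0], [1, 1, 0], [0, 1, 0]])
    print(em_bernoulli_mixture(X))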
Reviews
Top reviews from PROBABILISTIC GRAPHICAL MODELS 3: LEARNING
An amazing course! The assignments and quizzes can be insanely difficult, especially towards the conclusion. Requires textbook reading and relistening to lectures to gather the nuances.
Very good course for learning PGMs and machine learning programming concepts. Some of the descriptions in the final exam quiz are a little unclear, which can lead to some confusion.
Great course! Very informative course videos and challenging yet rewarding programming assignments. Hope that the mentors can be more helpful by responding to questions in a timely manner.
Great course, especially the programming assignments. Textbook is pretty much necessary for some quizzes, definitely for the final one.

Frequently Asked Questions
When will I be able to access the course videos and assignments?
What will I get when I subscribe to this Specialization?
Is financial aid available?
Will I earn university credit for completing the course?
Have more questions? Visit the Learner Help Center.