This course is intended for programmers with C experience who want to learn C++ programming. The examples and exercises require a basic understanding of algorithms and object-oriented software.


Graph Theory, C++11, C++, Graph Algorithms

4.4 (591 ratings)

- 5 stars379 ratings
- 4 stars138 ratings
- 3 stars31 ratings
- 2 stars18 ratings
- 1 star25 ratings

Jul 04, 2017

I like Prof. Ira a lot and wish to see more courses from him! I wish I was his student at SC. Best Regards to him and the Coursera team for the great work.

Jul 18, 2018

I really loved this course. It is really helpful for my programming aspect. I have managed to develop my logical ability and this course really helped me.

From this lesson

Module 4

Prim’s and Kruskal’s algorithms. Use of basic Container Classes. Tripod-Container, Iterator, Algorithm.

#### Ira Pohl

Professor

Today we're going to delve into the architecture of the Standard Template Library (STL), which we've already been using.

We've been using it with container classes such as vector.

Vector is very useful. If you've been doing much with it in your homework, you will see that it's a big benefit over the standard C way of indexing collections, which is the relatively primitive array.

Once we understand STL as an architecture, we're going to have a much better way to use it.

In understanding the architecture, we're also going to learn how to extend the STL. Now, one of the topics I'm going to start emphasizing today, and in later parts of this course and its successor, is new developments in the C++ standard.

C++ has been around since 1985, originally released

at Bell Labs with the Cfront compiler. It immediately

got a lot of attention in the industry and was widely adopted.

And then things got added to it.

The last major standardization occurred around 1998.

So that's 15 years ago.

And most of the practice that we have had in

the industry and most of the practice that I have shown you is based on

1998 C++. But in 2011, the standards committee made major improvements to the language and major additions to the library.

Among the major additions was an expansion of the Standard Template Library, which lets us use libraries from the new standard without having to create our own.

An example is the threading library. Threading has become more and more important as we have multi-core computers, multi-processor machines, and clouds, and now we have a community standard for threading.

It wasn't that we couldn't do threading.

We could do threading even in the old C community.

We could use POSIX threads, for example. But now the standard is built into a library. So, we're going to start discussing

some of the new C++11 features as well, and that will be a big

emphasis in the second half of this course.

To understand the architecture of STL, I like to picture it as a three-legged stool. STL is sitting on containers, algorithms, and iterators.

And they all interact.

They've all been thought through to benefit each other.

The components of the three-legged stool are containers, like vector and list.

List and vector are what are called sequential containers.

In sequential containers, we know what the first element, the third

element, the nth element of a list or a vector is.

So we think of it as storing things in compartments that are indexed.

On the other hand, we have other kinds of containers such as

sets and maps and those are called associative containers.

And there, the major form of lookup is no longer an index but, instead, an association between a key and the contents that the key lets you get to. So there's an association.

In the real world, if I were to give you a friend's name and ask you for the friend's address, that would be an associative lookup.

Then there is the ability to search through those containers.

And that ability is going to be provided by what we call iterators.

Iterators are, conceptually, very close to pointers.

So if you want to think of an iterator in the C language, think of a pointer.

A pointer lets you get access to an array or to parts of a struct. And, in STL, pointers can be generalized and given other properties that make them more convenient and more systematic.

And there will be, at the end of the day, five major categories of iterators, which we'll also explain. Finally, what we want to get out of STL is convenient algorithms, algorithms that are applicable across all sorts of domains, so that we don't have to keep reinventing the wheel.

Why should we have to continually reinvent sorting algorithms, finding algorithms, accumulating algorithms, partitioning algorithms? We're using those all the time and,

if you are a student, those are probably a lot of your exercises and you get to learn

how to discriminate between good ones and bad ones.

That's a lot of what your learning is in your

basic computer science classes so you might learn how to write

a bubblesort and learn why a bubblesort is not as good as

quicksort or a heapsort. You might learn how to deal with

permutations and use random number generation, and see what qualities of a random number generator make it useful and reliable

in the kinds of programs we're displaying.

So all those things, in this case, are going to be provided by professionals

within the STL standard. Great, great labor saving.

Okay. Name two standard sorting algorithms.

Hint, hint, I just mentioned them. What is their expected time?

What is their worst case time? Take a second.

Everybody has learned bubblesort, a very simple routine.

Bubblesort is an n-squared routine.

We basically interchange elements: if adjacent elements are out of order, we swap them, and we go through the whole sequence continuing to swap.

And if we do that right, the typical bubblesort lets us get at least one element in order at the end of that list, and now we can bubblesort the remaining elements, which are n minus 1 after the first pass, and then n minus 2, and then n minus 3.

And, if we look at the order of

that computation, it ends up being order n-squared.

On the other hand,

quicksort, a sort invented by Tony Hoare, if it's working well, gets us an n log n sort. And just remember, n log n is very,

very much more efficient than n squared because, let's say, the

log of 1,000 base 10 is just 3. So we would have something like a few thousand (for quicksort on average)

abstract operations as opposed to a million abstract operations

if we were to use bubblesort. So we get enormous savings by having

a relatively efficient, or at least in the average case, efficient algorithm.

Recall how quicksort works. Quicksort works by taking an element and using it to divide all the other elements, comparing each one to it, so that hopefully they divide into two halves: the less-than elements and the greater-than elements, with the chosen element doing the partitioning. So it's sometimes called a partition-exchange sort.

And if it divides it roughly in half, it gets you your log n behavior.

Hopefully, again, this is something you have all had in your background, as was expected coming into this class. If not, you should look it up.

Then after you've done a partition, you now re-partition,

recursively, each of these lists. And again, if

they're roughly half the size of the previous list, you end up with

this logarithmic behavior in dividing the list.

And each time, you have to go through n elements so you get the n log n

behavior. Its worst case occurs

if the partition element ends up, for example, partitioning

in this very bad way: there are no elements below it. Let's say the partition element itself was the minimum. So all the remaining elements are above it, and one side of the partition is just empty, just zero elements.

And now you partition this and you have the same behavior and this would only give

you one element in the ordering each time

so you'd get, again, worst case, n squared.

That's not going to happen probabilistically on large inputs, and there are special ways to do sampling that make it even less likely.