0:00

[music] So, here in lecture 5.3, we're going to complete our discussion of

2-level logic synthesis. And in the previous parts of lecture

5, we talked about the idea of heuristically reshaping a cover of a

two-level form to get a really high-quality, heuristically good solution, and

we gave you a high-level tour. In this final part of the lecture, we're

just going to take one of those steps. We're going to take the expand step.

And we're going to do a little bit of a deep dive and open it up and show you how

it works. And so, there's some new ideas that are

applicable in the computational Boolean algebra universe that we're going to see.

And with that, we'll finish up on 2-level logic synthesis.

And then, we'll move on, in the next set of lectures, to multilevel logic

synthesis. So, how does all this stuff actually work?

Well, one of the things that's really very interesting is that the core

representation for most of this stuff is positional cube notation cube lists.

So, this is just exactly like week 1's lectures on URP tautology.

Many of the heuristics use some clever ordering of the cubes, and then, operate

on each cube in some sort of a ranked order.

And they're mostly doing bit level PCN operations on the cube lists.

So, it's actually all pretty understandable by you.

And many of the rest of the operations are actually done as URP, Unate Recursive

Paradigm, kinds of recursive descent, just exactly like week 1's lecture.

And in fact, the tautology algorithm that we showed you is a key part of

Espresso. And with some work, you could actually

read a lot of the primary source literature here, so congratulations.

You know, we're actually making some real progress here.
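Since everything downstream leans on those cube lists, here is a tiny sketch of positional cube notation in Python. The 2-bit encoding (true literal = 01, complemented = 10, absent = 11) is one common convention, and the helper name is mine for illustration, not Espresso's actual API:

```python
# A minimal sketch of positional cube notation (PCN).
# Encoding assumed here (conventions vary): true literal -> 0b01,
# complemented literal -> 0b10, variable absent (don't care) -> 0b11.
T, C, DC = 0b01, 0b10, 0b11

def intersect(a, b):
    """Cube intersection is field-wise bitwise AND; any 00 field
    means the cubes disagree on that variable, so they don't meet."""
    out = tuple(fa & fb for fa, fb in zip(a, b))
    return None if 0 in out else out

# Over variables (x, y, z):  x y'  intersected with  x z  gives  x y' z.
xy_ = (T, C, DC)    # x y'
xz  = (T, DC, T)    # x z
print(intersect(xy_, xz))            # (1, 2, 1), i.e. x y' z
print(intersect(xy_, (C, DC, DC)))   # None: x y' and x' are disjoint
```

This is why the heuristics can run as cheap bit-level operations: every cube question reduces to ANDs over these fields.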

I want to look briefly at one step. Let's pick one: let's look

at the expand step, and let's just take a step back and say, what does it mean

to expand a cube? Well, here's a little product

term. Here's a cube.

And I'm listing it in positional cube notation, just for clarity.

xyz bar w. That's this circle in this Karnaugh map

right there. That's xyz bar w.

Right, so this little Karnaugh map here, it's four by four squares, xy on the

horizontal axis, zw on the vertical axis. The top row is 1, 0, 1, 1.

The second row, 0, 1, 1, 1. And then, the entire xy = 1, 1 column is

filled with 1s. So, that's what this function's

Karnaugh map looks like. If you were to expand a cube, what that

means is that you remove variables from the cube.

And so, what that means is that, you know, instead of xyz bar w, you'd like a product

term that had fewer literals in it. Because you know what you're looking for?

You're looking for a smaller AND gate. Right?

So, this thing is an AND gate with four inputs.

2:55

Both of the examples that I'm showing in the middle of the slide are AND gates with

2 inputs. Removing variables from the cube removes

inputs from the AND gates, and makes for less hardware.

So, what would happen if you removed yw? Well, if you removed yw, then, you would

be able to circle the top two by two set of ones in the Karnaugh map.

And you would actually get xz bar. Alright, that's what that one is.

What would happen if you were to remove zw?

Well, then, you would actually be able to circle the entire xy column of the

Karnaugh map and you would be able to get that cube.

And so, you know, both of those are perfectly nice solutions.

I'll just say, you know, both of those solutions are okay.
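In PCN terms, expanding is mechanically simple: removing a variable just sets its 2-bit field to don't-care (11). A hedged sketch, with the variable order and helper name assumed for illustration:

```python
T, C, DC = 0b01, 0b10, 0b11      # PCN fields: true / complemented / absent
VARS = ('x', 'y', 'z', 'w')      # variable order assumed for this sketch

def drop(cube, removed):
    """Expanding a cube = dropping literals: set each removed
    variable's PCN field to don't-care, doubling the cube per drop."""
    return tuple(DC if v in removed else f for v, f in zip(VARS, cube))

xyz_w = (T, T, C, T)             # the 4-input product term x y z' w
print(drop(xyz_w, {'y', 'w'}))   # (1, 3, 2, 3), i.e. x z'
print(drop(xyz_w, {'z', 'w'}))   # (1, 1, 3, 3), i.e. x y
```

The two calls produce exactly the two expansions circled on the Karnaugh map: x z' and x y.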

The question is, how do you find something like that when you're dealing with

thousands and thousands and thousands, maybe tens of thousands of cubes, and you

know, hundreds of variables. I mean, again, we need a computational

Boolean algebra approach; we need something that's, you know, an

operation and an algorithm.

So, it turns out that we can take the expand operation and we can

transform it into a classical kind of computer science problem for which there

are perfectly okay heuristics. So, we are going to turn this into

something called a covering problem and you may or may not have seen a covering

problem, so let me show you the most basic covering problem.

So, I'm given a matrix of R rows and C columns.

So, in this case I have R equals, you know, 4 rows, and I have C equals 5

columns. And the matrix has 1s and 0s in it, but

for convention and for clarity and for simplicity, I'm just going to draw the 1s

because it makes it easier to see. And so, row one has a 1 in column two.

Row two has 1s in columns one, two and three.

Row three has 1s in columns three and five. Row four has 1s in columns two, four and

five. What does it mean to be a cover?

4:58

A cover says this: choose the smallest set of rows, so that using only those

rows, every column has at least a single one in it.

Which is to say, every column is covered by the selected rows.

Now, it's okay to have more than one

1 in a column. It's just not okay to have no 1s in a

column. And so, if I were to do a nice simple

cover for this matrix, I would choose row two and row four, because as we see, row two

covers column one. Row two and also row four together cover

column two. Row two covers column three.

Row four covers column four, and row four covers column five.

And so, a solution here is row two and row four, as the fewest possible rows that

cover all of the columns. There are perfectly nice heuristics to give

you very decent, very quick solutions for these.

So, this is what we're going to turn the expand problem into.

Because these are not so hard to solve, even for things that are pretty big.
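One generic heuristic for such covering problems, shown here only as a sketch and not as Espresso's actual cover routine, is plain greedy selection: repeatedly take the row that covers the most still-uncovered columns. Run on the 4-by-5 matrix above:

```python
def greedy_cover(rows):
    """rows: {row_id: set of column ids with a 1}. Greedily pick the
    row covering the most still-uncovered columns until all covered.
    Fast and usually good, but not guaranteed minimum."""
    uncovered = set().union(*rows.values())
    chosen = []
    while uncovered:
        best = max(rows, key=lambda r: len(rows[r] & uncovered))
        chosen.append(best)
        uncovered -= rows[best]
    return chosen

# The example matrix: row 1 hits column 2; row 2 hits 1, 2, 3; etc.
rows = {1: {2}, 2: {1, 2, 3}, 3: {3, 5}, 4: {2, 4, 5}}
print(greedy_cover(rows))   # [2, 4]: rows two and four cover all columns
```

Greedy can miss the true minimum on adversarial inputs, which is exactly why these are called heuristics; in practice, answers like this one are decent and very fast.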

So, what are we going to do? We're going to build something called the

Blocking Matrix. Now, the first thing we're going to do is

just look at the structure of the problem. So first, I'm given a function F, right,

so there's my function F: w bar y bar z plus xz plus x bar yz bar

plus w bar xyz bar. And that last term, w bar xyz bar, that is

the cube that I am going to attempt to expand because, boy, that really stinks as

a cube. That's a cube that's covering exactly a

single one in the Karnaugh map, that's probably not a good idea.

So, the first thing I have to do, which is really quite interesting, is that I have

to build a cube cover of the 0s in the function.

I have to build a cover of where the function does not make a 1, and that has a

fancy name. It's called the OFF-set.

Why do I need that? Because I need to know what the cube that I am

working to expand cannot touch when it expands.

And you can start to get some sense of what the blocking matrix is going to be.
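To make the offset concrete, here is a brute-force enumeration for this example's F. This is only a sanity check; Espresso gets the offset from the URP complement, not from enumerating all 2^n input patterns:

```python
from itertools import product

# F = w'y'z + xz + x'yz' + w'xyz', the function from this example.
def F(w, x, y, z):
    return ((not w) and (not y) and z) or (x and z) or \
           ((not x) and y and (not z)) or ((not w) and x and y and (not z))

# The offset is simply every input pattern where F is 0.
offset = [wxyz for wxyz in product((0, 1), repeat=4) if not F(*wxyz)]
print(len(offset))              # 8 of the 16 K-map squares are 0s
print((0, 1, 1, 0) in offset)   # False: w'xyz', the cube we expand, is a 1
```

A cube cover of the offset packs those 0 squares into a handful of cubes instead of listing minterms one by one.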

I'm going to build a model, for the cube I am trying to expand, of the things

I could do wrong that would, in this case, overlap a cube in the offset, and then,

I'm going to, you know, magically do a covering problem that picks the right

stuff. So, how do you actually build a cover of

the offset? And the answer is: VLSI CAD: Logic to Layout, programming

assignment number one. How about that?

The URP Complement algorithm that we're having you implement, for those of you

shooting for the mastery track in this class, using PCN cube

notation: this is exactly this step in Espresso, for real.

Really, really, really. This is exactly the step in Espresso.

This is why we made you do it. So URP Complement is going to give me the

grey cubes, and I've got them highlighted here, so we can see that they seem

bad and ominous. You know, clearly we don't want to step on

these. They seem very unpleasant.

So F prime, the complement, is x bar z plus wxz bar plus wy bar z bar.

And so, we get an x bar z cube, which is in

the middle two rows, the left and right columns; we get a wxz bar cube, which is in

the third column, the top and bottom rows; and we get a wy bar z bar cube, top row,

right two columns. So, those are the things we should not be

overlapping when we grow this cube. So, now we're going to build the Blocking

Matrix and this is a binary matrix with a very simple structure.

It's got one row for each variable in the cube you're trying to expand.

So what that means is that it's got a w bar, it's got an x, it's got a y and it's

got a z bar. And roughly speaking, the rows are going

to tell us which variables we keep and which variables we get rid of.

Because we know that expanding a cube means getting rid of some variables.

And it's got a column for each cube in the cover of the offset, which means it's

got an x bar z column. It's got a wxz bar column and it's got a

wy bar z bar column. Now, one of the things that I'll just note

is that, boy, you know, this can be a pretty big matrix.

You might not have that many rows, but you can have thousands, and thousands, and

thousands of columns, it's just the way it works, because you can have a lot of cubes

covering that offset. So, one row for every variable in the cube

you wish to expand, one column for every cube you don't want to cover, in the

offset. Put a 1 in the matrix if the polarity of the

row's variable is different than the polarity of that variable in the column's

cube. Otherwise, it's a 0; and if it's a don't

care, which just means the variable doesn't appear in that cube, it's also a 0.

And for the zeroes, I'm just going to draw a blank because it's just easier to see

where the ones are. So, let's go look at the first row, right.

So, the matrix has rows w bar, x, y, z bar and it has columns x bar z, wxz bar and wy

bar z bar. So, w bar has nothing to do with the x bar z

cube, and so, I'm not going to put anything there.

W bar disagrees with the wxz bar cube, so I'm going to put a 1 there because it's

got a w in it and the row is a w bar. Similarly, w bar disagrees with the wy bar

z bar cube. The x row disagrees only with the first

column x bar z. The y row disagrees only with the third

column wy bar z bar. And the z bar row disagrees only with

the first column x bar z. Alright, so that's it, that's the Blocking

Matrix. And the question is what do I do with this

thing to try to figure out how to grow my cube, well, let's go look.
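The construction rule above is easy to mechanize. A sketch, using the same assumed PCN encoding as before (the function name and data layout are mine, not Espresso's):

```python
T, C, DC = 0b01, 0b10, 0b11   # PCN fields (one common convention)

def blocking_matrix(cube, offset_cubes):
    """One row per literal of the cube being expanded, one column per
    off-set cube; entry is 1 when the literal's polarity conflicts
    with that off-set cube (their AND on that variable would be 00)."""
    return {i: [1 if (f & oc[i]) == 0 else 0 for oc in offset_cubes]
            for i, f in enumerate(cube) if f != DC}

# Variable order (w, x, y, z); the cube to expand is w'xyz'.
cube = (C, T, T, C)
offset_cover = [(DC, C, DC, T),   # x'z
                (T,  T, DC, C),   # wxz'
                (T, DC, C,  C)]   # wy'z'
B = blocking_matrix(cube, offset_cover)
for i, row in B.items():
    print("wxyz"[i], row)
# w [0, 1, 1]
# x [1, 0, 0]
# y [0, 0, 1]
# z [1, 0, 0]
```

The printed rows are exactly the matrix built by hand on the slide.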

So, what does a 1 in the Blocking Matrix row and column really mean?

So, let's just go focus on the first row. And I've just got that highlighted here

with some ones. What it really means is that if you remove

this variable, which is the variable associated with the row, which is w bar,

then you might end up overlapping any of the off-set cubes where you have a 1,

depending on what other variables you remove when you expand this cube.

And it's sort of a weird reminder, sort of a thing that says, please be careful,

these are all of the cubes you could hit. Why could you hit those cubes?

Because you're a w bar. You're on one half, if you will, of this

Karnaugh map, and these cubes with the ws, they're on the other side.

If when you choose to grow this cube, you get rid of the w, you might actually end

up on that side of the Karnaugh map and something bad might happen.

So, for example, if you just took this one and grew it sideways to occupy the middle

2 squares of the bottom row of the Karnaugh map, you would hit the wxz bar

cube. If similarly, you were to expand this cube

to occupy the middle two squares on the bottom row and on the top row of the

Karnaugh map, that's this sort of, you know, loop-up, loop-down

diagram in the Karnaugh map. You would also hit the wxz bar cube and

you would also hit the wy bar z bar cube. And if you got really lucky and you expand

this little tiny one in the Karnaugh map to 8 squares in the Karnaugh map, so that

you were the entire middle two columns of the Karnaugh map, you would also hit both

of those cubes, wxz bar and wy bar z bar. So, this is what the Blocking Matrix tells

you. It's sort of danger, danger.

If you get rid of this variable, be aware, these are the things you might step on.

And you cannot hit any of these things. They're the 0s in the function.

You're not allowed to cover those. Your cubes cannot circle those things.
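That danger check is itself just a PCN intersection test: a candidate expansion is legal exactly when it hits no cube in the offset cover. A small sketch under the same assumed encoding:

```python
T, C, DC = 0b01, 0b10, 0b11   # PCN fields (one common convention)

def overlaps(a, b):
    """Two PCN cubes overlap unless ANDing empties some field (00)."""
    return all(fa & fb for fa, fb in zip(a, b))

def legal_expansion(expanded, offset_cubes):
    """An expansion is legal iff it touches no cube of the off-set."""
    return not any(overlaps(expanded, oc) for oc in offset_cubes)

# Variable order (w, x, y, z); the off-set cover from the example.
offset_cover = [(DC, C, DC, T), (T, T, DC, C), (T, DC, C, C)]
print(legal_expansion((C, T, DC, DC), offset_cover))  # True:  w'x is safe
print(legal_expansion((DC, T, DC, C), offset_cover))  # False: xz' hits wxz'
```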

So, given that I can build the blocking matrix, I can find a row cover, right,

a set of rows that covers the columns with 1s.

What is the rule for how I use that to find a legal expansion?

And it's a very simple answer. Find the smallest set of rows that covers

each column. The product of those row variables is a

legal cube expansion. To say it backwards: if you keep just those

variables in the cover, the rows, you are guaranteed to avoid all of

the cubes of the offset, and those retained variables are the ones that you

actually get as your expanded cube. So, look at the blocking matrix.

What would you pick as a cover? I picked the first row and the second row,

that's a perfectly good solution. And so, the expanded cube would keep w bar

and would keep x and would get rid of the other rows that are not in the cover.

And w bar x is one good solution. It turns out there's another solution

because I could keep the first row and the fourth row of the matrix.

And so, I could keep w bar and also keep z bar.

So what do those solutions actually look like as expansions of the cube?

Well, the w bar z bar looks like this. This is w bar, z bar, it is the top and

bottom rows of the Karnaugh map left two columns.

So, that's completely legitimate expansion of the cube.

What does the w bar x expansion look like? Well, that's just the entire column, just

go straight up, capture all four of those ones, so that is the w bar x expansion.

Both of those are, are completely legitimate, completely legal solutions.
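The keep-the-covered-rows rule is a one-liner in the PCN sketch from before (again an illustration, not Espresso's code):

```python
T, C, DC = 0b01, 0b10, 0b11   # PCN fields (one common convention)

def expand_from_cover(cube, cover_rows):
    """Keep only the literals whose blocking-matrix rows are in the
    chosen cover; every other variable becomes don't-care (dropped),
    which is exactly what grows the cube."""
    return tuple(f if i in cover_rows else DC for i, f in enumerate(cube))

cube = (C, T, T, C)                      # w'xyz' over (w, x, y, z)
print(expand_from_cover(cube, {0, 1}))   # (2, 1, 3, 3), i.e. w'x
print(expand_from_cover(cube, {0, 3}))   # (2, 3, 3, 2), i.e. w'z'
```

The two covers found above, rows {w, x} and rows {w, z}, yield the two legal expansions w'x and w'z' from the slide.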

So, it turns out to be a really nice, simple sort of technology.

You know, build the Blocking Matrix. Find a cover, a minimum set of rows that

puts at least a single 1 in every single column.

The rows that define the cover, those variables, that's what your product

is. Now, the one thing that we'll note is that,

you know, the size of this cover may be really big, because, you know, you could

have, in a design with, you know, 50 or 60 variables in it,

you know, tens of thousands of cubes in the offset.

Nevertheless the heuristics for finding these covers were actually quite effective

at getting you, you know, a decent answer really quite, quite quickly.

So, even though the size of this cover may be really big, you can get a really,

really quick answer. So, it's a very, very effective

solution. So, ESPRESSO is really a collection of

just lovely and elegant heuristics. And the Reduce-Expand-Irredundant cycle is

the really big deal inside the ESPRESSO algorithms. Reduce ranks the cubes

in a clever order, and then you do PCN bit

hacking to actually reduce them one at a time.

The expand operation ranks the cubes in the opposite of that clever order, expands

them each individually as a pair of covering problems.

One part of the covering problem is just how big can you make the cube and the

other part of the covering problem that I will admit I did not show you says, you

know, not only would I like to expand this cube, I'd like to go cover some other

cubes, so they can go away and become redundant and that's just another covering

problem. Irredundant, amazingly enough, is not only a

clever URP algorithm that is a modification of the URP tautology that we

taught at the beginning of the class, it is also a clever covering problem.

So, ESPRESSO is a very, very pretty set of algorithms, and a bunch of other

interesting steps that I just don't have time to talk about.

17:04

There are some other things the method can do.

You can minimize several functions at the same time.

Each function will be reduced to a 2-level form, but some of the product terms will

be shared. So, you can, for example, have an AND gate

that's a product term, that fans out and is then used in, you know, multiple other

functions, each made from a single OR gate.

So, this means you make the AND product once in hardware, connected to many OR gate

outputs, and you can save some hardware. You can also handle conventional don't

cares. So, we showed you about input don't cares

that are really just pattern matches to make the truth table specification easy.

You could also just specify a row of the truth table as being a don't care.

That just means that the output is equal to a don't care.

It means the hardware can make a 1 or a 0 for this output, because you really don't

care. Typically, you do not believe that

input can happen. So, let the optimizer choose whether to

let the hardware make a 0 or a 1. And so, you can actually get better

hardware that way. How well does all this stuff work?

Fabulously well: it's extremely fast, extremely robust.

Where does the ESPRESSO algorithm spend its time?

So, this is just straight out of the Brayton book.

The complement operation, which you have to do so that you can tell what expansion

can't hit, takes about 14% of the time, and that

can be big if there's lots of cubes in the cover.

The expand takes almost 30%. It depends on the size of the complement.

Irredundant takes 12%. Essentials, which is a step we didn't talk

about: some prime implicants, some things in the cover, are

essential. They have to be there.

There's no other way that you can cover some 1s in the input space.

It turns out you can just go find them first and take them out of the problem,

make the problem smaller. Reduce is quick, it's a cube by cube

thing, it's 8%. And then, there's a whole bunch of other

optimizer optimizations that just try to get a little better answer.

How fast is this? Tremendously fast, you know, usually a few

Reduce, Expand, and Irredundant iterations. It often converges in just one or two

iterations. You can do thousands of cubes, you could

do tens of thousands of literals. You can do hundreds of variables and you

can do this in milliseconds really, you know, way, way less than a CPU

second. So, this is a really stupendously

effective set of heuristics for doing this problem.

So, that's our summary for 2-level logic synthesis.

The big idea is it uses heuristics to find good, but not necessarily perfect, solutions.

We're not going to get the best solution. Sorry.

You know? In those graphs that I showed,

We're not going to get the very best solution.

We're going to get a good solution, but not a perfect solution.

Minimal, not minimum, prime and irredundant.

You cannot take any of the cubes and make them bigger.

You cannot get rid of any of them. Minimal, prime, and irredundant.

That's what we're going to get. There's a very famous idea.

An iterative improvement idea: Reduce, Expand, Irredundant.

Reshape the cover iteratively, so that each pass through might get you a better

answer. It's all done with PCN cube lists, covering

problems and URP ideas which we've actually explained to you now.

So, you actually know enough to go read a lot of this stuff.

So, this is great, it's really good progress here.

But not every piece of logic is implemented in 2-level form.

In fact, most logic is not implemented in 2-level form.

So, we need another set of ideas and another set of methods.

And so, onward now to Multilevel Logic which is where most of the real action is.