This is an introductory astronomy survey class that covers our understanding of the physical universe and its major constituents, including planetary systems, stars, galaxies, black holes, quasars, larger structures, and the universe as a whole.


A course from Caltech

The Evolving Universe



From this lesson

Week 8

- S. George Djorgovski, Professor of Astronomy

So, that was just the qualitative description, but we'd like to attach some numbers to it, to be really rigorous about this.

And so, how do we do this?

How do we actually quantify the distribution of galaxies on large scales?

The first thing that was suggested was the so-called 2-point correlation function, which is the excess probability of finding one galaxy next to another galaxy at some distance r.

Now, if galaxies were all uniformly, randomly distributed, say like the molecules of gas in this room, then the density would be constant, and the probability would be exactly proportional to that uniform density.

But if galaxies are bunched together,

then you find there'll be extra probability on smaller scales.

So you compute the actual probability just by counting galaxies in spheres around each other, compare it to what a random distribution would give, and then subtract one from that ratio, because it is the excess probability.

So the correlation function for a uniformly, randomly distributed field of galaxies would be zero: no correlation.

That's what it means.
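As an illustration (not from the lecture), the counting procedure above can be sketched in a few lines of Python. This is the simplest possible estimator, data pairs over random pairs minus one; the function and variable names are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def two_point_correlation(data, box_size, bins, n_random=300):
    """Naive xi(r) estimate: DD/RR - 1, i.e. the excess pair probability
    relative to a uniform random distribution filling the same box."""

    def pair_fractions(points):
        # Brute-force all pairwise separations (fine for small samples).
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        d = d[np.triu_indices(len(points), k=1)]   # unique pairs only
        counts, _ = np.histogram(d, bins=bins)
        return counts / len(d)                     # fraction of pairs per bin

    dd = pair_fractions(np.asarray(data))
    rr = pair_fractions(rng.uniform(0, box_size, size=(n_random, 3)))
    # Excess probability: subtract one from the data/random ratio.
    return np.where(rr > 0, dd / np.where(rr > 0, rr, 1) - 1.0, np.nan)
```

For a clumpy toy sample, the smallest separation bin comes out strongly positive; for a uniform random sample it hovers around zero, just as described above.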

Instead, there is clustering, and it turns out to be well described by a power law with a slope of minus 1.8, which looks like this.

So this is the log of the correlation function, which is the excess probability of finding a galaxy at some separation, plotted versus the log of the separation in megaparsecs, as you can see. And it is an almost perfect, but not exactly perfect, power law.
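To put numbers on that power law: the lecture quotes a slope of about minus 1.8, and the literature commonly quotes a correlation length of roughly 5 h⁻¹ Mpc for galaxies (that value is an assumption here, not stated in the lecture). A one-line sketch:

```python
def xi_power_law(r, r0=5.0, gamma=1.8):
    """Power-law fit xi(r) = (r / r0) ** -gamma.

    r and r0 are in the same units (e.g. h^-1 Mpc). The default
    r0 = 5 is a commonly quoted literature value, assumed here
    purely for illustration.
    """
    return (r / r0) ** -gamma
```

By construction xi(r0) = 1, so at separations below r0 a galaxy pair is more than twice as likely as in a random distribution.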

At some point it has to cross zero and become negative, because if you have an excess of galaxies at small separations, then in order to preserve the mean density you have to have a deficit at some other separations.

And, indeed, at large separations it tends to be negative, and that's the voids.

So in some sense you can think of it as galaxies, starting from a uniform distribution, having evacuated the voids and piled up in the structures that we see: filaments, clusters, and what have you.

So we can then look at what galaxies of different kinds do in terms of clustering, and there are interesting phenomena there.

For example, it turns out that bright galaxies cluster more strongly than faint galaxies.

High-density galaxies cluster more strongly than low-density galaxies.

Early Hubble types, ellipticals, cluster more closely than spirals.

So these are all hints about how galaxies form.

So, in fact, we now understand fairly well why galaxies of different kinds, with different star formation histories, cluster differently.

In some sense, this is telling you that there is always a connection between the large-scale environment of galaxies and how they evolve.

Now, a more modern way to quantify galaxy clustering is the so-called power spectrum.

You may be familiar with Fourier transforms, which are a standard tool in many different fields of engineering and science. Basically, you decompose the density field of the universe into a whole lot of sine waves in 3D, with different amplitudes and random orientations and phases, and you can then ask the question:

How much mass is clumping on what spatial scale?

Take the Fourier spectrum of a sound: the bass is at the very low notes, so there will be a lot of power at low frequencies, and the treble is the power at high frequencies, and so on.

So we can do the same thing for anything, not just sound, but, you know, the density of galaxies in the universe.

And you can compute that.

Now, this has the advantage of being easily compared to theoretical models, but it is mathematically equivalent to the 2-point correlation function.

So they are a so-called Fourier pair of functions.

You can use one or the other; nowadays this is the more popular way of doing it.

And just to illustrate it, here's some fake distributions.

Each corresponds to an almost delta-function power spectrum, with a narrow distribution of power at a particular spatial scale, and the location of the spike in wavelength space tells you what structure you are going to see.

If it's at a high spatial frequency, there are going to be a lot of small blobs. If it's at a low spatial frequency, there will be a few large blobs.
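You can reproduce those fake distributions directly: filter white noise (flat power, random phases) through a narrow band in Fourier space. Here is a toy 2D version, with all names invented for the sketch:

```python
import numpy as np

def field_from_spike(n, k_center, k_width, seed=0):
    """2D Gaussian random field whose power spectrum is a narrow spike:
    white noise filtered by a Gaussian annulus around |k| = k_center
    (wavenumbers in cycles per box side)."""
    rng = np.random.default_rng(seed)
    noise_k = np.fft.fft2(rng.normal(size=(n, n)))   # flat power, random phases
    k = np.fft.fftfreq(n) * n                        # integer wavenumbers
    k_mag = np.hypot(*np.meshgrid(k, k, indexing="ij"))
    band = np.exp(-0.5 * ((k_mag - k_center) / k_width) ** 2)
    # The band filter is symmetric, so the inverse transform stays real.
    return np.fft.ifft2(noise_k * band).real
```

A spike at high spatial frequency produces many small blobs; moving the spike to low frequency produces a few large ones, exactly the behavior described above.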

Now, of course, in real life there is a mixture of all different scales.

And when you actually go ahead and measure this, you get something like this.

Now, this is a sort of state-of-the-art measurement over many orders of magnitude.

And it is plotted against spatial frequency, one over wavelength.

So the large structures are on the left and the small structures on the right, and it's a log-log plot.

So you can see that at very large scales there is more power than at smaller scales. There are a few really, really big blobs and more smaller ones, but then it turns over and cuts off.

And somehow that extrapolation is missing.

And now we understand why that happens: it is due to cold dark matter.

But this is the kind of measurement that is now compared to simulations and observations, which then try to constrain different models of structure formation.

So, all of this has been telling you how much mass is clustered on what spatial scale, but it says exactly nothing about how that mass is arranged in space on large scales.

And that, in Fourier speak, is the phase distribution, which is lost in the power spectrum.

So you've seen these maps,

now the real universe looks just like these simulations.

Sheets, filaments, connections.

And, on the right,

there is a nice illustration of what it means to ignore the phase distribution.

So the image on the top is a so-called Voronoi foam; it's just a fake density distribution.

It looks, you know, spongy.

So we take its Fourier transform and compute its power spectrum. Then we take the phases of that Fourier transform, scramble them randomly, transform back, and get the image on the bottom.

It looks just like a total mess.

So those two density fields have identical power spectra by construction.

One of them has ordered phases, the other one has random phases.
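The scrambling experiment itself is easy to reproduce. A sketch with invented names; borrowing the phases of a real white-noise field is one simple trick (my choice, not the lecturer's) to keep the inverse transform exactly real while preserving the amplitudes:

```python
import numpy as np

def scramble_phases(field, seed=0):
    """Keep the Fourier amplitudes (hence the power spectrum) of a 2D
    field, but replace its phases with random ones."""
    rng = np.random.default_rng(seed)
    amplitude = np.abs(np.fft.fft2(field))
    # Phases of a real white-noise field are random but obey the
    # Hermitian symmetry needed for a real inverse transform.
    phase = np.angle(np.fft.fft2(rng.normal(size=field.shape)))
    return np.fft.ifft2(amplitude * np.exp(1j * phase)).real
```

By construction the input and output have identical power spectra, yet any ordered structure (sheets, filaments, lattices) is destroyed, which is the point of the illustration.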

And believe it or not, even though so many smart people have worked on this for so many years, decades, we still don't have a reasonably good way of quantifying the coherent phase distribution in the universe.

And we've seen that this is a pretty important part of the total description.

So this is a really good task for you guys.

If you can figure out how to do something as simple and elegant as the 2-point correlation function or the power spectrum, but for the phase distribution, it would tell you about the topology of large-scale structure.

Why this sponge as opposed to a whole bunch of little blobs?

That'll be a really good thing to do.