Target: 1 billion stars
October 29, 2013 4:38 PM

The European Space Agency will be launching the Gaia Mission on December 12. Its mission? To get high-res pics of 100 billion stars. But that is only the beginning of the coolness of the Gaia mission. It will have two telescopes projecting onto a single camera's CCD. The heat shield that protects the instruments will also generate power for the telescope. And it's destination is L2. But is that an American billion or a European billion? What? I don't know that! ...ahhh!
posted by BillW (30 comments total) 23 users marked this as a favorite
 
THE L2 ORBIT

Gaia will be placed in an orbit around the Sun, at the second Lagrange point L2, which is named after its discoverer, Joseph Louis Lagrange (1736-1813). For the Sun-Earth system, the L2 point lies at a distance of 1.5 million kilometres from the Earth in the anti-Sun direction and co-rotates with the Earth in its 1-year orbit around the Sun.

One of the principal advantages of an L2 orbit is that it offers uninterrupted observations, since the Earth, Moon and Sun all lie within the orbit of the L2 point. From L2 the entire celestial sphere can be observed during the course of one year. To ensure Gaia stays at L2, the spacecraft must perform small manoeuvres every month.

Gaia will not be the only ESA mission going to L2. Herschel and Planck have operated from there, and current plans call for JWST to be placed there, too.

posted by KokuRyu at 5:03 PM on October 29, 2013


Always loved seeing pics of stars. If they can manage to take photos of that many stars, it will be awesome.
posted by viktorzi80 at 5:07 PM on October 29, 2013


And sorry, not to be blue-dantic, but in the first paragraph it says it's surveying 1%, which comes to just 1 billion. Still millions and millions (well, a good thousand million), which is a lot of pictures of stars. And thanks for the post, there should be some great images and great science from this!!

Yeah yeah, just a measly billion.
posted by sammyo at 5:08 PM on October 29, 2013


HA!
I have not worked on Gaia, I must admit, but I did small things for Herschel/Planck and was very heavily involved in JWST. You think I can claim a little part of L2 as my own?
/bragging off, it was years ago
posted by MessageInABottle at 5:08 PM on October 29, 2013 [2 favorites]


Oh, and if you want a detail to really do a number on your visualization abilities, it says

Orbit: Lissajous-type orbit around L2

So if you don't remember your Spirograph drawings, first google Lissajous pictures. How can it orbit in a wavy, swirly way?
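
(Or, if you'd rather plot one than google it, here's a minimal sketch in Python. The amplitudes and frequencies are completely made up for illustration, not Gaia's actual orbital parameters:)

    # Sketch of a Lissajous-type figure: two perpendicular oscillations
    # with incommensurate frequencies, so the path never quite closes.
    # Amplitudes and frequencies are arbitrary, NOT Gaia's real orbit.
    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 40 * np.pi, 5000)
    x = np.cos(1.0 * t)        # oscillation along one axis
    y = np.sin(1.618 * t)      # slightly different frequency along the other

    plt.plot(x, y, linewidth=0.5)
    plt.gca().set_aspect("equal")
    plt.title("Lissajous-type path (illustrative only)")
    plt.show()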
posted by sammyo at 5:29 PM on October 29, 2013 [1 favorite]


It's not just high-res pics of stars, though. The key thing that Gaia will do is astrometry, where they measure the positions of a billion stars. And that stupidly simple-sounding thing turns out to be pretty profound...

When someone talks about the position of any astronomical object, the first thing you have to ask is, "with respect to what?" Or equivalently, in what coordinate system? The right answer for distant objects, as defined by the International Astronomical Union (hey, we have a union! But it's not that kind of union) is the International Celestial Reference Frame, the ICRF. Basically, this defines a coordinate grid on the sky by specifying where some reference sources are located.

The current best definition of the ICRF comes from Very Long Baseline Interferometry, where we use radio telescopes to survey distant quasars. Because of their enormous distance, we can treat these quasars as effectively static (at our resolutions, in the tens of micro-arcseconds, although nothing is really static, of course. Eppur si muove...) and if we measure the positions of enough of these, we can define a reference grid, the radio reference frame. By definition, this reference frame is the ICRF today, and everything else has to find a way to get on this grid. Working to cross-match an optical image and an X-ray source? First you have to get them both onto the ICRF. This matters - a lot - and it's a hard job to do right.

Gaia, though, will perform a neat trick. It will measure the positions of millions of stars as well as some galaxies, all of which are much closer than the distant quasars. So they are all moving targets. But then, by doing this position measurement over time, they will simultaneously solve for the positions and motions of each of these objects in a self-consistent frame. The measurements go in, in one gigantic matrix, and out comes a set of (position, proper motion) for each object. And almost as a by-product, they get a reference frame definition. We expect that once Gaia has done their analysis, their reference frame definition will replace the radio definition, and become the ICRF.
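
(To make that less abstract, here's a toy version for one coordinate of one star, with invented numbers: model the apparent position as initial position plus proper motion times time, and solve by linear least squares. The real Gaia solution couples all the stars, parallaxes, and instrument terms in one enormous sparse system, but the "measurements in, (position, proper motion) out" structure is the same:)

    # Toy astrometric solution: position(t) = x0 + mu*t for one star,
    # solved by linear least squares. All numbers are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 40)    # observation epochs in years
    x0_true, mu_true = 100.0, 2.5    # position (mas), proper motion (mas/yr)
    obs = x0_true + mu_true * t + rng.normal(0, 0.1, t.size)  # noisy data

    A = np.column_stack([np.ones_like(t), t])   # design matrix [1, t]
    (x0_fit, mu_fit), *_ = np.linalg.lstsq(A, obs, rcond=None)
    print(f"position = {x0_fit:.2f} mas, proper motion = {mu_fit:.2f} mas/yr")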

Oh, and almost as an aside, they will get parallaxes to these stars, hence accurate distances, hence (combined with brightness and color information) the luminosities and temperatures of these stars. They'll have stellar motions, tracing the Galactic gravitational potential well and the history of ancient mergers that contributed to our Galaxy's growth. They'll be able to see galaxies moving in the Local Group. And so on and so forth. All from measuring the positions of stars.
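
(The parallax-to-distance step, for the record, is one line of arithmetic: distance in parsecs is one over the parallax in arcseconds. A quick sketch with invented numbers:)

    # Distance and absolute magnitude from a parallax (invented numbers).
    import math

    parallax_mas = 10.0                  # hypothetical parallax, milliarcsec
    d_pc = 1000.0 / parallax_mas         # d[pc] = 1 / parallax[arcsec]
    m_apparent = 8.0                     # hypothetical apparent magnitude
    M_absolute = m_apparent - 5 * math.log10(d_pc / 10.0)  # distance modulus
    print(f"distance = {d_pc:.0f} pc, absolute magnitude = {M_absolute:.2f}")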

It's a pretty cool mission.
posted by RedOrGreen at 5:47 PM on October 29, 2013 [109 favorites]


This is possibly the most technical discussion I've ever gotten into on metafilter, but my understanding was that you still need the quasars to put the GAIA catalog on the ICRF, because you need to "inertialize" (not initialize!) the whole system to define your "zero" three-space velocity. Same was true with Hipparcos/USNO-B, apparently. I'm pretty fuzzy on the details, despite (or rather because of) having it explained in great detail by someone who was very very interested in inertial vs. non-inertial (and I was not ...).

Also Gaia is going to basically destroy my field of astronomy and it's going to be completely crazy. It's really hard to know what it's going to be like without having the data to play with, so it's actually kind of scary at the moment.
posted by kiltedtaco at 6:21 PM on October 29, 2013 [4 favorites]


Gaia Launch Postponement Update:
23 October 2013 Yesterday, the decision was taken to postpone the launch of ESA’s Gaia mission after a technical issue was identified in another satellite already in orbit.[...] Gaia is scheduled for launch on 20 December.
(The ESA could improve the presentation of such information on their site a bit as well.)
posted by yz at 6:34 PM on October 29, 2013


"Inertialize" - very cool kiltedtaco! And an excellent explanation from RedOrGreen and you! Many thanks.

At the risk of sounding stupid: do the distance of the quasars and the expansion of the Universe imply that, for things more distant than our own galaxy, the reference system begins to slip because you are trying to map things onto a dynamic volume? Think of where you are relative to a reference point marked on the inside of a balloon as the balloon expands.
posted by BillW at 6:39 PM on October 29, 2013



Oh, let me emphasize something: We're going to get positions and 3-space velocities of tons and tons of stars. That's a crazy huge deal. That's the whole point of the mission. Like, most distances to stars are pretty crap. We make do, but they're a big uncertainty. If they're really close by, Hipparcos already got them. 3-space motions are just not even a thought beyond the nearby vicinity of the sun. Gaia is going to get distances and real velocities (not just radial velocities) for a big fraction of the galaxy, not just a couple thousand disk stars, which is the best we can do at this point. Understanding the motion of stars in the galaxy is a damn hard piece of the field of astronomy (the consensus view is: most stars orbit in a disk, but some do not. Beyond that there's tons of disagreement), and this is going to unleash a deluge of data onto the problem. That's basically what I mean when I say it's going to destroy the field. We're going to look back at our "pre-Gaia" understanding and laugh. It's terrifying to be in a field like that, because you don't know if what you're doing today is going to be thrown out in two years.
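
(For anyone wondering what "real velocities" means mechanically: proper motion times distance gives the tangential velocity, with the constant 4.74 converting arcsec/yr times parsecs into km/s; add the radial velocity from spectroscopy and you have the full 3-space motion. Sketch with invented numbers:)

    # Full space velocity from proper motion + parallax + radial velocity.
    # v_tan [km/s] = 4.74 * proper_motion [arcsec/yr] * distance [pc]
    # All input values here are invented for illustration.
    mu_arcsec_yr = 0.1        # hypothetical proper motion
    parallax_arcsec = 0.01    # hypothetical parallax -> 100 pc
    v_radial = 20.0           # hypothetical radial velocity, km/s

    d_pc = 1.0 / parallax_arcsec
    v_tan = 4.74 * mu_arcsec_yr * d_pc
    v_total = (v_tan**2 + v_radial**2) ** 0.5
    print(f"v_tan = {v_tan:.1f} km/s, total speed = {v_total:.1f} km/s")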
posted by kiltedtaco at 6:39 PM on October 29, 2013 [8 favorites]


kiltedtaco - Yeah, but think of the opportunities too!
posted by BillW at 6:42 PM on October 29, 2013 [1 favorite]


ESA's rocket science blog has some interesting posts about Gaia's operations and the history of measuring star locations.

From the blog:

Gaia works in the astrometric mode just like Hipparcos: two telescopes (for Gaia, separated by the basic angle of 106.5 degrees) will image the sky and the star positions in the two fields of view are measured relative to each other. In order to scan the full sky, Gaia rotates about its principal axis once every six hours with the spin axis tilted at 45 degrees relative to the Sun.
...
The total data volume expected from Gaia will be a staggering 1 Petabyte.

(That's a million gigabytes.)
posted by jjj606 at 6:45 PM on October 29, 2013


Hmm, a petabyte? Wikipedia has this:
The overall data volume that will be retrieved from the spacecraft during the 5-year mission assuming a nominal compressed data rate of 1 Mbit/s is approximately 60 TB, amounting to about 200 TB of usable uncompressed data on the ground.
Not that 1/5th of a petabyte is anything to sneeze at!
posted by jepler at 6:51 PM on October 29, 2013 [1 favorite]


And it's destination is L2.

Flagging as offensive.

no, not really. This is fascinating.
posted by kafziel at 7:14 PM on October 29, 2013


The measurements go in, in one gigantic matrix, and out comes a set of (position, proper motion) for each object. And almost as a by-product, they get a reference frame definition.

So if I'm understanding correctly, you're going to take bearing and received frequency, observe them over time, and determine target range, course and speed? I bet some submarine officers would be interested in knowing if you know a better way of doing that than they do.

Except you have to do it in 3 dimensions, with a target that can have a curved path, and without the ability to maneuver own ship to drive bearing rate. Although, to be fair, you don't have to avoid collision or counterdetection, and your target doesn't maneuver to intentionally make your job harder.
posted by ctmf at 7:52 PM on October 29, 2013


I love this thing. I'm not sure which is the most impressive stat. The imager -- 106 CCDs of 4500 x 1966 pixels each -- or the thrusters to keep the thing pointed in the right direction, which put out 1.5 micrograms of nitrogen a second.

There's no way in hell they can output full fields of that massive CCD array, and they're not going to even try. Most of the image, after all, will be dark, since they're only imaging down to magnitude 20. So, they'll only send the few dozen pixels that make up each object and the area around it, and the rest of the "frame" will be assumed to be empty.
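
(Back-of-the-envelope on why windowing is non-negotiable; the CCD counts come from the spec above, but bytes per pixel, star counts, and window size below are my guesses:)

    # Rough arithmetic: full-frame readout vs. windowed readout.
    # CCD dimensions from above; the rest are assumptions.
    n_ccds = 106
    pixels_per_ccd = 4500 * 1966
    bytes_per_pixel = 2                     # assumed 16-bit samples

    full_frame = n_ccds * pixels_per_ccd * bytes_per_pixel
    print(f"one full focal-plane readout: ~{full_frame / 1e9:.1f} GB")

    stars_in_view = 50_000                  # made-up instantaneous star count
    window_pixels = 100                     # "a few dozen" pixels plus margin
    windowed = stars_in_view * window_pixels * bytes_per_pixel
    print(f"windowed readout: ~{windowed / 1e6:.0f} MB")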

Finally, the new launch date is now 20 Dec, not 12 Dec. The launch window is constrained by the moon -- the easiest way to get to L2 is to swing by the moon. You launch into an elliptical orbit with an apogee about 2/3 of the way to lunar orbit, do several orbits changing the phase from your launch orbit to the injection orbit, then when the moon's coming into the right place, you boost, swing by the moon, and pick up the extra velocity you need to get to L2.

They couldn't swap out the transponders and get the craft back on the launcher before this window closed, which is why they had to delay. The next window runs 17-Dec-2013 to 5-Jan-2014, and they've decided to go fairly early in that window.
posted by eriko at 7:55 PM on October 29, 2013 [5 favorites]


kiltedtaco: My understanding was that you still need the quasars to put the GAIA catalog on the ICRF, because you need to "inertialize" (not initialize!) the whole system to define your "zero" three-space velocity.

Yes, you're right, of course - outside the Earth's rotation, the choice of North (Declination or Latitude) is arbitrary, and the choice of Right Ascension (Longitude) is always arbitrary, so there needs to be some bootstrapping off of the existing definition. But if I understood the idea correctly, they do solve for a self-consistent grid.

Also: Understanding the motion of stars in the galaxy is a damn hard piece of the field of astronomy [...] and this is going to unleash a deluge of data onto the problem.[...] We're going to look back at our "pre-Gaia" understanding and laugh. It's terrifying to be in a field like that.

But as BillW said, think of the opportunity! This is when you write those bold "stick my neck out" prediction papers. The Galactic stellar stream people I've heard from seem both excited and twitchy about the prospect.

ctmf: you're going to take bearing and received frequency, observe them over time, and determine target range, course and speed?

Well, phrased that way the problems do have some similarity, don't they? But they're really making a very basic measurement - the (vector) offset between pairs of stars, over and over and over again, over as many pairs of widely separated stars as possible. (Thus the pair of telescopes 106.5 degrees apart, scanning the sky, with wide fields of view and 106 CCDs tiling the focal plane.) And then you solve for {x,y,z}(t) for each star that was observed, as well as (I assume) things like CCD distortions and telescope motion. Conceptually very simple - I bet I could work the problem on paper for 4 stars. It's when you get to 400 million stars that things get ... interesting.
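
(In fact, here's the 4-star version, done in numpy rather than on paper, in one dimension with made-up positions. Feed in only the pairwise offsets and least squares recovers the configuration, but only up to a common shift, which is exactly the zero-point freedom that the quasars get used to pin down, per kiltedtaco's point above:)

    # Solve 4 star positions (1-D toy) from pairwise offsets alone.
    # The solution is degenerate under a common shift; fixing that
    # freedom is the "inertialize with quasars" step discussed above.
    import numpy as np
    from itertools import combinations

    x_true = np.array([0.0, 1.3, 2.7, 5.1])    # made-up positions
    pairs = list(combinations(range(4), 2))

    A = np.zeros((len(pairs), 4))
    b = np.zeros(len(pairs))
    for row, (i, j) in enumerate(pairs):
        A[row, i], A[row, j] = -1.0, 1.0       # measured offset: x_j - x_i
        b[row] = x_true[j] - x_true[i]

    x_fit, *_ = np.linalg.lstsq(A, b, rcond=None)  # minimum-norm solution
    print(x_fit - x_true)    # a constant offset: the unfixed zero point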
posted by RedOrGreen at 8:21 PM on October 29, 2013 [1 favorite]


I really love that these kinds of projects are going on all the time nowadays. They don't get enough coverage. Most people are oblivious to these incredible undertakings.
posted by Jacob Knitig at 8:53 PM on October 29, 2013


I don't think I'll ever understand how there are people alive who don't find this kind of mission just absolutely fascinating. This is just tremendous, and I'm excited to learn more.

Quick question for those in the know: how does a mission like this actually transmit data? Are there reserved frequencies within the normal range of broadcast for space science? Are those frequencies even suitable for the distance/energy/atmospheric challenges involved?
posted by graphnerd at 8:57 PM on October 29, 2013


ESA rocks star mapping.
posted by Ironmouth at 9:56 PM on October 29, 2013


Satellites use reserved radio frequencies in the microwave spectrum known as X-band, Ka-band, and the like. The receivers are dishes in the Deep Space Network or ESA's own similar setup, with 35-metre antennas placed around the world in Kiruna, Sweden; New Norcia, Australia; etc.
posted by Catfry at 1:43 AM on October 30, 2013


Am I correct in thinking that these measurements will tell us something about the distribution of dark matter in our vicinity?
posted by Joe in Australia at 1:57 AM on October 30, 2013


Joe, yes it will. Right now, we know that the Milky Way as an aggregate object has a lot of dark matter. This is derived from the bulk motion of the stars in the Galaxy, and we can infer from that that the local dark matter density is approximately 0.3 GeV/cm^3 (which works out to be about 1 dark matter particle per coffee cup, assuming that dark matter is made up of something like a supersymmetric neutralino). However, that number comes with fairly large error bars. It is even conceivable that right where the Sun and Earth are in the Milky Way, there's very little dark matter, due to some unlikely downward fluctuation in the dark matter distribution. Since we're looking for dark matter here on Earth through direct detection experiments, knowing the local density is very important (if we see something, we need the local density to correctly normalize the rate of events we might see here to observations we make about dark matter elsewhere in the Universe).
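
(The coffee-cup figure is just a unit conversion; quick check, assuming a roughly 300 cm^3 cup and a roughly 100 GeV neutralino-like particle, both of which are assumptions:)

    # Sanity check on "about 1 dark matter particle per coffee cup".
    # Cup volume and particle mass are assumed values for illustration.
    rho_gev_cm3 = 0.3       # local DM density from above, GeV/cm^3
    cup_cm3 = 300.0         # assumed coffee cup volume
    m_gev = 100.0           # assumed neutralino-like particle mass, GeV

    n = rho_gev_cm3 * cup_cm3 / m_gev
    print(f"~{n:.1f} dark matter particles per cup")   # ~0.9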

Using GAIA, we'll be able to measure the local motion of many stars very accurately. Using the ones closest to us, we can figure out what forces they are experiencing, and thus the gravitational potential they are moving in. That potential will be the sum of the gravitational effects of visible matter (stars and gas), and the gravitational effects of dark matter. Locally, the visible matter is more important, but after we subtract that off, we'll get a better estimate of the dark matter component, which will translate backwards to a better estimate of the amount of dark matter in the volume of space near Earth. Near here still being many hundreds of parsecs, but hey, that's small on Galactic scales.
posted by physicsmatt at 5:00 AM on October 30, 2013 [8 favorites]


Right now, we know that the Milky Way as an aggregate object has a lot of dark matter.

Why? Because the stars orbit at the wrong speed!

Newtonian physics says that, because of the visible mass distribution of the galaxy, stars very close to the center will orbit the galactic center slowly, then speed up as they get a bit further away, then drop off with distance once there's more mass inside the orbit than out. So stars in the outer reaches of the arms should orbit the galactic center at a much slower velocity than ones up close.

Problem: They don't. The stars very close orbit slowly, but instead of reaching a peak and decaying, the orbital velocities around the galactic center reach that peak and flatten out. The stars in outer and middle orbits have basically the same velocities as the stars in inner orbits, only the very innermost ones have lower velocities.

Now, if the whole galaxy were embedded in a large halo of matter that we just can't see, but that happened to be a very large fraction of the total mass of the galaxy, we'd observe just that sort of orbital velocity distribution. We'd also see stronger gravitational lensing than we'd expect from the visible matter, and we'd see different velocity dispersions between members of galactic clusters, because each member that had a large dark matter halo would have a more dramatic gravitational effect on the others.
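
(You can draw the two predictions yourself; this cartoon assumes a centrally concentrated visible mass versus adding a halo whose enclosed mass grows linearly with radius, which is what flattens the curve. All values are arbitrary toy units:)

    # Cartoon rotation curves: visible mass only (Keplerian falloff)
    # vs. visible mass plus a halo with enclosed mass M(<r) ~ r (flat).
    # Masses and scales are arbitrary illustration values.
    import numpy as np
    import matplotlib.pyplot as plt

    G, M_visible = 1.0, 1.0
    r = np.linspace(0.5, 20, 200)

    v_visible = np.sqrt(G * M_visible / r)            # falls as r**-0.5
    v_halo = np.sqrt(G * (M_visible + 0.3 * r) / r)   # flattens at large r

    plt.plot(r, v_visible, label="visible matter only")
    plt.plot(r, v_halo, label="with dark matter halo")
    plt.xlabel("radius (arbitrary units)")
    plt.ylabel("orbital speed")
    plt.legend()
    plt.show()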

And, well, that's *exactly* what we see. In one case, the Bullet Cluster, something very odd seems to have happened, and the boring, normal, baryonic matter is on one side, which we can see directly, and the dark matter seems to be on the other side, because that's where the gravitational lensing effects are happening.

There have been other proposals to explain why galaxies are the way they are, but none of them have stood up as well as the "we just can't see 95% of the matter that's actually part of the galaxy."

which works out to be about 1 dark matter particle per coffee cup, assuming that dark matter is made up of something like a supersymmetric neutralino

And 3 dark matter particles per coffee cup if you use French roast.
posted by eriko at 5:44 AM on October 30, 2013 [10 favorites]


eriko, one correction. Normal (baryonic) matter is known to be about 5% of the total energy budget of the Universe, meaning 95% is made up of unknown new stuff. However, of the total energy of the Universe, 70% or so is "dark energy," which is not likely to be a particle in nature. It also has essentially zero impact on the structure and energy density of galaxies like our own. The remaining 25% or so is dark matter, which, like baryonic matter, clumps up through gravitational effects. There are also some negligible other components: photons and neutrinos, but since I'm rounding all these numbers anyway, we can ignore them.

However, these are the numbers for the Universe as a whole. Pretty obviously, we live in a special place in the Universe, in that we're living on a planet made of baryons, orbiting a star made of baryons, in a galaxy made of stars (made of baryons). Here, the local baryon density is WAY higher than the Universal average, and, while the dark matter density is also way higher than the Universal average (after all, dark matter falls into gravitational potential wells), it turns out that locally, baryons win. We estimate that, near us, the mean density of baryons is about 1 proton/cm^3, or 1 GeV/cm^3, three times higher than our estimate for the dark matter. Again, this is because we're standing in a special place in the Universe: right near not just one star, but many stars. That means we must be near an "unusual" overdensity of baryons.

Interestingly, the Milky Way has a dark matter halo mass of 1-2 x 10^12 solar masses* and a baryonic mass of about 10% of that, 10^11 M_sol. So over the whole Milky Way, there are more dark matter particles per baryon than you'd find on average across the whole Universe. However, the baryons can collapse in on themselves, due to their interactions with a massless force carrier (the photon). This means they can radiate energy away as they collide with each other, and after losing energy via photon emission, fall deeper into potential wells. Dark matter forms "halos," big puffy objects that extend over much larger distances than the visible galaxy. This is why the visible Galaxy is a thin disk with a big bulge at the middle, but if you could see the dark matter (which we can reconstruct using gravitational lensing and stellar motion measurements, as eriko said), you'd see it as a puffy cloud extending many times further out than the piddling little disk. Dark matter, whatever it is, cannot have a light force carrier that acts just like photons do: allowing dark matter to collapse in on itself by radiating excess energy away, so there's a limit to how much a ball of dark matter can collapse under its own gravity. As we live inside the baryonic disk of the Milky Way (though perhaps in one of the less fashionable outer arms), the baryons are more concentrated here than the dark matter, so we locally have a lot more baryonic stuff around than we'd expect to find in most places in the Universe.

*It turns out there's considerable disagreement on the Milky Way dark matter mass, and nailing this down more accurately would go a long way to addressing some of the suggested issues with dark matter on small scales. Small here being "the size of the inner portion of the Milky Way, or the size of a dwarf galaxy orbiting the Milky Way." So 100-1000 parsecs. Small is relative. Most of these disagreements come from simulations of galaxy formation, which, due to computational issues, usually simulate only the dark matter. All the regions where there are disagreements with observation are also regions where baryons dominate, so perhaps we haven't found a problem with cold dark matter, we've just found that you have to include baryons in your simulations. Which people are starting to do now (though it takes much more computing power, as baryons are complicated, yo).
posted by physicsmatt at 6:51 AM on October 30, 2013 [9 favorites]


I got curious: here's some further info on the L2 orbit.
posted by Devils Rancher at 8:20 AM on October 30, 2013 [1 favorite]


RedOrGreen: "Oh, and almost as an aside, they will get parallaxes to these stars, hence accurate distances"

Hey, my great-great-great-great-grand-paw Friedrich Wilhelm Bessel was the first to calculate the distance to a star using parallax, back in 1838, measuring the distance of 61 Cygni to be 10.3 ly.
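
(His arithmetic still fits in two lines, assuming his published parallax of roughly 0.314 arcsec:)

    # Bessel's 61 Cygni distance, redone: d[pc] = 1 / parallax[arcsec].
    # The parallax here is his historical figure of roughly 0.314".
    parallax_arcsec = 0.314
    d_ly = (1.0 / parallax_arcsec) * 3.2616    # parsecs to light-years
    print(f"{d_ly:.1f} light-years")           # ~10.4, close to the quoted 10.3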
posted by Hairy Lobster at 5:26 PM on October 30, 2013 [11 favorites]


Devils Rancher: I love how they mention that "the main operational influence to consider is the torque created by sunlight on the sunshield."
posted by Joe in Australia at 7:04 PM on October 30, 2013


Sidebarred? Holy %$%$#%#$! Achievement unlocked!
posted by RedOrGreen at 1:09 PM on October 31, 2013 [4 favorites]


and your target doesn't maneuver to intentionally make your job harder.

Wouldn't that be interesting if it did!
posted by zippy at 8:59 PM on November 5, 2013 [1 favorite]




This thread has been archived and is closed to new comments