
The Drake equation (also sometimes called the "Green Bank equation", the "Green Bank Formula," or often erroneously labeled the "Sagan equation") is a famous result in the speculative fields of exobiology and the search for extraterrestrial intelligence (SETI).
This equation was devised by Dr. Frank Drake (now Professor Emeritus of Astronomy and Astrophysics at the University of California, Santa Cruz) in 1960, in an attempt to estimate the number of extraterrestrial civilizations in our galaxy with which we might come in contact. The main purpose of the equation is to allow scientists to quantify the uncertainty of the factors which determine the number of such extraterrestrial civilizations.

History
Frank Drake formulated his equation in 1960 in preparation for the Green Bank meeting. This meeting, held at Green Bank, West Virginia, established SETI as a scientific discipline. The historic meeting, whose participants became known as the "Order of the Dolphin," brought together leading astronomers, physicists, biologists, social scientists, and industry leaders to discuss the possibility of detecting intelligent life among the stars.
The Green Bank meeting was also remarkable because it featured the first use of the famous formula that came to be known as the "Drake Equation". This explains why the equation is also known by its other names with the "Green Bank" designation. When Drake came up with this formula, he had no notion that it would become a staple of SETI theorists for decades to come. In fact, he thought of it as an organizational tool — a way to order the different issues to be discussed at the Green Bank conference, and bring them to bear on the central question of intelligent life in the universe. Carl Sagan, a great proponent of SETI, utilized and quoted the formula often and as a result the formula is often mislabeled as "The Sagan Equation". The Green Bank Meeting was commemorated by a plaque.
The Drake equation is closely related to the Fermi paradox in that Drake suggested that a large number of extraterrestrial civilizations would form, but that the lack of evidence of such civilizations (the Fermi paradox) suggests that technological civilizations tend to destroy themselves rather quickly. This theory often stimulates an interest in identifying and publicizing ways in which humanity could destroy itself, countered with hopes of avoiding such destruction and eventually becoming a space-faring species. A similar argument is the Great Filter,[1] which notes that since there are no observed extraterrestrial civilizations despite the vast number of stars, some step in the process must be acting as a filter to reduce the final value. According to this view, either it is very hard for intelligent life to arise, or the lifetime of such civilizations must be depressingly short.
The grand question of the number of communicating civilizations in our galaxy could, in Drake's view, be reduced to seven smaller issues with his equation.


The equation

The Drake equation states that:
N = R^{\ast} \times f_p \times n_e \times f_{\ell} \times f_i \times f_c \times L
where:
N is the number of civilizations in our galaxy with which communication might be possible;
and
R* is the average rate of star formation in our galaxy
fp is the fraction of those stars that have planets
ne is the average number of planets that can potentially support life per star that has planets
fl is the fraction of the above that actually go on to develop life at some point
fi is the fraction of the above that actually go on to develop intelligent life
fc is the fraction of civilizations that develop a technology that releases detectable signs of their existence into space
L is the length of time such civilizations release detectable signals into space.
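As a rough illustration, the product can be evaluated with a short script. The sketch below (in Python) is not part of any standard tool; the function and parameter names are made up here simply to mirror the definitions above.

    def drake_equation(R_star, f_p, n_e, f_l, f_i, f_c, L):
        # N = R* x fp x ne x fl x fi x fc x L
        # R_star is in stars formed per year, L is in years; the remaining
        # factors are dimensionless fractions (or, for n_e, planets per star).
        return R_star * f_p * n_e * f_l * f_i * f_c * L

    # Drake's original 1961 values (see "Historical estimates of the parameters" below):
    print(drake_equation(10, 0.5, 2, 1, 0.01, 0.01, 10_000))   # 10.0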

Alternative expression

The number of stars in the galaxy now, N*, is related to the star formation rate R* by
N^{\ast} = \int_0^{T_g} R^{\ast}(t)\,dt ,
where Tg is the age of the galaxy. Assuming for simplicity that R* is constant, then N* = R* Tg and the Drake equation can be rewritten into an alternate form phrased in terms of the more easily observable value, N*.
N = N^{\ast} \times f_p \times n_e \times f_{\ell} \times f_i \times f_c \times L / T_g

R factor

One can question why the number of civilizations should be proportional to the star formation rate, though this makes technical sense. (The product of all the terms except L tells how many new communicating civilizations are born each year. Then you multiply by the lifetime to get the expected number. For example, if an average of 0.01 new civilizations are born each year, and they each last 500 years on the average, then on the average 5 will exist at any time.) The original Drake Equation can be extended to a more realistic model, where the equation uses not the number of stars that are forming now, but those that were forming several billion years ago. The alternate formulation, in terms of the number of stars in the galaxy, is easier to explain and understand, but implicitly assumes the star formation rate is constant over the life of the galaxy.
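The parenthetical example above is just a steady-state balance between a birth rate and a lifetime; as a minimal check, using the same assumed numbers:

    birth_rate = 0.01              # new communicating civilizations per year
    lifetime = 500                 # average lifetime of each civilization, in years
    print(birth_rate * lifetime)   # 5.0 civilizations existing at any time, on average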

Expansions

Additional factors that have been described for the Drake equation include:
nr or reappearance factor: The average number of times a new civilization reappears on the same planet where a previous civilization once has appeared and ended
fm or METI factor: The fraction of communicative civilizations with clear and non-paranoid planetary consciousness (that is, those which actually engage in deliberate interstellar transmission)
With these factors in mind, the Drake equation states:
N = R^{\ast} \times f_p \times n_e \times f_{\ell} \times f_i \times f_c \times (1 + n_r) \times f_m \times L
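A sketch of this extended form, building on the drake_equation function given earlier (again, the names are illustrative only):

    def drake_equation_extended(R_star, f_p, n_e, f_l, f_i, f_c, n_r, f_m, L):
        # (1 + n_r) counts the first civilization plus any that re-arise on the
        # same planet; f_m is the fraction that deliberately transmit messages.
        return R_star * f_p * n_e * f_l * f_i * f_c * (1 + n_r) * f_m * L

Setting n_r = 0 and f_m = 1 recovers the classical form of the equation.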

Reappearance factor

The equation may furthermore be multiplied by how many times an intelligent civilization may occur on planets where it has happened once. Even if an intelligent civilization reaches the end of its lifetime after, for example, 10,000 years, life may still prevail on the planet for billions of years, allowing the next civilization to evolve. Several civilizations may therefore come and go during the lifespan of one and the same planet. Thus, if nr is the average number of times a new civilization reappears on the same planet where a previous civilization has once appeared and ended, then the total number of civilizations on such a planet would be (1 + nr), which is the actual factor added to the equation.
The factor depends on what the cause of civilization extinction generally is. If it is typically a temporary period of uninhabitability, for example a nuclear winter, then nr may be relatively high. On the other hand, if the cause is typically permanent uninhabitability, such as stellar evolution, then nr may be almost zero.
In the case of total life extinction, a similar factor may be applicable to fl, that is, how many times life may appear on a planet where it has appeared once.

METI factor

Alexander Zaitsev argued that being in a communicative phase and emitting dedicated messages are not the same thing. For example, although humanity is in a communicative phase, it is not a communicative civilization in this sense: we do not engage in the purposeful and regular transmission of interstellar messages. For this reason, he suggested introducing the METI factor (Messaging to Extra-Terrestrial Intelligence) into the classical Drake equation.

Historical estimates of the parameters

Considerable disagreement on the values of most of these parameters exists, but the values used by Drake and his colleagues in 1961 were:
  • R* = 10/year (10 stars formed per year, on the average over the life of the galaxy)
  • fp = 0.5 (half of all stars formed will have planets)
  • ne = 2 (stars with planets will have 2 planets capable of supporting life)
  • fl = 1 (100% of these planets will develop life)
  • fi = 0.01 (1% of which will be intelligent life)
  • fc = 0.01 (1% of which will be able to communicate)
  • L = 10,000 years (each of which will last 10,000 years)
Drake's values give N = 10 × 0.5 × 2 × 1 × 0.01 × 0.01 × 10,000 = 10.
The value of R* is determined from considerable astronomical data, and is the least disputed term of the equation; fp is less certain, but is still much firmer than the values following. Confidence in ne was once higher, but the discovery of numerous gas giants in close orbit with their stars has introduced doubt that life-supporting planets commonly survive the creation of their stellar systems. In addition, most stars in our galaxy are red dwarfs, which flare violently, mostly in X-rays—a property not conducive to life as we know it (simulations also suggest that these bursts erode planetary atmospheres). The possibility of life on moons of gas giants (such as Jupiter's moon Europa, or Saturn's moon Titan) adds further uncertainty to this figure.
Geological evidence from the Earth suggests that fl may be very high; life on Earth appears to have begun around the same time as favorable conditions arose, suggesting that abiogenesis may be relatively common once conditions are right. However, this evidence only looks at the Earth (a single model planet), and contains anthropic bias, as the planet of study was not chosen randomly, but by the living organisms that already inhabit it (ourselves). Whether this is actually a case of anthropic bias has been contested, however; it might rather merely be a limitation involving a critically small sample size, since it is argued that there is no bias involved in our asking these questions about life on Earth. Also countering this argument is that there is no evidence for abiogenesis occurring more than once on the Earth—that is, all terrestrial life stems from a common origin. If abiogenesis were more common it would be speculated to have occurred more than once on the Earth. In addition, from a classical hypothesis testing standpoint, there are zero degrees of freedom, permitting no valid estimates to be made.
One piece of data which would have major impact on fl is the discovery of life on Mars or another planet or moon. If life were to be found on Mars which developed independently from life on Earth it would imply a higher value for fl. While this would improve the degrees of freedom from zero to one, there would remain a great deal of uncertainty on any estimate due to the small sample size, and the chance they are not really independent.
Similar arguments of bias can be made regarding fi and fc by considering the Earth as a model: intelligence with the capacity of extraterrestrial communication occurs only in one species in the 4 billion year history of life on Earth. If generalized, this means only relatively old planets may have intelligent life capable of extraterrestrial communication. Again this model has a large anthropic bias and there are still zero degrees of freedom. Note that the capacity and willingness to participate in extraterrestrial communication has come relatively "quickly", with the Earth having only an estimated 100,000 year history of intelligent human life, and less than a century of technological ability.
fi, fc and L, like fl, are guesses. Estimates of fi have been affected by discoveries that the solar system's orbit is circular in the galaxy, at such a distance that it remains out of the spiral arms for hundreds of millions of years (evading radiation from novae). Also, Earth's large moon may aid the evolution of life by stabilizing the planet's axis of rotation. In addition, while it appears that life developed soon after the formation of Earth, the Cambrian explosion, in which a large variety of multicellular life forms came into being, occurred a considerable amount of time after the formation of Earth, which suggests the possibility that special conditions were necessary. Some scenarios such as the Snowball Earth or research into the extinction events have raised the possibility that life on Earth is relatively fragile. Again, the controversy over life on Mars is relevant since a discovery that life did form on Mars but ceased to exist would affect estimates of these terms.
The astronomer Carl Sagan speculated that all of the terms, except for the lifetime of a civilization, are relatively high and the determining factor in whether there are large or small numbers of civilizations in the universe is the civilization lifetime, or in other words, the ability of technological civilizations to avoid self-destruction. In Sagan's case, the Drake equation was a strong motivating factor for his interest in environmental issues and his efforts to warn against the dangers of nuclear warfare.
By plugging in apparently "plausible" values for each of the parameters above, the resulting expected value of N is often (much) greater than 1. This has provided considerable motivation for the SETI movement. However, we have no evidence for extraterrestrial civilizations. This conflict is often called the Fermi paradox, after Enrico Fermi, who first asked about our lack of observation of extraterrestrials, and it motivates advocates of SETI to continually expand the volume of space in which another civilization could be observed.
Other assumptions give values of N that are (much) less than 1, but some observers believe this is still compatible with observations due to the anthropic principle: no matter how low the probability that any given galaxy will have intelligent life in it, the universe must have at least one intelligent species by definition otherwise the question would not arise.
Some computations of the Drake equation, given different assumptions:
R* = 10/year, fp = 0.5, ne = 2, fl = 1, fi = 0.01, fc = 0.01, and L = 50,000 years
N = 10 × 0.5 × 2 × 1 × 0.01 × 0.01 × 50,000 = 50 (so 50 civilizations exist in our galaxy at any given time, on the average)
But a pessimist might equally well believe that life seldom becomes intelligent, and intelligent civilizations do not last very long:
R* = 10/year, fp = 0.5, ne = 2, fl = 1, fi = 0.001, fc = 0.01, and L = 500 years
N = 10 × 0.5 × 2 × 1 × 0.001 × 0.01 × 500 = 0.05 (we are probably alone)
Alternatively, making some more optimistic assumptions, and assuming that 10% of civilizations become willing and able to communicate, and then spread through their local star systems for 100,000 years (a very short period in geologic time):
R* = 20/year, fp = 0.1, ne = 0.5, fl = 1, fi = 0.5, fc = 0.1, and L = 100,000 years
N = 20 × 0.1 × 0.5 × 1 × 0.5 × 0.1 × 100,000 = 5,000
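For convenience, the three cases above can be reproduced with the drake_equation sketch from the section on the equation (assuming that function is in scope):

    print(drake_equation(10, 0.5, 2, 1, 0.01, 0.01, 50_000))    # 50.0
    print(drake_equation(10, 0.5, 2, 1, 0.001, 0.01, 500))      # 0.05
    print(drake_equation(20, 0.1, 0.5, 1, 0.5, 0.1, 100_000))   # 5000.0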

Current estimates of the parameters

This section attempts to list best current estimates for the parameters of the Drake equation.
R* = the rate of star creation in our galaxy
Estimated by Drake as 10/year. The latest calculations from NASA and the European Space Agency indicate that the current rate of star formation in our galaxy is about 7 per year.
fp = the fraction of those stars which have planets
Estimated by Drake as 0.5. It is now known from modern planet searches that at least 30% of sunlike stars have planets, and the true proportion may be much higher, since only planets considerably larger than Earth can be detected with current technology. Infra-red surveys of dust discs around young stars imply that 20-60% of sun-like stars may form terrestrial planets.
ne = the average number of planets (satellites may perhaps sometimes be just as good candidates) which can potentially support life per star that has planets
Estimated by Drake as 2. Marcy et al. note that most of the observed planets have very eccentric orbits, or orbit very close to their star where the temperature is too high for Earth-like life. However, several planetary systems that look more solar-system-like are known, such as HD 70642, HD 154345, or Gliese 849. These may well have smaller, as yet unseen, Earth-sized planets in their habitable zones. Also, the variety of systems that might have habitable zones is not limited to solar-type stars and Earth-sized planets - it is now believed that even tidally locked planets close to red dwarfs might have habitable zones, and some of the large planets detected so far could potentially support life; in early 2008, two different research groups concluded that Gliese 581d may possibly be habitable.[7][8] Since about 200 planetary systems are known, this implies ne > 0.005.
Even if planets are in the habitable zone, however, the number of planets with the right proportion of elements may be difficult to estimate. Also, the Rare Earth hypothesis, which posits that conditions for intelligent life are quite rare, has advanced a set of arguments based on the Drake equation that the number of planets or satellites that could support life is small, and quite possibly limited to Earth alone; in this case, the estimate of ne would be infinitesimal.
fl = the fraction of the above which actually go on to develop life
Estimated by Drake as 1.
In 2002, Charles H. Lineweaver and Tamara M. Davis (at the University of New South Wales and the Australian Centre for Astrobiology) estimated fl as > 0.13 on planets that have existed for at least one billion years using a statistical argument based on the length of time life took to evolve on Earth. Lineweaver has also determined that about 10% of star systems in the Galaxy are hospitable to life, by having heavy elements, being far from supernovae and being stable themselves for sufficient time.
fi = the fraction of the above which actually go on to develop intelligent life
Estimated by Drake as 0.01.
fc = the fraction of the above which are willing and able to communicate
Estimated by Drake as 0.01.
L = the expected lifetime of such a civilization for the period that it can communicate across interstellar space
Estimated by Drake as 10,000 years.
In an article in Scientific American, Michael Shermer estimated L as 420 years, based on compiling the durations of sixty historical civilizations. Using twenty-eight civilizations more recent than the Roman Empire he calculates a figure of 304 years for "modern" civilizations. It could also be argued from Michael Shermer's results that the fall of most of these civilizations was followed by later civilizations which carried on the technologies, so it's doubtful that they are separate civilizations in the context of the Drake equation. Furthermore since none could communicate over interstellar space, the value of L here could also be argued to be zero.
The value of L can be estimated from the lifetime of our current civilization, from the advent of radio astronomy in 1938 (dated from Grote Reber's parabolic dish radio telescope) to the current date. In 2008, this gives an L of 70 years. However, such an assumption would be misleading: 70 years would be an artificial minimum based only on Earth's broadcasting history to date, and it would make the existence of other civilizations appear unlikely. A value of 10,000 years for L remains the most popular estimate.
Values based on the above estimates,
R* = 7/year, fp = 0.5, ne = 2, fl = 0.33, fi = 0.01, fc = 0.01, and L = 10000 years
result in
N = 7 × 0.5 × 2 × 0.33 × 0.01 × 0.01 × 10000 = 2.31

Criticisms

Since there exists only one known example of a planet with life forms of any kind, several terms in the Drake equation are largely based on conjecture. However, based on Earth's experience, some scientists view the emergence of intelligent life on other planets as possible, and its replication elsewhere as at least plausible. In a 2003 lecture at Caltech, Michael Crichton, a science fiction author, stated that, "Speaking precisely, the Drake equation is literally meaningless, and has nothing to do with science. I take the hard view that science involves the creation of testable hypotheses. The Drake equation cannot be tested and therefore SETI is not science. SETI is unquestionably a religion."
However, actual experiments by SETI scientists do not attempt to test the Drake equation's estimate of the number of extraterrestrial civilizations anywhere in the universe, but are focused on specific, testable hypotheses (e.g., "do extraterrestrial civilizations communicating in the radio spectrum exist near sun-like stars within 50 light-years of the Earth?").
Another reply to such criticism is that even though the Drake equation currently involves speculation about unmeasured parameters, it stimulates dialog on these topics. Then the focus becomes how to proceed experimentally.


Try the Drake equation for yourself:

http://www.activemind.com/Mysterious/Topics/SETI/drake_equation.html


source : Wikipedia


Radio astronomy is a subfield of astronomy that studies celestial objects at radio frequencies. The field originated from the discovery that many astronomical objects emit radiation in the radio wavelengths as well as optical ones. The great advances in radio astronomy that took place after the Second World War yielded a number of important discoveries including Radio Galaxies, Pulsars, Masers and the Cosmic Microwave Background Radiation. Radio telescopes use many different methods to collect information, sometimes using techniques that are similar to those used in Optical telescopes (although radio telescopes have to be much larger due to the longer wavelengths being observed). The development of radio interferometry and aperture synthesis has allowed radio sources to be imaged with unprecedented angular resolution.


History

The idea that celestial bodies may be emitting radio waves had been suspected some time before its discovery. In the 1860s, James Clerk Maxwell's equations had shown that electromagnetic radiation from stellar sources could exist at any wavelength, not just optical ones. Several notable scientists and experimenters, such as Thomas Edison, Oliver Lodge, and Max Planck, predicted that the sun should be emitting radio waves. Lodge tried to observe solar signals but was unable to detect them due to the technical limitations of his apparatus.

The first identified astronomical radio source was one discovered serendipitously in the early 1930s when Karl Guthe Jansky, an engineer with Bell Telephone Laboratories, was investigating static that interfered with short wave transatlantic voice transmissions. Using a large directional antenna, Jansky noticed that his analog pen-and-paper recording system kept recording a repeating signal of unknown origin. Since the signal peaked once a day, Jansky originally suspected the source of the interference was the sun. Continued analysis showed that the source was not following the 24 hour cycle for the rising and setting of the sun but instead repeating on a cycle of 23 hours and 56 minutes, typical of an astronomical source "fixed" on the celestial sphere rotating in sync with sidereal time. By comparing his observations with optical astronomical maps, Jansky concluded that the radiation was coming from the Milky Way and was strongest in the direction of the center of the galaxy, in the constellation of Sagittarius. He announced his discovery in 1933. Jansky wanted to investigate the radio waves from the Milky Way in further detail, but Bell Labs re-assigned him to another project, so he did no further work in the field of astronomy.

Grote Reber helped pioneer radio astronomy when he built a large parabolic "dish" radio telescope (9 m in diameter) in 1937. He was instrumental in repeating Karl Guthe Jansky's pioneering but somewhat simple work, and went on to conduct the first sky survey at radio frequencies. On February 27, 1942, J.S. Hey, a British Army research officer, helped progress radio astronomy further when he discovered that the sun emitted radio waves. By the early 1950s, Martin Ryle and Antony Hewish at Cambridge University had used the Cambridge Interferometer to map the radio sky, producing the famous 2C and 3C surveys of radio sources.


Techniques

Radio astronomers use a variety of techniques to observe objects in the radio spectrum. Instruments may simply be pointed at an energetic radio source to analyze the type of emission it produces. To “image” a region of the sky in more detail, multiple overlapping scans can be recorded and pieced together into a single image ('mosaicing'). The type of instrument used depends on the weakness of the signal and the amount of detail needed.

Radio telescopes


An optical image of the galaxy M87 (HST), a radio image of same galaxy using Interferometry (Very Large Array-VLA), and an image of the center section (VLBA) using a Very Long Baseline Array (Global VLBI) consisting of antennas in the US, Germany, Italy, Finland, Sweden and Spain. The jet of particles is suspected to be powered by a black hole in the center of the galaxy.

Radio telescopes may need to be extremely large in order to receive signals with a low signal-to-noise ratio. Also, since angular resolution is a function of the diameter of the "objective" in proportion to the wavelength of the electromagnetic radiation being observed, radio telescopes have to be much larger than their optical counterparts. For example, a 1 meter diameter optical telescope is about two million times larger than the wavelength of the light it observes, giving it a resolution of a fraction of an arc second, whereas a radio telescope "dish" many times that size may, depending on the wavelength observed, only be able to resolve an object the size of the full moon (30 minutes of arc).
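As a rough illustration of this scaling, the diffraction limit θ ≈ 1.22 λ/D can be evaluated directly; the telescope sizes and wavelengths below are illustrative choices, not values taken from any particular instrument:

    import math

    def resolution_arcsec(wavelength_m, diameter_m):
        # Diffraction-limited angular resolution, theta ~ 1.22 * lambda / D (radians),
        # converted to arc seconds.
        theta_rad = 1.22 * wavelength_m / diameter_m
        return math.degrees(theta_rad) * 3600

    print(resolution_arcsec(500e-9, 1.0))   # ~0.13 arcsec: 1 m optical telescope, 500 nm light
    print(resolution_arcsec(0.21, 25.0))    # ~2100 arcsec (~35 arcmin): 25 m dish at the 21 cm line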

Radio interferometry

The difficulty in achieving high resolutions with single radio telescopes led to radio interferometry, developed by British radio astronomer Martin Ryle and Australian-born engineer, radiophysicist, and radio astronomer Joseph Lade Pawsey in 1946. Radio interferometers consist of widely separated radio telescopes observing the same object, connected together using coaxial cable, waveguide, optical fiber, or another type of transmission line. This not only increases the total signal collected, it can also be used in a process called aperture synthesis to vastly increase resolution. This technique works by superposing (interfering) the signal waves from the different telescopes, on the principle that waves with the same phase add to each other while waves with opposite phases cancel each other out. This creates a combined telescope that is the size of the antennas furthest apart in the array. In order to produce a high quality image, a large number of different separations between telescopes is required (the projected separation between any two telescopes as seen from the radio source is called a baseline): the more different baselines, the better the quality of the image. For example, the Very Large Array has 27 telescopes giving 351 independent baselines at once.
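The figure of 351 baselines is simple combinatorics: n antennas give n(n - 1)/2 distinct pairs.

    n = 27                       # antennas in the Very Large Array
    print(n * (n - 1) // 2)      # 351 independent baselines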

Very Long Baseline Interferometry

Since the 1970s telescopes from all over the world (and even in Earth orbit) have been combined to perform Very Long Baseline Interferometry. Data received at each antenna is paired with timing information, usually from a local atomic clock, and then stored for later analysis on magnetic tape or hard disk. At that later time, the data is correlated with data from other antennas similarly recorded, to produce the resulting image. Using this method it is possible to synthesise an antenna that is effectively the size of the Earth. The large distances between the telescopes enable very high angular resolutions to be achieved, much greater in fact than in any other field of astronomy. At the highest frequencies, synthesised beams less than 1 milliarcsecond are possible.

The pre-eminent VLBI arrays operating today are the Very Long Baseline Array (with telescopes located across North America) and the European VLBI Network (telescopes in Europe, China, South Africa and Puerto Rico). Each array usually operates separately, but occasional projects are observed together, producing increased sensitivity. This is referred to as Global VLBI. There is also a VLBI network, the Long Baseline Array, operating in Australia.

Since its inception, recording data onto hard media has been the only way to bring the data recorded at each telescope together for later correlation. However, the availability today of worldwide, high-bandwidth optical fibre networks makes it possible to do VLBI in real time. This technique (referred to as e-VLBI) has been pioneered by the EVN who now perform an increasing number of scientific e-VLBI projects per year.

Astronomical sources

A radio image of the central region of the Milky Way galaxy. The arrow indicates a supernova remnant which is the location of a newly-discovered transient, bursting low-frequency radio source GCRT J1745-3009.

Radio astronomy has led to substantial increases in astronomical knowledge, particularly with the discovery of several classes of new objects, including pulsars, quasars and radio galaxies. This is because radio astronomy allows us to see things that are not detectable in optical astronomy. Such objects represent some of the most extreme and energetic physical processes in the universe.

Radio astronomy is also partly responsible for the idea that dark matter is an important component of our universe; radio measurements of the rotation of galaxies suggest that there is much more mass in galaxies than has been directly observed. The cosmic microwave background radiation was also first detected using radio telescopes. However, radio telescopes have also been used to investigate objects much closer to home, including observations of the Sun and solar activity, and radar mapping of the planets.



In physical cosmology, dark energy is a hypothetical exotic form of energy that permeates all of space and tends to increase the rate of expansion of the universe. Dark energy is the most popular way to explain recent observations that the universe appears to be expanding at an accelerating rate. In the standard model of cosmology, dark energy currently accounts for 74% of the total mass-energy of the universe.

Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously, and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant is physically equivalent to vacuum energy. Scalar fields which do change in space can be difficult to distinguish from a cosmological constant because the change may be extremely slow.

High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time. In general relativity, the evolution of the expansion rate is parameterized by the cosmological equation of state. Measuring the equation of state of dark energy is one of the biggest efforts in observational cosmology today.

Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model" of cosmology because of its precise agreement with observations. Dark energy has been used as a crucial ingredient in a recent attempt to formulate a cyclic model for the universe.


Evidence for dark energy

Supernovae

In 1998, observations of type Ia supernovae ("one-A") by the Supernova Cosmology Project at the Lawrence Berkeley National Laboratory and the High-z Supernova Search Team suggested that the expansion of the universe is accelerating.[4][5] Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large scale structure of the cosmos as well as improved measurements of supernovae have been consistent with the Lambda-CDM model.

Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances. They allow the expansion history of the Universe to be measured by looking at the relationship between the distance to an object and its redshift, which gives how fast it is receding from us. The relationship is roughly linear, according to Hubble's law. It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, the absolute magnitude, is known. This allows the object's distance to be measured from its actually observed brightness, or apparent magnitude. Type Ia supernovae are the best-known standard candles across cosmological distances because of their extreme, and extremely consistent, brightness.

Cosmic Microwave Background

Estimated distribution of dark matter and dark energy in the universe

The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background (CMB) anisotropies, most recently by the WMAP satellite, indicate that the universe is very close to flat. For the shape of the universe to be flat, the mass/energy density of the universe must be equal to a certain critical density. The total amount of matter in the universe (including baryons and dark matter), as measured by the CMB, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%. The most recent WMAP observations are consistent with a universe made up of 74% dark energy, 22% dark matter, and 4% ordinary matter. (Note: There is a slight discrepancy in the 'pie chart'.)

Large-Scale Structure

The theory of large scale structure, which governs the formation of structure in the universe (stars, quasars, galaxies and galaxy clusters), also suggests that the density of matter in the universe is only 30% of the critical density.

Late-time Integrated Sachs-Wolfe Effect

Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the CMB aligned with vast supervoids and superclusters. This so-called late-time Integrated Sachs-Wolfe effect (ISW) is a direct signal of dark energy in a flat universe, and has recently been detected at high significance by Ho et al. and Giannantonio et al. In May 2008, Granett, Neyrinck & Szapudi found arguably the clearest evidence yet for the ISW effect, imaging the average imprint of superclusters and supervoids on the CMB.

Nature of dark energy

The exact nature of this dark energy is a matter of speculation. It is known to be very homogeneous and not very dense, and it is not known to interact through any of the fundamental forces other than gravity. Since it is not very dense—roughly 10^-29 grams per cubic centimeter—it is hard to imagine experiments to detect it in the laboratory. Dark energy can only have such a profound impact on the universe, making up 74% of all energy, because it uniformly fills otherwise empty space. The two leading models are quintessence and the cosmological constant. Both models include the common characteristic that dark energy must have negative pressure.

Negative Pressure

Independently from its actual nature, dark energy would need to have a strong negative pressure in order to explain the observed acceleration in the expansion rate of the universe.

According to General Relativity, the pressure within a substance contributes to its gravitational attraction for other things just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the Stress-energy tensor, which contains both the energy (or matter) density of a substance and its pressure and viscosity.

In the Friedmann-Lemaître-Robertson-Walker metric, it can be shown that a strong constant negative pressure throughout the universe causes an acceleration of the expansion if the universe is already expanding, or a deceleration of the contraction if the universe is already contracting. More exactly, the second derivative of the universe's scale factor, \ddot{a}, is positive if the equation of state of the universe is such that w < -1/3.
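In standard notation, the condition on w follows from the second (acceleration) Friedmann equation; writing the equation of state as p = w \rho c^2:

\ddot{a}/a = -\frac{4 \pi G}{3} \left( \rho + \frac{3p}{c^2} \right) = -\frac{4 \pi G}{3} \, \rho \, (1 + 3w)

so that \ddot{a} > 0 (accelerating expansion) whenever w < -1/3.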

This accelerating expansion effect is sometimes labeled "gravitational repulsion", which is a colorful but possibly confusing expression. In fact a negative pressure does not influence the gravitational interaction between masses - which remains attractive - but rather alters the overall evolution of the universe at the cosmological scale, typically resulting in the accelerating expansion of the universe despite the attraction among the masses present in the universe.

Cosmological constant

The simplest explanation for dark energy is that it is simply the "cost of having space": that is, a volume of space has some intrinsic, fundamental energy. This is the cosmological constant, sometimes called Lambda (hence Lambda-CDM model) after the Greek letter Λ, the symbol used to mathematically represent this quantity. Since energy and mass are related by E = mc^2, Einstein's theory of general relativity predicts that it will have a gravitational effect. It is sometimes called a vacuum energy because it is the energy density of empty vacuum. In fact, most theories of particle physics predict vacuum fluctuations that would give the vacuum exactly this sort of energy. This is related to the Casimir Effect, in which there is a small suction into regions where virtual particles are geometrically inhibited from forming (e.g. between plates with tiny separation). The cosmological constant is estimated by cosmologists to be on the order of 10^-29 g/cm³, or about 10^-120 in reduced Planck units, whereas particle physics predicts a natural value of order 1 in reduced Planck units, a discrepancy of roughly 120 orders of magnitude.

The cosmological constant has negative pressure equal to its energy density and so causes the expansion of the universe to accelerate. The reason why a cosmological constant has negative pressure can be seen from classical thermodynamics: energy must be lost from inside a container to do work on the container. A change in volume dV requires work done equal to a change of energy −p dV, where p is the pressure. But the amount of energy in a box of vacuum energy actually increases when the volume increases (dV is positive), because the energy is equal to ρV, where ρ (rho) is the energy density of the cosmological constant. Therefore, p is negative and, in fact, p = −ρ.
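Written out, with ρ the (constant) energy density of the vacuum:

dE = -p \, dV, \qquad E = \rho V \;\Rightarrow\; dE = \rho \, dV \;\Rightarrow\; \rho \, dV = -p \, dV \;\Rightarrow\; p = -\rho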

A major outstanding problem is that most quantum field theories predict a huge cosmological constant from the energy of the quantum vacuum, more than 100 orders of magnitude too large. This would need to be cancelled almost, but not exactly, by an equally large term of the opposite sign. Some supersymmetric theories require a cosmological constant that is exactly zero, which does not help. The present scientific consensus amounts to extrapolating the empirical evidence where it is relevant to predictions, and fine-tuning theories until a more elegant solution is found. Philosophically, our most elegant solution may be to say that if things were different, we would not be here to observe anything — the anthropic principle. Technically, this amounts to checking theories against macroscopic observations. Unfortunately, since the known error margin in the constant predicts the fate of the universe more than its present state, many such "deeper" questions remain open.

Another problem arises with the inclusion of the cosmological constant in the standard model, namely the appearance of solutions with regions of discontinuity at low matter density. Discontinuity also affects the past sign of the pressure assigned to the cosmological constant, changing from the current negative pressure to attractive as one looks back towards the early universe. A systematic, model-independent evaluation of the supernovae data supporting inclusion of the cosmological constant in the standard model indicates that these data suffer from systematic error. The supernovae data are not overwhelming evidence for an accelerating expansion, which may simply be gliding. A numerical evaluation of WMAP and supernovae data for evidence that our Local Group exists in a local void with low matter density compared to other locations uncovered a possible conflict in the analysis used to support the cosmological constant. These findings should be considered shortcomings of the standard model, but only when a term for vacuum energy is included.

In spite of its problems, the cosmological constant is in many respects the most economical solution to the problem of cosmic acceleration. One number successfully explains a multitude of observations. Thus, the current standard model of cosmology, the Lambda-CDM model, includes the cosmological constant as an essential feature.

Quintessence


In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as the quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength.

No evidence of quintessence is yet available, but it has not been ruled out either. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the standard model and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmic inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses.

The cosmic coincidence problem asks why the cosmic acceleration began when it did. If cosmic acceleration began earlier in the universe, structures such as galaxies would never have had time to form and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called tracker behavior, which solves this problem. In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter-radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy.

Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy. They can have unusual properties: phantom energy, for example, can cause a Big Rip.

Alternative ideas

Some theorists think that dark energy and cosmic acceleration are a failure of general relativity on very large scales, larger than superclusters. It is a tremendous extrapolation to think that our law of gravity, which works so well in the solar system, should work without correction on the scale of the universe. Most attempts at modifying general relativity, however, have turned out to be either equivalent to theories of quintessence, or inconsistent with observations. It is of interest to note that if the equation for gravity were to approach r instead of r^2 at large, intergalactic distances, then the acceleration of the expansion of the universe becomes a mathematical artifact, negating the need for the existence of dark energy.

Alternative ideas for dark energy have come from string theory, brane cosmology and the holographic principle, but have not yet proved as compelling as quintessence and the cosmological constant. On string theory, an article in the journal Nature described:

String theories, popular with many particle physicists, make it possible, even desirable, to think that the observable universe is just one of 10^500 universes in a grander multiverse, says [Leonard Susskind, a cosmologist at Stanford University in California]. The vacuum energy will have different values in different universes, and in many or most it might indeed be vast. But it must be small in ours because it is only in such a universe that observers such as ourselves can evolve.

Paul Steinhardt in the same article criticizes string theory's explanation of dark energy stating "...Anthropics and randomness don't explain anything... I am disappointed with what most theorists are willing to accept".

Yet another, "radically conservative" class of proposals aims to explain the observational data by a more refined use of established theories rather than through the introduction of dark energy, focusing, for example, on the gravitational effects of density inhomogeneities or on consequences of electroweak symmetry breaking in the early universe.

Implications for the fate of the universe

Cosmologists estimate that the acceleration began roughly 5 billion years ago. Before that, it is thought that the expansion was decelerating, due to the attractive influence of dark matter and baryons. The density of dark matter in an expanding universe decreases more quickly than dark energy, and eventually the dark energy dominates. Specifically, when the volume of the universe doubles, the density of dark matter is halved but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant).
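In terms of the cosmological scale factor a, this is just the different scaling of the two densities:

\rho_{\mathrm{matter}} \propto a^{-3}, \qquad \rho_{\mathrm{DE}} \propto a^{-3(1+w)} \; (= \text{const for } w = -1)

so when the volume a^3 doubles, the matter density halves while a cosmological constant (w = -1) is unchanged.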

If the acceleration continues indefinitely, the ultimate result will be that galaxies outside the local supercluster will move beyond the cosmic horizon: they will no longer be visible, because their line-of-sight velocity becomes greater than the speed of light. This is not a violation of special relativity, and the effect cannot be used to send a signal between them. (Actually there is no way to even define "relative speed" in a curved spacetime. Relative speed and velocity can only be meaningfully defined in flat spacetime or in sufficiently small (infinitesimal) regions of curved spacetime). Rather, it prevents any communication between them and the objects pass out of contact. The Earth, the Milky Way and the Virgo supercluster, however, would remain virtually undisturbed while the rest of the universe recedes. In this scenario, the local supercluster would ultimately suffer heat death, just as was thought for the flat, matter-dominated universe, before measurements of cosmic acceleration.

There are some very speculative ideas about the future of the universe. One suggests that phantom energy causes divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a "Big Rip". On the other hand, dark energy might dissipate with time, or even become attractive. Such uncertainties leave open the possibility that gravity might yet rule the day and lead to a universe that contracts in on itself in a "Big Crunch". Some scenarios, such as the cyclic model, suggest this could be the case. While these ideas are not supported by observations, they are not ruled out. Measurements of acceleration are crucial to determining the ultimate fate of the universe in big bang theory.

History

The cosmological constant was first proposed by Einstein as a mechanism to obtain a stable solution of the gravitational field equation that would lead to a static universe, effectively using dark energy to balance gravity. Not only was the mechanism an inelegant example of fine-tuning, it was soon realized that Einstein's static universe would actually be unstable because local inhomogeneities would ultimately lead to either the runaway expansion or contraction of the universe. The equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe which contracts slightly will continue contracting. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. More importantly, observations made by Edwin Hubble showed that the universe appears to be expanding and not static at all. Einstein famously referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder. Following this realization, the cosmological constant was largely ignored as a historical curiosity.

Alan Guth proposed in the 1970s that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher energy density than the dark energy we observe today and is thought to have completely ended when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe.

The term "dark energy" was coined by Michael Turner in 1998. By that time, the missing mass problem of big bang nucleosynthesis and large scale structure was established, and some cosmologists had started to theorize that there was an additional component to our universe. The first direct evidence for dark energy came from supernova observations of accelerated expansion, in Riess et al and later confirmed in Perlmutter et al.. This resulted in the Lambda-CDM model, which as of 2006 is consistent with a series of increasingly rigorous cosmological observations, the latest being the 2005 Supernova Legacy Survey. First results from the SNLS reveal that the average behavior (i.e., equation of state) of dark energy behaves like Einstein's cosmological constant to a precision of 10 per cent. Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period preceding cosmic acceleration.



source : Wikipedia







A quasar (contraction of QUASi-stellAR radio source) is an extremely powerful and distant active galactic nucleus. They were first identified as being high redshift sources of electromagnetic energy, including radio waves and visible light that were point-like, similar to stars, rather than extended sources similar to galaxies. While there was initially some controversy over the nature of these objects, there is now a scientific consensus that a quasar is a compact region 10-10000 Schwarzschild radii across surrounding the central supermassive black hole of a galaxy.


Overview

Quasars show a very high redshift, which is an effect of the expansion of the universe between the quasar and the Earth. When combined with Hubble's law, the implication of the redshift is that the quasars are very distant. To be observable at that distance, the energy output of quasars dwarfs every other astronomical event. The most luminous quasars radiate at a rate that can exceed the output of average galaxies, equivalent to one trillion (10^12) suns. This radiation is emitted across the spectrum, almost equally, from X-rays to the far-infrared with a peak in the ultraviolet-optical bands, with some quasars also being strong sources of radio and of gamma-rays. In early optical images, quasars looked like single points of light (i.e. point sources), indistinguishable from stars, except for their peculiar spectra. With infrared telescopes and the Hubble Space Telescope, the "host galaxies" surrounding the quasars have been identified in some cases.[1] These galaxies are normally too dim to be seen against the glare of the quasar, except with these special techniques. Most quasars cannot be seen with small telescopes, but 3C 273, with an average apparent magnitude of 12.9, is an exception. At a distance of 2.44 billion light-years, it is one of the most distant objects directly observable with amateur equipment.

Some quasars display rapid changes in luminosity in the optical and even more rapid changes in the X-rays, which implies that they are small (Solar System sized or less), as an object cannot change faster than the time it takes light to travel from one end to the other; but relativistic beaming of jets pointed nearly directly toward us explains the most extreme cases. The highest redshift known for a quasar (as of December 2007) is 6.43, which corresponds (assuming the currently-accepted value of 71 for the Hubble Constant) to a distance of approximately 28 billion light-years. (NB there are some subtleties in distance definitions in cosmology, so that distances greater than 13.7 billion light-years, or even greater than 27.4 = 2 × 13.7 billion light-years, can occur.)

Quasars are believed to be powered by accretion of material into supermassive black holes in the nuclei of distant galaxies, making these luminous versions of the general class of objects known as active galaxies. Large central masses (10^6 to 10^9 solar masses) have been measured in quasars using 'reverberation mapping'. Several dozen nearby large galaxies, with no sign of a quasar nucleus, have been shown to contain a similar central black hole in their nuclei, so it is thought that all large galaxies have one, but only a small fraction emit powerful radiation and so are seen as quasars. The matter accreting onto the black hole is unlikely to fall directly in, but will have some angular momentum around the black hole that will cause the matter to collect in an accretion disc.

Knowledge of quasars is advancing rapidly. As recently as the 1980s, there was no clear consensus as to their origin.

Properties of quasars

More than 100,000 quasars are known, most from the Sloan Digital Sky Survey. All observed quasar spectra have redshifts between 0.06 and 6.4. Applying Hubble's law to these redshifts, it can be shown that they are between 780 million and 28 billion light-years away. Because of the great distances to the furthest quasars and the finite velocity of light, we see them and their surrounding space as they existed in the very early universe.

Most quasars are known to be farther than three billion light-years away. Although quasars appear faint when viewed from Earth, the fact that they are visible from so far away means that quasars are the most luminous objects in the known universe. The quasar that appears brightest in the sky is 3C 273 in the constellation of Virgo. It has an average apparent magnitude of 12.8 (bright enough to be seen through a small telescope), but it has an absolute magnitude of −26.7. From a distance of about 33 light-years, this object would shine in the sky about as brightly as our sun. This quasar's luminosity is, therefore, about 2 trillion (2 × 10^12) times that of our sun, or about 100 times that of the total light of average giant galaxies like our Milky Way.
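The quoted luminosity ratio follows from the standard magnitude relation; taking the Sun's absolute magnitude to be roughly +4.8 (an assumed value, not given in the text above):

L_{3C\,273} / L_{\odot} = 10^{\,0.4\,(M_{\odot} - M)} \approx 10^{\,0.4\,(4.8 + 26.7)} \approx 10^{12.6}

i.e. of order a few trillion solar luminosities, consistent (to within band and bolometric corrections) with the figure quoted above.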

The hyperluminous quasar APM 08279+5255 was, when discovered in 1998, given an absolute magnitude of −32.2, although high resolution imaging with the Hubble Space Telescope and the 10 m Keck Telescope revealed that this system is gravitationally lensed. A study of the gravitational lensing in this system suggests that it has been magnified by a factor of ~10. It is still substantially more luminous than nearby quasars such as 3C 273.

Quasars were much more common in the early universe. This discovery by Maarten Schmidt in 1967 was early strong evidence against the Steady State cosmology of Fred Hoyle, and in favor of the Big Bang cosmology. Quasars show where massive black holes are growing rapidly (via accretion). These black holes grow in step with the mass of stars in their host galaxy in a way not understood at present. One idea is that the jets, radiation and winds from quasars shut down the formation of new stars in the host galaxy, a process called 'feedback'. The jets that produce strong radio emission in some quasars at the centers of clusters of galaxies are known to have enough power to prevent the hot gas in these clusters from cooling and falling down onto the central galaxy.

Quasars are found to vary in luminosity on a variety of time scales: some vary in brightness over a few months, others over weeks, days, or even hours. This means that quasars generate and emit their energy from a very small region, since each part of the quasar would have to be in causal contact with the other parts on such a time scale to coordinate the luminosity variations. A quasar varying on the time scale of a few weeks therefore cannot be larger than a few light-weeks across. The emission of so much power from so small a region requires a power source far more efficient than the nuclear fusion that powers stars. The release of gravitational energy by matter falling towards a massive black hole is the only process known that can produce such high power continuously (stellar explosions such as supernovae and gamma-ray bursts can do so, but only for a few minutes). Black holes were considered too exotic by some astronomers in the 1960s, and they suggested that the redshifts arose from some other, unknown process, so that the quasars were not really as distant as the Hubble law implied. This 'redshift controversy' lasted for many years. Many lines of evidence (imaging of host galaxies, the detection of 'intervening' absorption lines, gravitational lensing) now demonstrate that quasar redshifts are due to the Hubble expansion, and that quasars really are as powerful as first thought.
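The causality argument above is easy to quantify: a source cannot vary coherently faster than light can cross its emitting region, so the variability time scale caps the region's size at roughly c times that time scale. A minimal sketch (the time scales chosen are illustrative only):

    # Light-crossing-time argument: a region varying coherently on time scale t
    # can be at most about c * t across. Time scales below are illustrative.
    C = 2.998e8              # speed of light [m/s]
    AU = 1.496e11            # astronomical unit [m]
    HOUR, WEEK = 3600.0, 7 * 86400.0

    for label, t in (("1 hour", HOUR), ("1 day", 24 * HOUR), ("3 weeks", 3 * WEEK)):
        size_m = C * t
        print(f"variability on {label}: region <~ {size_m:.1e} m (~{size_m / AU:.0f} AU)")
    # 1 hour  -> ~1e12 m (~7 AU), comparable to the inner Solar System;
    # 3 weeks -> ~5e14 m, i.e. a few light-weeks across, as stated above.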

Quasars have all the same properties as active galaxies, but are more powerful: their radiation is 'nonthermal' (i.e. not that of a black body), and some (~10%) also have jets and lobes like those of radio galaxies, which carry significant (but poorly known) amounts of energy in the form of high-energy particles moving close to the speed of light (either electrons and protons or electrons and positrons). Quasars can be detected over the entire observable electromagnetic spectrum, including radio, infrared, optical, ultraviolet, X-ray and even gamma rays. Most quasars are brightest in their rest-frame near-ultraviolet (near the 1216 angstrom (121.6 nm) Lyman-alpha emission line of hydrogen), but because of the tremendous redshifts of these sources, that peak has been observed as far to the red as 9000 angstroms (900 nm or 0.9 µm), in the near infrared. A minority of quasars show strong radio emission, which originates from jets of matter moving close to the speed of light. When viewed down the jet, these objects appear as blazars and often have regions that seem to move away from the center faster than the speed of light (superluminal expansion). This apparent superluminal motion is an optical illusion arising from light-travel-time effects for material moving close to the speed of light, as described by special relativity.
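The shift of the Lyman-alpha peak mentioned above follows directly from the definition of redshift, observed wavelength = (1 + z) × rest wavelength. A minimal check:

    # Observed wavelength of the rest-frame Lyman-alpha line at a given redshift.
    LYMAN_ALPHA_REST = 1216.0    # angstroms (121.6 nm)

    for z in (0.16, 3.0, 6.4):
        observed = (1.0 + z) * LYMAN_ALPHA_REST
        print(f"z = {z}: Lyman-alpha observed near {observed:.0f} angstroms")
    # At z ~ 6.4 the line lands near 9000 angstroms (0.9 micrometres), in the
    # near infrared, consistent with the statement above.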

Quasar redshifts are measured from the strong spectral lines that dominate their optical and ultraviolet spectra. These lines are brighter than the continuous spectrum, so they are called 'emission' lines. They have widths of several percent of the speed of light; these widths are Doppler broadening due to the high speeds of the gas emitting the lines, and such fast motions strongly indicate a large central mass. The brightest lines are emission lines of hydrogen (mainly of the Lyman and Balmer series), helium, carbon, magnesium, iron and oxygen. The atoms emitting these lines range from neutral to highly ionized, i.e. with many of their electrons stripped off, leaving the ions highly charged. This wide range of ionization shows that the gas is strongly irradiated by the quasar itself, not merely hot, and not illuminated by stars, which cannot produce such a wide range of ionization.
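The chain of reasoning from line width to central mass can be sketched numerically: the fractional line width gives a gas velocity (Δλ/λ ≈ v/c), and combining that velocity with a size for the line-emitting region gives a virial mass estimate M ≈ v²R/G. The 3% width and the 20-light-day radius below are assumed illustrative values, not measurements of any particular quasar.

    # Illustrative virial mass estimate from an emission-line width.
    # The 3% line width and 20-light-day region size are assumed values.
    C = 2.998e8               # speed of light [m/s]
    G = 6.674e-11             # gravitational constant [m^3 kg^-1 s^-2]
    M_SUN = 1.989e30          # solar mass [kg]
    LIGHT_DAY = C * 86400.0   # one light-day [m]

    line_width_fraction = 0.03          # Delta-lambda / lambda, a few percent
    v = line_width_fraction * C         # gas velocity from the Doppler width
    R = 20.0 * LIGHT_DAY                # assumed broad-line-region radius

    m_virial = v ** 2 * R / G           # order-of-magnitude virial mass
    print(f"gas velocity ~ {v / 1e3:.0f} km/s")
    print(f"virial mass ~ {m_virial / M_SUN:.1e} solar masses")
    # Gives of order 10^8 solar masses, within the 10^6-10^9 range quoted earlier.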

Some quasars, known as iron quasars, show strong emission lines from low-ionization iron (Fe II); IRAS 18508-7815 is an example.

Quasar emission generation

This view, taken with infrared light, is a false-color image of a quasar-starburst tandem with the most luminous starburst ever seen in such a combination.

Since quasars exhibit properties common to all active galaxies, the emissions from quasars can be readily compared to those of small active galaxies powered by supermassive black holes. To create a luminosity of 10^40 W (the typical brightness of a quasar), a supermassive black hole would have to consume the material equivalent of about 10 stars per year. The brightest known quasars devour roughly 1000 solar masses of material every year; the largest known is estimated to consume matter equivalent to 600 Earths per hour. Quasars 'turn on' and off depending on their surroundings, and since they cannot continue to feed at high rates for 10 billion years, a quasar that has finished accreting the surrounding gas and dust becomes an ordinary galaxy.
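The 'about 10 stars per year' figure follows from the radiative efficiency of accretion, L ≈ η·Ṁ·c². A minimal sketch, assuming the ~10% efficiency quoted later in this article:

    # Accretion rate needed to sustain a given luminosity, L = eta * Mdot * c^2.
    C = 2.998e8               # speed of light [m/s]
    M_SUN = 1.989e30          # solar mass [kg]
    YEAR = 3.156e7            # seconds per year

    L = 1e40                  # typical quasar luminosity from the text [W]
    eta = 0.1                 # assumed radiative efficiency of the accretion disc

    mdot = L / (eta * C ** 2)                          # accretion rate [kg/s]
    print(f"~{mdot * YEAR / M_SUN:.0f} solar masses per year")
    # Roughly 10-20 solar masses per year, i.e. of order ten stars per year.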

Quasars also provide some clues about the end of the reionization of the early universe. The most distant quasars (redshift ≳ 6) display a Gunn-Peterson trough and have absorption regions in front of them, indicating that the intergalactic medium at that time was neutral gas. More recent quasars show no such absorption region; instead, their spectra contain a spiky region known as the Lyman-alpha forest. This indicates that the intergalactic medium has since been reionized into plasma, and that neutral gas survives only in small clouds.

One other interesting characteristic of quasars is that they show evidence of elements heavier than helium, indicating that galaxies underwent a massive phase of star formation, creating population III stars between the time of the Big Bang and the first observed quasars. Light from these stars may have been observed in 2005 using NASA's Spitzer Space Telescope, although this observation remains to be confirmed.

History of quasar observation

The first quasars were discovered with radio telescopes in the late 1950s. Many were recorded as radio sources with no corresponding visible object. Using small telescopes and the Lovell Telescope as an interferometer, they were shown to have a very small angular size. Hundreds of these objects were recorded by 1960 and published in the Third Cambridge Catalogue as astronomers scanned the skies for the optical counterparts. In 1960, radio source 3C 48 was finally tied to an optical object. Astronomers detected what appeared to be a faint blue star at the location of the radio source and obtained its spectrum. Containing many unknown broad emission lines, the anomalous spectrum defied interpretation — a claim by John Bolton of a large redshift was not generally accepted.

In 1962 a breakthrough was achieved. Another radio source, 3C 273, was predicted to undergo five occultations by the Moon. Measurements taken by Cyril Hazard and John Bolton during one of the occultations using the Parkes Radio Telescope allowed Maarten Schmidt to optically identify the object and obtain an optical spectrum with the 200-inch Hale Telescope on Mount Palomar. This spectrum revealed the same strange emission lines. Schmidt realized that they were actually spectral lines of hydrogen redshifted by 15.8 percent, implying that 3C 273 was receding at about 47,000 km/s. The discovery revolutionized quasar observation and allowed other astronomers to find redshifts from the emission lines of other radio sources. As Bolton had argued earlier, 3C 48 was found to have a redshift of 0.37, corresponding to a recession velocity of about 37% of the speed of light.

The term quasar was coined by Chinese-born U.S. astrophysicist Hong-Yee Chiu in 1964, in Physics Today, to describe these puzzling objects:

So far, the clumsily long name 'quasi-stellar radio sources' is used to describe these objects. Because the nature of these objects is entirely unknown, it is hard to prepare a short, appropriate nomenclature for them so that their essential properties are obvious from their name. For convenience, the abbreviated form 'quasar' will be used throughout this paper.

Hong-Yee Chiu in Physics Today, May, 1964

Later it was found that not all quasars (in fact, only about 10%) have strong radio emission, i.e. are 'radio-loud'. Hence the name 'QSO' (quasi-stellar object) is used, in addition to 'quasar', to refer to these objects, covering both the 'radio-loud' and the 'radio-quiet' classes.

One great topic of debate during the 1960s was whether quasars were nearby objects or distant objects as implied by their redshift. It was suggested, for example, that the redshift of quasars was not due to the expansion of space but rather to light escaping a deep gravitational well. However a star of sufficient mass to form such a well would be unstable and in excess of the Hayashi limit. Quasars also show unusual spectral emission lines which were previously only seen in hot gaseous nebulae of low density, which would be too diffuse to both generate the observed power and fit within a deep gravitational well. There were also serious concerns regarding the idea of cosmologically distant quasars. One strong argument against them was that they implied energies that were far in excess of known energy conversion processes, including nuclear fusion. At this time, there were some suggestions that quasars were made of some hitherto unknown form of stable antimatter and that this might account for their brightness. Others speculated that quasars were a white hole end of a wormhole. However, when accretion disc energy-production mechanisms were successfully modeled in the 1970s, the argument that quasars were too luminous became moot and today the cosmological distance of quasars is accepted by almost all researchers.

In 1979 the gravitational lens effect predicted by Einstein's General Theory of Relativity was confirmed observationally for the first time with images of the double quasar 0957+561.

In the 1980s, unified models were developed in which quasars were classified as a particular kind of active galaxy, and a general consensus emerged that in many cases it is simply the viewing angle that distinguishes them from other classes, such as blazars and radio galaxies. The huge luminosity of quasars results from the accretion discs of central supermassive black holes, which can convert on the order of 10% of the mass of an object into energy as compared to 0.7% for the p-p chain nuclear fusion process that dominates the energy production in sun-like stars.
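The efficiency comparison at the end of the paragraph above is easy to make concrete: per unit mass processed, a ~10% efficient accretion disc releases roughly fourteen times more energy than ~0.7% efficient hydrogen fusion. A short sketch:

    # Energy released per solar mass of material: accretion vs. p-p chain fusion.
    C = 2.998e8               # speed of light [m/s]
    M_SUN = 1.989e30          # solar mass [kg]

    eta_accretion = 0.10      # ~10% of rest mass radiated by an accretion disc
    eta_fusion = 0.007        # ~0.7% of rest mass released by the p-p chain

    for label, eta in (("accretion disc", eta_accretion), ("p-p fusion", eta_fusion)):
        print(f"{label}: ~{eta * M_SUN * C ** 2:.1e} J per solar mass")
    # ~1.8e46 J versus ~1.3e45 J: accretion is roughly 14 times more efficient
    # per unit mass than the fusion that powers sun-like stars.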

This mechanism also explains why quasars were more common in the early universe, as this energy production ends when the supermassive black hole consumes all of the gas and dust near it. This means that it is possible that most galaxies, including our own Milky Way, have gone through an active stage (appearing as a quasar or some other class of active galaxy depending on black hole mass and accretion rate) and are now quiescent because they lack a supply of matter to feed into their central black holes to generate radiation.



Source: Wikipedia

[...]
