Since 1998, when the unexpectedly accelerated expansion of the universe was first reported, it has become customary to describe the universe using the ΛCDM model. The implicit critical assumption of a non-zero cosmological constant and a null space curvature is shown here to be open to question. The availability of exact analytical solutions of Einstein's cosmological equation in the Λ-Cold Dark Matter (ΛCDM) and the K-Open Friedmann-Lemaître (KOFL) models makes it easy to compare quantitatively different models of the universe regarding the existence and proportion of dark matter and dark energy, the CMB anisotropies, the maximum observed redshifts, the average cosmic density and other cosmic quantities. The fact that both models, as well as intermediate ones, fit many of the experimental data reasonably well, while both remain open to criticism on different points, can be taken as evidence that further investigation is needed.
For about a century, quantitative discussions of astrophysical cosmology have taken as their starting point Einstein's cosmological equations, which can be put into condensed form in the single equation:

(dR/dt)² = 2GM/R − kc² + (Λ/3)c²R²     (1)

where R is the cosmic radius, M the mass within it, k the space curvature constant and Λ the cosmological constant.
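Taking equation (1) in the explicit form (dR/dt)² = 2GM/R − kc² + (Λ/3)c²R², consistent with the terms discussed in the text, it can be integrated numerically and checked against the known analytic solution for the open, Λ = 0 case. A minimal Python sketch, with c = 1 and an arbitrary illustrative value of GM:

```python
import math

# Equation (1) with c = 1 and an arbitrary illustrative mass parameter
GM = 1.0

def dR_dt(R, k=-1.0, lam=0.0):
    """(dR/dt)^2 = 2GM/R - k + (lam/3) R^2, in c = 1 units."""
    return math.sqrt(2.0 * GM / R - k + (lam / 3.0) * R * R)

def age_at(R_end, R0=1e-6, n=100000, k=-1.0, lam=0.0):
    """Elapsed time from R0 to R_end, integrating dt = dR / (dR/dt) (midpoint rule)."""
    dR = (R_end - R0) / n
    t, R = 0.0, R0
    for _ in range(n):
        t += dR / dR_dt(R + 0.5 * dR, k, lam)
        R += dR
    return t

# Check against the analytic open-universe (k = -1, lam = 0) parametric solution:
#   R = GM (cosh y - 1),  t = GM (sinh y - y)
y = 3.0
t_y = GM * (math.sinh(y) - y)
t_num = age_at(GM * (math.cosh(y) - 1.0))
print(abs(t_num - t_y) / t_y)  # small relative error
```

The printed relative error is tiny, confirming that a simple quadrature of equation (1) reproduces the parametric solution.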
Here we assume that equation (1) describes correctly the cosmic evolution, at least as a first approximation. General compact solutions of equation (1) are used below to investigate which values of k and Λ are compatible with the available observational evidence, especially with reasonable values of Hubble's parameter (Livio & Riess 2013). It is well known that in the early 1950s there were two competing models, based upon very different interpretations of equation (1), to describe the apparently isotropically expanding universe.

Theoretical cosmologists sometimes point out that it is difficult to choose between an infinite universe and an enormously large one. This may be true in the abstract, but it must be noted that an actually infinite universe leads to problems with equation (1). If M is actually infinite, the second and the third terms (involving a finite k and a finite Λ) become totally irrelevant. It is true that k ≤ 0 in equation (1) implies that the geometry of the universe is potentially infinite, but to make it actually infinite one has to add an additional postulate.

When Einstein boldly made the original attempt to use his general theory
of relativity to describe the cosmos as
a whole, he evidently had in mind the three particular physical cases where he had successfully applied his general theory of relativity (Gonzalo 2012): the bending of light in the Sun's gravitational field, the advance of the perihelion of Mercury, and the gravitational redshift of light emitted by very massive objects. In those three cases he had taken k > 0, and it is likely that he originally expected that k > 0 in equation (1) would describe the whole cosmos. For this reason he introduced the third term with Λ, in order to achieve a static universe. Much later, in 1947, he acknowledged as much in a letter to Lemaître (Gonzalo 2012, p. 61).
On the other hand, it was perfectly reasonable, as Friedmann and Lemaître did, to take k < 0, in which case the third term (with Λ > 0) would be unnecessary. In fact, during the expansion, especially in the early phase, when the cosmic background radiation density was very high, the first term in equation (1) dominated over the other two.
Since 1998, when the unexpectedly accelerated expansion of the universe was first reported, it has become customary to describe the universe using the Lambda Cold Dark Matter (ΛCDM) model, which assumes k = 0, Λ > 0 in Einstein's equation (1). Whether cosmic space-time curvature is closed, flat or open must be decided, of course, on experimental grounds. However, at present this is not always done. Ade et al (2013), for instance, present a computation of cosmological parameters to fit the experimental results obtained by the Planck satellite. In their paper, they describe the procedure they have used, which can be summarized
thus:

1. They postulate the ΛCDM model, i.e. a flat universe (k = 0, Λ > 0) with six free parameters, each of them with a given starting range.

2. They adjust the values of those six parameters so that their model fits the experimental data. Obviously, they start by adjusting the values of the two parameters with the smallest allowed range amplitude (the last two) and then tune the other four. With such a number of free handles, the fact that they could find a proper fitting is not so surprising. Table 2 in their paper shows the values they have obtained for the six free parameters.

3. They next use those adjusted parameter values to compute the values of other cosmological parameters in the ΛCDM model, such as H_0.

4. In section 6 of the paper, the curvature of the universe, as deduced from their results, is said to be very small, with Ω_K close to zero.

Has the ΛCDM model really been validated? Having been chosen as the starting point, it is not surprising that a low value of Ω_K is obtained. In their own words:

“Figure 1 is based on a full likelihood
solution for foreground and other “nuisance” parameters assuming a cosmological
model. A change in the cosmology will lead to small changes in the Planck
primordial CMB power spectrum because of differences in the foreground
solution.”

In computer simulation, the validation of a model, after it has been adjusted to the available experimental data, can be done in two different ways. The first is best, but not always possible; in that case the second alternative may be used:

a) By using the model to make predictions of possible results, different from those used to adjust the model, and confirming those predictions with new experimental results. In principle, independent measurements of Ω_K could play this role.

b) By comparing the model with other available models and coming to the conclusion that this model fits the available data better. In this case, the best-fitting model is accepted provisionally as the good one, until new experimental data make a proper validation possible.

In the following section we shall offer a comparison of several models (ΛCDM, KOFL, and a family of mixed models) and will try to point out the current
strengths and deficiencies of the first two.
To compare different cosmological models, we have started from reasonable values for the two cosmological parameters H_0 and T_0. In all three cases considered below (open, flat and mixed), R_{1/2} = R[Ω_m = 1/2]. The values we have used for H_0 are collected in Table 1. The difference between the three models is obvious: in the KOFL model Ω < 1, in the ΛCDM model Ω = 1, and the mixed model lies in between.

If we assume that Λ = 0, the solutions of equation (1) for arbitrary k lead to Euclidean (k = 0), open (k < 0) or closed (k > 0) geometries (Gonzalo 2012). With the data we have used for H_0, the resulting curvature is negative, and therefore compatible with an open universe. The corresponding values for the ΛCDM and the mixed model are quite similar, but in these cases the universe would be either flat or open. Further results for these and other cosmological models, including KOFL models for different values of k and another mixed case with k = −0.25, can be found in (Gonzalo & Alfonseca 2013).

When they presented the final results of the Hubble Space Telescope Key Project, Freedman et al (2001) made a detailed comparison of this kind.
We intend to update that comparison with the current estimations of H_0.

As shown in more detail in (Gonzalo & Alfonseca 2013), equation (1) for an open universe with the KOFL model, k < 0, Λ = 0, can be solved analytically. The corresponding compact parametric solutions obtained are given by

R(y) = (GM/c²)(cosh y − 1)     (2)

ct(y) = (GM/c²)(sinh y − y)     (3)

where y ≥ 0 is the parameter along the expansion, with

H(y) = (dR/dt)/R = (c³/GM) sinh y/(cosh y − 1)²

and

Ω_m(y) = 2/(1 + cosh y)     (4)
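For orientation, both kinds of model give a simple closed-form dimensionless age H_0·t_0. A short Python sketch, using the standard open-universe (k = −1, Λ = 0) parametric solution and the standard flat matter-plus-Λ solution; the input values H_0 = 67 km/s/Mpc, Ω_m = 0.04 (open case) and Ω_Λ = 0.7 (flat case) are illustrative assumptions, not the paper's fitted values:

```python
import math

# Illustrative inputs (assumptions, not the paper's fitted values)
H0 = 67.0                 # Hubble constant, km/s/Mpc
INV_H0_GYR = 977.8 / H0   # 1/H0 expressed in Gyr

# --- Open, Lambda = 0 (KOFL-like), with Omega_m(y) = 2/(1 + cosh y) ---
Om_kofl = 0.04            # assumed current matter density parameter
y0 = math.acosh(2.0 / Om_kofl - 1.0)
# H0 * t0 = sinh y (sinh y - y) / (cosh y - 1)^2
ht_kofl = math.sinh(y0) * (math.sinh(y0) - y0) / (math.cosh(y0) - 1.0) ** 2

# --- Flat matter + Lambda: t0 = 2 atanh(sqrt(OL)) / (3 H0 sqrt(OL)) ---
OL = 0.7                  # assumed current dark-energy density parameter
ht_lcdm = 2.0 * math.atanh(math.sqrt(OL)) / (3.0 * math.sqrt(OL))

print(f"open:  H0*t0 = {ht_kofl:.3f}, t0 = {ht_kofl * INV_H0_GYR:.2f} Gyr")
print(f"flat:  H0*t0 = {ht_lcdm:.3f}, t0 = {ht_lcdm * INV_H0_GYR:.2f} Gyr")
```

With these assumed inputs both ages come out close to 14 Gyr, which is why both models can fit the same H_0 estimations.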
Equation (3) is compatible with the observational evidence for the current value H_0. Taking into account the observed value of Ω_m, the current value of the parameter y can be obtained, which results in the cosmic quantities collected in Table 1. In this model, the dark matter effect on gravitational lensing could perhaps be explained, at least in part, because the average density of matter was much higher at early times.

In the ΛCDM model (k = 0, Λ > 0), the total density of the universe is Ω = Ω_m + Ω_Λ = 1. Equation (1) for a flat universe with the ΛCDM model can also be solved analytically. The
corresponding compact parametric solutions obtained are

R(y) = R_{1/2} (sinh y)^{2/3}

ct(y) = (2c/(3H_Λ)) y

where H_Λ ≡ c(Λ/3)^{1/2} and

Ω_m(y) = 1/cosh² y     (8)

Curiously enough, equation (8) has the same shape as equation (4) for the KOFL model, although the meaning of the parameter y is different in both cases. The derived current density is now Ω = Ω_m + Ω_Λ = 1, which results in the values collected in Table 1.

Figure 1a, which corresponds to Figure 9 in (Freedman et al 2001), with
the horizontal axis modified to unify both models (KOFL and ΛCDM), shows clearly that both are compatible with the current estimations of H_0.

Figure 1 (a), (b): evolution over time for an open (KOFL), flat (ΛCDM) and mixed universe.

One interesting difference between the models is the time when the
universe would have reached its Schwarzschild radius, i.e. when it stopped behaving like an exploding black hole. As indicated in Table 1, with the KOFL model this would have happened 414 million years after the Big Bang, at a redshift z = 18.38, which means that the light of every object we have been able to detect was emitted by that object long after the universe could no longer be considered as a black hole. With the ΛCDM model, however, things are quite different, as the universe would have reached its Schwarzschild radius much later, over 3 billion years after the Big Bang, at a redshift z = 2.134, which means that the light of some of the objects we have detected comes technically from within a black hole (see figure 2). This bizarre situation is difficult to explain.
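The epoch at which an open universe crosses its own Schwarzschild radius (R = 2GM/c²) can be estimated with a few lines of Python, using the standard open-universe parametric solution R ∝ (cosh y − 1), ct ∝ (sinh y − y); the inputs below are illustrative assumptions, not the paper's fitted values:

```python
import math

# Illustrative inputs (assumptions; the paper's fitted values differ slightly)
H0 = 67.0                 # Hubble constant, km/s/Mpc
OMEGA_M = 0.05            # assumed current matter density parameter, open model
INV_H0_GYR = 977.8 / H0   # 1/H0 expressed in Gyr

# Current value of the parameter y, from Omega_m(y) = 2 / (1 + cosh y)
y0 = math.acosh(2.0 / OMEGA_M - 1.0)

# GM/c^3 in units of 1/H0, from H = (c^3/GM) sinh y / (cosh y - 1)^2
gm = math.sinh(y0) / (math.cosh(y0) - 1.0) ** 2

# Schwarzschild crossing: R = 2GM/c^2, i.e. cosh y - 1 = 2
y_sch = math.acosh(3.0)
t_sch = gm * (math.sinh(y_sch) - y_sch) * INV_H0_GYR   # Gyr after the Big Bang

# Redshift of that epoch: 1 + z = R0/R_sch = (cosh y0 - 1)/2
z_sch = (math.cosh(y0) - 1.0) / 2.0 - 1.0

print(f"crossing at {t_sch * 1000:.0f} Myr, z = {z_sch:.1f}")
```

With these assumed inputs the crossing falls at roughly 0.4 Gyr and z ≈ 18, in the same range as the Table 1 figures quoted above.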
Another point of comparison between different models uses the situation at the time of Last Scattering (LS), when the cosmic microwave background radiation was generated. As is well known, the minute anisotropies in the CMB can be studied by means of spherical harmonics of order ℓ = 0, 1, 2, ..., where the multipole moment ℓ, detected by analyzing anisotropies at an angular separation θ (in degrees), can be approximated by ℓ ≈ 180°/θ. Theory states that ℓ < 100 represents anisotropies between points which at the time of last scattering were separated further than the horizon at that time (the distance where the expansion of the universe would have reached the speed of light), while ℓ > 100 represents anisotropies between points inside the horizon. The multipole corresponding to maximum anisotropy would take place at about 200 for the ΛCDM model and at a higher value for the KOFL model (due to the high curvature, k = −1, assumed by this model). Table II compares the results for both models. Experimental measurements by the Planck satellite result in ℓ ≈ 220 for the first maximum.
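The rule of thumb ℓ ≈ 180°/θ° used above is easy to tabulate; a minimal sketch (ℓ = 220, the well-known position of the first acoustic peak, is used here only as an example):

```python
# Multipole moment vs angular separation: l ~ 180 deg / theta
def multipole(theta_deg: float) -> float:
    """Multipole moment probed at an angular separation theta (degrees)."""
    return 180.0 / theta_deg

def angular_scale(l: float) -> float:
    """Angular separation (degrees) corresponding to multipole l."""
    return 180.0 / l

# Anisotropies wider than ~1.8 degrees (l < 100) were super-horizon at LS
print(multipole(1.8))      # boundary multipole
print(angular_scale(220))  # angular scale of the first acoustic peak
```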
It is easy to check that the relativistic expression giving the recession velocity in terms of redshift,

v/c = [(1+z)² − 1]/[(1+z)² + 1],

combined with Hubble's law v = H_0 r, results in

r = (c/H_0)[(1+z)² − 1]/[(1+z)² + 1],

that for z ≪ 1 results in the usual Hubble ratio r ≈ cz/H_0, but for increasingly higher z values gives distances below the linear extrapolation, which mimics the effect attributed to an accelerated expansion.

When S. Perlmutter reported (Schwarzschild 2011) the accelerated
expansion of distant type Ia supernovae, he pointed out that the apparent
magnitudes reported might need corrections due to a possible dimming by interfering
dust at early cosmic times. Let us make a quantitative evaluation of the
corrections to the measured magnitudes due to dimming by cosmic dust. These
corrections must be expected to bring down the observed magnitudes for high
redshift type Ia supernovae, and to become negligible for supernovae closer to
us, affected by much less cosmic dust.

The uncorrected SN magnitude m is in a direct relation to the received light intensity I,

m = −2.5 log10(I/I_ref),

where I_ref is a fixed reference intensity. Therefore, if interfering dust dims the received intensity by a factor e^(−τ(z)), where τ(z) is the optical depth of the dust along the line of sight, the observed magnitude increases. The correction in magnitude is

Δm = 2.5 log10(e) τ(z) ≈ 1.086 τ(z),

which becomes larger for more distant supernovae. The corrections are substantial even at moderate redshifts.
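The magnitude bookkeeping can be sketched numerically with the standard extinction relation Δm = 2.5 log10(e)·τ ≈ 1.086 τ; the optical depth values below are illustrative assumptions, not fitted to any data:

```python
import math

def magnitude_correction(tau: float) -> float:
    """Extra magnitudes caused by dust of optical depth tau: dm = 2.5 log10(e) * tau."""
    return 2.5 * math.log10(math.e) * tau

def intensity_ratio(delta_m: float) -> float:
    """Received/emitted intensity ratio implied by a magnitude increase delta_m."""
    return 10.0 ** (-delta_m / 2.5)

# Illustrative optical depths (assumed values for the sketch)
for tau in (0.1, 0.3, 0.5):
    dm = magnitude_correction(tau)
    print(f"tau={tau}: dm={dm:.3f} mag, I/I0={intensity_ratio(dm):.3f}")
```

Note that intensity_ratio(magnitude_correction(tau)) returns exactly e^(−tau), so the two functions are consistent inverses of the same dimming factor.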
The ΛCDM model has been considered standard in most publications on cosmology since 1998. However, the fitting of the Planck satellite experimental results to that model by Ade et al (2013) cannot be considered a full validation, since there are still important discrepancies to be explained. It is interesting to notice that, when a detailed comparison was done in 2001 by Freedman and colleagues (Freedman et al 2001), their conclusion was that the experimental data, at that time, were compatible with both the ΛCDM and KOFL models, as well as with a whole family of intermediate (mixed) models.

In this paper we have compared three different models, ΛCDM, KOFL for k = −1, and a mixed model, taking into account the current estimations of H_0. The KOFL model seems to have an important problem to solve: the behavior of the CMB anisotropies for multipoles ℓ > 100, especially the fact that it predicts a relatively high value for ℓ at maximum anisotropy. A more accurate determination of H_0 would help to discriminate between the competing models.

Another point to be considered is the fact that the temperature of the
universe at the time of last scattering is usually approximated to exactly 3000
K. Perhaps it should be noted that a slight change in this temperature (100 K
up or down) moves the time when the CMBR happened by about 25,000 years in each direction.
In this Appendix we show that the KOFL model requires a reasonable time for the first protogalaxies to be formed, so that the maximum observable redshifts may be expected to be substantially lower than z_Sch, the redshift corresponding to the universe's Schwarzschild radius.

If we start from the KOFL model and assume that protogalaxies started forming by an aggregation of cosmic dust around a cosmic irregularity after the universe reached its Schwarzschild radius (when it stopped behaving like a black hole), their formation should have ended before the time corresponding to the maximum observed redshift (which currently is about 10), at which the earliest galaxies are observed.

Let us estimate the protogalaxy formation time for a galaxy with mass M and radius R. The protogalaxy density is related to the cosmic average density and to the Schwarzschild density through the factor discussed by Peebles (quoted in Weinberg 2008, p. 424). Taking the mean value of the bracketed factor in equation (13) for r in the relevant interval, and assuming that the galaxy had the same density as the universe at Schwarzschild time divided by that factor, the data in Table 1 for the KOFL model and z_Sch result in a formation time quite close to the value 2.7 estimated by Peebles.
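For orientation, the order of magnitude of a gravitational collapse time follows from the standard free-fall formula t_ff = (3π/(32Gρ))^(1/2); the cloud density below is an illustrative assumption, not the value derived from Table 1:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
YEAR = 3.156e7  # seconds per year

def free_fall_time(rho: float) -> float:
    """Standard gravitational free-fall (collapse) time for density rho in kg/m^3."""
    return math.sqrt(3.0 * math.pi / (32.0 * G * rho))

# Assumed mean density of a collapsing protogalactic cloud (illustrative only)
rho = 1e-22  # kg/m^3
t_ff_myr = free_fall_time(rho) / YEAR / 1e6
print(f"free-fall time ~ {t_ff_myr:.0f} Myr")
```

With this assumed density the collapse takes a few hundred million years, i.e. a protogalaxy formation time comparable to the post-Schwarzschild epoch discussed above.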
- Ade, P.A.R. et al (261 authors) 2013, Planck 2013 results. XVI. Cosmological parameters, arXiv:1303.5076.
- APS News 2013, 22, 6, p. 1.
- Aurich, R., Lustig, S., Steiner, F. & Then, H. 2004, Hyperbolic Universes with a Horned Topology and the CMB Anisotropy, Class. Quant. Grav. 21, 4901-4926. arXiv:0403597.
- Einstein, Albert 1952, The Principle of Relativity (Toronto: Dover).
- Freedman, Wendy L. et al (15 authors) 2001, Final Results from the Hubble Space Telescope Key Project to Measure the Hubble Constant, The Astrophysical Journal, 553:47-72.
- Gonzalo, Julio A. 2012, Cosmic Paradoxes, chap. 6 (Singapore: World Scientific).
- Gonzalo, Julio A. & Alfonseca, Manuel 2013, arXiv:1306.0238.
- Jaki, Stanley L. 2000, The Paradox of Olbers' Paradox (Pinckney, MI: Real View Books).
- Livio, M. & Riess, A.G. 2013, Measuring the Hubble constant, Physics Today 66, 10, 41.
- Rees, Martin 2000, Just Six Numbers (New York: Basic Books).
- Schwarzschild, B. M. 2011, Physics Today 64, 12, 14.
- Siegfried, T. 2014, Cosmic Question Mark, Science News 185:7, 18-21, April 5, 2014.
- Wall, Mike 2012, http://www.space.com/18879-hubble-most-distant-galaxy.html
- Weinberg, S. 2008, Cosmology (Oxford: Oxford University Press).

© 2014 The Bibliotheque: World Wide Society