Line-Intensity Mapping with COMAP



Erik Alexander Levén

Thesis submitted for the degree of Master of Science in Astronomy

Institute of Theoretical Astrophysics, University of Oslo

1/6-2019


Copyright © 20XX, Erik Alexander Levén

This work, entitled “Line-Intensity Mapping with COMAP” is distributed under the terms of the Public Library of Science Open Access License, a copy of which can be found at http://www.publiclibraryofscience.org.


Over the last few decades science has gained remarkable insight into the workings of the early universe. Modern CMB measurements and galaxy surveys have provided us with precise estimates of both the statistical properties of the universe at the time of decoupling and the distribution of large scale structures, such as individual galaxies and clusters. However, there is a significant gap between the time of the first structures and the ones bright enough to be observed by modern surveys. As the first luminous structures heralded the Epoch of Reionization, thus ending the dark ages, the detection and measurement of their statistical distribution may help shed further light on several science areas, such as galaxy evolution, and complement precision experiments such as CMB projects.

In this thesis we will present the work of the CO Mapping Array Pathfinder experiment, which, through the technique of Line-Intensity Mapping, aims to measure these old large scale structures. Special emphasis will be placed on the numerical pipeline, created and maintained by the Oslo group, which processes the data in order to create sky maps of the observations.

Finally, we present the preliminary results of the pipeline, where we conclude that simple filters are able to remove the vast majority of the correlated noise. This in turn enables the implementation of a Naive Filtering and Binning map-making scheme, which significantly reduces the computational time of the pipeline.


The focus of this thesis is the current status of the CO Mapping Array Pathfinder (COMAP) experiment. The aim of COMAP is to investigate the properties of the first Large Scale Structures (LSSs) in the universe, such as galaxies and galaxy clusters, through Intensity Mapping (IM). These structures originate from the increasing gravitational pull of collapsing gas clouds of primarily neutral hydrogen, which eventually become dense enough to form stars. The first of these primordial galaxies and clusters were created before the universe was 500 million years old and are among the oldest structures in the universe. A result of their long life is that the luminous objects (such as stars) inside these structures have either died or are only radiating very faintly. The emissions from these galaxies are therefore very difficult to register, and the detection and measurement of these objects are some of the main goals of astronomy today.

Traditionally these detections have been performed by galaxy surveys, where each object is identified individually. However, galaxy surveys are not optimized to resolve the very oldest sources, as these are often too faint to be detected. In IM, instead of attempting to measure the radiation from a single structure, we measure the total intensity from several unresolved sources. We can therefore gain statistical information regarding the populations of these LSSs that has previously been unavailable. IM could potentially be used as an additional resource to distinguish population properties of the primordial galaxies and galaxy clusters, and to fill in the gaps where present day experiments fall short.

To study the radiation from the oldest LSSs in the universe we use structural tracers, often referred to as probes. A probe is a relatively easily distinguishable emission line which can be traced to the statistical property we are interested in, e.g. the amount of neutral gas or Star Formation Rates (SFRs). The most common probes in IM today are the 21cm line, the CII line, the Ly-α line, and the CO lines. The CO lines are the probes utilized in the COMAP project. As different probes trace different volumes and time scales, the detection and identification of suitable probes is crucial for the field of IM.

As with all intensity measuring experiments where the signal is not significantly stronger than the noise, we normally need to apply different statistical schemes to attempt to filter out the noise. The CO rotational transition could potentially prove to have a distinct advantage compared to the other lines. The cross-correlation between the CO(1-0) and the CO(2-1) transitions could theoretically yield the CO-only signal, eliminating any bias.

The goal of this thesis is to account for the work on IM by the COMAP experiment,


with a special focus on the work on the numerical machinery (pipeline) by the Oslo and Manchester groups. COMAP consists of a 10 m Leighton telescope at the Owens Valley Observatory in California. As this telescope had to be re-modelled to fit the specifications of the COMAP experiment, much of the statistical analysis of the signal is done for the purpose of calibrating the instrument. The cross-correlation with other CO transitions will not be measured until later phases of the experiment. Consequently, we will still need to apply different statistical schemes to "clean up" the signal from the noise.

We will present the methods used to calculate the signal gain through different scanning techniques, as well as how this gain is eventually applied to the filtered signal.

We will account for the purpose and effects of the different filters applied to the signal, and how these effects can help us identify both instrumental and theoretical issues. We will also present the different map-making schemes available, as well as their advantages and drawbacks.

In order to provide a perspective on the role and importance of IM and the COMAP experiment, part I, the Introduction, will start by providing a summary of modern cosmological theory. Equipped with the necessary historical perspectives and mathematical tools, we will continue to the role of observational astronomy in modern science. Here we will lay the groundwork for the role of IM by determining the shortcomings of current astronomy and cosmology experiments. We will end part I by introducing IM in general, with focus on the different IM probes and science goals.

Part II will begin with a section concerning the instrument. Being aware of the mechanical structure and workings of the telescope is essential for identifying systematic noise, as well as the telescope's limitations. We will also summarize current modelling theory in IM as applied to the COMAP experiment. We will then introduce the numerical pipeline and the different steps taken to remove correlated noise from the data.

Part II will end with a presentation of the results, followed by a summary and an outlook section regarding future phases.

My contribution to the Oslo COMAP-group can be summarized into two parts.

First, I carried the main responsibility for absolute calibration and the calculation and analysis of the system temperature, Tsys. Second, I have served as a member of the general research team, and worked on the pipeline as a whole. I estimate that my time has been divided approximately equally between these two parts. In the following, however, I will describe the work as "my work" only when talking about the tasks regarding the system temperature and calibration. All other tasks will be referred to as "our work", even though I have contributed an equal amount of my time to them.


As I started to write the acknowledgements, I found it really hard to know what to say.

So many people have helped me in so many different ways that it is hard to know where to start.

For me, the most important thing when writing a master's thesis, second only to starting to write early, is to remember to live a relatively ordinary life away from the context of formulas and figures. For this reason I'd like to start by thanking Maria, Rasmus and Daniel, not only for their help, but most importantly for being there, whether it's watching football, doing sports or just hanging out. I don't know how I would have kept my head cool during this last year without the three of you.

Secondly I would like to thank my family for their support during these last five years. I would definitely not have managed the struggles of my studies without having you guys to fall back on.

I would also like to thank my supervisors, Hans-Kristian Eriksen and Ingunn Kathrine Wehus, for their help and patience during my thesis. The same certainly goes for the rest of the COMAP group, both in Oslo and overseas. Most notable among these for me were Marie and Håvard in Oslo; thank you for your time and help these last months.

Special mentions to James Lamb and Dongwoo Chung for their assistance in the early stages of my thesis work.

Finally a huge thanks goes out to all of my friends, team-mates and co-students for their help and support all these last years.

Sincerely,
Erik Alexander Levén
Blindern, June 1st, 2019


Contents

Abstract
Preface
Acknowledgments
List of Figures

I Introduction

1 A brief history of cosmology
1.1 The Cosmological Principle
1.2 Static or expanding?
1.2.1 General relativity
1.2.2 Different solutions
1.2.3 Understanding the Friedmann equations
1.3 The Big Bang
1.3.1 The Horizon problem
1.3.2 The Flatness problem
1.4 Inflation
1.5 Big Bang Nucleosynthesis
1.6 Recombination and decoupling
1.6.1 Dark ages and Epoch of re-ionization

2 Observational cosmology
2.1 Space-based vs ground-based experiments
2.2 CMB
2.2.1 COBE
2.3 Galaxy surveys
2.3.1 Signal-to-Noise
2.3.2 Red-shift
2.3.3 The hydrogen line
2.3.4 Mapping out density
2.3.5 The Hubble Space Telescope

3 Intensity Mapping
3.1 What is intensity mapping?
3.2 IM targets
3.3 Science goals
3.3.1 The Evolution of Large-Scale Structures
3.3.2 ΛCDM
3.3.3 EoR physics and galaxy assembly
3.4 IM experiments
3.4.1 Why CO?

II COMAP

4 Instrument
4.1 Mechanics
4.2 Signal systematics
4.2.1 The strength of the signal
4.2.2 Units
4.3 The beam and scanning strategies

5 Modeling the CO-signal
5.1 Methods
5.1.1 Bayes theorem
5.1.2 Markov chain Monte Carlo
5.1.3 IM Power spectrum
5.1.4 Voxel intensity distribution (VID)
5.1.5 Combining observables
5.1.6 Sampling parameters

6 The COMAP analysis pipeline
6.1 Noise
6.1.1 1/f-noise
6.2 Level-1 files
6.3 Scan detect
6.4 Level-2 files
6.4.1 Interpolate NaNs
6.4.2 Frequency mask
6.4.3 Compute Tsys
6.4.4 Relative gain calibration
6.4.5 Remove elevation gain
6.4.6 Poly-filter
6.4.7 Principal Component Analysis filter
6.4.8 Absolute gain calibration
6.4.9 Decimate (downgrading) data
6.4.10 Noise fit
6.5 Map-making
6.5.1 Maximum Likelihood
6.5.2 Naive Filtering and Binning
6.5.3 Destriping

7 Results
7.1 System temperature
7.1.1 Time dependency
7.2 Correlated Noise
7.2.1 Relative gain calibration
7.2.2 Remove elevation gain
7.2.3 Poly-filter
7.2.4 PCA filter
7.3 Absolute gain calibration
7.4 Mapmaking

8 Conclusion and outlook
8.1 Conclusion
8.2 Outlook
8.2.1 Reconstructing the pipeline
8.2.2 Future testing of the pipeline
8.2.3 Ordinary science observations

Appendices
Bibliography


List of Figures

1.1 The history of the universe from the Big Bang until today [1].
2.1 The cosmic background data from COBE with the blackbody curve fit at 2.74 K.
2.2 An overview of some of the current galaxy surveys. Image from [2].
2.3 The Pillars of Creation. The image shows star formation and the eroding of material in the gas pillars from which the stars are born.
3.1 Visualisation of intensity mapping. The left image shows a simulation of galaxies, with the red dots being galaxies bright enough to be registered by a galaxy survey. The right image shows the corresponding intensity map. Figure from [3].
3.2 Illustration showing the origin of some of the astrophysical probes in use. CO and CII radiation originate from the region inside the ionized bubble where star formation occurs. Lyα radiation traces both galaxies and the halos outside the star-forming region but inside the ionized bubble. The 21cm radiation traces the neutral hydrogen gas. Illustration by Breysse.
3.3 An overview of IM experiments and their redshift coverage. Figure from [4].
3.4 Image from [5].
3.5 The frequency levels of the rotational transitions of CO. Image from [6].
4.1 The COMAP collaboration with the COMAP telescope in the background. On the dish we can see both the cryostat in the center and the secondary reflector mounted on the support legs. Picture (private).
4.2 The receiver (left) and the 19-pixel array (right) as installed on the COMAP telescope. Picture from [7].
4.3 The azimuth-altitude orientations.
4.4 Signal flow block diagram from the receiver to the ROACH2. (Courtesy of James Lamb.)
4.5 Typical beam pattern for a directional antenna. Figure from [8].
5.1 Posterior distributions generated by Ihle [9], where blue, red and black lines correspond to the power spectrum, the VID and the power spectrum + VID analysis respectively; each curve is shown with 68% and 95% credibility regions. Figure from [9].
6.1 A flow-chart over the main steps of the Oslo pipeline.
6.2 The binned Fourier-domain power spectrum (full line) and the noise fit (dashed line) given by equation 6.1 for an example scan. The best noise fit parameters were fknee ≈ 0.02 Hz, α ≈ −1.66 and σ0 ≈ 0.00131. We notice high noise at low frequencies, and behaviour very close to the ideal white noise expected from integration alone (Pnoise) at high frequencies. We also notice that the scanning frequency is well above fknee.
6.3 The main methods contained in l2gen in chronological order.
6.4 A TOD before and after relative gain calibration.
6.5 The gain-normalized and elevation-filtered TOD (red line) with the low-order polynomial (dashed line).
6.6 Figure 6.5 before and after the application of the poly-filter.
6.7 The TOD before and after a PCA filter with up to the first three principal components removed.
6.8 The TOD after relative gain calibration, poly-filter and binning of frequencies.
6.9 The normalized power spectrum with the 1/f-noise approximation, where νscan is the azimuth scanning frequency.
7.1 Side-band averaged system temperature for scan 1845.
7.2 Figure 7.1 with a linear approximation for AM < 3.
7.3 Tsys extrapolated to AM = 0 for the six different detectors in ground profile scan ids 932-945, and the zenith opacity τ0.
7.4 The different Tsys-values before and after air mass effects are taken into account. Scan 926 is for reference, as its elevation angle corresponds to AM ≈ 1. All Tsys-values are taken from feed 15.
7.5 The system temperature as a function of time for 27 different sky-dip/ambient-load combinations.
7.6 The Tsys-values together with the air temperatures during the measurements. Both curves are normalized by their mean value.
7.7 The Tsys-values together with the relative air humidity during the measurements. Both curves are normalized by their mean value.
7.8 A scatter-plot of the normalized Tsys-values and the normalized RH values.
7.9 The initial NaN-filtered TOD before relative gain normalization.
7.10 The data before and after application of the relative gain normalization filter. The top figure shows the TOD after the relative gain normalization filter. The bottom figure shows the binned values of the power spectrum as well as the expected ideal white noise level (dashed line) and the temporal frequency of the scan (dotted line).
7.11 Correlations after NaN-filtering and relative gain calibration.
7.12 The data before and after application of the elevation filter. The top figure shows the TOD and the bottom figure shows the binned values of the power spectrum as well as the expected ideal white noise level (dashed line) and the temporal frequency of the scan (dotted line).
7.13 Correlations after the elevation gain filter.
7.14 The data before and after application of the poly-filter. The top figure shows the TOD and the bottom figure shows the binned values of the power spectrum as well as the expected ideal white noise level (dashed line) and the temporal frequency of the scan (dotted line).
7.15 Correlations after the poly-filter. The middle and bottom plots have a lower color scale, which enables us to see the remaining correlations in more detail; the bottom plot is zoomed in on feeds six and seven.
7.16 Correlations after the PCA filter. The top, middle and bottom plots have had the first, first two and first three PCA modes removed respectively.
7.17 The frequency channel acceptance rate (top plot) and the percentage of the white noise standard deviation for each detector.
7.18 The time streams after the removal of each corresponding principal component.
7.19 The data before and after application of the PCA filter. The top figure shows the TOD and the bottom figure shows the binned values of the power spectrum as well as the expected ideal white noise level (dashed line) and the temporal frequency of the scan (dotted line).
7.20 Correlations after the PCA filter. The middle and bottom plots have a lower color scale, which enables us to see the remaining correlations in more detail on detectors 6-8. The bottom plot is zoomed in on feed 8.
7.21 The final antenna temperature for scan 4852 at 26 GHz for the first eight detectors.
7.22 The resultant map from Oslo scan 485204, detector 7 at 26 GHz. The color bar indicates antenna temperature where the aperture efficiency is taken into account. The peak temperature is 1.94 K.
7.23 The resultant map from scan ids 4722, 4767, 4852 and 4975. All detectors have been added through variance weighting at 26 GHz. The color bar indicates antenna temperature where the aperture efficiency is taken into account. The peak temperature is 1.28 K.
7.24 Jupiter map based only on scan 4722, detector 7 at 26 GHz.
7.25 Maps of two separate scans of Jupiter. The top map shows scan 4722 from April 3rd and the bottom map shows scan 4852 taken April 6th. The average change in Jupiter's position between the two maps is approximately 246 arc-seconds in RA and Dec.
7.26 Preliminary map of Tau A.

Introduction


A brief history of cosmology

The study of cosmology is the study of the origin and evolution of the universe and its large scale properties. These properties can entail anything from density fluctuations of dark matter to the accelerated expansion and age of the universe. To understand the framework of modern cosmology it is important to know how and why the leading theories of today originated, in addition to how current experiments help us further these theories. We will, in this section, summarize the important steps and theories leading up to modern cosmological theory, so as to set current and future cosmology experiments in perspective. Figure 1.1 gives a rough overview of the history of the universe. Unless otherwise stated, this section is based on the work Modern Cosmology by Scott Dodelson [10] and Einstein's conversion from his static to an expanding universe by Harry Nussbaumer [11].

Figure 1.1: The history of the universe from the Big Bang until today [1].


1.1 The Cosmological Principle

As with most branches of science, we need to make some fundamental assumptions about the nature of our field of study before attempting to address the problems at hand. In cosmology, the most fundamental of these assumptions is known as The Cosmological Principle, which states that the universe on large scales is homogeneous and isotropic. The homogeneous part means that on large enough scales the universe has the same physical properties everywhere. The isotropic part means that on large enough scales the universe looks the same in all directions. This principle comes from the notion that the laws of physics are the same for all observers throughout the universe. It turns out to be a very good assumption, as shown by e.g. the CMB temperature.

1.2 Static or expanding?

1.2.1 General relativity

Modern cosmology is often said to start with Einstein's work on the theory of "General Relativity" (GR) in the years 1907-1915. Having proposed the theory of "Special Relativity" (SR) in 1905, Einstein introduced GR as a replacement for Newton's laws of gravity. Instead of seeing gravity as a force, Einstein suggested that gravity is a manifestation of the curvature of spacetime, and thus merely a geometrical effect.

Einstein’s motivation for his theory was to find a set of equations that could explain the curvature of spacetime by mass and energy in an arbitrary spacetime geometry.

Einstein’s equations, linking the curvature to the energy and momentum in spacetime, supplied a way to mathematically test cosmological theories in a fashion that was not possible before.

The famous "Einstein Field Equations" (EFE) when written in tensor format takes the form

Rµν−1

2Rgµν+ Λgµν = 8πG

c4 Tµν, (1.1)

where the left-hand side (LHS) contains the Ricci curvature tensor $R_{\mu\nu}$, the Ricci scalar $R$, the metric tensor $g_{\mu\nu}$ and the cosmological constant $\Lambda$. The LHS connects the curvature of spacetime with the metric of that spacetime. The right-hand side (RHS) contains the gravitational constant $G$ and the stress-energy tensor $T_{\mu\nu}$; together, $G$ and $T_{\mu\nu}$ represent the energy and mass content of spacetime. The tensors are 4 x 4 symmetric tensors, yielding 10 independent equations to be solved (6 when considering the Bianchi identities).

Shortly after publishing his work, Einstein's theory gained significant acknowledgement for being able to explain the missing 43″ per century in the perihelion precession of Mercury. To this day, GR remains one of the world's best tested theories.


1.2.2 Different solutions

In the decades to come, scientists tried to apply Einstein's field equations to the existing universe, to find sets of solutions which could explain the cosmology we observe. In GR, the object describing the observed universe is the metric tensor, which captures the geometric and causal structure of spacetime. According to Einstein's theory, free particles move along straight lines (geodesics), but not necessarily lines that are straight in Euclidean space. We therefore need the metric to tell us how these lines deviate from Euclidean lines, which obey Pythagoras's theorem.

Static universe

When Einstein applied his equations to the universe he found an unexpected result.

Assuming a homogeneous distribution of matter, with gravity as the only acting force, Einstein's universe model seemed to collapse under gravity. At the time of the publication of GR the general consensus was that the universe was static, in clear contrast to Einstein's findings. To remedy this problem Einstein added his famous cosmological constant Λ, acting as a counter measure to gravity and ensuring a static model. Einstein's metric, expressed by the line element, takes the form

\[
ds^2 = -\frac{R^2}{c^2}\left(dx_1^2 + \sin^2(x_1)\,dx_2^2 + \sin^2(x_1)\sin^2(x_2)\,dx_3^2\right) + dx_4^2, \qquad (1.2)
\]
where $x_\mu = (x_1, x_2, x_3, x_4)$ is the 4-vector of spacetime and $R$ is the radius of curvature.

In 1917, Willem de Sitter found another set of static solutions. This model described a matter-less and spatially flat universe, called a "de Sitter universe", with a line element of the form
\[
ds^2 = -\frac{R^2}{c^2}\left(dx_1^2 + \sin^2(x_1)\,dx_2^2 + \sin^2(x_1)\sin^2(x_2)\,dx_3^2\right) + \cos^2(x_1)\,dx_4^2. \qquad (1.3)
\]
In the de Sitter universe, the metric component $g_{44} = \cos^2(x_1)$ depends on the value of $x_1$. This allowed for the redshifting of far-away sources, an effect observed by Vesto Slipher [12] in 1917 and until then unexplained. However, this model was disproved by Georges Lemaître [13] in 1925, who concluded that the model violated the cosmological principle of spatial homogeneity.

Expanding universe

After the findings of Einstein and de Sitter, the Russian scientist Alexander Friedmann found a set of solutions for an expanding universe. In 1922, Friedmann published a paper [14] where he used the assumptions of the Cosmological Principle and a perfect fluid, and applied these to Einstein's equations using a metric called the Friedmann-Lemaître-Robertson-Walker (FLRW) metric. The major difference from the Einstein and de Sitter models was that Friedmann allowed the radius $R$ to change in time, $R = R(x_4)$. The metric had the corresponding line element

\[
ds^2 = R^2\left(dx_1^2 + \sin^2(x_1)\,dx_2^2 + \sin^2(x_1)\sin^2(x_2)\,dx_3^2\right) + M^2\,dx_4^2, \qquad (1.4)
\]
where $M = 2\pi^2\rho R^3$ is the total mass of a spatially closed universe. The result was a set of equations governing the expansion of the universe, which today are called the Friedmann equations. The paper was also one of the first to mention "the time since the creation of the world", the time at which the size of the universe goes towards zero.

The first independent Friedmann equation for a homogeneous and isotropic universe can be written as

\[
\frac{\dot{a}^2 + kc^2}{a^2} = \frac{8\pi G\rho + \Lambda c^2}{3}, \qquad (1.5)
\]
where $a$ is the scale factor, $\dot{a}$ denotes the time derivative of $a$, $k$ is the curvature parameter, $c$ is the speed of light in vacuum, $G$ is Newton's gravitational constant, $\rho$ is the density of the perfect fluid and $\Lambda$ is Einstein's cosmological constant. This equation is derived from the 00-component of equation 1.1.

The second independent Friedmann equation can be written as

\[
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}, \qquad (1.6)
\]
where $p$ is the pressure of the perfect fluid. Equation 1.6 is derived from the trace of equation 1.1.

Further support for the view of an expanding universe came in 1927 and 1929, when Lemaître and Edwin Hubble, respectively, independently discovered a linear relationship between the distance and redshift of galaxies, where galaxies with large separation move away from each other faster than galaxies with smaller separation. Hubble's discovery came from observational data, while Lemaître used the Einstein equations. Lemaître assumed a time-varying radius of curvature R, and also allowed the mass density ρ to vary in time, thus circumventing the fault in the de Sitter model. From this model, and through the observational data from Slipher and Hubble, Lemaître proposed an expanding universe, whose implications later became known as the Big Bang theory.

Einstein himself was convinced of the theory of an expanding universe in the early 1930s. The reasons for this change of view may be many, but undoubtedly Arthur Eddington's 1930 publication "On the instability of Einstein's spherical world" [15] was known to Einstein at this time. In his paper Eddington showed that Einstein's solution was unstable: for small perturbations, the Einstein universe would either collapse or expand, but never return to a steady state where da/dt = 0. Although a relic of Einstein's view of a static universe, the Λ-parameter is still in use today. It now represents dark energy, which dominates the total energy density of the universe and acts as a counter to gravity.

1.2.3 Understanding the Friedmann equations

Looking at equation 1.5, it is not easy for the untrained eye to gain an understanding of what the different parts mean. For this reason we will re-write the equation in an attempt to further our intuition regarding the physical aspects involved. We start by defining the Hubble parameter,

\[
H = \frac{\dot{a}}{a}, \qquad (1.7)
\]


where $\dot{a}$ is the time derivative of the scale factor. The Hubble parameter tells us about the rate of expansion of the universe. We will also split the total density $\rho$ into two parts, radiation density $\rho_r$ and matter density $\rho_m$, and solve for $H^2$, yielding
\[
H^2 = \frac{8\pi G\rho_m}{3} + \frac{8\pi G\rho_r}{3} + \frac{\Lambda c^2}{3} - \frac{kc^2}{a^2}. \qquad (1.8)
\]

Each of the parts of equation 1.8 is a measure of density, but without a reference point it is hard to compare them to each other. We therefore introduce the critical density

\[
\rho_c = \frac{3H_0^2}{8\pi G}, \qquad (1.9)
\]

where $H_0$ is the value of the Hubble parameter today. The critical density is the density the universe would have if it were completely flat (more on this in section 1.3.2). Dividing each part of equation 1.8 by the critical density yields a normalized density $\Omega$, with which we can re-write the first Friedmann equation as

\[
H^2 = H_0^2\left(\frac{\Omega_m}{a^3} + \frac{\Omega_r}{a^4} + \Omega_\Lambda + \frac{\Omega_k}{a^2}\right). \qquad (1.10)
\]

The LHS now describes the expansion rate of the universe, while the RHS describes the fractions of the total energy density in the universe as viewed today. The curvature parameter $k$ can take three different values, -1, 0 and 1, each representing a specific universe structure: open, flat and closed, as described in section 1.3.2. The $\Omega$-parameters represent fractions of the total energy density of the universe, where $\Omega_m$, $\Omega_r$, $\Omega_\Lambda$ and $\Omega_k$ represent the matter, radiation, dark energy and curvature densities respectively.
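To make the scaling behaviour of equation 1.10 concrete, the following Python sketch evaluates the expansion rate H(a) for a set of round, illustrative density parameters; the values are placeholders for illustration, not fitted results from this thesis:

```python
import numpy as np

def hubble(a, H0=70.0, Om=0.3, Or=8e-5, OL=0.7, Ok=0.0):
    """First Friedmann equation in normalized form (eq. 1.10):
    H(a) = H0 * sqrt(Om/a^3 + Or/a^4 + OL + Ok/a^2), in km/s/Mpc."""
    return H0 * np.sqrt(Om / a**3 + Or / a**4 + OL + Ok / a**2)

# The a^-4 radiation term dominates at early times, the a^-3 matter
# term at intermediate times, and the constant dark energy term today:
for a in (1e-4, 1e-2, 1.0):
    print(f"a = {a:6.4f}:  H = {hubble(a):14.1f} km/s/Mpc")
```

Evaluating the terms one by one in this way makes it easy to see which component dominates the expansion at a given epoch.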

1.3 The Big Bang

Due to the expansion of the universe, and the fact that according to standard models the universe has always been expanding, cosmic distances were smaller at earlier times. If we extrapolate the known laws of physics back in time to t = 0, we eventually reach a singularity where all the energy of the universe is contained in a single point (this interpretation of t = 0 is, however, widely debated, as the laws of physics might have been different at very early times [16]). The Big Bang theory is a model explaining the event where the universe expanded from its primordial high-density, high-temperature state, which according to the Planck 2015 results [17] occurred 13.813 billion years ago.

The Big Bang theory gave rise to explanations of a variety of phenomena, including Hubble's law and the abundance of light elements in the universe. There were however still several problems that the Big Bang theory could not explain. We will here address two of these issues: the horizon problem and the flatness problem.

1.3.1 The Horizon problem

The horizon problem points to the apparent homogeneity of regions in space that should not have been causally connected. Two regions in space are causally connected if they are within each other's particle horizons. A (co-moving) particle horizon is defined by the spatial distance light can travel,
\[
H_p = c\int_{t}^{t_0}\frac{dt'}{a(t')}, \qquad (1.11)
\]

where $t_0$ and $t$ are the times between which you wish to calculate the particle horizon, and $a$ is the scale factor. Looking at a region 10 billion light years away, we see how that region appeared 10 billion years ago. Looking at a region in the opposite direction, also 10 billion light years away, we likewise see how that region appeared 10 billion years ago. In order for light to travel from the former to the latter it would require 20 billion years (disregarding expansion), much more than the current age of the universe.

From this we can conclude that information cannot have been exchanged between these two regions, as nothing can travel faster than light. Thus these two regions are not causally connected. As an example we can regard the CMB temperature. The CMB temperature is nearly isotropic ($T_0 \approx 2.726$ K) across the sky. The CMB was released approximately 380 000 years after the Big Bang, which compared to today is approximately zero (considering the universe is more than 13.8 billion years old).

When taking expansion into account, the particle horizon calculated from the Big Bang to recombination corresponds to approximately one degree on the sky today. Why then is the CMB temperature approximately the same over the full sky, in all directions, without the regions having been causally connected?
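To put numbers on this, equation 1.11 can be evaluated numerically. Substituting dt = da/(aH(a)) turns the time integral into c∫da/(a²H(a)). The sketch below, using the same illustrative cosmological parameters as in the previous example (not values fitted in this thesis), shows that the comoving horizon at recombination is only a small fraction of today's horizon:

```python
import numpy as np
from scipy.integrate import quad

c = 299792.458  # speed of light in km/s

def hubble(a, H0=70.0, Om=0.3, Or=8e-5, OL=0.7):
    """H(a) in km/s/Mpc, with illustrative parameter values."""
    return H0 * np.sqrt(Om / a**3 + Or / a**4 + OL)

def comoving_horizon(a, a_min=1e-10):
    """Equation 1.11 rewritten with dt = da / (a H(a)):
    H_p(a) = c * integral from a_min to a of da' / (a'^2 H(a')), in Mpc."""
    value, _ = quad(lambda x: c / (x**2 * hubble(x)), a_min, a)
    return value

# Horizon at recombination (a ~ 1/1100) versus today (a = 1):
print(comoving_horizon(1.0 / 1100.0))  # a few hundred Mpc
print(comoving_horizon(1.0))           # far larger than at recombination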

1.3.2 The Flatness problem

In order to understand the flatness problem we first need to understand what is meant by a flat or non-flat universe. In section 1.2.3 we introduced the critical density $\rho_c$, the density at which the expansion of the universe is exactly balanced between expanding forever and collapsing under gravity. A universe with $\Omega_{\mathrm{tot}} = 1$ is therefore a flat universe. The 3D analogy for an open universe ($\Omega < 1$) would be the surface of a saddle, while a closed universe ($\Omega > 1$) would be the surface of a sphere.

In section 1.2.3 we also described the expansion of the universe in terms of the fractional energy densities $\Omega$. We stated that these $\Omega$-values are fractions of the total energy density of the universe, indicating that

\[
\Omega_{\mathrm{tot}} = \sum_i \Omega_i = 1.0. \qquad (1.12)
\]

It might be tempting to state that the universe just happens to be flat, i.e. $\Omega_{\mathrm{tot}} \equiv 1.0$, and simply say that there is no problem. In cosmology, however, we would like an explanation for why this is the case. Consider in addition that $\Omega_{\mathrm{tot}} = 1.0$ is the only stable equilibrium state: if $\Omega_{\mathrm{tot}}$ had any other value than 1, the universe would rapidly diverge from flatness. Modern experiments have measured the universe to be very close to flat [18]. This means that if $\Omega_{\mathrm{tot}}$ is very close to one today, it must have


been extremely close to one around the time of the Big Bang. What is the explanation for this apparent fine-tuning?

1.4 Inflation

In 1981, Alan Guth published his theory of an inflationary universe [19] as a possible solution to the horizon problem and the flatness problem. Guth proposed a situation where the early universe was super-cooled to temperatures 28 or more orders of magnitude below the phase transition temperature of matter, $T_c$. In this regime, both the horizon and flatness problems would disappear. The inflationary period denotes a period shortly after the Big Bang ($t \approx 10^{-36}$ s to $t \approx 10^{-33}$-$10^{-32}$ s) in which the expansion of the universe occurred at an exponential rate. During this short time period the universe expanded by about 60 e-folds, i.e. by a factor of $e^{60} \approx 10^{26}$. As an effect of inflation, the quantum-mechanical perturbations created in the causally connected regime of the early universe are "frozen".

These fluctuations set the initial conditions for the growth of structure and anisotropy in the universe. It is also theorized that this rapid expansion generated so-called primordial gravitational waves, which perturb the universe. This geometric effect on space-time has not yet been detected, although gravitational waves from colliding neutron stars were detected by LIGO in 2017 [20].

1.5 Big Bang Nucleosynthesis

In 1948 Ralph Alpher and George Gamow published a paper called The Origin of Chemical Elements¹ [21], where they explained how elements could be created by radiative capture of neutrons in the early universe. In this model, all matter in the early universe existed in a state of highly compressed neutron gas. As the universe rapidly began to expand, the neutron gas cooled down and started decaying into protons and electrons. The neutrons that remained after this decay could then couple with the newly formed protons to create heavier nuclei. This model explains the creation of elements up to helium-4, but not heavier ones, due to the lack of stable nuclei with 5 or 8 nucleons.

1.6 Recombination and decoupling

In the epoch following the Big Bang the universe was opaque to electromagnetic radiation, as Thomson scattering off free electrons gave photons a very short mean free path.

As the universe expanded, the energy of these electrons decreased to the point where free electrons could bind to protons, creating the first neutral hydrogen. During this process, the number of free electrons rapidly decreased until the photons could travel freely without collisions. This event is referred to as Recombination, and the last time photons collided (scattered) is called the surface of last scattering (SLS), which occurred

¹ Hans Bethe is not mentioned on purpose.


approximately 380 000 years after the Big Bang. As the electrons, now bound to protons to form neutral hydrogen, settled into more stable energy states, photons were released. These decoupled photons continued travelling freely and can be seen today as the CMB radiation. The study of the CMB is a whole field within cosmology by itself, and provides fundamental insights regarding the early universe; it gives us the earliest information regarding the statistical properties of the universe.

1.6.1 Dark ages and Epoch of re-ionization

Following Recombination, our universe entered what is called the "Dark ages", so named because all photons were travelling freely within the universe and no new photons were generated, since stars had yet to form. We therefore have very few means of collecting information from this time period. However, as the universe aged, baryonic matter in the form of dense molecular clouds fell deeper into the potential wells of the dark matter halos. Eventually the baryonic matter collapsed under gravitational pressure, resulting in the creation of the first luminous objects. Around 400 million years (z ≈ 15) after the Big Bang the first galaxies started to evolve. These galaxies contained population III² and population II stars, as well as black hole driven sources such as mini-quasars. The ultraviolet radiation emerging from these galaxies started to ionize the surrounding gas, marking the beginning of the Epoch of Reionization. The previously neutral Inter-Galactic Medium (IGM) is ionized and heated, ending the dark ages. When a sufficient number of radiative sources has formed, the IGM becomes completely ionized, which is the situation today to a very good approximation. Recent evidence suggests that the epoch of reionization occurred between redshift z = 15 and z = 6, where z = 6 corresponds to the time when the IGM is completely ionized [22]. As this ionizing radiation propagates through the IGM it leaves behind radiative fingerprints, which we can analyze: the 21cm line, CII lines, Lyα lines and CO lines. These signals can act as tracers of star-forming regions and are thus a very important research topic in modern day cosmology.

² Population III stars have as of yet not been detected.


Observational cosmology

Ever since Karl Jansky detected the first radio signal from outside our own solar system in the 1930s, observational astronomy has played a big role in modern cosmology.

In observational astronomy we focus either on individual sources of radiation or on the statistical properties of a wider field. Galaxy surveys, solar studies and exoplanetary studies are some examples of experiments focusing on the study of individual sources, while the CMB and CIB experiments are examples of wider-field experiments. In this chapter we aim to summarize the main features of modern observations, with the hope of both providing a broad picture of the history of cosmological experiments and highlighting their shortcomings. We will place extra focus on the Cosmic Microwave Background (CMB) and galaxy surveys, as they are the most complementary to intensity mapping (IM).

2.1 Space-based vs ground-based experiments

Space-based experiments can be divided into two types: orbital experiments (satellites) and interplanetary experiments (e.g. New Horizons). However, since interplanetary missions yield little information in terms of cosmology, my main focus in this section will be on satellites.

The biggest advantage of satellites is the absence of the light pollution and atmospheric contamination that ground-based experiments have to deal with. Although very effective, satellite experiments come with significant drawbacks concerning their efficiency.

Satellite missions have an unfortunate historic tendency to cost more than originally planned. One example of this is the James Webb Space Telescope (JWST). Named after the former NASA administrator James Webb, the JWST was originally planned as a low-cost experiment. In 1997 the JWST (then named NGST) had a budget of 0.5 billion USD and a preliminary launch date in 2007. After multiple budget corrections and launch date extensions, the US House of Representatives effectively canceled the whole project in 2011 by withdrawing funding from NASA. This decision was fortunately reversed by the US Congress later that same year, which set a new budget limit at 8 billion USD for the planned launch in 2018. The latest updates from NASA [23] estimate


a launch in March 2021; the project has as of now exceeded the congress-approved budget by 800 million USD.

Although the JWST is an extreme example in terms of satellite budgets, it does highlight some important factors for long-term projects. First of all, long-term projects depending on state funding can suffer greatly from changes in policy. Other examples of this are the Hubble Space Telescope (HST) and the COBE satellite, which suffered delays following the Challenger disaster [24] in 1986. Second, since no mechanical adjustments can be made once the satellite is launched, it is important that the technology on board is up to date. This last factor especially is a common source of delays for space-based missions.

2.2 CMB

Following Lemaitre’s article in 1931 regarding the expansion of the universe, several theories involving the implication of such an expansion emerged. One of these theor- ies was "The Origin of Chemical Elements" by Alpher and Gamow. The same year, following this article, Alpher and Robert Herman published a prediction of a "relic ra- diation" as an effect of the expansion of the universe which at present time should have a temperature of 5 K. This signature radiation would not be discovered until over a decade later. In an attempt of removing interference from a receiver at Bell Labs in New Jersey, Arno Penzias and Robert Wilson found a microwave signal coming from all directions. Unable to remove the noise signal, Penzias and Wilson came into contact with Princeton physicist Robert Dicke who, having done research on this radiation, informed them what they had actually found. As a result of their findings, Penzias and Wilson received the Nobel Prize in physics 1978 for their discovery of the CMB.

After the discovery of the CMB, the hunt for anisotropies in the CMB signal began.

According to theory, large scale structures in the universe, such as galaxy clusters and filaments, originated from small perturbations which grew to modern day size due to gravitational instability. The measurement of these fluctuations remains an ongoing endeavor. Already in the 1980s their upper boundary was set to ΔT/T ≲ 10⁻⁴. This value is too low for baryons alone to explain the large scale structures we see in the universe today. Remember from section 1.6 that in the early universe baryons were coupled to photons and could not collapse under gravity to form structures. The time baryons have had to collapse is simply insufficient to account for the large structures we observe today. To explain these structures we need particles that could start to collapse under gravity before the baryons, such that after recombination baryons could fall into these already existing gravitational halos. These unseen non-interacting particles are called dark matter, and the search for them continues to this day. Additional evidence for dark matter includes the direction of the CMB dipole and the distribution of mass in rotating spiral galaxies.

Since its discovery, many different experiments have been carried out to measure the CMB anisotropies and polarization. Three of the most notable are The Cosmic Background Explorer (COBE), The Wilkinson Microwave Anisotropy Probe (WMAP),


and Planck.

2.2.1 COBE

COBE was NASA’s first cosmological satellite, and its goal was to measure the CMB radiation. Active between 1989 and 1993, COBE is often regarded as the "inception of cosmology as a precise science" (The Royal Swedish Academy of Sciences, 2006).

Installed on COBE were three data-gathering instruments: The Differential Microwave Radiometer (DMR), The Far-InfraRed Absolute Spectrophotometer (FIRAS) and The Diffuse Infrared Background Explorer (DIRBE).

The DMR instrument was designed to map the fluctuations in the brightness of the CMB radiation. One of the great successes of COBE was the DMR detection of CMB anisotropies on all observed angular scales. This detection provided support for the Big Bang theory, which had begun to receive criticism due to the previous failure to detect these fluctuations [25].

The FIRAS instrument was a spectrophotometer installed to measure the CMB spectrum. The FIRAS measurements of the monopole and the dipole provided a CMB spectrum that fit a blackbody spectrum so precisely that it was determined that less than 1 part in 10 000 of the total energy of the CMB was released more than 1 year after the Big Bang [26].

Figure 2.1: The cosmic background data from COBE with the blackbody curve fit at 2.74 K.

The main goal of the DIRBE instrument was to map dust emissions from distant


galaxies. These emissions are referred to as the Cosmic Infrared Background (CIB). The CIB originates from redshifted, absorbed or re-emitted radiation from the earliest galaxies and stars. The CIB is one of the foregrounds of the CMB, and through a detailed mapping over a wide frequency range this component could theoretically be subtracted from the CMB.

The CIB is, however, contaminated with dust emissions both from our own solar system (the zodiacal cloud) and from dust grains in the Interstellar Medium (ISM). The difficulty of separating the dust-emission foreground of our own galaxy from the CIB foreground has led to the treatment of both as a single foreground, thermal dust.

WMAP

After the release of the COBE data there were still several questions that needed to be answered. Some of these included

• Do we live in a flat or curved universe?

• How old is the universe?

• Will the universe keep expanding forever?

• How old are the oldest large structures?

• What is dark matter and how much of it is there?

COBE, which studied the CMB with an angular resolution of 7° across the sky, did not have the resolution, sensitivity or accuracy to answer these questions. A new, more sensitive experiment was needed, and from 2001 until its decommissioning in 2010 the Wilkinson Microwave Anisotropy Probe (WMAP) provided just that. WMAP managed to constrain the shape of the universe to flat within 0.4%, and determined the age of the universe to be 13.77 billion years to within 0.5%. It determined that baryons only make up 4.6% of the universe, that 24% is made up of dark matter, and that the remaining part consists of dark energy. Due to the apparent flatness of the universe the WMAP team also concluded that the universe is undergoing an accelerated expansion, and that it will continue to expand forever [27].

Planck

The Planck satellite was active from 2009 to 2013. Its motivation was a definitive measurement of the intensity and polarization of the CMB anisotropies. Exact measurements of the CMB allow for precise calculations of cosmological parameters, which in turn enable realistic models of how the universe works. Among other scientific goals, the Planck mission also studied the Milky Way to map the cold dust distribution along the spiral arms, catalogued galaxy clusters by measuring the Sunyaev-Zel'dovich (SZ) effect, and searched for gravitational waves [28]. In order to achieve the necessary precision and resolution of the angular power spectrum, the Planck experiment had an angular resolution corresponding to three times smaller scales than the WMAP mission. In addition, Planck observed in nine frequency bands, compared to the five


bands on WMAP, yielding much more information regarding foreground models than previously accessible.

The final results from the Planck experiment show strong support for the ΛCDM (dark energy and cold dark matter) model, which is the dominant cosmological model today. Some of the parameters the Planck experiment measured include the age of the universe (13.797 billion years), the dark energy fraction (68.47% of the universe) and the combined dark matter and baryon fraction (31.53%).

The Planck satellite was deactivated in 2013. The experiment officially ended in July 2018, when the final science paper from the Planck team was published [29].

2.3 Galaxy surveys

The CMB gives us an insight into what the universe looked like at recombination, approximately 380 000 years after the Big Bang. Unfortunately, the CMB says very little about what happened after that epoch. In addition to the parameter values of the early universe, we would like to know how the large scale structures in the universe evolved into what we see today. A natural starting point is to register how many structures of different types we can observe, and through these populations make statements about their origin and evolution. These surveys usually detect galaxies (hence the name), but they also search for less common objects, such as quasars and clusters. As a galaxy survey scans the selected sky area, only sources brighter than a set flux limit are registered. These sources are then further studied through spectroscopy or photometry to determine their composition and red-shift. Figure 2.2 lists some of the previous, current and future galaxy surveys.

In spectroscopy, we compare the spectra registered from a star with the spectrum of a standard star measured by the same spectrometer. A spectroscopic survey is therefore focused on having a good spectral resolution, so that small details, such as the relative strength of weak spectral lines, can be registered, thus identifying the composition and redshift of the source to a high accuracy [30]. To achieve this level of precision each object needs to be studied for a longer time period, which tends to make spectroscopic experiments more costly than photometric ones.

In photometry we measure different magnitudes through various filters. By comparing a star's brightness with the brightness of a standard star we can infer information about the star's spectral type, luminosity class and temperature.

2.3.1 Signal-to-Noise

Whenever we register a signal from a source, the data will contain both the information we are interested in and noise. This noise might come from other sources in the sky, atmospheric effects or systematic effects from the instrument. The Signal-to-Noise Ratio (SNR) is a measure of how strong the desired signal is relative to the noise.

The ratio is given by

\[
\frac{S}{N} = \frac{T_{\mathrm{src}}}{T_{\mathrm{sys}}}\sqrt{\tau\,\Delta\nu}, \qquad (2.1)
\]


Figure 2.2: An overview of some of the current galaxy surveys. Image from [2].

where $T_{\mathrm{src}}$ is the signal, $T_{\mathrm{sys}}$ is the system temperature, $\tau$ is the integration time and $\Delta\nu$ is the bandwidth. In galaxy surveys we need the desired signal to be significantly stronger than the noise in order to determine, for example, the redshift. A high SNR can be achieved by observing the object for a longer time, which is an effective but costly approach.
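To illustrate how equation 2.1 drives survey cost, the sketch below inverts it to estimate the integration time needed to reach a target SNR. All numbers (source and system temperatures, bandwidth) are made-up placeholders, not COMAP specifications:

```python
import math

def snr(T_src, T_sys, tau, delta_nu):
    """Equation 2.1: S/N = (T_src / T_sys) * sqrt(tau * delta_nu)."""
    return (T_src / T_sys) * math.sqrt(tau * delta_nu)

def required_integration_time(target_snr, T_src, T_sys, delta_nu):
    """Equation 2.1 solved for tau: tau = (SNR * T_sys / T_src)^2 / delta_nu."""
    return (target_snr * T_sys / T_src) ** 2 / delta_nu

# A 1 mK source observed with a 40 K system over a 10 MHz band:
tau = required_integration_time(5.0, T_src=1e-3, T_sys=40.0, delta_nu=10e6)
print(f"tau = {tau:.0f} s (~{tau / 3600:.1f} h)")
```

Note the quadratic dependence: halving the system temperature cuts the required integration time by a factor of four.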

2.3.2 Red-shift

When we talk about red-shift we often speak of objects themselves being red-shifted.

What we really mean is that the radiation we observe coming from that object is red-shifted. In section 1.2 we discussed the fact that the universe is not static, but expanding. One of the consequences of an expanding universe is that it affects radiation. Remember that light can be characterized by its frequency, its wavelength and its phase. As photons from a very distant star travel through space towards the Earth, the space around them expands with time, and so do their wavelengths. In the visible spectrum red light has the longest wavelength, and we therefore call light with increasing wavelength red-shifted. The red-shift z of radiation is given by the formula

\[
z = \frac{\lambda_{\mathrm{obs}} - \lambda_{\mathrm{em}}}{\lambda_{\mathrm{em}}}, \qquad (2.2)
\]
where $\lambda_{\mathrm{obs}}$ is the wavelength we observe and $\lambda_{\mathrm{em}}$ is the wavelength at which the radiation was emitted. Since the expansion of the universe is a function of time, the red-shift is effectively a measure of the age of the radiation.
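As a small worked example of equation 2.2 (here also in its equivalent frequency form, z = ν_em/ν_obs − 1, since λ = c/ν): the CO(1-0) line has a rest frequency of 115.27 GHz, so if it is observed at 30 GHz, roughly the band COMAP targets, it has been redshifted to z ≈ 2.8:

```python
def redshift_from_wavelength(lam_obs, lam_em):
    """Equation 2.2: z = (lambda_obs - lambda_em) / lambda_em."""
    return (lam_obs - lam_em) / lam_em

def redshift_from_frequency(nu_obs, nu_em):
    """Equivalent frequency form (lambda = c / nu): z = nu_em / nu_obs - 1."""
    return nu_em / nu_obs - 1.0

# CO(1-0), rest frequency 115.27 GHz, observed at 30 GHz:
print(redshift_from_frequency(nu_obs=30.0, nu_em=115.27))  # ~2.84
```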

How do we know what $\lambda_{\mathrm{em}}$ actually is? How can we differentiate between radiation with an originally short wavelength and high red-shift, and radiation with an originally long wavelength that is only slightly red-shifted? In addition to studying sources with a high SNR, we can study the prominent spectral lines of the selected sources. By comparing the spectral lines to laboratory results we can determine the redshift. Some of these spectral lines are the rotational transitions of CO. As this specific technique will be covered later in section 3.4.1, we will instead focus on the hydrogen line.

2.3.3 The hydrogen line

The hydrogen line, also called the 21cm line, is an effect of the hyperfine interaction between the electron and the proton in the ground state of neutral hydrogen, where the particles undergo a spin-flip. This spin-flip releases a photon with a wavelength of 21cm, hence the name. Although a forbidden transition, occurring roughly only once every 10 million years, the sheer amount of hydrogen in the areas of interest results in detectable emission lines. Being relatively low-energetic, this radiation can travel more freely through the universe, thus maintaining a strong signal.

To detect the 21cm line we therefore need galaxies with a relatively high neutral hydrogen mass. Unless the 21cm line is significantly stronger than other lines (galactic noise or noise from the instrument), there is no way to determine which line is actually the 21cm line. Galaxies with relatively high neutral hydrogen mass are often young galaxies, which makes the detection of old (low-red-shift) galaxies through the 21cm line more challenging.

2.3.4 Mapping out density

Once the red-shift of a galaxy is determined we can establish its distance to us by Hubble’s law,

\[
d = \frac{v}{H_0} \approx \frac{cz}{H_0}, \qquad (2.3)
\]
where $d$ is the proper distance between us and the galaxy, $v$ is the galaxy's recession velocity and $z$ is the red-shift. Note that Hubble's law is only valid for small red-shifts, $z \lesssim 0.1$. For higher red-shifts we need what is called a standard candle. A standard candle is an astronomical object with a known absolute magnitude $M$, such that
\[
5\log_{10} D = m - M - 10, \qquad (2.4)
\]
where $m$ is the apparent magnitude and $D$ is the distance to the object in kpc.
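The sketch below applies both relations: Hubble's law (equation 2.3) for a nearby galaxy, and the distance modulus (equation 2.4) for a standard candle. The H0 value and the magnitudes are illustrative placeholders, not measurements from this thesis:

```python
H0 = 70.0       # Hubble constant in km/s/Mpc (illustrative value)
c = 299792.458  # speed of light in km/s

def distance_from_redshift(z):
    """Equation 2.3 (valid for z <~ 0.1): d ~ c z / H0, in Mpc."""
    return c * z / H0

def distance_from_magnitudes(m, M):
    """Equation 2.4 solved for D (in kpc): D = 10^((m - M - 10) / 5)."""
    return 10.0 ** ((m - M - 10.0) / 5.0)

print(distance_from_redshift(0.01))               # ~43 Mpc
print(distance_from_magnitudes(m=19.4, M=-19.3))  # ~5.5e5 kpc, i.e. ~550 Mpc
```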

Once a galaxy’s redshift and angular position is determined we can create a 3D-map based on the placement of objects relative to us on the sky, and their corresponding redshift. The most advanced such map made so far was made by the SLOAN Digital Sky Survey (SDSS) [31] mapping over 3 million object’s spectra covering a third of the sky.


2.3.5 The Hubble Space Telescope

The Hubble Space Telescope (HST) was launched in 1990 with the aim of studying radiation in the visible, infrared and ultraviolet spectrum. These are relatively high-energy photons, to which the universe is much more transparent than to radio-wavelength radiation. The Earth's atmosphere completely filters out infrared and ultraviolet light, which is why a space-based mission is crucial for the study of this sort of radiation.

Since the HST was the first space-based telescope designed to be repaired and maintained in space, it has provided ground-breaking data to this day.

The project has been one of the most productive in the history of astronomy, providing data for over 14 000 peer-reviewed articles [32] covering many different branches of astronomy. Two of the more noteworthy fields of study, in addition to galaxy surveys, are A Runaway Universe and The Evolution of Stars.

A Runaway Universe

Due to the sensitivity of the HST, scientists have been able to measure the Hubble constant H0 to significantly higher precision than before [32]. This was done by measuring the distance to distant galaxies and comparing these to galaxy-velocity measurements from other experiments. In addition to measuring H0, the data also revealed that not only is the universe expanding, the expansion is in fact accelerating. This discovery yielded the Nobel Prize in Physics in 2011 [33]. The source of the accelerated expansion is thought to be dark energy, which makes up approximately 68% of the universe.

Dark energy acts like antigravity, forcing matter apart, and has therefore taken over the role of Einstein’s cosmological constant in the Einstein equations.

The Evolution of stars

The HST detectors managed to penetrate enormous gas clouds, and in doing so shed light on the violent nature of star births. Some of the most famous images taken by the HST depict regions in space where young stars are formed, and their impact on the enormous gas clouds from which they emerge.

The HST has also disclosed the death throes of stars in unprecedented detail. Prior ground-based experiments showed these dying stars as spherical objects; the HST, on the other hand, depicts them in many different shapes, some taking the shape of a butterfly or an hourglass [32]. The reason for these odd shapes is the dying star's shedding of layers before it collapses to a white dwarf. The HST has also revealed previously unseen supernova details, such as the rotating pulsar in the Crab Nebula. Figure 2.3 depicts one of the most famous pictures taken by the HST, the Pillars of Creation.


Figure 2.3: The Pillars of Creation. The image shows star formation and the eroding of material in the gas pillars from which the stars are born.


Intensity Mapping

Intensity mapping is quickly becoming a powerful tool for probing the fainter sources in the universe, sources that are not accessible through traditional galaxy surveys. In this section we will explain what intensity mapping is, to what extent it is utilized today, and what its advantages and goals are.

3.1 What is intensity mapping?

Intensity mapping (IM) is a growing field in astronomy that opens up new possibilities for studying the growth and evolution of structures in the universe. Until recent years the science of this field has mainly been focused on CMB experiments and galaxy surveys. However, with the 2020 decadal survey approaching [34], new and improved methods are sought after to further the development of the field.

One of the ongoing quests in modern astrophysics is the detection of the oldest galaxies in the universe. The mechanisms that controlled the formation of the first galaxies, how much radiation they produced and how that radiation interacted with their gaseous surroundings are topics that are still not fully understood today. In order to fully study the properties of these primordial structures we need a representative sample, which requires a search that is not restricted to the brightest objects.

Galaxy surveys measure the distribution of mass by locating a large number of galaxies and studying each of them individually in detail. These surveys usually impose a low-end flux limit, where only galaxies with emission above this limit are registered. Thus, at high redshift, galaxy surveys only register galaxies bright enough to pass this cut-off. Additionally, galaxy surveys are often insufficient at detecting extended sources and suffer from selection biases. Another important consideration is cost efficiency. In spectroscopy we often need the signal from the source to be well above the noise level for a detection to be confirmed. As stated in section 2.3.1, this means we need to study the object for a longer time period, which increases the total cost of the experiment.

Figure 3.1: Visualisation of intensity mapping. The left image shows a simulation of galaxies, with the red dots being galaxies bright enough to be registered by a galaxy survey. The right image shows the corresponding intensity map. Figure from [3].

IM, however, registers all emergent photons from a given sky patch, consequently making IM much more sensitive to faint galaxy radiation and therefore to higher redshifts. IM experiments in general cover a larger section of the sky than classic galaxy surveys, making them more efficient at detecting faint large-scale structures. In addition, when detecting radiation sources through IM we remove the noise statistically, which means that we do not require the signal to be significantly higher than the noise. This results in much cheaper experiments, as less telescope time is needed compared to galaxy surveys. The statistical removal of the noise from the signal is, however, non-trivial and depends on which probe is used. Another disadvantage of IM is that the signal only puts constraints on global quantities, not local ones. Figure 3.1 visualizes the idea behind IM.
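The following toy example (not COMAP pipeline code) illustrates the contrast sketched in figure 3.1: a flux-limited survey keeps only the bright sources, while an intensity map bins the emission of every source into coarse pixels. All numbers here are arbitrary assumptions chosen for the example.

```python
# Toy comparison of a flux-limited galaxy survey and an intensity map.

import numpy as np

rng = np.random.default_rng(seed=1)
n_gal, n_pix = 10_000, 32

xy = rng.random((n_gal, 2))                 # sky positions in a unit patch
flux = rng.pareto(a=2.0, size=n_gal) + 1.0  # steep, faint-end-dominated fluxes

flux_limit = 5.0
detected = flux > flux_limit                # what a galaxy survey would register
print(f"survey detects {detected.sum()} of {n_gal} galaxies")

# The intensity map keeps every photon, detected or not.
intensity_map, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=n_pix, weights=flux)
print(f"fraction of total flux below the survey limit: "
      f"{flux[~detected].sum() / flux.sum():.2f}")
```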

3.2 IM targets

IM can be done with several different probes, yielding information about time periods and volumes not easily accessible otherwise. Some of the main probes in IM are the Lyα line, the 21cm line, the CII line and the CO lines, as shown in figure 3.2.

As young stars emitting highly energetic photons begin to ionize the surrounding gas, recombination and collisional excitation processes result in the emission of Ly-α photons. These photons can act as tracers for the ionized gas as well as for star formation. Ly-α photons correspond to electron transitions between the first and second energy levels of the hydrogen atom. As an electron is de-excited it emits a photon with wavelength λ = 121.6 nm. If emitted during the EoR, the wavelength would have increased to approximately 700 nm < λ < 2000 nm by the time it reaches us, which lies in the near-IR spectrum.


Figure 3.2: Illustration showing the origin of some of the astrophysical probes in use. CO and CII radiation originate from the region inside the ionized bubble where star formation occurs. Lyα radiation traces both galaxies and the halos outside the star-forming region but inside the ionized bubble. The 21cm radiation traces the neutral hydrogen gas. Illustration by Breysse.

The CO signal is a molecular rotational line which also acts as a tracer for star formation, since molecular gas is the fuel of star formation. Due to the low excitation temperatures of its lower levels, CO is ideal for probing the cold, dense gas of molecular clouds [35]. We will go further into the CO lines in section 3.4.1.

CII lines trace ionized gas as well as photo-dissociation regions. Carbon is one of the most abundant elements in the universe. Because the fine-structure levels of CII (singly ionized carbon) are split by an energy corresponding to 91 K, CII can through excitation processes emit photons at λ = 157.7 µm. This line acts as a major cooling mechanism for the neutral ISM [36]. Due to its brightness (between 0.1% and 1% of the total far-infrared luminosity in a typical star-forming galaxy), the CII line is a popular probe for the EoR. Carbon is produced inside stars, and CII is therefore a natural tracer of the gas distribution in galaxies.

As mentioned in section 2.3.3 the 21 cm line traces the neutral gas.

By measuring the line intensity of several different tracers we gain information from a wide range of redshifts, covering galaxy and star formation, reionization and potentially even the dark ages.
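The redshift-coverage argument can be made concrete with a short sketch: each tracer's rest-frame wavelength determines where its emission lands for a source at a given redshift. The CO(1-0) wavelength of ~2.6 mm below is an assumed textbook value, not a number from the thesis.

```python
# Where each tracer's emission lands for a source at redshift z.

TRACERS_M = {
    "Ly-alpha": 121.6e-9,  # m
    "CII":      157.7e-6,  # m
    "CO(1-0)":  2.6e-3,    # m (assumed)
    "21cm":     0.21,      # m
}

def observed_wavelength(rest_m: float, z: float) -> float:
    """lambda_obs = lambda_rest * (1 + z)."""
    return rest_m * (1.0 + z)

for name, lam in TRACERS_M.items():
    print(f"{name:9s} emitted at z = 7 observed at {observed_wavelength(lam, 7.0):.3e} m")
```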


3.3 Science goals

Since the first detection from an intensity mapping experiment was reported in 2010 [37] by the Green Bank Telescope (GBT), several more experiments have emerged within a relatively short time period, utilizing all four of the probes shown in figure 3.3. The applications of intensity mapping are numerous, as it can be used both for precision measurements and for new detections. I will try to sum up some of the goals for the coming 10 years.

3.3.1 The Evolution of Large-Scale Structures

At its essence, IM addresses the growth of density perturbations in the universe by tracing the emergent intensity from large-scale structures. From these signals we can create 3D intensity maps that contain information both about where in the sky the intensity comes from and about when it was generated. A 3D intensity map is therefore a precise method to determine the time evolution both of the neutral gas in the universe, by tracing the 21cm emission, and of galaxies and galaxy clusters, by tracing Ly-α, CII or CO emission from the neutral and ionized gas. IM can therefore help us determine the relation between molecular gas and star-forming regions. Furthermore, as CO is a strong tracer of star-forming regions (see section 3.4.1), by measuring the CO intensity together with other star-formation tracers we can set new constraints on the parameters connecting the star-formation rate and the CO luminosity, as described below in section 5.1.2.
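As a hedged sketch of the kind of relation such constraints target, a generic power law between star-formation rate and CO luminosity can be written as follows; the parametrization actually used is the one described in section 5.1.2, and alpha and beta here are placeholder values, not fitted ones.

```python
# Generic power-law model linking star formation and CO luminosity;
# alpha and beta are placeholders, not values from the thesis.

import numpy as np

def co_luminosity(sfr, alpha=1.0, beta=0.0):
    """Generic power law: log10(L_CO) = alpha * log10(SFR) + beta."""
    return 10.0 ** (alpha * np.log10(np.asarray(sfr)) + beta)

print(co_luminosity([1.0, 10.0, 100.0]))  # identity mapping for the placeholder values
```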

Another area of interest is the research on Baryonic Acoustic Oscillations (BAO). IM could contribute to this field by measuring the clustering pattern of matter tracers. Such measurements could constrain both the Hubble rate and the angular diameter distance at high redshifts [38].

3.3.2 ΛCDM

Even though IM is promising, can it compete with or complement high-precision experiments, such as CMB experiments, or match the extensiveness of modern galaxy surveys? To answer this question we need to look at what sets IM apart from other methods. While current structure surveys are generally limited to relatively small scales ($k \gtrsim 10^{-2}\,\mathrm{Mpc}^{-1}$), using IM to probe the very largest scales at high redshifts could be both cheaper and more efficient. In addition, when studying high redshifts, IM is often better suited for the task, as the dilution of the aggregate emission from a host of galaxies is smaller than that of a single galaxy. Standard surveys will therefore struggle to detect fainter objects at larger scales than IM will. Examples of high-redshift, large-scale IM include LSS Lyα emission [39] and potential future surveys to set constraints on primordial non-Gaussianity [40]. Lyα IM can also be used as a probe of baryon acoustic oscillations (BAO), as in HETDEX [41], which will provide a Lyα intensity map that can be used for this purpose.

IM can also be used to probe further into the ΛCDM model. One example of this is the Lyα intensity map provided by HETDEX, with particular focus on the baryon acoustic oscillations (BAO) and the neutrino mass.

3.3.3 EoR physics and galaxy assembly

In addition to connecting molecular gas and star-forming regions, IM could shed new light on the physics of the EoR. As shown in figure 3.2, 21cm, Lyα, CO and CII radiation trace both galaxy populations and the IGM. Combining measurements from all these sources could greatly increase our understanding of the physical processes of the EoR. During this epoch, the 21cm line emission can trace the thermal and ionization state of the IGM, while CO, CII and Lyα can trace the ionizing sources.

The galactic lines could also yield information regarding the cosmic evolution of metal abundance as well as the evolution of the star-formation rate density.

3.4 IM experiments

During the last decade several experiments have been undertaken to investigate the potential of IM. Figure 3.3 depicts some of them, as well as the redshifts they investigate.

As CO intensity mapping will be addressed in section 3.4.1, here we will focus on the SPHEREx, HERA and TIME experiments.

Figure 3.3: An overview of IM experiments and their redshift coverage. Figure from [4].

SPHEREx

The Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx) is a space-based NASA mission due to launch in 2023 [5].

Being a spectro-photometer the SPHEREx experiment will be able to both look closer
