
2. Theory

2.2 Heat transfer

2.2.1 Infrared thermography

2.2.1.2 The atmospheric effects

As thermography is a remote sensing method, other sources of radiation affect the thermal readings to different degrees. In order to retrieve the correct kinetic temperature of the object, the theoretical background of thermal radiation described in section 2.2.1.1 is used. There are three sources of radiation that affect the total radiation, Wtot, captured by the infrared camera (Usamentiaga et al., 2014). This can be expressed as

π‘Šπ‘‘π‘œπ‘‘= π‘Šπ‘œπ‘π‘—+ π‘Šπ‘Ÿπ‘’π‘“π‘™+ π‘Šπ‘Žπ‘‘π‘š (2.15)

where Wobj is the targeted object's emitted radiation, Wrefl is the surrounding, or background, radiation reflected off the target object, and Watm is the radiation from the atmosphere (Usamentiaga et al., 2014). With the use of the Stefan-Boltzmann law, the three sources of radiation may be expressed in the following way (Usamentiaga et al., 2014)

π‘Šπ‘œπ‘π‘— = πœ€π‘œπ‘π‘— βˆ™ πœπ‘Žπ‘‘π‘š βˆ™ 𝜎 βˆ™ (π‘‡π‘œπ‘π‘—)4 (2.16)

π‘Šπ‘Ÿπ‘’π‘“= πœŒπ‘œπ‘π‘— βˆ™ πœπ‘Žπ‘‘π‘š βˆ™ 𝜎 βˆ™ (π‘‡π‘Ÿπ‘’π‘“)4 (2.17) π‘Šπ‘Žπ‘‘π‘š = πœ€π‘Žπ‘‘π‘šβˆ™ 𝜎 βˆ™ (π‘‡π‘Žπ‘‘π‘š)4 (2.18)


Here, εobj is the object's emissivity, τatm is the atmosphere's transmittance, Tobj is the object's temperature, ρobj is the object's reflectivity, Tref is the reflected temperature, εatm is the emissivity of the atmosphere, and Tatm is the atmosphere's temperature (Usamentiaga et al., 2014). Equation (2.17) can be rewritten with the use of equation (2.14), resulting in equation (2.19) (Usamentiaga et al., 2014). With the use of equation (2.11) and assuming ρatm = 0, equation (2.18) can be rewritten into equation (2.20) (Usamentiaga et al., 2014).

π‘Šπ‘Ÿπ‘’π‘“ = (1 βˆ’ Ξ΅π‘œπ‘π‘—) βˆ™ Ο„π‘Žπ‘‘π‘šβˆ™ 𝜎 βˆ™ (π‘‡π‘Ÿπ‘’π‘“)4 (2.19) π‘Šπ‘Žπ‘‘π‘š = (1 βˆ’ Ο„π‘Žπ‘‘π‘š) βˆ™ 𝜎 βˆ™ (π‘‡π‘Žπ‘‘π‘š)4 (2.20) Equation (2.15) can then be rewritten and solved for the object’s temperature as follows

Tπ‘œπ‘π‘— = 4√(Wtot) βˆ’ (1 βˆ’ πœ€π‘œπ‘π‘—) βˆ™ πœπ‘Žπ‘‘π‘šβˆ™ 𝜎 βˆ™ (π‘‡π‘Ÿπ‘’π‘“)4βˆ’ (1 βˆ’ πœπ‘Žπ‘‘π‘š) βˆ™ 𝜎 βˆ™ (π‘‡π‘Žπ‘‘π‘š)4

πœ€π‘œπ‘π‘— βˆ™ πœπ‘Žπ‘‘π‘š βˆ™ 𝜎 (2.21)

The object's emissivity, the reflected temperature, and the atmospheric transmittance and temperature must therefore be known in order to correct the temperature readings of the object (Usamentiaga et al., 2014). The atmospheric transmittance, which is estimated from the relative humidity of the air and the distance to the object, is very close to one at fairly small distances, and its influence is therefore considered negligible (Quang Huy et al., 2017). The object's emissivity and the reflected temperature, however, strongly influence the thermal readings (Usamentiaga et al., 2014). In most cases, the reflected temperature can be set equal to the atmospheric temperature for objects with high emissivity (Quang Huy et al., 2017).
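To illustrate how equations (2.19)–(2.21) can be applied to correct a thermal reading, a minimal Python sketch is given below. It is only an illustration and is not taken from the cited sources; the function and variable names are hypothetical, the radiation terms are per unit area in SI units, and all temperatures are in kelvin.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W/(m^2*K^4)]

def object_temperature(w_tot, eps_obj, tau_atm, t_ref, t_atm):
    """Corrected object temperature, equation (2.21).

    w_tot   : total radiation captured by the camera [W/m^2]
    eps_obj : emissivity of the object [-]
    tau_atm : transmittance of the atmosphere [-]
    t_ref   : reflected (background) temperature [K]
    t_atm   : atmospheric temperature [K]
    """
    w_refl = (1.0 - eps_obj) * tau_atm * SIGMA * t_ref ** 4  # equation (2.19)
    w_atm = (1.0 - tau_atm) * SIGMA * t_atm ** 4             # equation (2.20)
    return ((w_tot - w_refl - w_atm) / (eps_obj * tau_atm * SIGMA)) ** 0.25

# Hypothetical check: simulate the camera signal for a surface at 293.15 K
# (20 deg C) and verify that the correction recovers the true temperature.
t_true, eps, tau, t_ref, t_atm = 293.15, 0.95, 0.99, 288.15, 288.15
w_tot = (eps * tau * SIGMA * t_true ** 4
         + (1 - eps) * tau * SIGMA * t_ref ** 4
         + (1 - tau) * SIGMA * t_atm ** 4)
print(object_temperature(w_tot, eps, tau, t_ref, t_atm))  # ~293.15
```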

A correction for the emissivity is needed because real materials have an emissivity lower than 1, as presented in section 2.2.1.1. This means that the radiant temperature, Trad, sensed by the camera is lower than the real kinetic temperature of the object's surface, Tkin (Kuenzer & Dech, 2013). Based on the Stefan-Boltzmann formula, equation (2.8), and the definition of emissivity, the relationship between the radiant temperature and the real kinetic temperature of an object's surface can be written as (Kuenzer & Dech, 2013)

$T_{rad} = \varepsilon^{1/4} \cdot T_{kin}$  (2.22)

This indicates that the temperature sensed by the infrared camera, the radiant temperature, can differ significantly even for objects with the same kinetic surface temperature (Kuenzer & Dech, 2013).

The thermal images must therefore be corrected in order to retrieve the correct kinetic temperature of an object (Kuenzer & Dech, 2013). From the corrected thermal readings, the thermal transmittance may be calculated.
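A short numerical illustration of equation (2.22), using hypothetical emissivity values and a surface at 20 °C (temperatures in kelvin), shows how much the radiant temperature deviates from the kinetic temperature:

```python
def radiant_temperature(eps, t_kin):
    """Radiant temperature sensed by the camera, equation (2.22)."""
    return eps ** 0.25 * t_kin

t_kin = 293.15  # a surface with a kinetic temperature of 20 deg C
for eps in (1.0, 0.95, 0.60):
    print(eps, round(radiant_temperature(eps, t_kin) - 273.15, 1))
# eps = 1.00 -> 20.0 deg C, eps = 0.95 -> ~16.3 deg C, eps = 0.60 -> ~-15.1 deg C,
# although the kinetic surface temperature is the same in all three cases.
```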

2.2.1.3 Thermal transmittance from thermal images

The thermal transmittance, U, can be defined as the "Heat flow rate in the steady state divided by the area and by the temperature difference between the surroundings on each side of the system" (International Organization for Standardization, 2014). This can be written as

π‘ˆ = 𝑄

(𝑇𝑖 βˆ’ π‘‡π‘œ) Β· 𝐴 (2.23)

where A [m²] is the surface area through which the heat, Q, flows, Ti is the indoor atmospheric temperature, and To is the outdoor atmospheric temperature (Madding, 2008).

In the steady state, the conductive heat flow through the building element equals the heat exchanged between the element and the surrounding air, which can be treated as the sum of the convective, Qc, and radiative, Qr, contributions (Bienvenido-Huertas et al., 2019; Madding, 2008). Equation (2.23) can therefore be rewritten as

π‘ˆ = 𝑄𝑐 + π‘„π‘Ÿ

(𝑇𝑖 βˆ’ π‘‡π‘œ) Β· 𝐴 (2.24)

Several methods have arisen from equation (2.23) in order to determine the thermal transmittance of a building element (Bienvenido-Huertas et al., 2019). Bienvenido-Huertas et al. (2019) analyzed a number of such equations, formulated by different authors, and found little difference between them. Equation (2.25) (Bienvenido-Huertas et al., 2019) was therefore used in order to determine the thermal transmittance of a building element with the use of infrared thermography.

π‘ˆ =β„Žπ‘(π›₯π‘‡π‘€π‘Žπ‘‘π‘š) + 4πœ€πœŽ 𝑇𝑀3 (𝑇𝑀 βˆ’ π‘‡π‘Ÿπ‘’π‘“)

𝑇𝑖 βˆ’ π‘‡π‘œ (2.25)

Here, the radiative thermal transfer is given by

π‘„π‘Ÿ= 4 βˆ™ πœ€ βˆ™ 𝜎 βˆ™ 𝐴 βˆ™ 𝑇𝑀3(π‘‡π‘€βˆ’ π‘‡π‘Ÿπ‘’π‘“) (2.26) where Ξ΅ is the emissivity of the wall, the wall’s temperature, Tw, and Tref is the reflected temperature (Bienvenido-Huertas et al., 2019; Madding, 2008). As for the convective thermal transfer, it is given by

𝑄𝑐 = β„Žπ‘βˆ™ 𝐴 βˆ™ (βˆ†π‘‡π‘€π‘Žπ‘‘π‘š) (2.27) where hc is the convective heat transfer coefficients, and Ξ”Twatm is the temperature difference between the wall surface, Tw, and the atmospheric temperature between the infrared camera and

12

the object, Tatm (Madding, 2008). The convective coefficient is influenced by the air flow conditions, or convection, experienced by the wall (Jayamaha et al., 1996). Commonly, this coefficient is usually estimated with the use of the following correlation

β„Žπ‘ = 5.7 + 3.8 βˆ™ 𝑣 (2.28)

where v is the wind speed (Jayamaha et al., 1996). Based on experimental studies, other correlations have been developed in order to estimate the convective heat transfer coefficient (Jayamaha et al., 1996).

The aim of the article by Bienvenido-Huertas et al. (2019) was to analyze the internal convective heat transfer coefficient, hi. Out of the 25 different temperature-difference-based correlations analyzed by Bienvenido-Huertas et al. (2019), the following internal convective heat transfer coefficient was selected

β„Žπ‘– = 3.08(βˆ†π‘‡π‘€π‘–)0.25 (2.29)

Here, ΔTwi is the absolute temperature difference between the wall surface, Tw, and the indoor atmospheric temperature, Ti (Bienvenido-Huertas et al., 2019).
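A minimal Python sketch of how equations (2.25)–(2.29) may be combined to estimate the thermal transmittance from a corrected thermal image is given below. The function names and structure are hypothetical; the sign conventions simply mirror the temperature differences as defined above, and all temperatures are in kelvin.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W/(m^2*K^4)]

def h_internal(t_wall, t_in):
    """Internal convective heat transfer coefficient, equation (2.29)."""
    return 3.08 * abs(t_wall - t_in) ** 0.25

def h_external(wind_speed):
    """Wind-speed correlation for the convective coefficient, equation (2.28)."""
    return 5.7 + 3.8 * wind_speed

def u_value_thermography(t_wall, t_atm, t_ref, t_in, t_out, emissivity, h_c):
    """Thermal transmittance from equation (2.25).

    t_wall      : corrected wall surface temperature from the thermal image [K]
    t_atm       : atmospheric temperature between the camera and the wall [K]
    t_ref       : reflected temperature [K]
    t_in, t_out : indoor and outdoor air temperatures [K]
    h_c         : convective heat transfer coefficient [W/(m^2*K)]
    """
    q_conv = h_c * (t_wall - t_atm)                                    # equation (2.27), per unit area
    q_rad = 4.0 * emissivity * SIGMA * t_wall ** 3 * (t_wall - t_ref)  # equation (2.26), per unit area
    return (q_conv + q_rad) / (t_in - t_out)
```

Which correlation is used for hc, equation (2.28) or (2.29), depends on whether the convective exchange takes place on the exterior (wind-driven) or interior side of the wall.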

2.2.2 Thermal resistance and thermal transmittance with the use of the heat flow meter method

Section 2.2.1 describes one of the methods used to calculate the thermal transmittance. Another method that can be used to determine the thermal transmittance is the heat flow meter method. This method indicates the total thermal resistance of a building element, which can then be used to determine the thermal transmittance of said building element (International Organization for Standardization, 2014).

Thermal resistance, or R-value, is a measure of the resistance to heat transfer, or heat loss, through a building element (Edvardsen & Ramstad, 2014b). Each material in the element has a distinct thermal resistance, and these can be summed in order to obtain the element's total thermal resistance (Edvardsen & Ramstad, 2014b).

If the materials in the building element are unknown, it is possible to measure the total thermal resistance of a building element with the use of the heat flow meter method. The measurements and calculations are to be in accordance with ISO 9869.

According to ISO 9869:2014, the measurements needed to obtain the thermal resistance are the surface temperatures on both sides of the element as well as the heat flux through the element (International Organization for Standardization, 2014). It is important that the measurements are taken over a sufficiently long period (International Organization for Standardization, 2014). In order to analyze the measurements, ISO 9869:2014 presents two different methods. The average method is a simple method, while the dynamic method is considered more sophisticated (International Organization for Standardization, 2014). For this thesis, the dynamic method will not be further described as it was not used.

With the use of the average method, the thermal resistance can be obtained by dividing the average temperature difference across the element by the average heat flux (International Organization for Standardization, 2014). The average thermal resistance for the period is given by equation (2.30) (International Organization for Standardization, 2014).

π‘…π‘’π‘™π‘’π‘šπ‘’π‘›π‘‘ =

If the average thermal resistance does not differ by more than 5 % over three subsequent nights, the measuring period can be concluded (International Organization for Standardization, 2014).

Only the data acquired at night, from 1 hour after sundown until sunrise, are recommended for the analysis in order to avoid the influence of solar radiation (International Organization for Standardization, 2014).
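As an illustration of the average method, the following Python sketch computes the thermal resistance from logged surface temperatures and heat fluxes according to equation (2.30), together with one possible interpretation of the 5 % criterion described above. The sketch is not part of ISO 9869, and the function names are hypothetical.

```python
def r_value_average(t_si, t_se, q):
    """Average-method thermal resistance, equation (2.30).

    t_si, t_se : sequences of interior and exterior surface temperatures [K or deg C]
    q          : sequence of simultaneous heat flux readings [W/m^2]
    """
    return sum(si - se for si, se in zip(t_si, t_se)) / sum(q)

def measurement_can_be_concluded(nightly_r_values, tolerance=0.05):
    """True when the R-values from the three most recent nights agree within 5 %."""
    if len(nightly_r_values) < 3:
        return False
    last_three = nightly_r_values[-3:]
    return (max(last_three) - min(last_three)) / last_three[-1] <= tolerance
```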

In order to calculate the total thermal resistance, surface layers of air must also be included, as they function as a thermal resistance (Çengel et al., 2015; Edvardsen & Ramstad, 2014b). The heat flux through a layer of fluid by convection can be expressed as

π‘žπ‘π‘œπ‘›π‘£π‘’π‘π‘‘π‘–π‘œπ‘› = β„Žπ‘ βˆ™ Δ𝑇 (2.31)

where ΔT is the temperature difference (Çengel et al., 2015). Another way to define the heat flux is as the heat flow per unit area, equation (2.32) (Çengel et al., 2015).

π‘ž =𝑄

𝐴 (2.32)

As the thermal resistance can be defined as the inverse of the thermal transmittance (Edvardsen & Ramstad, 2014b), combining equations (2.23), (2.31) and (2.32) gives the thermal resistance of a surface air layer as

$R_s = \dfrac{1}{h_c}$  (2.33)

The total thermal resistance of the building element then becomes

$R_T = R_{si} + R_{element} + R_{se}$  (2.34)

where Rsi is the interior surface thermal resistance and Rse is the exterior surface thermal resistance (Edvardsen & Ramstad, 2014b). As the thermal resistance is the inverse of the thermal transmittance, the thermal transmittance can be expressed as (Edvardsen & Ramstad, 2014b)

π‘ˆ = 1

𝑅𝑇 (2.35)

A well-insulated element has a low thermal transmittance, while a high value indicates the element is thermally deficient (Edvardsen & Ramstad, 2014b).
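A small sketch combining the total thermal resistance with equation (2.35) is given below; the default surface resistances are conventional design values and are used here for illustration only.

```python
def u_value_from_resistance(r_element, r_si=0.13, r_se=0.04):
    """Thermal transmittance of the element, equations (2.34) and (2.35).

    r_element  : measured thermal resistance of the element [m^2*K/W]
    r_si, r_se : interior and exterior surface resistances [m^2*K/W]; the
                 defaults are conventional design values, used here only
                 as an illustration.
    """
    r_total = r_si + r_element + r_se  # total thermal resistance, equation (2.34)
    return 1.0 / r_total               # equation (2.35)

print(u_value_from_resistance(2.5))  # ~0.37 W/(m^2*K)
```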

2.2.3 Thermal bridges

Thermal bridges are areas of the building envelope, such as walls, where there is a significant change in the thermal resistance (SINTEF Byggforsk, 2008). Several conditions can create thermal bridges, such as floor-to-wall junctions, wall-to-wall junctions and changes in material thickness, and they can lead to numerous unfortunate consequences (SINTEF Byggforsk, 2008). The main consequences are low surface temperatures and an increase in heat loss (SINTEF Byggforsk, 2008).

Low surface temperatures locally on the inside of structures can be caused by thermal bridges (SINTEF Byggforsk, 2008). The surface temperature depends on the thermal bridge’s influence on the structure and the indoor and outdoor temperatures (SINTEF Byggforsk, 2008).

Condensation may occur as a result of the low surface temperature caused by the thermal bridges (SINTEF Byggforsk, 2008). The conditions and consequences of low surface temperatures were presented in section 2.1.

The linear thermal bridge can be indicated as the heat loss per unit length of the thermal bridge and per degree temperature difference, known as the linear thermal transfer coefficient, Ψ. It can be expressed through

$U_c \cdot A = U \cdot A + \Psi \cdot l$  (2.36)

where Uc is the thermal transmittance of the area influenced by the thermal bridge, U is the thermal transmittance in areas without the thermal bridge, A is the area of the wall, and l is the length of the linear thermal bridge (SINTEF Byggforsk, 1999; SINTEF Byggforsk, 2008). Equation (2.36) can be re-written as

Ξ¨ =(π‘ˆπ‘βˆ’ π‘ˆ) Β· 𝐴

𝑙 = (π‘ˆπ‘βˆ’ π‘ˆ) Β· 𝑏 (2.37)

15 where b is the width of the partition wall or the thickness of the floor junctions (SINTEF Byggforsk, 1999).
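A minimal illustration of equation (2.37), using hypothetical transmittance values:

```python
def linear_thermal_transfer_coefficient(u_c, u, b):
    """Linear thermal transfer coefficient, equation (2.37).

    u_c : thermal transmittance of the area influenced by the thermal bridge [W/(m^2*K)]
    u   : thermal transmittance of the undisturbed wall [W/(m^2*K)]
    b   : width of the partition wall or thickness of the floor junction [m]
    """
    return (u_c - u) * b

# Hypothetical example: a 0.20 m wide junction where the local transmittance
# rises from 0.22 to 0.45 W/(m^2*K) gives a coefficient of ~0.046 W/(m*K).
print(linear_thermal_transfer_coefficient(0.45, 0.22, 0.20))
```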

2.3 Photogrammetry

The scientific field known as photogrammetry uses imagery to acquire three-dimensional measurements of objects and surfaces (Kemp, 2008; Thomas et al., 2013). A single image contains insufficient information to perform any three-dimensional mapping, as a two-dimensional image only represents a perspective projection of the three-dimensional world (Kemp, 2008). Overlapping images of the same scene are therefore required for three-dimensional mapping (Kemp, 2008). The necessary overlap depends on the mapping area, but an overlap of at least 60 % is required (Kemp, 2008).

It is possible to determine the coordinates of the photographed object in three-dimensional space by applying the principle of triangulation to measurements made in two or more images taken from different angles (Thomas et al., 2013). By re-establishing the geometric situation at the time of exposure, it is possible to derive the placement, shape, and size of objects (Andersen, 1981). The geometric properties of the camera are the interior, exterior, relative and absolute orientation (Andersen, 1981). How well these parameters are established determines the quality and precision of the photogrammetric products (Andersen, 1981).
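The triangulation principle can be illustrated with a minimal Python sketch based on the standard linear (direct linear transformation) formulation and two hypothetical cameras; it only demonstrates the principle and does not reflect how any particular photogrammetry software is implemented.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one object point from two images.

    P1, P2 : 3x4 projection matrices (interior and exterior orientation combined)
    x1, x2 : corresponding image points (u, v) in the two images
    Returns the object point in the common three-dimensional coordinate system.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras one unit apart, both with unit focal length.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # ~[0.3, 0.2, 4.0]
```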

The geometric properties concerning the internal parts of the camera are considered the camera's interior orientation (Andersen, 1981). The focal length, the principal point, and the radial and tangential lens distortion relate to the camera's interior orientation (Andersen, 1981; Thomas et al., 2013).

The camera's spatial rotation and position are the exterior orientation parameters (Andersen, 1981). With these parameters, the camera's projection center may be placed in three-dimensional space (Andersen, 1981). The position can be defined as a vector, [x, y, z], expressing the camera's projection center position (x, y) and elevation (z) (Andersen, 1981).

The camera's rotation is defined as a vector, [ω, φ, κ], where the angles (ω, φ) indicate the direction and the angle (κ) indicates the rotation around the z-axis (Andersen, 1981).
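As an illustration, the three angles can be assembled into a rotation matrix. The rotation order used here, sequential rotations about the x-, y- and z-axes, is one common photogrammetric convention; other conventions exist, so this is only a sketch.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix from the angles (omega, phi, kappa), given in radians."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx  # rotations about the x-, y- and z-axes, in that order
```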

The relative position and orientation between images relate to the relative orientation (Andersen, 1981). As the images are oriented in relation to each other, the computed model's orientation may not be equivalent to the orientation of the photographed scene (Andersen, 1981).

For example, the model's orientation may be upside-down in comparison with the photographed scene. Orienting the model in accordance with the photographed scene is known as the absolute orientation (Andersen, 1981). The absolute orientation is not needed in order to triangulate the three-dimensional coordinates of the scene in the overlapping images.


2.3.1 Orthophoto

Images are a two-dimensional representation of the rays of light reflected from a scene (Andersen, 1981). A camera captures the reflected light by converging it through its projection center and transferring it onto a two-dimensional plane (Andersen, 1981). A straight line can be drawn from a given image point, P, through the projection center and back to the initially reflecting object point in the three-dimensional world (Andersen, 1981).

As a result of the photo's central projection, stitching images together may create a misrepresentation of the photographed scene (Pix4D). An orthophoto is a photo with map-like qualities and is unaffected by the misrepresentation caused by the central projection (Pix4D).

The orthophoto consists of an orthogonal projection of the photographed object and has the same scale throughout the product, just like maps (Dick, 2003).
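The difference between the two projections can be illustrated with a small sketch in which hypothetical object points with the same horizontal position but different depths are projected both centrally and orthogonally; only the central projection shifts them with depth.

```python
import numpy as np

def central_projection(point, focal_length=1.0):
    """Perspective (central) projection of an object point onto the image plane."""
    x, y, z = point
    return focal_length * np.array([x / z, y / z])

def orthogonal_projection(point):
    """Orthogonal projection, as used in an orthophoto: the depth is simply dropped."""
    x, y, _ = point
    return np.array([x, y])

# Two object points with the same (x, y) but different depths: the central
# projection maps them to different image positions, the orthogonal one does not.
print(central_projection(np.array([1.0, 1.0, 2.0])))     # [0.5  0.5 ]
print(central_projection(np.array([1.0, 1.0, 4.0])))     # [0.25 0.25]
print(orthogonal_projection(np.array([1.0, 1.0, 2.0])))  # [1. 1.]
```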

Figure 2.2: Central and orthogonal projection. Modified figure from (Andersen, 1981).

3. Equipment and software

This chapter presents the main equipment and software used in order to acquire the measurements further described in section 4.2.

3.1 TRSYS01

TRSYS01 is a robust and highly accurate in-situ measuring system for monitoring the heat flux and surface temperatures of building elements (Hukseflux). Even with a low temperature difference across the wall, TRSYS01 can continue to provide reliable measurements because of the high accuracy of its sensors (Hukseflux).

The apparatus can measure at two locations at the same time; this redundancy leads to a high confidence level in the resulting measurements (Hukseflux). Each location is provided with one heat flux sensor and a pair of temperature sensors (Hukseflux).

The sensor for measuring heat flux is the HFP01, while thermocouples of type TC measure the surface temperatures on the two sides of the element (Hukseflux). The uncertainty of the measured temperature difference between the paired TC thermocouples is better than 0.1 °C and applies over the entire rated temperature range (Hukseflux). Figure 4.5 displays the sensors mounted on a building element.

The right location and conditions are important when installing the sensors. They should not be mounted in areas where they are exposed to e.g. sun, rain, lateral heat fluxes, or drafts (Hukseflux). Thermal bridges and heating devices should also be avoided (Hukseflux). Strongly cooled or heated rooms are ideal, as they result in a constantly high heat flux (Hukseflux).

The difference in temperature must be higher than 10 °C (Bienvenido-Huertas et al., 2019).

Additionally, the indoor temperature shall not change by more than 3 °C during the measurement period (Hukseflux). More extensive information about the installation of the sensors can be found in TRSYS01's user manual.

The measurements from the mounted sensors are stored in the MCU01, which is a measuring system with memory and a clock (Hukseflux). The measurements from the sensors, stored in the MCU, can be downloaded to a computer and be further processed in order to calculate the building element’s thermal resistance and thermal transmittance (Hukseflux). In order to determine the thermal resistance and thermal transmittance, the measurements should be used in accordance with ISO 9869 and ASTM C1155/C1046 (Hukseflux).


3.2 Cameras

3.2.1 FLIR T620bx

FLIR T620bx, manufactured by FLIR, is a high-performance infrared camera with the latest technology available (FLIR, 2014). The camera has a field of view of 25° horizontally and 19° vertically as well as an uncooled focal plane array of 640 x 480 pixels (FLIR, 2014). Technical characteristics of the infrared camera are listed in Table 3.1.

Table 3.1: FLIR T620bx 25° technical characteristics (FLIR, 2014).

Resolution: 640 x 480 pixels
Measurement range: −40 °C to +150 °C; +100 °C to +650 °C
Spatial resolution (IFOV): 0.68 mrad
Field of view (FOV): 25° x 19°
Frequency: 30 Hz
Accuracy: Max(±2 °C; 2 %) at 25 °C nominal

3.2.2 CANON EOS 100D

CANON EOS 100D is a single-lens reflex camera that captures only the reflected visible light of three-dimensional objects. The images can be stored in JPEG and/or RAW format, with a maximum resolution of 5184 x 3456 pixels.

3.3 Pix4D mapper

The image processing software Pix4Dmapper applies photogrammetry to transform images into digital spatial maps and models (Pix4D, 2017b). The digital images processed in Pix4Dmapper can be in either JPEG or TIFF format. With TIFF files, it is possible to use both RGB images and thermal images (Pix4D).

3D point clouds and textured models, orthophotos, Digital Surface Models (DSM), and Digital Terrain Models (DTM) are the main outputs of Pix4Dmapper (Pix4D, 2017b). The generated products can be exported to many different formats, making it possible to further process the results in other software such as AutoCAD (Pix4D, 2017b).

There is a desktop version of the software, as well as the option to process projects in the cloud (Pix4D, 2017b). Images may be uploaded directly to the cloud and processed automatically (Pix4D, 2017b). In order to define the processing options, the desktop version must be used (Pix4D, 2017b). After the desired parameters are defined, the project may be processed with the desktop version or uploaded to the cloud (Pix4D, 2017b). If the cloud is used, the results may be downloaded for further processing and re-processing until the desired results are achieved. Manual tie points (MTPs) may be added in order to improve the automatically processed point cloud. The processed model may also be forced to have a certain orientation, the absolute orientation (Pix4D, 2017b). From the generated models it is possible to create e.g. orthophotos and index maps (Pix4D, 2017b).

3.3.1 Outline of processing steps

Pix4Dmapper consists of three main processing steps:

1. Initial processing
2. Point Cloud and Mesh
3. DSM, Orthomosaic and Index

Step 1 is necessary to create index maps, while both steps 1 and 2 are needed to create the orthophotos. In step 1, the software identifies specific features in the images as keypoints (Pix4D). These keypoints are then matched with similar keypoints in other images (Pix4D).

The necessary overlap percentage between the images may differ depending on the image scene (Pix4D). For projects consisting of thermal images, an overlap of 90 % may be required (Pix4D). The internal and external parameters of the camera are also calibrated in this step (Pix4D). The product of this step is a three-dimensional ray cloud consisting of automatic tie points (Pix4D).
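The idea of keypoint detection and matching in step 1 can be illustrated with a short sketch using the OpenCV library. This is only conceptually similar to what Pix4Dmapper does internally, and the image file names are hypothetical.

```python
import cv2

# Two hypothetical, overlapping photographs of the same wall, loaded in grayscale.
img1 = cv2.imread("wall_left.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("wall_right.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors in each image.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match the descriptors between the two images; each match is a candidate tie point.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate tie points between the two images")
```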

Step 2 builds upon the automatic tie points found in step 1, creating a densified point cloud (Pix4D). The densified point cloud consists of additional tie points created on the basis of the automatic tie points (Pix4D). A three-dimensional texture mesh can be created from the densified point cloud (Pix4D). Figure 3.1 displays the results from step 1 and 2.


Figure 3.1: The results from steps 1 and 2: (a) the resulting image orientations, (b) the automatic tie points, (c) the densified point cloud, and (d) the triangle mesh.

3.3.2 Mapping

How the images are obtained can greatly affect the products generated by photogrammetry. The exterior orientation of the camera may affect the number of matched points between images. This also applies to Pix4Dmapper.

A shooting angle perpendicular to the wall is considered the best way to achieve the most numerous and well-distributed matches in Pix4Dmapper (Pix4D, 2017a). Different angles, both vertically and horizontally, usually result in fewer matches among the images (Pix4D, 2017a).

Another element to consider is how the operator moves relative to the photographed surface. The operator should always face perpendicular to the measured area, as this is considered the best option for most spaces (Pix4D, 2017a). This results in less distortion in the images, as the area of interest appears at the center (Pix4D, 2017a).

Indoor mapping is often considered more challenging than outdoor mapping because of restricted space, dim light conditions, and flat surfaces with few distinctive details (Pix4D, 2017a). The method may also be applied to outdoor areas with similar challenges.



4. Methodology

This chapter presents the historical building Snekkenes, at which the case study was performed (section 4.1), the data collection (section 4.2), and the data processing (section 4.3).

4.1 Snekkenes

A case-study of Snekkenes was performed at Borgarsyssel Museum in Sarpsborg, Norway.

Borgarsyssel Museum is an open-air museum founded in 1921 with a county-wide collection of buildings from Østfold, ranging from World War II all the way back to the Middle Ages (Borgarsyssel museum). A remarkable collection of historical artifacts, ranging from the Stone Age up to our own time, is also housed and displayed at the museum (Borgarsyssel museum).

The historical building Snekkenes, an Empire-style wooden building from the latter half of the 18th century, was the first building moved to Borgarsyssel Museum (Jensen). Several portraits are displayed at Snekkenes, as well as furniture in Baroque, Rococo and Empire style (Jensen).

In May 2016, an inspection of Snekkenes was performed (Borgarsyssel Museum, 2016). The furniture and ceiling in the Werenskiold hall were inspected, and mold was found on several pieces of furniture. This also includes mold previously revealed on furniture during seasonal cleaning.
