
Maren Hauge

Machine learning for predictions of strains due to long-term effects and temperature in concrete structures

Master's thesis in Civil and Environmental Engineering
Supervisor: Daniel Cantero
June 2019

Norwegian University of Science and Technology
Faculty of Engineering
Department of Structural Engineering

Department of Structural Engineering
Faculty of Engineering
NTNU – Norwegian University of Science and Technology

MASTER THESIS 2019

SUBJECT AREA: Structural Engineering
DATE: 16.06.19
NO. OF PAGES: 95

TITLE:

Machine learning for predictions of strains due to long-term effects and temperature in concrete structures

Maskinlæring for beregning av tøyninger som følge av langtidseffekter og temperatur i betongkonstruksjoner

BY: Maren Hauge

RESPONSIBLE TEACHER: Associate Professor Daniel Cantero

SUPERVISOR(S): Daniel Cantero (NTNU) and Håvard Johansen (Statens vegvesen Vegdirektoratet)

CARRIED OUT AT: Department of Structural Engineering, NTNU Trondheim

SUMMARY:

The effects of creep, shrinkage and temperature, and the development of strains in concrete over time, are complex problems which are hard to define accurate models for. In this thesis, the possibility of using machine learning to define more accurate models for predictions of strain due to long-term effects and temperature in concrete has been investigated.

A simply supported concrete beam has been modelled. Strain results for the long-term effects have been obtained with DIANA FEA, while strains from the temperature effects were calculated with MATLAB. The Neural Network Time Series toolbox in MATLAB was used for training neural network models with the total strain results. Several models were trained for different points on the beam with time and temperature as inputs and strain as output.

It was found that when the neural network was trained with three years of strain signals, the generated models had good generalization and could predict strain signals with high accuracy. The length of the strain signal used for training, the complexity of the signal, whether the general trend of the signal changes after the training period, and the amount of noise in the signal all affect the performance of the models. It was seen that it is possible to train neural network models that can eliminate some of the strain due to temperature effects when predicting future strains using a constant temperature as input. The predicted strains followed the general trend of the long-term strains, and the yearly variation due to temperature was eliminated. However, the models could not completely remove the daily strain variation due to temperature.

Based on the findings in this thesis and other studies done on machine learning, there seems to be potential for using neural networks to predict strains due to long-term effects with higher accuracy than the material models used for calculating these effects today.

ACCESSIBILITY: Open


Department of Structural Engineering
Faculty of Engineering
NTNU – Norwegian University of Science and Technology

MASTER'S THESIS 2019

for

Maren Hauge

Machine learning for predictions of strains due to long-term effects and temperature in concrete structures

Maskinlæring for beregning av tøyninger som følge av langtidseffekter og temperatur i betongkonstruksjoner

Machine learning for creep, shrinkage and temperature:

Long-term effects in concrete (creep and shrinkage) are complex problems, which are currently considered in design using approximate methods. Furthermore, the effects of temperature on structural behaviour are not limited to thermal expansion; temperature also affects the structural stiffness and the long-term effects. The goal of this thesis is to try to provide better descriptions of these phenomena using machine learning. The students would develop a numerical model of a generic bridge and simulate the influence of temperature and the long-term effects. The results from this model would be the inputs for machine learning techniques.

At the end of the work, the students should provide an overview of machine learning and its applicability to concrete structures. Additionally, they would apply several techniques to address the influences of temperature, creep and shrinkage in concrete bridges.

The thesis shall be organized in accordance with the current guidelines.

Supervisor(s): Daniel Cantero

The thesis shall be submitted to the Department of Structural Engineering by 25 June 2019.

NTNU, 15 January 2019

Daniel Cantero
Responsible teacher


Preface

This master's thesis is the final work of the five-year master's degree in Civil and Environmental Engineering at the Norwegian University of Science and Technology (NTNU). The work has been carried out during the spring of 2019 at the Department of Structural Engineering, with Associate Professor Daniel Cantero as supervisor.

A big thank you goes to Daniel Cantero for the help he has provided with this thesis, and for regular meetings and fast correspondence by e-mail. I would also like to thank Professor Jan Arve Øverli and Ph.D. candidate Assis Arano Barenys for helping me with DIANA FEA, and Håvard Johansen, Statens vegvesen, for an informative meeting in January and for providing material on the design of prestressed concrete bridges.

Trondheim, 16 June 2019

Maren Hauge


Abstract

Concrete structures experience strains due to loading, long-term effects and temperature effects. As the material models used today for calculating the effects of creep and shrinkage are only approximate methods, calculations with these models usually differ from the strains that can be measured in a structure. Concrete also behaves nonlinearly when it starts cracking, and the structural stiffness of concrete changes with varying temperature, complicating the calculations. All of this makes the effects of creep and shrinkage, and the development of strains in concrete over time, complex problems that are hard to define accurate models for. In this thesis, the possibility of using machine learning to define more accurate models for predicting strains due to long-term effects and temperature in concrete has been investigated.

A simply supported concrete beam has been modelled in 2D with reinforcement and post-tensioning. Strain results for the long-term effects have been obtained with DIANA FEA, while strains from the temperature effects were calculated with MATLAB. The Neural Network Time Series toolbox in MATLAB was used for training neural network models with the total strain results. Several models were trained for different points on the beam, with time and temperature as inputs and strain as output.

It was found that when the neural network was trained with three years of strain signals, the generated models had good generalization and could predict strain signals with high accuracy. The length of the strain signal used for training influences the model's performance; although a longer signal can give a better performing model, this is not always the case. The complexity of the signal, whether the general trend of the signal changes after the training period, and the amount of noise in the signal also affect the performance of the models. It was seen that it is possible to train neural network models that can eliminate some of the strain due to temperature effects when predicting future strains using a constant temperature as input. The predicted strains followed the general trend of the long-term strains, and the yearly variation due to temperature was eliminated. However, the models could not completely remove the daily strain variation due to temperature.

Based on the findings in this thesis and other studies done on machine learning, there seems to be potential for using neural networks to predict strains due to long-term effects with higher accuracy than the material models used for calculating these effects today. There is also potential to generate machine learning models that can account for the temperature effects and remove them, although further work should be done in this area to optimize the models.


Sammendrag

Concrete structures experience strains due to loading, long-term effects and temperature. Since the material models used today for calculating the effect of creep and shrinkage in concrete are only approximate methods, calculations with these models will often give results that differ from the strains that can be measured in a structure. Concrete behaves nonlinearly when it starts to crack, and the structural stiffness of concrete changes with varying temperature, which complicates the calculations. All of this makes the effect of creep and shrinkage, and the development of strains in concrete over time, complex problems that are difficult to define accurate models for. This thesis investigates the possibility of using machine learning to define more accurate models for predicting strains due to long-term effects and temperature.

A simply supported concrete beam has been modelled in 2D with reinforcement and post-tensioning.

DIANA FEA was used to obtain strain results for the long-term effects, while strains due to the temperature effects were calculated with MATLAB. An application called Neural Network Time Series in MATLAB was used to train neural network models with the total strain results. Several models were trained for different points on the beam with time and temperature as inputs and strain as output.

Neural networks trained with three years of strain signals produced models with good generalization that could predict further strain signals with high accuracy. The length of the strain signal used for training affects the model's performance. Although a longer signal can give a better performing model, this is not always the case.

The complexity of the signal, whether the general trend of the signal changes after the training period, and the amount of noise in the signal also affect the model's performance. It was observed that it is possible to train neural network models that can eliminate some of the strains caused by the temperature effects when further strains are predicted with a constant temperature as input. The predicted strains followed the general trend of the long-term strains, and the yearly strain variation due to temperature fluctuations was eliminated. However, the models could not completely remove the daily strain variation caused by the temperature fluctuations.

Based on the findings in this thesis and on other studies of machine learning, there seems to be potential for using neural networks to predict long-term strains with higher accuracy than the material models used today for calculating these effects. There is also potential for generating machine learning models that can take the temperature effects into account and remove them, although further work should be done in this area to optimize the models.


Content

Preface
Abstract
Sammendrag
Nomenclature
1 Introduction
2 Machine learning
2.1 Learning algorithms
2.1.1 Unsupervised learning
2.1.2 Supervised learning
2.1.3 Reinforcement learning
2.1.4 Data representation and variation in data sets
2.2 Regression
2.3 Generalization and optimization
2.3.1 Underfitting and overfitting
2.4 Neural networks
2.4.1 Introduction to deep learning
2.4.2 Transfer functions
2.4.3 Levenberg-Marquardt algorithm
3 Strain
3.1 Strain due to mechanical loading
3.2 Strain due to temperature loading
3.2.1 The alpha-effect
3.2.2 The beta-effect
3.3 Strain sensors
3.3.1 Strain gauge
3.3.2 Optical fibre sensors
4 Creep and shrinkage
4.1 Material models
4.1.1 Creep
4.1.2 Shrinkage
4.1.3 Cracked concrete
5 Heat transfer
5.1 Conduction
5.2 Convection
5.3 Radiation
5.4 Heat transfer in concrete bridges
6 2D model of a concrete bridge
6.1 Basis of the model
6.1.1 Dimensions
6.1.2 Materials
6.1.3 Concrete covers
6.2 DIANA FEA
6.2.1 Creating the model
6.2.2 Loads
6.2.3 Reinforcement
6.2.4 Load combination
6.2.5 Meshing
6.2.6 Heat transfer
6.3 MATLAB
6.3.1 Temperature variation
6.3.2 Heat transfer
6.3.3 Strain due to temperature
6.3.4 Machine learning
7 Analysis
7.1 Stresses from DIANA
7.2 Strain
7.2.1 Strain from DIANA
7.2.2 Strain from MATLAB
7.2.3 Total strain
7.3 Verifying the strain results from DIANA
7.4 Machine learning
7.4.1 Length of the training period
7.4.2 Predictions with constant temperature at node 3
7.4.3 Predictions with constant temperature for different points
7.4.4 Separation of strain signals
7.4.5 Signal noise
7.4.6 Discussion
8 Related work
9 Conclusion
Bibliography
Appendix

Nomenclature

Small Latin letters

a        absorption coefficient
b        bias
c        specific heat capacity
e        emissivity coefficient
f        transfer function, activation function
f_cd     design compressive strength of concrete
f_ck     characteristic compressive strength of concrete
f_cm     mean compressive strength of concrete
f_p0.1k  characteristic 0.1% proof stress of prestressing steel
f_pk     characteristic tensile strength of prestressing steel
f_yd     design yield strength of steel
f_yk     characteristic yield strength of steel
h_0      notional size of a member
h_c      convective heat coefficient
n        neuron
n_eff    effective refraction index
q        uniformly distributed load
q̇_G      rate of heat generated per unit volume in a body
t        time
v_e      vector with network errors
w        weight
x        input data
y        output data, or distance from the neutral axis to the point of interest
ŷ        predicted output
ȳ        mean value of the actual output data y

Small Greek letters

α_diff   thermal diffusivity
α_T      thermal expansion coefficient
β        thermal hardening coefficient
γ        partial factor
ε        strain
λ        thermal conductivity
λ_B      Bragg wavelength
µ        damping parameter
ξ        thermo-optic coefficient
ρ        mass density
σ        stress
φ        creep coefficient

Capital letters

A        cross-sectional area
C_nom    nominal concrete cover
C_s      Stefan-Boltzmann constant
E        Young's modulus
F        axial force
H        Hessian matrix
I        identity matrix
I        second moment of area
I_s      solar radiation
J        Jacobian matrix
K_ε      linear coefficient for strain
K_T      linear coefficient for temperature
L        length
M        moment
N        number of data points
P        post-tensioning force
P_e      photo-elastic constant
R        electrical resistance
T        temperature
ΔC_dev   allowed deviation
Λ        spacing of the grating

Abbreviations

AI       artificial intelligence
ANN      artificial neural network
GF       gauge factor
MSE      mean squared error
NMSE     normalized mean squared error
R²       coefficient of determination
RH       relative humidity
SNR      signal-to-noise ratio

Standards

EC2      Eurocode 2
EC3      Eurocode 3
HB N400  Håndbok N400
MC 2010  Model Code 2010
R668     Report no. 668


1 Introduction

The long-term effects, creep and shrinkage, in concrete are complex effects that depend on many variables, such as the concrete strength, the dimensions of the concrete element exposed to drying and the relative humidity. Shrinkage is independent of the loading, while creep is affected by the load applied to the concrete. When comparing calculations of creep and shrinkage based on material models with creep and shrinkage that have been measured in laboratory tests or in built concrete structures, there is often a gap between the results. The models that exist today for calculating creep and shrinkage deformations are approximate models based on empirical data. Because of all the variables and the complexity of the problem, it is hard to make simple models that are suitable for design and accurate at predicting the effects of creep and shrinkage.

Temperature also affects the structural behaviour. In concrete, temperature causes thermal expansion, and Young's modulus for concrete is temperature dependent, which means that the structural stiffness will vary through the concrete if there is a temperature variation in the structure. As creep is affected by the structural stiffness, creep also depends on the temperature.

The goal of this thesis is to explore machine learning and the possibilities of using machine learning to create a model that can predict the effects of creep, shrinkage and temperature in concrete structures with higher accuracy than the existing material models that are used for calculations today. First, a literature review of basic principles in machine learning is presented. A numerical model of a generic bridge, simplified as a simply supported beam, was made using DIANA FEA and MATLAB. Strain, including the effects of creep, shrinkage and temperature, was obtained from the concrete beam model and used together with time and temperature for machine learning in MATLAB.

Several neural network models have been created for strain predictions at different points on the beam, and it has been investigated how different factors, such as signal length and noise, affect the performance of the models.

For modelling the concrete beam, the following standards, manuals and model codes have been used:

- EC2: NS-EN 1992-1-1:2004+NA:2008: Eurocode 2: Design of concrete structures [1]

- EC3: NS-EN 1993-1-1:2005+NA:2015: Eurocode 3: Design of steel structures [2]

- HB N400: Håndbok N400 Bruprosjektering [3]

- MC 2010: CEB-FIP Model Code 2010 – Volume 1 [4]

- R668: Report no. 668: Calculation guidelines for PT concrete bridges [5]

In the following, these standards, manuals and model codes will be referred to by their abbreviations only.


2 Machine learning

Since the mid-1900s, attempts have been made to reproduce humanlike intelligence in computers, known as artificial intelligence (AI). Artificial intelligence is the ability of a computer, or any device for that matter, to learn, acquire knowledge, solve problems and adjust to a situation. There are several fields that let a computer achieve artificial intelligence. One of those fields is called machine learning, and it makes it possible for computers to make decisions that give the impression of being subjective; as if the computer is thinking.

Machine learning is a programming technique where an algorithm is fed some input data, and in some cases also output data, and produces new output data. An algorithm is a set of instructions listing the procedure for solving a task. The algorithm generates a model based on the known data it is fed. When new input data is run through the model, the model can predict a response, the new output. Tom M. Mitchell defined machine learning as follows: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E." [6]. The goal in machine learning is for an algorithm to learn how to do a task given some data. This task might be to differentiate images of a dog from images of a wolf. The quote from Mitchell says that if the algorithm improves at deciding whether an image depicts a dog or a wolf after being fed more data, the algorithm can be said to be learning.

Machine learning is often divided into three types of learning algorithms: unsupervised learning, supervised learning and reinforcement learning. Each of these learning algorithms can be categorized into different types of tasks or problems, such as classification problems and regression problems, which are both supervised learning tasks. Some typical learning tasks for supervised and unsupervised learning are shown in figure 2.1.

Figure 2.1: Machine learning algorithms.


2.1 Learning algorithms

Calculating huge systems of differential equations can be a difficult and time-consuming task for humans, while it is relatively simple for computers. If a task is well defined with mathematical formulas and equations, there is almost no limit to how complex the problem can be, as long as the computer has enough computational power. However, when a task cannot be well defined with formulas, it is hard to write a computer program to solve it. An example of such a task is recognition of objects or speech, which humans do automatically from a young age. This process happens unconsciously, without any clear rules for how to do it, and so it is hard to describe the process with formulas. On the other hand, if huge amounts of data are available to represent the task, the data can be fed to an algorithm, which can then extract knowledge from the data and transform it into a model that performs the task.

Machine learning is useful when a task has a certain amount of complexity that makes it difficult to formulate a computer program. In addition, there should be a sufficient amount of data available. An algorithm needs data to train a model, validate the model and test the model. The test data cannot be the same as the training data. The validation data is used either after training or during training, for cross-validation. After the validation data has been used, it becomes part of the training data and can thus not be used to confirm the performance of an algorithm. Based on the training data, an algorithm sets the parameters of a model. The test data is necessary to check that the model's parameters perform well on data other than the training data.

2.1.1 Unsupervised learning

A problem that only contains input data, without any corresponding output (target) data, is called unsupervised learning. In [7] this type of problem is expressed as $D = \{x_i\}_{i=1}^{N}$, where D is a training set, N is the number of training examples and x is the input data, often called the feature. The feature can be the eye colour and gender of a person, an image, a time series or a sentence. When using an unsupervised learning algorithm, the input data is not labelled, but the algorithm learns to differentiate between the input data by recognizing patterns. Some methods for doing this are clustering, density estimation and visualization.

An example of a clustering task is to identify whether an image depicts a dog or a wolf, given a data sample of 1000 images. A training set of images would be fed to the algorithm, which would not know what the images depict but would try to cluster images of similar types together in groups. A cluster could, in this case, be images of a dog.

If an image is of a wolf, it would count as a no-dog image and therefore not be part of the dog cluster. If there were more than two cases, this process would be run through for all the cases. When the algorithm has run through all the training data and made some clusters of images with similar features, a set of test data is fed to the algorithm. The performance of the algorithm can be measured as the accuracy or as the error rate [8]. The accuracy is determined by the number of images that the algorithm manages to identify correctly, i.e. the algorithm identifies an image as a dog when the image actually depicts a dog. The number of images that are wrongly identified by the algorithm is called the error rate or test error.

2.1.2 Supervised learning

A problem that contains both input data and corresponding output data is called supervised learning. In [7] this type of problem is expressed as $D = \{x_i, y_i\}_{i=1}^{N}$, where D, N and x are the same as for unsupervised learning, while y is the output data, the target value, which in this case is known as well. When using a supervised learning algorithm, the data is labelled, and the algorithm generates a model that best represents the relation between the inputs and the outputs. Some methods for doing this are classification and regression.

A classification algorithm could solve a task similar to the one described in chapter 2.1.1: differentiating images of dogs and wolves. In this case, all the images would already be labelled as either dog or wolf before being fed to the algorithm. The inputs are the images and the outputs are the labels. There is no need for the algorithm to group the images in this case, because the output, which decides the group, is already given. Instead, the algorithm needs to learn the parameters that are common to all the images in the group of dogs and in the group of wolves, so that when a new image is run through the algorithm, the model can recognize these parameters and give the new image the right label. Classification algorithms work for problems that have a finite number of categories.

If a problem has output data consisting of one or more continuous variables, it is a regression problem. Regression problems are further discussed in chapter 2.2.

2.1.3 Reinforcement learning

A reinforcement learning algorithm learns by trial and error. The goal of reinforcement learning is to find the optimal actions for a given situation so that the performance of the algorithm is as good as possible. For this type of algorithm to give good results, it is necessary to have a balanced amount of exploration and exploitation [9]. Exploration means that the algorithm tries out new actions to see how they affect the performance, while exploitation means that the algorithm uses actions that have already proven to give good performance in a given situation.


2.1.4 Data representation and variation in data sets

An algorithm's ability to perform well on new data depends on the representation and variation of data in the training set and test set. A well-known example of this is a classification task with images of dogs and wolves, where the machine learning algorithm labels an image as a wolf or a husky. Example inputs and outputs for this algorithm are shown in figure 2.2. When looking at the dogs and wolves in the images, there is no obvious reason why one of the wolves gets labelled as a husky. On the other hand, if one looks at the background, it is clear that all the wolves labelled as wolves are in snowy landscapes, as is the husky labelled as a wolf, while all the huskies labelled as huskies are not in snowy landscapes. This is probably the result of a training set where most of the images of wolves were in snowy landscapes and the images of huskies were not, so the algorithm learned that snow was a feature of a wolf. However, this feature has nothing to do with whether the image contains a wolf or a dog; it is a result of a machine learning algorithm learning to do a task on its own, based on the information it is given, without anyone telling the machine what features to look for. To avoid such problems, it is important to have training, validation and test sets with a wide variety of data. As many different cases as possible should be present in the data sets.

Figure 2.2: The predicted response from a machine learning algorithm compared to the true response [10].

A machine learning algorithm will be able to construct objective features to differentiate between cases, but the features are based on the data fed to the algorithm, and this data might not be objective. An example of this is when Amazon made an algorithm to pick out good job applications [11]. The algorithm turned out to be biased against women. In the past, technical jobs had mainly been held by men, so when the algorithm learned good traits from previous job applications, being a man was favoured. When the task is labelling images as wolf or dog, it is easy to tell whether the algorithm has been successful at the task. But not all tasks are easy to check. As a task and its data become more complex, it becomes harder to check whether the results from the algorithm are actually good or not, as in the task of finding good job applications.


Another problem is deciding how the algorithm should weight mistakes during the training phase. Should a big mistake be weighted as worse than several smaller mistakes, or should it be the other way around? This depends on the problem one is trying to solve, and it is important to be aware of the effect this has on the performance of an algorithm.

The way data is represented is also important for the performance of an algorithm. In figure 2.3, some data is represented in both Cartesian and polar coordinates. If an algorithm divides data with a line, it will be essential for the algorithm that the data is presented in polar coordinates.

Figure 2.3: Data represented in cartesian coordinates and polar coordinates [8].

2.2 Regression

As mentioned in chapter 2.1.2, regression algorithms are a type of supervised learning where the output is a number. A model is assumed with a given set of parameters, and the machine learning algorithm is run on a training set to define those parameters. The simplest form of regression is linear regression, where one tries to fit a line to the underlying trend of the data. The linear function is given in equation (2.1), where x is the input, y is the output and $w_i$ are the parameters. The algorithm decides the parameters by running the training data through the function, in this case the linear function, and finding the parameter values that give the minimum error.

$y = w_1x + w_0$  (2.1)

If the trend of the data is not a linear function, then the performance of the model will not be good even though the parameters might be optimal. In this case, the capacity of the model can be increased by trying a quadratic function or any other higher-order polynomial function or nonlinear function. Model capacity will be further discussed in chapter 2.3.1.
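As a minimal illustration of equation (2.1) and of increasing model capacity, the MATLAB sketch below fits both a linear and a quadratic model to the same sample and compares the training errors. The data is synthetic and invented here purely for illustration:

```matlab
% Synthetic data sample: quadratic trend with noise (illustrative only)
x = linspace(0, 1, 50)';
y = 2*x.^2 - x + 0.1*randn(50, 1);

% Fit a linear model y = w1*x + w0 and a quadratic model by least squares
p1 = polyfit(x, y, 1);          % parameters [w1, w0]
p2 = polyfit(x, y, 2);          % parameters [w2, w1, w0]

% Training errors: the quadratic fit should match the underlying trend better
mse1 = mean((y - polyval(p1, x)).^2);
mse2 = mean((y - polyval(p2, x)).^2);
fprintf('Linear MSE: %.4f, Quadratic MSE: %.4f\n', mse1, mse2);
```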


A way of measuring the performance of a regression algorithm is to calculate the mean squared error MSE;

$MSE = \frac{1}{N}\sum_{i=1}^{N}(y_i - \hat{y}_i)^2$  (2.2)

where the desired output known from the data sample is y and the output from the algorithm’s prediction is ŷ. The number of data points is denoted N.

For an algorithm to have good performance, the mean squared error on the test data should be minimized. A way to achieve this is to find parameter values $w_i$ so that the mean squared error on the training data is as close to zero as possible. If cross-validation is used, the mean squared error will be calculated on new data sets while the parameters are optimized on the training data. Alternative ways of measuring performance are to take the absolute difference between the actual output and the predicted output, or to calculate the normalized mean squared error NMSE as in [12];

$NMSE = \frac{\sum_{i=1}^{N}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{N}(y_i - \bar{y})^2}$  (2.3)

$\bar{y} = \frac{1}{N}\sum_{i=1}^{N} y_i$  (2.4)

or to calculate the coefficient of determination R2 by subtracting the NMSE from 1;

𝑅2 = 1 − 𝑁𝑀𝑆𝐸 (2.5)

where ȳ is the mean value of the known output from the data sample, and y, ŷ and N are as for the MSE. The coefficient of determination R² is related to the correlation coefficient R. An R² value equal to 1 means the predicted outputs ŷ are a perfect fit to the known outputs y. If the R² value goes towards negative infinity, the fit is poor, while a value equal to 0 means the predicted outputs ŷ match the actual outputs y no better than a constant line at the mean would have done.
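The three performance measures above translate directly into MATLAB; a short sketch, assuming column vectors y (known outputs) and yhat (predicted outputs) are already available:

```matlab
% Performance measures for a regression model, equations (2.2)-(2.5)
N    = numel(y);
MSE  = sum((y - yhat).^2) / N;                      % mean squared error
NMSE = sum((y - yhat).^2) / sum((y - mean(y)).^2);  % normalized MSE
R2   = 1 - NMSE;                                    % coefficient of determination
```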

2.3 Generalization and optimization

The goal of machine learning is to train a model with known data so that the model can make the right predictions for future data. How well the model predicts data other than the training set is called the generalization of the model. The generalization of a model is connected to the error on the test data. For a regression problem, the generalization of the model is better the smaller the mean squared error on the test data is. If the error on the training data is small, the model is said to have good optimization [8].

A model can be fitted to match the training set perfectly. Such a model would be optimal for the training data but might not be able to reproduce the general trend of the data. When new data is fed to the algorithm, the model cannot make good predictions, and the generalization error is large. All sets of data are different. The composition of the data will differ depending on how it is divided into the training set, validation set and test set. The method for obtaining the data can also vary slightly, due to changes in the measurement equipment or because the data is collected from human observations, which reflect slightly individual opinions. It is therefore important that the model reproduces the general trend of the data set, and not the exact points of the data.

2.3.1 Underfitting and overfitting

When the complexity of a model matches the complexity of the function underlying the data sample, the generalization of the model is usually good. If the model is less complex than the trend of the data, the model is underfitted. For an underfitted model, the capacity of the model is lower than the complexity of the actual data sample, and so the training error is large. The model cannot achieve a good optimization on the actual data. An example of underfitting in a regression problem is fitting a line to data that follows a higher-order polynomial function. In the case of underfitting, the training error decreases if the complexity of the model is increased.

If it is the other way around and the complexity of the model is higher than the actual trend of the data, the model is overfitted. In this case, the optimization error of the model can be very small while the generalization error of the model is large. Overfitting can be a problem when there is noise in the data set and the model learns to make predictions that also contain the noise. To avoid overfitting, it is important to have enough training data. If the available data set is small, methods like early stopping of the training or regularization can be used.

Figure 2.4 shows an example of a data sample where the underlying trend is a quadratic function. In the underfitted example, the model is a linear function, and so the capacity of the model is lower than the complexity of the data sample. In the overfitted example, a ninth-degree polynomial is fitted to the data sample. Here the model fits a line to trends that do not exist in the actual data. The example with appropriate capacity has a quadratic function fitted to the data sample which matches the complexity of the data set exactly.


Figure 2.4: Three different models fitted to the same data set [8].

2.4 Neural networks

2.4.1 Introduction to deep learning

Deep learning is a field of machine learning and has existed since the 1940s, but it has been known under different names. In the beginning, deep learning was known as cybernetics, and for a period in the 1980s-1990s it was known as connectionism or neural networks. For the last decade, it has become known as deep learning [8].

What makes deep learning different from other machine learning techniques is that in machine learning, the features that are chosen from the data sample as inputs to the algorithm are decided upon before the data is fed to the algorithm. In deep learning, all the data is fed through the network, and the algorithm extracts the features from the data itself. This is illustrated in figure 2.5.

Figure 2.5: The process of a classification problem for a machine learning algorithm and for deep learning [10].

Neural networks are essential in deep learning, and the concept is based on the networks of neurons in the brain. In the beginning, to achieve artificial intelligence, the biological functions of the brain were studied and reproduced as machine learning models. This gave the structure and the name of artificial neural networks (ANN), inspired by the networks of neurons in the brain. As artificial neural networks developed, researchers stopped trying to replicate the biological functions of neurons but continued using the structure. An ANN is basically an algorithm consisting of several connected layers of functions. This structure makes it possible to model quite complex concepts with simpler algorithms, which is very useful in, for example, object or speech recognition and computer vision [8] [13], where the task is complex and a bit abstract.

A neuron can be expressed as;

$y = \sum_{j=1}^{d} w_jx_j + b$  (2.6)

where y is the output, $x_j$ are the inputs, $w_j$ are the weights of the connections between the inputs and the output, and b is the bias. The bias can alternatively be written as $w_0x_0$, where $x_0$ always equals 1. In a neural network, the weights decide the steepness of the transfer function, while the biases allow the transfer function to be shifted left or right. Transfer functions are explained further in chapter 2.4.2.

When d = 1 in equation (2.6), the equation represents the linear function, equation (2.1), where there is only one input for every output. When d is greater than 1, the equation describes a hyperplane, where there are multiple inputs for every output. An input can also be defined as an exponentiation, making the neuron a higher-order polynomial function [13]. Figure 2.6 shows the structure of a neuron. Neurons can also be connected in parallel, which results in multiple outputs for the given inputs, as shown in figure 2.7. A neuron or a single layer of parallel neurons can be called a perceptron, while multiple layers can be called a multi-layer perceptron or a neural network (figure 2.8). All layers between the input and the output layer in a neural network are called hidden layers.

Figure 2.6: Structure of a neuron.


Figure 2.7: Parallel neurons.

A common way of training neural networks is online learning, which means that the data set is divided into segments, and segment after segment is fed to the network, allowing the algorithm to update the parameters after each segment [13]. This method requires less memory for storing data, and it also allows an algorithm to be updated as new data samples are obtained. The error function for online learning applies to one data pair rather than the whole sample. For a regression problem, the error function is similar to equation (2.2). The difference is that in (2.2) the error is summed over all the data points, while for online learning the error function (2.7) is calculated for every data pair, and for every calculation there is an update $\Delta w_j$;

$E(w|x, y) = \frac{1}{2}(y - \hat{y})^2 = \frac{1}{2}(y - \mathbf{w}^T\mathbf{x})^2$  (2.7)

$\Delta w_j = \eta(y - \hat{y})x_j$  (2.8)

In equation (2.8), η is a learning factor, y is the target output, ŷ is the predicted output and x is the input. The learning factor is reduced with time. The magnitude of the update depends on the magnitude of the learning factor, the input and the difference between the predicted output and the target output. No update is made if the predicted value is the same as the target value.
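A minimal sketch of the online update rule in equations (2.7) and (2.8) for a single linear neuron, with an invented data stream and a simple decaying learning factor (the decay schedule is an assumption for illustration, not taken from the thesis):

```matlab
% Online learning for one linear neuron yhat = w'*x, bias folded in as x0 = 1
xdata = rand(100, 1);                        % synthetic inputs (illustrative)
ydata = 3*xdata + 1 + 0.05*randn(100, 1);    % synthetic targets
w = zeros(2, 1);                             % initial weights [w0; w1]
for k = 1:numel(xdata)
    x    = [1; xdata(k)];                    % input with bias term
    yhat = w' * x;                           % predicted output
    eta  = 0.5 / (1 + 0.1*k);                % learning factor reduced with time
    w    = w + eta*(ydata(k) - yhat)*x;      % update, equation (2.8)
end
```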

A typical structure for neural networks is the chain structure expressed in (2.9), where the function $f^{(1)}$ represents the first layer, $f^{(2)}$ the second layer and $f^{(3)}$ the third layer [8]. The functions are the transfer functions discussed in chapter 2.4.2;

$f(n) = f^{(3)}(f^{(2)}(f^{(1)}(n)))$  (2.9)

As the chain of functions grows, the depth of the structure is also said to increase. The name deep learning refers to networks of a certain depth, though there are many ways of structuring a network, so the number of layers alone does not define deep learning.


Object recognition in an image is an example of a task for a neural network with multiple layers. The pixels in the image are the input, the first layer, to the network. The next layer is the first hidden layer. This layer can look for edges in the image. The edges can then be fed to the second hidden layer, which looks for lines, curves and corners. As the data proceeds through the layers, the algorithm can look for more complex combinations of edges in the image, such as eyes or a mouth. The last layer is the output, which is the object recognized in the image.

Figure 2.8: Neural network structure with two hidden layers.

2.4.2 Transfer functions

Neurons use a transfer function f, also called an activation function, to transform the sum of inputs, weights and bias given to the neuron into an output that is sent further into the network;

$y = f\left(\sum_{j=1}^{d} w_jx_j + b\right) = f(n)$  (2.10)

A network can use multiple transfer functions; examples are the linear function, the log-sigmoid function and the hyperbolic tangent sigmoid function, shown in figure 2.9. These transfer functions f of a neuron n can be expressed as;

$f(n) = n$  (2.11)

$f(n) = \frac{1}{1 + e^{-n}}$  (2.12)

$f(n) = \tanh(n) = \frac{e^n - e^{-n}}{e^n + e^{-n}}$  (2.13)


where equation (2.11) is the linear transfer function, equation (2.12) is the log-sigmoid transfer function and equation (2.13) is the hyperbolic tangent transfer function. In [14], the hyperbolic tangent sigmoid function is defined a bit differently, as shown in equation (2.14). This transfer function is mathematically equivalent to equation (2.13) but runs faster in software, which can be important for large networks;

$f(n) = \mathrm{tansig}(n) = \frac{2}{1 + e^{-2n}} - 1$  (2.14)

Figure 2.9: Transfer functions [15].

Transfer functions such as the log-sigmoid and the hyperbolic tangent sigmoid function make it possible for neural networks to solve nonlinear problems. However, since these functions have ranges from 0 to 1 and from -1 to 1 respectively, it is often useful to have a linear transfer function between the second-to-last layer and the output layer, since the linear function's range is from negative infinity to positive infinity.
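The transfer functions above can be written directly from equations (2.11)-(2.14) as anonymous MATLAB functions (mirroring the toolbox functions of the same names); the last line checks numerically that the tansig form (2.14) is equivalent to tanh in (2.13):

```matlab
% Transfer functions, equations (2.11)-(2.14)
purelin = @(n) n;                          % linear, range (-inf, inf)
logsig  = @(n) 1 ./ (1 + exp(-n));         % log-sigmoid, range (0, 1)
tansig  = @(n) 2 ./ (1 + exp(-2*n)) - 1;   % fast form of the hyperbolic tangent

n = linspace(-3, 3, 101);
max(abs(tansig(n) - tanh(n)))              % ~1e-16, the two forms agree
```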

2.4.3 Levenberg-Marquardt algorithm

The least squares method is used in regression problems to find the best fit for a line through a set of data points by minimizing the sum of the squared errors. With the Levenberg-Marquardt algorithm, it is possible to solve nonlinear least squares problems [16]. The algorithm combines the gradient descent method with Newton's method, which are both methods for finding a minimum of a function. Newton's method optimizes faster than gradient descent, but the downside is that it requires computing the Hessian matrix, which is both complex and computationally expensive [15].

With the Levenberg-Marquardt algorithm an approximate Hessian matrix H can be calculated as;

$\mathbf{H} = \mathbf{J}^T\mathbf{J}$  (2.15)

where J is the Jacobian matrix that contains derivatives of the network errors.


The update of the Levenberg-Marquardt algorithm can then be written as;

$x_{k+1} = x_k - [\mathbf{J}^T\mathbf{J} + \mu\mathbf{I}]^{-1}g$  (2.16)

where $g = \mathbf{J}^Tv_e$ is the gradient, with $v_e$ as a vector of the network errors, $x_k$ is a vector of the current weights and biases, I is the identity matrix and µ is a damping parameter that determines whether the Levenberg-Marquardt algorithm leans towards the gradient descent method or Newton's method. When µ is zero, equation (2.16) is Newton's method with an approximate Hessian matrix. When µ is large, the gradient descent method is dominant. At the beginning of training, µ is large. If the performance function, usually the MSE in the case of the Levenberg-Marquardt algorithm, decreases, the damping parameter is also reduced. If the performance function increases, so does the damping parameter. It is optimal if µ decreases rapidly so that the Levenberg-Marquardt algorithm shifts over to Newton's method, which converges faster than the gradient descent method. However, the cost of each iteration is higher with Newton's method than with the gradient descent method, which is why it is beneficial to use the gradient descent method first to get close to an error minimum.
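A sketch of a single Levenberg-Marquardt update, equations (2.15) and (2.16), assuming a Jacobian J of the network errors and an error vector ve have already been computed for the current parameter vector xk (both are problem specific and not defined here):

```matlab
% One Levenberg-Marquardt iteration, equations (2.15)-(2.16)
H  = J' * J;                            % approximate Hessian, equation (2.15)
g  = J' * ve;                           % gradient
xk = xk - (H + mu*eye(size(H))) \ g;    % parameter update, equation (2.16)

% mu is then adapted from the performance function (usually the MSE):
% reduce mu if the MSE decreased (towards Newton's method), and
% increase mu if the MSE increased (towards gradient descent).
```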


3 Strain

3.1 Strain due to mechanical loading

Strain can be expressed by Hooke's law as;

$\varepsilon = \frac{\sigma}{E}$  (3.1)

where E is Young's modulus and σ is the stress, which can be expressed as in equation (3.2), where the first term, $\sigma_N$, is due to the axial force and the second term, $\sigma_M$, is due to the bending moment;

$\sigma = \sigma_N + \sigma_M = \frac{F}{A} + \frac{M}{I}y$  (3.2)

where F is the axial force, A is the cross-sectional area, M is the moment, I is the second moment of area and y is the distance from the neutral axis to the point of interest.
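Equations (3.1) and (3.2) translate directly into MATLAB; the numbers below are placeholders for illustration, not values from the beam model in chapter 6:

```matlab
% Strain at a point due to axial force and bending, equations (3.1)-(3.2)
E = 35e9;       % Young's modulus [Pa] (placeholder)
A = 0.5;        % cross-sectional area [m^2]
I = 0.05;       % second moment of area [m^4]
F = -2e6;       % axial force [N]
M = 1.5e6;      % bending moment [Nm]
y = 0.4;        % distance from the neutral axis [m]

sigma   = F/A + M/I*y;   % stress, equation (3.2)
epsilon = sigma/E;       % strain, equation (3.1)
```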

3.2 Strain due to temperature loading

Strain in concrete due to temperature consists of two effects. One is the thermal expansion due to a temperature difference in a structure compared to the initial temperature. The temperature difference causes the structure to expand or contract, which results in strain. This effect will be called the alpha-effect in this document. The other effect will be called the beta-effect and occurs when there is a temperature difference from one area of a concrete structure to another. Since Young's modulus for concrete is dependent on the temperature, this gives a significant variation in the concrete stiffness that affects the strain due to the mechanical loading.

3.2.1 The alpha-effect

When the temperature in a structure changes from the initial temperature, the material will either expand if the temperature increases or contract if the temperature decreases.

This change in length or volume of the structure causes strains, ε. When there is a change in length the strain can be expressed as the axial strain;

$\varepsilon = \Delta L / L$  (3.3)


where L is the initial length and ΔL is the change in length, which can be expressed in terms of the thermal expansion coefficient $\alpha_T$, the initial length and the temperature difference ΔT in the structure. The strain due to thermal expansion can then be expressed as;

$\varepsilon_\alpha = \alpha_T \Delta T$  (3.4)

3.2.2 The beta-effect

If there is a temperature difference in a structure, either because the temperature in one area differs from the temperature in another area, or because the surrounding temperature is changing, as between day and night, heat will transfer through the structure and result in temperature differences within it.

As mentioned, Young's modulus for concrete is dependent on the temperature, which can be expressed as [4];

$E(T) = E_0(1 + \beta\Delta T)$  (3.5)

where $E_0$ is the initial Young's modulus at the initial temperature, β is the thermal hardening coefficient and the temperature difference is $\Delta T = T_c - T_0$, with $T_0$ as the initial temperature and $T_c$ as the current temperature.

This means that if the temperature in a concrete beam varies with the depth y of the cross-section, Young's modulus also varies with the depth of the beam, $E(T) = E(T(y)) = E(y)$, since it is dependent on the temperature. In turn, this makes the strain due to mechanical loading, equation (3.1), dependent on the temperature and the depth, given as;

$\varepsilon_\beta = \frac{\sigma}{E(T(y))}$  (3.6)
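A sketch of the two temperature effects for a cross-section with an assumed linear temperature profile over the depth; all coefficients are illustrative placeholders, not the values used in the thesis:

```matlab
% Alpha-effect: free thermal strain, equation (3.4)
alphaT    = 1e-5;                 % thermal expansion coefficient [1/K]
dT        = 12;                   % uniform temperature change [K]
eps_alpha = alphaT * dT;

% Beta-effect: temperature-dependent stiffness, equations (3.5)-(3.6)
E0    = 35e9;                     % Young's modulus at initial temperature [Pa]
beta  = -1e-3;                    % thermal hardening coefficient [1/K]
sigma = -4e6;                     % mechanical stress from equation (3.2) [Pa]
y     = linspace(-0.5, 0.5, 11);  % depth coordinates [m]
Ty    = 5 + 10*(y + 0.5);         % temperature difference over the depth [K]
Ey    = E0 * (1 + beta*Ty);       % stiffness varying with depth, equation (3.5)
eps_beta = sigma ./ Ey;           % strain varying with depth, equation (3.6)
```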

3.3 Strain sensors

A sensor, or transducer, converts a physical phenomenon into an electrical signal. There are different types of sensors for measuring strains. Here, strain gauges and optical fibre sensors will be presented.

3.3.1 Strain gauge

A strain gauge usually consists of a wire or a metal foil on a flexible backing material such as thin paper or an epoxy-type plastic film [17]. The metal foil is placed in a pattern as shown in figure 3.1. The strain gauge is bonded onto a structural element. When the structure is exposed to forces, moments, pressure etc., the element will either elongate or contract. This causes the gauge length to elongate or contract as well. This change in the gauge length can be used to calculate the strain in the structure with equation (3.3), where the change in gauge length is divided by the initial gauge length.

More common today is the electrical resistance strain gauge, where the strain gauge is connected to an electrical circuit that can measure the change in resistance of the gauge. When the gauge length changes, the electrical resistance of the strain gauge also changes, and it is then possible to find the strain from the expression for the gauge factor GF;

$GF = \frac{\Delta R / R}{\varepsilon}$  (3.7)

where R is the resistance of the strain gauge and ΔR is the change in resistance. The gauge factor and resistance of a strain gauge are given by the manufacturer and vary with the material used for the metal foil.
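Solving equation (3.7) for the strain from a measured resistance change; the gauge factor and resistance below are placeholder manufacturer values:

```matlab
% Strain from an electrical resistance strain gauge, equation (3.7)
GF = 2.1;                 % gauge factor (from manufacturer, placeholder)
R  = 120;                 % gauge resistance [ohm]
dR = 0.025;               % measured change in resistance [ohm]

epsilon = (dR/R) / GF;    % strain, about 1e-4 in this example
```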

The number of elements needed in a strain gauge, as shown in figure 3.1, depends on the stress condition of interest. If there are stresses in only one direction, or only one direction is of interest, a single-element foil gauge may be used. Two-element and three-element rosettes are used when there are stresses in two directions, with known or unknown directions respectively.

Figure 3.1: Metal foil strain gauges. (a) single element, (b) two-element rosette, (c) three- element rosette.

The performance of a strain gauge depends on the foil material. For good performance, the material should have a high gauge factor and high resistivity. If the number of loops in a strain gauge is increased while the gauge length is held constant, the resistance of the gauge increases [18]. The material should preferably also have low temperature sensitivity. This is important because changes in the temperature cause changes in the resistivity of the material. The structural expansion or contraction due to temperature also causes strains that the strain gauge cannot separate from those due to the mechanical loading.


3.3.2 Optical fibre sensors

Optical fibres are very thin, often made from fused silica when used for structural monitoring, and can transmit light over large distances. The two main parts of an optical fibre are the core, which transmits the light along the fibre, and the cladding, which surrounds the core. The purpose of the cladding is to keep the light waves in the core and not let them escape. To achieve this, the cladding has a slightly lower refractive index than the core [19].

Compared to electrical resistance strain gauges, which need to be connected to an electrical circuit, an optical fibre sensor uses light, which prevents electromagnetic interference from influencing the measurements. Similar to the strain gauge, strain measurements by optical fibre sensors are also affected by changes in the temperature, both because of thermal expansion and because temperature influences the refractive index of the optical fibre.

3.3.2.1 Fibre Bragg gratings

Fibre Bragg gratings are a variety of optical fibre sensors where the surface of the optical fibre has been exposed to UV light [20]. This causes changes in the refractive index, and by sending pulses of UV light through the core of the optical fibre, the refractive index becomes periodic and enables the fibre Bragg grating to reflect one specific wavelength of light, called the Bragg wavelength $\lambda_B$;

$\lambda_B = 2n_{eff}\Lambda$  (3.8)

where $n_{eff}$ is the effective refractive index and Λ is the spacing of the grating. Fibre Bragg gratings can measure both strain and temperature, and it is possible to calculate the change in temperature ΔT and the strain ε in the optical fibre if the change in the Bragg wavelength is known, as in [20];

$\Delta\lambda_B = \lambda_B[(\alpha_{opt} + \xi)\Delta T + (1 - P_e)\varepsilon] = K_T\Delta T + K_\varepsilon\varepsilon$  (3.9)

where $\alpha_{opt}$ is the thermal expansion coefficient, ξ is the thermo-optic coefficient and $P_e$ is the photo-elastic constant of the optical fibre. $K_T$ and $K_\varepsilon$ are linear coefficients for temperature and strain respectively.
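Because equation (3.9) is linear in ΔT and ε, the two unknowns can be separated with a second, strain-free reference grating that only senses temperature; this is a common measurement scheme rather than one taken from the thesis, and all coefficients below are invented placeholders:

```matlab
% Separating temperature and strain from two fibre Bragg gratings, eq. (3.9)
KT   = 10e-12;         % temperature coefficient K_T [m/K] (placeholder)
Keps = 1.2e-12;        % strain coefficient K_eps [m/microstrain] (placeholder)

dlam_ref  = 55e-12;    % wavelength shift of strain-free reference grating [m]
dlam_meas = 175e-12;   % wavelength shift of the bonded grating [m]

dT      = dlam_ref / KT;                 % temperature change [K]
epsilon = (dlam_meas - KT*dT) / Keps;    % strain [microstrain]
```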


4 Creep and shrinkage

Over time, concrete is exposed to the long-term effects creep and shrinkage. When concrete is stressed over a long period of time, it will continue to compress beyond the instantaneous deformation at the moment of loading, without any new load being added. This additional deformation is called creep. When concrete dries out, it shrinks. This effect is called shrinkage and is not affected by the load applied to the concrete. Both creep and shrinkage depend on the concrete strength, the dimensions of the concrete element exposed to drying and the relative humidity [21].

4.1 Material models

Different material models give different methods for calculating creep and shrinkage in concrete. The methods are based on empirical formulas developed from trends in data from experiments or from measurements on built concrete structures. Several studies have been done where creep and shrinkage were calculated with different material models and compared to the actual measured deformations in a given structure. The results from these studies have shown discrepancies between the measured deformations and the deformations calculated with the material models [22] [23] [24]. This reflects the fact that the methods for calculating creep and shrinkage are only approximations, and the results from the studies show how hard it is to make accurate predictions for creep and shrinkage with the current material models.

The models for creep and shrinkage used in Eurocode 2 [1] are meant for practical application in the design of concrete structures and are therefore usually on the conservative side, overestimating the effect of creep and shrinkage. Other material models are available as well, such as Model Code 2010 [4] and the Bazant-Baweja B3 model [25]. However, in this chapter, only the methods for calculating creep and shrinkage in accordance with EC2 will be presented, because it is this material model that will later be used in the beam model discussed in chapters 6, 7 and 9.

4.1.1 Creep

The creep deformation is defined in EC2 3.1.4 as;

$\varepsilon_{cc}(t, t_0) = \varphi(t, t_0)\frac{\sigma_c}{E_c}$  (4.1)

where $\sigma_c$ is the stress in the concrete and depends on the loading of the structure, $\varphi(t, t_0)$ is the creep coefficient, t is the concrete age at the considered moment and $t_0$ is the concrete age at the time of loading, in days. $E_c$ is the tangent modulus of the concrete and can be set to $1.05E_{cm}$. In EC2 7.4.3(5), an effective modulus of elasticity that can be used for calculating the long-term creep deformations is defined as;

$E_{cL} = \frac{E_{cm}}{1 + \varphi(t, t_0)}$  (4.2)

where $E_{cm}(t)$ is the Young's modulus varying with time, $f_{cm}$ is the mean compressive strength of the concrete and $\beta_{cc}(t)$ is a coefficient depending on the concrete age t and the coefficient s, which represents the cement type;

$E_{cm}(t) = \left(\frac{f_{cm}(t)}{f_{cm}}\right)^{0.3} E_{cm}$

$f_{cm}(t) = \beta_{cc}(t) \cdot f_{cm}$

$\beta_{cc}(t) = \exp\left\{s\left[1 - \left(\frac{28}{t}\right)^{1/2}\right]\right\}$

The creep coefficient is defined in EC2 Appendix B as;

$\varphi(t, t_0) = \varphi_0 \cdot \beta_c(t, t_0)$  (4.3)

where $\varphi_0$ is the nominal creep coefficient, $\varphi_{RH}$ is a factor allowing for the effect of the relative humidity RH, $\beta(f_{cm})$ is a factor allowing for the effect of the concrete strength and $\beta(t_0)$ is a factor allowing for the effect of the concrete age at loading;

$\varphi_0 = \varphi_{RH} \cdot \beta(f_{cm}) \cdot \beta(t_0)$

$\varphi_{RH} = \left[1 + \frac{1 - RH/100}{0.1 \cdot \sqrt[3]{h_0}} \cdot \alpha_1\right] \cdot \alpha_2 \quad \text{for } f_{cm} > 35\ \text{MPa}$

$\beta(f_{cm}) = \frac{16.8}{\sqrt{f_{cm}}}$

$\beta(t_0) = \frac{1}{0.1 + t_0^{0.20}}$


The notional size of a member $h_0$ is given by equation (4.4), with $A_c$ as the concrete cross-sectional area and u as the perimeter of the part exposed to drying;

$h_0 = \frac{2A_c}{u}$  (4.4)

$\beta_c(t, t_0)$ is a factor that describes the creep development with time after loading, where $\beta_H$ is a factor dependent on RH and $h_0$;

$\beta_c(t, t_0) = \left[\frac{t - t_0}{\beta_H + t - t_0}\right]^{0.3}$

$\beta_H = 1.5[1 + (0.012RH)^{18}]h_0 + 250\alpha_3 \le 1500\alpha_3 \quad \text{for } f_{cm} \ge 35\ \text{MPa}$

$\alpha_1$, $\alpha_2$ and $\alpha_3$ are factors that account for the influence of the concrete strength;

$\alpha_1 = \left[\frac{35}{f_{cm}}\right]^{0.7} \quad \alpha_2 = \left[\frac{35}{f_{cm}}\right]^{0.2} \quad \alpha_3 = \left[\frac{35}{f_{cm}}\right]^{0.5}$

It is also possible to adjust for the effect of cement type by modifying the age at loading $t_0$, and the concrete age may be adjusted to account for the effect of increased or reduced temperatures on the concrete maturity, in accordance with EC2 B.1(2) and (3).
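The creep coefficient formulas above transcribe directly into a MATLAB function; this is a sketch for the $f_{cm} > 35$ MPa case only, without the cement-type and temperature adjustments of EC2 B.1(2) and (3), and the function name and interface are assumptions:

```matlab
% Creep coefficient phi(t, t0) per EC2 Appendix B, equation (4.3),
% for fcm > 35 MPa. t, t0 in days, fcm in MPa, RH in %, h0 in mm.
function phi = creep_coefficient(t, t0, fcm, RH, h0)
    a1 = (35/fcm)^0.7;  a2 = (35/fcm)^0.2;  a3 = (35/fcm)^0.5;

    phiRH = (1 + (1 - RH/100) / (0.1*h0^(1/3)) * a1) * a2;  % humidity factor
    bfcm  = 16.8 / sqrt(fcm);                % concrete strength factor
    bt0   = 1 / (0.1 + t0^0.20);             % age at loading factor
    phi0  = phiRH * bfcm * bt0;              % nominal creep coefficient

    bH = min(1.5*(1 + (0.012*RH)^18)*h0 + 250*a3, 1500*a3);
    bc = ((t - t0) / (bH + t - t0))^0.3;     % development with time

    phi = phi0 * bc;
end
```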

4.1.1.1 Nonlinear creep coefficient

If the compressive stress in the concrete at the age of loading $t_0$ is greater than $0.45f_{ck}(t_0)$, EC2 3.1.4(4) states that nonlinear creep should be considered by calculating the nonlinear creep coefficient as;

$\varphi_{nl}(t, t_0) = \varphi(t, t_0) \cdot \exp\left(1.5\left(\frac{\sigma_c}{f_{ck}(t_0)} - 0.45\right)\right)$  (4.5)

where $f_{ck}(t_0)$ is the characteristic compressive strength of the concrete at the age of loading.

4.1.2 Shrinkage

The total strain due to shrinkage $\varepsilon_{cs}$ is composed of two contributions, the drying shrinkage strain $\varepsilon_{cd}$ and the autogenous shrinkage strain $\varepsilon_{ca}$. The drying shrinkage strain occurs as the concrete hardens and water evaporates, causing the concrete to change volume. This development is slow. When the relative humidity of the ambient environment is high, the drying shrinkage is relatively low. The autogenous shrinkage strain is caused by a chemical reaction between the water and the cement which, as a result, makes the concrete shrink. This reaction follows the development of the concrete strength, with a steep development right after the casting of the concrete.

The total shrinkage strain is expressed in EC2 3.1.4 as;

$\varepsilon_{cs} = \varepsilon_{cd} + \varepsilon_{ca}$  (4.6)

$\varepsilon_{cd}(t) = \beta_{ds}(t, t_s) \cdot k_h \cdot \varepsilon_{cd,0}$  (4.7)

$\varepsilon_{ca}(t) = \beta_{as}(t) \cdot \varepsilon_{ca}(\infty)$  (4.8)

where $\beta_{ds}(t, t_s)$ depends on the concrete age at the moment considered t, the concrete age at the beginning of drying shrinkage $t_s$ and the notional size of the cross-section $h_0$, $k_h$ is a coefficient also dependent on $h_0$, and $\varepsilon_{cd,0}$ is the nominal drying shrinkage strain, defined in EC2 B.2;

$\beta_{ds}(t, t_s) = \frac{t - t_s}{(t - t_s) + 0.04\sqrt{h_0^3}}$

$\varepsilon_{cd,0} = 0.85\left[(220 + 110\alpha_{ds1}) \cdot \exp\left(-\alpha_{ds2}\frac{f_{cm}}{f_{cmo}}\right)\right] \cdot 10^{-6} \cdot \beta_{RH}$

where $f_{cmo} = 10$ MPa, $\alpha_{ds1}$ and $\alpha_{ds2}$ are coefficients depending on the cement type, and $\beta_{RH}$ is;

$\beta_{RH} = 1.55\left[1 - \left(\frac{RH}{100}\right)^3\right]$

$\beta_{as}(t)$ is a time-dependent coefficient for the autogenous shrinkage strain and $\varepsilon_{ca}(\infty)$ is the final autogenous shrinkage strain;

$\beta_{as}(t) = 1 - \exp(-0.2t^{0.5})$

$\varepsilon_{ca}(\infty) = 2.5(f_{ck} - 10) \cdot 10^{-6}$
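The shrinkage expressions above can be sketched the same way; $k_h$, $\alpha_{ds1}$ and $\alpha_{ds2}$ are passed in since they depend on $h_0$ and the cement type, and the function name and interface are assumptions:

```matlab
% Total shrinkage strain per EC2 3.1.4 and B.2, equations (4.6)-(4.8).
% t, ts in days, fck and fcm in MPa, RH in %, h0 in mm.
function ecs = shrinkage_strain(t, ts, fck, fcm, RH, h0, kh, ads1, ads2)
    % Drying shrinkage, equation (4.7)
    bds  = (t - ts) / ((t - ts) + 0.04*sqrt(h0^3));
    bRH  = 1.55 * (1 - (RH/100)^3);
    ecd0 = 0.85 * (220 + 110*ads1) * exp(-ads2*fcm/10) * 1e-6 * bRH;
    ecd  = bds * kh * ecd0;

    % Autogenous shrinkage, equation (4.8)
    bas    = 1 - exp(-0.2*t^0.5);
    ecainf = 2.5 * (fck - 10) * 1e-6;
    eca    = bas * ecainf;

    ecs = ecd + eca;    % total shrinkage strain, equation (4.6)
end
```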

4.1.3 Cracked concrete

The tensile strength of concrete is relatively low, which is why concrete structures usually are reinforced with steel that can take the tensile stresses. For a structure exposed to
