
To improve modelling performance, we propose a list of tasks that we believe are worth investigating further to improve the regression of resistivity.

Gathering more data: Gathering more data might be the most influential step, considering the state of our application. With enough data, data generation with overlap should no longer be needed to the same degree, or perhaps not at all. This should allow us to observe more realistic performance, since the proportion of synthetic data will decrease. The gaps in resistivity shown earlier in the data set should also shrink, reducing the clustering of the data. Gathering enough data should also open up other powerful model validation techniques, such as K-fold cross-validation, which we did not attempt because of the difficulties revolving around overlapped data. Note, though, that we do not currently know how much more data would need to be collected to achieve this. Very likely it is more a matter of collecting enough diverse images of the same rock types, increasing the chance of seeing enough of the features that a rock type may present.
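As a minimal sketch of how K-fold cross-validation could be set up once overlap-free data is available (the function name and interface below are illustrative, not taken from the thesis code):

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k disjoint folds.

    Each fold serves once as the validation set while the remaining
    folds form the training set, so every sample is validated
    exactly once across the k rounds.
    """
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    folds = np.array_split(indices, k)
    splits = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        splits.append((train_idx, val_idx))
    return splits
```

Note that with overlapping windows this naive random split would leak information between folds; a grouped split, for instance by well or by depth interval, would be needed in that case.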

Pre-processing: A big part of the project was using pre-processing to remove disturbances from the data and to turn the raw data into images. This resulted in loss of data and inconsistencies in the continuity of the resistivity labels. There are probably other pre-processing methods that can enhance our data so that the learning capability of the CNN increases.
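One simple, widely used candidate not tied to our pipeline is per-image min-max normalization, which keeps input ranges consistent across images; a minimal sketch, assuming images arrive as NumPy arrays:

```python
import numpy as np

def normalize_image(img, eps=1e-8):
    """Scale pixel values of a single image to [0, 1].

    Consistent input ranges across images typically help CNN
    training converge; eps guards against division by zero
    for constant-valued images.
    """
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + eps)
```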

Tune more hyperparameters: In this thesis, the focus was mainly on the number of convolutional and max-pooling pairs, as well as the number of neurons in the fully-connected layer. We focused on tuning these parameters because we thought they had the most influence on feature extraction and modelling of the image data. Testing other hyperparameters, such as different optimizers, activation functions, and layer types, may contribute to better performance.
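A simple way to explore such additional hyperparameters is an exhaustive grid search over the candidate choices; the sketch below is generic and illustrative (the `evaluate` callback, which would train and score one model configuration, is a placeholder, not part of the thesis code):

```python
from itertools import product

def grid_search(param_grid, evaluate):
    """Evaluate every combination of hyperparameter values.

    param_grid maps hyperparameter names to lists of candidate
    values; evaluate takes one configuration dict and returns a
    score (higher is better). Returns the best configuration
    together with its score.
    """
    names = sorted(param_grid)
    best_cfg, best_score = None, float("-inf")
    for values in product(*(param_grid[n] for n in names)):
        cfg = dict(zip(names, values))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

For example, `param_grid = {"optimizer": ["adam", "sgd"], "activation": ["relu", "elu"]}` would evaluate all four combinations; for larger grids, random or Bayesian search scales better.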

Testing other data augmentation methods: The augmentation method applied directly to the images in this thesis was mainly flipping the images both vertically and horizontally. We also obtained some augmentation through max-pooling due to downsampling, which blurred the feature maps. The reasoning for using only vertical and horizontal flips was our assumption that, for instance, rotating a well image would interfere with the underlying characteristics of the well. However, other image augmentation techniques could probably be used, such as zooming, rotating, or other creative methods.
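The flip-based augmentation described above can be sketched in a few lines of NumPy (the function name is illustrative; axis 0 is assumed to be depth and axis 1 the lateral direction):

```python
import numpy as np

def flip_augment(img):
    """Return the original image plus its vertical and horizontal flips.

    These two flips preserve the well geometry, unlike arbitrary
    rotations: flipud mirrors along the depth axis (rows), fliplr
    mirrors along the lateral axis (columns).
    """
    return [img, np.flipud(img), np.fliplr(img)]
```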


Code Listings

A.1 General code for construction of CNN model and