
The thesis aims to verify the Deep BSDE method and the two variants of DBDP, as well as an extension for solving equations involving the fractional Laplacian. Both Deep BSDE and DBDP successfully avoid the curse of dimensionality by reformulating PDEs as learning problems.

A thorough presentation of the background theory is given, as this theory is important for understanding the deep learning algorithms. Stochastic calculus is discussed, a deep dive is made into Lévy processes, and neural networks are introduced. We focus on two main methods for solving semi-linear parabolic PDEs, namely Deep BSDE and the two variants of DBDP. Both variants of DBDP are implemented in Python using the TensorFlow 2.0 framework, and numerical results for several examples are provided. Finally, the extension for solving equations involving the fractional Laplacian is presented; this algorithm is also implemented using the TensorFlow 2.0 framework.
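As a rough illustration of what such an implementation involves, the sketch below is a minimal example (not the thesis code; the layer widths, activations, and the 100-dimensional setting are illustrative assumptions) of the kind of per-time-step subnetwork used in DBDP-style schemes, mapping a sampled forward state to approximations of the solution value and its gradient.

```python
# Minimal sketch, assuming TensorFlow 2.x; not the thesis implementation.
import tensorflow as tf

def make_step_network(dim, hidden=110):
    """Feed-forward net mapping x in R^dim to approximations of u(t_n, x) and D_x u(t_n, x)."""
    inputs = tf.keras.Input(shape=(dim,))
    h = tf.keras.layers.Dense(hidden, activation="relu")(inputs)
    h = tf.keras.layers.Dense(hidden, activation="relu")(h)
    u = tf.keras.layers.Dense(1)(h)         # value approximation u(t_n, x)
    grad_u = tf.keras.layers.Dense(dim)(h)  # gradient approximation D_x u(t_n, x)
    return tf.keras.Model(inputs=inputs, outputs=[u, grad_u])

# Example: one subnetwork for a 100-dimensional problem, evaluated on a batch
# of simulated forward states X_{t_n}.
net = make_step_network(dim=100)
x = tf.random.normal((256, 100))
u_pred, grad_pred = net(x)
print(u_pred.shape, grad_pred.shape)  # (256, 1) (256, 100)
```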

Both Deep BSDE and the two variants of DBDP successfully solve 100-dimensional semi-linear parabolic PDEs in most cases; both variants of DBDP converge to the wrong value for only one of the test examples. Solving such high-dimensional PDEs is in practice not possible with previous methods. Unfortunately, the results for the fractional Laplace equation are somewhat disappointing. The algorithm gives some useful results in one dimension for α = 1.8, but in the other cases the results are of no use. The reason might be that the tails of the α-stable processes are simply too fat, so that stable convergence is not possible. To confirm or discard this hypothesis, further testing should be done.
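To make the fat-tail hypothesis concrete, the short sketch below (a minimal illustration, not part of the thesis experiments; the sample size and threshold are arbitrary choices) compares empirical tail probabilities of α-stable and Gaussian samples using SciPy.

```python
# Minimal sketch comparing tail mass of alpha-stable and Gaussian samples;
# alpha = 1.8 matches the one case that gave useful results.
import numpy as np
from scipy.stats import levy_stable, norm

n = 100_000
alpha = 1.8

stable_samples = levy_stable.rvs(alpha, 0.0, size=n, random_state=0)  # beta = 0 (symmetric)
gauss_samples = norm.rvs(size=n, random_state=0)

# Empirical tail probabilities P(|X| > 10): the stable law keeps noticeable
# mass far out in the tails, while the Gaussian tail is essentially zero.
print("alpha-stable P(|X| > 10):", np.mean(np.abs(stable_samples) > 10))
print("gaussian     P(|X| > 10):", np.mean(np.abs(gauss_samples) > 10))
```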

One could, for instance, test the algorithm with CGMY processes, a class of Lévy processes whose Lévy measure decays exponentially and which therefore have thinner tails.
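The contrast can be sketched as follows (the parameter values C, G, M, Y below are arbitrary illustrative choices, not calibrated): the CGMY Lévy density carries an exponential damping factor on each side of the origin, whereas the symmetric α-stable density decays only polynomially.

```python
# Minimal sketch of the CGMY Lévy density versus a symmetric alpha-stable one;
# parameter values are illustrative only.
import numpy as np

def cgmy_levy_density(x, C=1.0, G=5.0, M=5.0, Y=0.5):
    """CGMY Lévy density for x != 0: exponential decay in |x| on both sides."""
    x = np.asarray(x, dtype=float)
    decay = np.where(x > 0, M, G)
    return C * np.exp(-decay * np.abs(x)) / np.abs(x) ** (1 + Y)

def stable_levy_density(x, alpha=1.8, c=1.0):
    """Symmetric alpha-stable Lévy density for x != 0: only polynomial decay."""
    x = np.asarray(x, dtype=float)
    return c / np.abs(x) ** (1 + alpha)

x = np.array([1.0, 2.0, 5.0, 10.0])
print("CGMY        :", cgmy_levy_density(x))
print("alpha-stable:", stable_levy_density(x))
```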

Towards the end of the work on this thesis, Germain et al. (2020) was published. This article presents a new multistep method that is somewhat similar to DBDP, and shows that the multistep algorithm performs better than, or similarly to, Deep BSDE and DBDP. In particular, it would be interesting to test such an algorithm on the fractional Laplace equation to see whether it would be an improvement.

Bibliography

Asmussen, S., Glynn, P.W., 2007. Stochastic Simulation: Algorithms and Analysis. Springer. doi:10.1007/978-0-387-69033-9.

Asmussen, S., Rosiński, J., 2001. Approximations of small jumps of Lévy processes with a view towards simulation. Journal of Applied Probability 38, 482–493. URL: http://www.jstor.org/stable/3215901.

Barles, G., Buckdahn, R., Pardoux, E., 1997. Backward stochastic differential equations and integral-partial differential equations. Stochastics and Stochastics Reports 60, 57–83. doi:10.1080/17442509708834099.

Carr, P.P., Geman, H., Madan, D.B., Yor, M., 2003. Stochastic volatility for Lévy processes. Mathematical Finance, 345–382. URL: https://engineering.nyu.edu/sites/default/files/2019-03/Carr-stochastic-volatility-levy-processes.pdf, doi:10.2139/ssrn.314979.

Cohen, S., Rosiński, J., 2007. Gaussian approximation of multivariate Lévy processes with applications to simulation of tempered stable processes. Bernoulli 13, 195–210. doi:10.3150/07-BEJ6011.

Cont, R., Tankov, P., 2004. Financial Modelling with Jump Processes. Chapman & Hall/CRC Financial Mathematics Series.

Cybenko, G., 1989. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2, 303–314. doi:10.1007/BF02551274.

Duchi, J., Hazan, E., Singer, Y., 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12, 2121–2159.

E, W., Han, J., Jentzen, A., 2017a. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. arXiv e-prints, arXiv:1706.04702.

E, W., Han, J., Jentzen, A., 2017b. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Communications in Mathematics and Statistics 5, 349–380. doi:10.1007/s40304-017-0117-6.

Germain, M., Pham, H., Warin, X., 2020. Deep backward multistep schemes for nonlinear PDEs and approximation error analysis. arXiv e-prints, arXiv:2006.01496.

Glasserman, P., 2003. Monte Carlo Methods in Financial Engineering. Springer.

Goodfellow, I., Bengio, Y., Courville, A., 2016. Deep Learning. The MIT Press.

Han, J., 2019. Deep BSDE Solver in TensorFlow (2.0). URL: https://github.com/frankhan91/DeepBSDE.

Han, J., Jentzen, A., E, W., 2017. Solving high-dimensional partial differential equations using deep learning. arXiv e-prints, arXiv:1707.02568.

Hochreiter, S., 1991. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems.

Hornik, K., 1991. Approximation capabilities of multilayer feedforward networks. Neural Networks 4, 251–257. URL: http://www.sciencedirect.com/science/article/pii/089360809190009T, doi:10.1016/0893-6080(91)90009-T.

Huré, C., Pham, H., Warin, X., 2019. Deep backward schemes for high-dimensional nonlinear PDEs. arXiv e-prints, arXiv:1902.01599.

Kingma, D.P., Ba, J., 2014. Adam: A method for stochastic optimization. arXiv e-prints, arXiv:1412.6980.

Kwaśnicki, M., 2015. Ten equivalent definitions of the fractional Laplace operator. arXiv e-prints, arXiv:1507.07356.

Lawler, G.F., 2014. Stochastic Calculus: An Introduction with Applications.

Lu, Z., Pu, H., Wang, F., Hu, Z., Wang, L., 2017. The expressive power of neural networks: A view from the width, in: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (Eds.), Advances in Neural Information Processing Systems 30. Curran Associates, Inc., pp. 6231–6239. URL: http://papers.nips.cc/paper/7203-the-expressive-power-of-neural-networks-a-view-from-the-width.pdf.

Nocedal, J., Wright, S.J., 2006. Numerical Optimization. 2 ed., Springer.

Øksendal, B., 2013. Stochastic Differential Equations. 6 ed., Springer.

Pardoux, E., 1995. Backward stochastic differential equations and applications, in: Chatterji, S.D. (Ed.), Proceedings of the International Congress of Mathematicians, Birkhäuser Basel, Basel. pp. 1502–1510.

Pardoux, E., Răşcanu, A., 2014. Stochastic Differential Equations, Backward SDEs, Partial Differential Equations. Springer.

Pardoux, E., Tang, S., 1999. Forward-backward stochastic differential equations and quasilinear parabolic PDEs. Probability Theory and Related Fields 114, 123–150. URL: https://doi.org/10.1007/s004409970001.

Pérez-Cruz, F., 2018. URL: http://www.da.inf.ethz.ch/teaching/2018/DeepLearning/.
