
NTNU Norwegian University of Science and Technology
Faculty of Engineering
Department of Marine Technology

Roar Stavnes

Vision-Aided Inertial Navigation for Underwater Vehicles Using ArUco Markers

Master’s thesis in Marine Technology
Supervisor: Asgeir J. Sørensen
Co-supervisors: Henrik Schmidt-Didlaukies and Erlend A. Basso
June 2021


NTNU Trondheim
Norwegian University of Science and Technology
Department of Marine Technology

MASTER OF TECHNOLOGY THESIS DEFINITION (30 SP)

Name of the candidate: Roar Stavnes
Field of study: Marine cybernetics

Thesis title (Norwegian): Maskinsynkorrigert treghetsnavigasjon for undervannsfartøy ved bruk av ArUco markører

Thesis title (English): Vision-Aided Inertial Navigation for Underwater Vehicles Using ArUco Markers

Background

The transition from traditional offshore energy production towards subsea production has resulted in increasing interest in autonomy. By intelligent reasoning based on sensory information, autonomous systems can perform tasks with little to no human interaction. In other words, autonomous robots are an excellent choice for permanently subsea resident robots, being ready 24/7 for planned on-demand operations as well as being available as a first response for unplanned operations. Replacing traditional work-class ROVs has enormous potential and impact by eliminating tethered connections to surface vessels, lowering the CO2 footprint and increasing the efficiency of marine operations. However, in the transition towards autonomous inspection, maintenance, and repair (IMR) operations, navigation is one of the problems that remain to be solved.

As the modern literature and research on underwater navigation for IMR vehicles are lean, the primary focus and objective of this thesis is to derive a system for accurate pose (position and orientation) estimation near subsea intervention and maintenance locations.

Scope of Work

1. Perform a background study to provide information and relevant references on:

• Navigation systems for underwater vehicles.

• Methods of computer vision and its application for computer vision-based navigation of robots and vehicles.

• Review the available and open-source computer vision software libraries.

2. Consider how to estimate the vehicle’s pose based on vision sensors and landmarks.

3. Perform a simulation study to determine the estimation accuracy of the computer vision application and evaluate its performance against the existing solutions.

4. Propose a method to close the loop between the vision-based pose estimation and the state-of-the-art inertial navigation system.

5. Present the simulation and experimental setup, including the hardware layout.

6. Perform a simulation study to test and verify the navigation system proposed in 4.

7. Implement the design proposed in 4 on the BlueROV2 and test the implementations experimentally at the Marine Cybernetics Laboratory at the Norwegian University of Science and Technology.


Specifications

At startup, the student shall provide a week plan of work (maximum two pages) for the entire project period, with main activities and milestones. This plan should be updated on a monthly basis in agreement with the supervisor.

Every weekend throughout the project period, the candidate shall send a status email to the supervisor and co-advisors, providing two brief bulleted lists: 1) work done in the recent week, and 2) work planned for the coming week.

The scope of work may prove to be larger than initially anticipated. With the approval of the supervisor, described topics may be deleted or reduced in extent without consequences for the grading.

The candidate shall present a personal contribution to the resolution of problems within the scope of work. Theories and conclusions should be based on mathematical derivations and logical reasoning, identifying the steps in the deduction.

The report shall be organized in a logical structure to give a clear exposition of background, problem/research statement, design/method, analysis, and results. The text should be brief and to the point, with clear language. Rigorous mathematical deductions and illustrative figures are preferred over lengthy textual descriptions. The report shall have font size 11 pts., and it is not expected to be longer than 70 A4 pages (100 B5 pages) from introduction to conclusion, unless otherwise agreed. It shall be written in English (preferably US) and contain the following elements: title page, abstract, preface (incl. description of help, resources, and internal and external factors that have affected the project process), acknowledgement, project definition, list of symbols and acronyms, table of contents, introduction (project background/motivation, objectives, scope and delimitations, and contributions), technical background and literature review, problem formulation, method, results and analysis, conclusions with recommendations for further work, references, and optional appendices.

Figures, tables, and equations shall be numbered. The original contribution of the candidate and material taken from other sources shall be clearly identified. Work from other sources shall be properly acknowledged using quotations and a Harvard or Vancouver reference style (e.g., the natbib LaTeX package). The work is expected to be conducted in an honest and ethical manner, without any sort of plagiarism or misconduct, which is taken very seriously by the university and will result in consequences. NTNU can use the results freely in research and teaching by proper referencing, unless otherwise agreed.

The thesis shall be submitted with an electronic copy to the main supervisor and department according to NTNU administrative procedures. The final revised version of this thesis definition shall be included after the title page.

Computer code, pictures, videos, data series, etc., shall be included electronically with the report.

Start date: 15 January, 2021
Due date: 10 June, 2021
Supervisor: Asgeir J. Sørensen

Co-advisor(s): Henrik Schmidt-Didlaukies and Erlend A. Basso

Trondheim, 09.06.2021

_______________________________

Professor Asgeir J. Sørensen Supervisor


Preface

The research presented in this Master’s thesis is the result of the author’s work from January to June of 2021 at the Department of Marine Technology (IMT) at the Norwegian University of Science and Technology (NTNU). The independent work was done with supervision from Professor Asgeir J. Sørensen at IMT NTNU and the co-advisors Henrik Schmidt-Didlaukies at IMT NTNU and Erlend A. Basso at the Department of Engineering Cybernetics (ITK) at NTNU.

Writing a thesis in the middle of the COVID-19 pandemic has been challenging. However, the supervisor’s and co-advisors’ adaptability in terms of digital guidance made this thesis possible, despite the lockdown restrictions experienced.

The experiment performed in this thesis was carried out in collaboration with the Ph.D. candidates Henrik Schmidt-Didlaukies and Erlend A. Basso over three weeks at the Marine Cybernetics Laboratory (MC-lab) at NTNU in May of 2021. The interdisciplinary collaboration has been decisive for the product of this thesis, and the practical approach has been a treasured experience in order to gain a deeper understanding of the complexities of a physical system.


Acknowledgements

I am grateful to have had Professor Asgeir J. Sørensen at IMT NTNU as my supervisor, and I would like to thank him for his excellent guidance throughout the work of this Master’s thesis. His multidisciplinary knowledge has been essential in defining the thesis and has challenged my way of thinking. Furthermore, the discussions with Sørensen and his effort to make the Master’s work exceptional, such as the invitation to the field test of the Eelume robot, have been extremely motivating.

I want to thank my two co-advisors, Ph.D. candidates Henrik Schmidt-Didlaukies and Erlend A. Basso, for their eagerness and inspiring ideas. I am thankful for the exciting discussions and the fruitful weeks at the MC-lab; it has been an educational journey.

Trondheim, June 10, 2021
Roar Stavnes


Abstract

The transition from traditional offshore energy production towards subsea production has caused a rapidly increasing need for underwater operations in the recent decade. This rapid transition has motivated autonomous operations in order to increase the efficiency of marine operations and lower the CO2 footprint. In the transition towards autonomous operations, navigation is one of the main challenges that must be solved, motivating the work of this thesis.

This thesis presents a method for fusing computer vision with state-of-the-art inertial navigation systems, focusing on improved pose estimation near intervention and maintenance locations. It starts with the presentation of the necessary hardware in Chapter 3. Chapter 4 then motivates the use of ArUco markers for robust detection and pose estimation using a simple setup consisting of a single camera. Before moving on to the observer design, a simple simulation study is carried out, evaluating the accuracy of detection and pose estimation of the ArUco markers. Furthermore, the fusion of the computer vision and navigation systems is derived in Chapter 5, based on similar approaches in the literature. Finally, the implementations are tested and validated through a simulation and experimental study, described in Chapter 6.

The principal contributions of the thesis are related to the extension of an existing open-source simulation software for testing computer vision applications fused with navigation and control. Moreover, an experimentally validated navigation system design for improved localization using fiducial ArUco markers is derived. The proposed navigation system obtained satisfactory results, with a standard deviation in position of 5.61 cm and a mean position error of 10 cm in the experimental case. Compared to existing geophysical techniques and acoustic transponders and modems, the proposed navigation system has demonstrated high performance and accuracy, with the potential to replace existing navigation systems in the vicinity of intervention and maintenance locations.


Sammendrag

Skiftet fra tradisjonell olje- og gassproduksjon mot produksjonsenheter på havbunnen har skapt et økt behov for undervannsoperasjoner. Dette har motivert bruken av autonomi for å effektivisere operasjoner og redusere miljøavtrykk. Navigasjon er en av utfordringene som står uløst i overgangen mot autonome undervannsoperasjoner. Dette har motivert arbeidet bak denne hovedoppgaven.

Avhandlingen presenterer en metode for å fusjonere maskinsyn med dagens treghetsnavigasjonssystem, hvor hovedfokuset ligger i forbedring av posisjons- og orienteringsestimering i nærheten av operasjonelle områder og vedlikeholdsområder. Oppgaven starter med å presentere de nødvendige sensorene i Kapittel 3, etterfulgt av motivasjon for bruk av lettgjenkjennelige ArUco-markører for estimering av posisjon og orientering i Kapittel 4. Før oppgaven går videre til designet av navigasjonssystemet, er det utført en simuleringsstudie for å avgjøre nøyaktigheten til maskinsynestimatet for videre anvendelse. Kapittel 5 utleder det fusjonerte navigasjonssystemet basert på lignende implementasjoner i litteraturen, hvor orienteringsestimatet er forkastet til fordel for et magnetometer. Til slutt er metoden testet og validert gjennom simulering og eksperiment ved det marinkybernetiske laboratoriet som tilhører Norges teknisk-naturvitenskapelige universitet, som beskrevet i Kapittel 6.

Hovedbidraget til denne avhandlingen er en utvidelse av en eksisterende åpen kildekode for simulering av metoder i maskinsyn fusjonert med navigasjon og kontroll, med videre potensial for simulering av autonome undervannsoperasjoner. Videre har oppgaven bidratt med et design av et navigasjonssystem for forbedret posisjons- og orienteringsestimat ved bruk av lettgjenkjennelige markører. Designet oppnådde høy ytelse gjennom både simulering og eksperiment, hvor posisjonsstandardavviket kom på 5.61 cm med en gjennomsnittsfeil på 10 cm. Sammenlignet med eksisterende navigasjonssystemer har denne løsningen et stort potensial for bruk som lokal navigasjon i nærheten av operasjonelle soner som krever høy posisjons- og orienteringsnøyaktighet.


Table of Contents

List of Figures xi

List of Tables xiii

Notation and Acronyms xv

1 Introduction 1

1.1 Background and Motivation . . . 1

1.1.1 Problem Formulation . . . 2

1.2 Existing Theory . . . 2

1.2.1 Inertial Navigation . . . 2

1.2.2 Acoustic Transponders and Modems . . . 3

1.2.3 Geophysical Navigation . . . 4

1.3 Research Questions and Methodology . . . 5

1.4 Main Contributions . . . 6

1.5 Outline . . . 7

2 Preliminaries 9

2.1 Coordinate Frames . . . 9

2.2 Generalized Coordinates . . . 11

2.3 Lie Groups . . . 11

2.4 Attitude Representation . . . 14

2.4.1 Euler Angles . . . 14

2.4.2 Unit Quaternions . . . 15

3 Hardware 17

3.1 Inertial Measurement Unit . . . 17

3.1.1 Sensor Model . . . 17

3.1.2 Implementation . . . 18

3.1.3 Calibration . . . 19

3.2 Camera . . . 21

3.2.1 Sensor Model . . . 21

3.2.2 Implementation . . . 25

3.2.3 Calibration . . . 25

4 Computer Vision 31

4.1 ArUco Marker . . . 31

4.1.1 Marker Detection . . . 31

4.1.2 Pose Estimation . . . 33

4.2 Camera Coordinate Frame . . . 34

4.3 Accuracy . . . 35

5 Navigation System 39

5.1 Inertial Navigation Systems . . . 39

5.1.1 Vision-based Position Aiding . . . 40

5.1.2 Vector-based Attitude Aiding . . . 41

5.2 Multiplicative Extended Kalman Filter . . . 42

5.2.1 Linearized Model . . . 46

6 Implementation 51

6.1 Simulation Setup . . . 51

6.2 Experimental Setup . . . 53

6.2.1 Hardware . . . 53

6.2.2 Control . . . 54

6.2.3 Operating Condition . . . 57

7 Results 59

7.1 Simulation . . . 60

7.2 Experiment . . . 63

8 Discussion 67

9 Conclusions and Further Work 71

9.1 Conclusions . . . 71

9.2 Further Work . . . 72

References 73


A Specification BlueROV2 I

A.1 Control Allocation . . . I

A.2 Thruster Map . . . III


List of Figures

1.1 Outline of underwater navigation classifications . . . 3

2.1 Reference frames for underwater vehicles . . . 10

3.1 Raw IMU measurements . . . 20

3.2 The pinhole camera model . . . 22

3.3 Camera calibration . . . 30

4.1 ArUco marker . . . 32

4.2 Marker detection . . . 33

4.3 Transformation from Body-fixed frame to camera frame . . . 35

4.4 Raw pose measurements . . . 36

4.5 Pose estimation using raw measurements . . . 36

6.1 UUV simulator . . . 52

6.2 Marine Cybernetics Laboratory . . . 54

6.3 Hardware setup . . . 55

6.4 Logic of the Dualshock 4 controller . . . 56

6.5 Experimental setup . . . 57

7.1 Video of the simulation and the experiment . . . 59

7.2 MEKF pose estimation in the simulated condition . . . 61

7.3 MEKF estimation error in the simulated condition . . . 61

7.4 Error distribution in the simulated conditions . . . 62

7.5 MEKF pose estimation in the experimental condition . . . 63

7.6 MEKF estimation error in the experimental condition . . . 64

7.7 Error distribution in the experimental condition . . . 65

A.1 BlueROV2 technical drawing . . . I

A.2 BlueROV2 thruster map . . . III


List of Tables

1.1 Acoustic positioning accuracy . . . 4

1.2 Navigational sensors for underwater applications . . . 5

2.1 SNAME notation . . . 9

3.1 IMU specifications . . . 19

3.2 Standard deviation of the IMU measurements . . . 20

3.3 Noise density of the IMU measurements . . . 21

3.4 Camera specifications . . . 26

5.1 Variables in the error-state MEKF . . . 44

7.1 Statistics of the MEKF in the simulated condition . . . 60

7.2 Statistics of the MEKF in the experimental condition . . . 65

A.1 Thruster placement of the BlueROV2 . . . II


Notation and Acronyms

AINS Aided-inertial navigation system

AUV Autonomous underwater vehicle

DOF Degrees of freedom

GNSS Global Navigation Satellite System

IMR Inspection, maintenance, and repair

IMU Inertial measurement unit

INS Inertial navigation system

EKF Extended Kalman filter

MC-lab Marine Cybernetics Laboratory

MEKF Multiplicative extended Kalman filter

MEMS Microelectromechanical systems

NED North-East-Down

NTNU Norwegian University of Science and Technology

PWM Pulse-width modulation

ROS Robot operating system

ROV Remotely operated vehicle

RPi Raspberry Pi

SLAM Simultaneous localization and mapping

UUV Unmanned underwater vehicle

VINS Vision-aided inertial navigation system

Fn The North-East-Down frame


Fb The Body-fixed frame

Fs The sensor frame

Fc The camera frame

Fmi The coordinate frame of marker i

Fi The coordinate frame i

SO(3) The Special Orthogonal Group of order three

SE(3) The Special Euclidean Group of order three

so(3) The Lie Algebra of SO(3)

Rij ∈ SO(3) The rotation matrix describing the orientation of Fj with respect to Fi

Hji ∈ SE(3) The homogeneous transformation matrix describing the transformation from Fj to Fi

[p]× The skew-symmetric representation of a vector p ∈ R3

p̄ = [pT 1]T The homogeneous representation of a vector p ∈ Rn

I, In The (n × n) identity matrix

0m×n The (m × n) zero matrix

0n The (n × n) zero matrix

Exp(φu) The exponential map of unit quaternions
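Two of the operators above are easy to make concrete. The following Python sketch (my own illustration, not code from the thesis) implements the skew-symmetric map [p]× and an exponential map onto SO(3) via Rodrigues' formula; the nomenclature defines Exp(φu) on unit quaternions, so the rotation-matrix form here is an assumed equivalent.

```python
import numpy as np

def skew(p):
    """[p]x: skew-symmetric matrix such that skew(p) @ q == cross(p, q)."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

def exp_so3(phi_u):
    """Exponential map from a rotation vector to SO(3) (Rodrigues' formula)."""
    angle = np.linalg.norm(phi_u)
    if angle < 1e-12:
        return np.eye(3)
    K = skew(phi_u / angle)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)
```

For example, `exp_so3([0, 0, pi/2])` is a 90° yaw that maps the x-axis onto the y-axis.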


Chapter 1

Introduction

This chapter gives an introduction to the background and motivation, objectives, research questions, methodology, and main contributions of the thesis.

1.1 Background and Motivation

Underwater operations have become increasingly important in the last decades, ever since the offshore oil adventure of the 1960s. The field started as a discipline where divers executed all deepwater operations, even at depths of 300 meters below the surface of the North Sea. The extreme deepwater conditions resulted in 17 deaths and more than 100 serious injuries in the North Sea from the 1960s to the 1980s [39]. These challenges strongly motivated the development of underwater robots in order to reduce human involvement in such extreme environments.

The vast majority of underwater vehicles utilized for subsea operations are work-class remotely operated vehicles (ROVs), characterized by their box shape and tether communication with the operator. The GNSS-denied environment requires a wired connection to have human-in-the-loop control, resulting in a large surface vessel following and assisting the ROV. Consequently, this leads to high operational costs and environmental impact. These disadvantages have challenged traditional thinking and led to research on new ways to design and operate underwater vehicles.

Autonomous underwater operations are one of the fields that have gained considerable interest. An autonomous system can perform tasks with little to no human interaction by intelligent reasoning based on sensory information. In other words, underwater robots can operate tetherless without human-in-the-loop control. This makes vehicle-manipulators, such as Eelume, an excellent choice for permanently subsea resident robots, being ready 24/7 for planned on-demand intervention operations as well as being available as a first response for unplanned operations [27, 44]. The potential and impact of autonomous underwater operations are enormous. They will, among others, cause significant reductions in operational costs and reduce environmental impact through less offshore service vessel activity. The increasing interest among researchers and industry partners has led to the recent establishment of the NTNU-VISTA Centre for Autonomous Operations Subsea. One of the main aspects of the centre is how to navigate AUVs between specified installations to perform inspections and, if necessary, intervention tasks [45].

1.1.1 Problem Formulation

Subsea inspection, maintenance, and repair (IMR) operations are challenging to solve autonomously. In contrast to other AUV tasks, such as oceanographic surveys, the requirement for accurate navigation is distinct. Surveys require accurate navigation throughout the entire operation, while IMR vehicles tolerate more inaccuracy during transits between subsea sites but require high precision during intervention operations. For example, suppose the IMR vehicle is closing a valve autonomously; it is then essential to know the exact location of the manipulator's arm.

As the modern literature and research on underwater navigation for IMR vehicles are sparse, the primary focus of this thesis is deriving a system for accurate pose (position and orientation) estimation near subsea intervention and maintenance locations.

1.2 Existing Theory

AUV navigation has gained considerable interest among researchers since development began in earnest in the 1970s. Underwater navigation is particularly challenging compared to surface navigation due to the rapid attenuation of GNSS and radio-frequency signals.

For the interested reader, [35] provides a review of the general techniques in AUV navigation and localization. According to the article, the navigation techniques can be classified as in Figure 1.1.

1.2.1 Inertial Navigation

Inertial navigation is one of the three main categories of AUV navigation and localization. This technique achieves the pose estimate using a kinematic model of rigid-body motion, where measured accelerations and angular velocities propagate the current state. The accelerations and angular rates are measured using an inertial measurement unit (IMU), consisting of accelerometers and gyroscopes. Nevertheless, every method in this category suffers from unbounded position error growth, caused by the integration of measurement noise.


Figure 1.1: Outline of underwater navigation classifications. Courtesy of [35].

Hence, the inertial navigation solution will drift rapidly with time. The performance of IMUs is extremely variable, ranging from expensive high-precision accelerometers and ring laser gyroscopes to cheap MEMS-based sensors (see Table 1.2). Chapter 5 introduces the rigid-body kinematic model and describes inertial navigation in detail.
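As a toy illustration of this unbounded drift (my own sketch, not part of the thesis), the snippet below integrates a one-dimensional accelerometer with a small constant bias. With zero true motion, the position error grows quadratically with time, roughly 0.5·b·t², i.e., about 50 m after 100 s for a 0.01 m/s² bias.

```python
import numpy as np

def dead_reckon(accel_meas, dt):
    """Integrate acceleration twice to propagate velocity and position."""
    v, p = 0.0, 0.0
    for a in accel_meas:
        v += a * dt  # velocity from acceleration
        p += v * dt  # position from velocity
    return p

# True acceleration is zero; the accelerometer reports a constant bias.
dt, t_end, bias = 0.01, 100.0, 0.01
n = int(t_end / dt)
drift = dead_reckon(np.full(n, bias), dt)  # close to 0.5 * bias * t_end**2
```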

1.2.2 Acoustic Transponders and Modems

The second technique, acoustic transponders and modems, estimates position using transponders located in the environment and mounted on the vehicle. An acoustic navigation system works in one of two ways.

• The first principle is similar to GNSS, where the system measures the range, i.e., the time of flight (TOF), to three or more transponders. The measured distances are then used to estimate the position by trilateration. This principle is known as either a long baseline (LBL) or short baseline (SBL) system, depending on the distance between the transponders [24].

• The second principle is the ultra-short baseline (USBL) system, consisting of one transceiver mounted on, e.g., a surface vessel. The method estimates the position by measuring the range and the bearing, i.e., the angle at which the acoustic signal approaches the vehicle. The bearing is calculated based on the difference in phase of the signal arriving at the vehicle's transceiver [35].
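The range-only principle in the first bullet can be sketched as a linear least-squares trilateration. This is an illustrative implementation under my own assumptions (known transponder positions, noise-free ranges), not code from the thesis or from [24]:

```python
import numpy as np

def trilaterate(transponders, ranges):
    """Least-squares position fix from ranges to known transponders.

    Subtracting the first range equation ||x - t_0||^2 = r_0^2 from the
    others cancels the quadratic term, leaving a linear system A x = b.
    """
    t0, r0 = transponders[0], ranges[0]
    A = 2.0 * (transponders[1:] - t0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(transponders[1:]**2, axis=1) - np.sum(t0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With four non-coplanar transponders the linearized system is square and the fix is exact; with noisy ranges, the least-squares solution spreads the error over all measurements.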

Note that the position is estimated relative to the environmental transponders. However, the vehicle's global position can easily be computed if the locations of the external transponders are known. Thus, acoustic positioning and inertial navigation systems are often combined to eliminate the unbounded position drift, in what is known as aided-inertial navigation systems (AINS) [21, 35]. The accuracy of acoustic positioning is presented in [21], and the results are reproduced in Table 1.1. Note that the equipment used to produce these results consists of high-quality sensors developed by Kongsberg Maritime and the Norwegian Defence Research Establishment; a cheaper system may have significantly higher uncertainties.

Table 1.1: AUV horizontal position uncertainty (1σ) due to ship attitude uncertainty of 0.01° (1σ) in roll and pitch and 0.1° (1σ) in heading. The ship attitude is φ = 0, θ = 0, ψ = 0. The relative horizontal position between the AUV and the USBL transducer is x = 50 m, y = 50 m [21].

AUV depth [m]                  50    100   500   1000  3000
AUV position uncertainty [m]   0.12  0.13  0.17  0.28  0.75
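The growth with depth in Table 1.1 is consistent with first-order error propagation: an attitude error δθ (in radians) displaces the acoustic fix by roughly (lever arm)·δθ. The model below is my own reconstruction, not the one used in [21], but it reproduces the tabulated values to within rounding:

```python
import numpy as np

def usbl_uncertainty(depth, dx=50.0, dy=50.0,
                     sigma_rp_deg=0.01, sigma_hdg_deg=0.1):
    """First-order horizontal fix uncertainty from ship attitude errors.

    Roll (pitch) tilts the lever arm in the y-z (x-z) plane; heading
    rotates the horizontal offset. Contributions add in quadrature.
    """
    rp = np.deg2rad(sigma_rp_deg)
    hdg = np.deg2rad(sigma_hdg_deg)
    roll = np.hypot(dy, depth) * rp
    pitch = np.hypot(dx, depth) * rp
    head = np.hypot(dx, dy) * hdg
    return np.sqrt(roll**2 + pitch**2 + head**2)
```

At 50 m depth the heading term dominates (about 0.12 m), while at 3000 m the roll and pitch terms acting on the long vertical lever arm dominate (about 0.75 m), matching Table 1.1.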

1.2.3 Geophysical Navigation

The last category determines the vehicle's position by detecting and identifying previously known environmental features. Hence, geophysical navigation relies on onboard sensors only. Thus, in contrast to acoustic transponders and modems, geophysical navigation allows low-cost operations where the AUV can be deployed in unknown waters without further human interaction until pick-up. For underwater navigation, simultaneous localization and mapping (SLAM) is widely used. SLAM is the process of a robot autonomously building a map of its environment through features and, at the same time, localizing itself by detecting the known features. See, e.g., [35] for various SLAM approaches. Extended Kalman filter (EKF) SLAM is one approach, where each of the n features is included as a state in the filter. Hence, EKF-SLAM is computationally expensive due to its O(n²) time scaling. Moreover, misinterpretation of landmarks is critical for the covariance correction in the Kalman filter, resulting in occasionally poor performance.
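The quadratic cost stems from the dense covariance: every landmark becomes correlated with the vehicle and with every other landmark, so a single update touches all O(n²) entries. A simplified sketch of the state-augmentation step (my own illustration with identity measurement Jacobians, not the thesis's filter):

```python
import numpy as np

def augment_landmark(x, P, z, meas_cov):
    """Append a newly observed 2-D landmark to the EKF-SLAM state.

    The landmark is initialized from the vehicle position plus the relative
    measurement z. It inherits correlation with the entire existing state,
    so P stays dense and grows quadratically with the number of landmarks.
    The new landmark's uncertainty equals the vehicle's uncertainty at the
    instant of observation plus the measurement noise.
    """
    lm = x[:2] + z                       # landmark fix from the vehicle pose
    x_aug = np.concatenate([x, lm])
    G = np.zeros((2, x.size))
    G[:, :2] = np.eye(2)                 # d(lm)/d(vehicle position)
    P_aug = np.block([
        [P,           P @ G.T],
        [G @ P, G @ P @ G.T + meas_cov],
    ])
    return x_aug, P_aug
```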

One important remark is that the SLAM estimate will drift with the INS solution until the vehicle detects a previously known feature. Moreover, new feature observations are assigned a position uncertainty equal to the vehicle's uncertainty at the instant of observation. In other words, the navigation system is never more accurate than its most reliable observation.


Table 1.2: Navigational sensors for underwater applications [35].

Compass. A compass provides a globally bounded heading reference. For cheap MEMS-based compasses, the heading reference is obtained by measuring the magnetic field. Consequently, a magnetic compass does not point to geographic north and is subject to bias in the presence of magnetic objects. Therefore, a gyrocompass, measuring the earth's rotation by using three gimbals, is an alternative for high-precision underwater navigation. This technique is unaffected by metallic objects and points to true north. Performance: accuracy within 1° to 2° for a modestly priced unit.

IMU. An inertial measurement unit (IMU) consists of gyroscopes and accelerometers, sometimes with magnetometers, fused to estimate the sensor platform's orientation, velocity, and gravitational forces.

• Gyroscope: measures the angular rate. Accuracy comes at the cost of price, where ring laser and fiber-optic gyroscopes are expensive high-precision alternatives, while MEMS-based gyroscopes are the cheap option.

• Accelerometer: measures the specific force required to accelerate a proof mass. Working principles include pendulum, MEMS, and vibrating beam, among others.

Since the IMU measures accelerations and angular rates, the estimated orientation and velocity will drift in time. Performance: gyroscope drift varies from 0.0001°/hr for a ring laser gyroscope to 60°/hr or more for a MEMS-based sensor; accelerometer bias varies from 0.01 mg for a MEMS-based unit to 0.001 mg for a pendulum [10].

1.3 Research Questions and Methodology

This Master’s thesis aims to improve underwater navigation using vision sensors. This involves deriving a method for pose estimation using computer vision and fusing it with state-of-the-art inertial navigation systems. The purpose and focus of the research are summarized in the following questions:

1. What effect does the underwater environment have on vision sensors?

2. How can vision sensors be applied to determine the relative distance and orientation between the sensor and observed objects/landmarks, and how accurate is the method?


3. How can state-of-the-art inertial navigation systems be fused with the position and orientation information from question 2?

The research methodology followed in the present work includes a review of relevant literature, simulations, and experiments. The relevant literature has opened new doors and inspired some of the concepts in this thesis, while the simulations and experiments have proved some concepts wrong. Simulations uncovered bugs in the initial design and were later used as a validation tool, identifying weaknesses through extensive testing. In this project, simulation was especially crucial due to the limited availability of the experimental facility.

Initially, the feasibility and implementation of computer vision for underwater applications were analyzed by reviewing the relevant literature and open-source libraries. Moreover, a simulation study of the computer vision system determined the accuracy of the position and attitude estimation. Furthermore, the fusion of the vision-based pose estimate and inertial measurements, referred to as a vision-aided inertial navigation system (VINS), was derived based on similar approaches in the literature [15, 40]. Finally, along with the derivation, the system was thoroughly tested and, in the end, verified at the experimental facility, the Marine Cybernetics Laboratory at the Norwegian University of Science and Technology.

1.4 Main Contributions

This thesis contributes to the transition towards subsea resident robots and addresses state estimation close to the subsea operational zones where a high precision position estimate is required. To summarize this work, the contributions are listed below:

• Design of a hardware architecture for UUV motion estimation.

• Extension upon existing open-source simulation software for testing vision-based navi- gation systems with the necessary sensors.

• Implementation of a vision-aided inertial navigation system capable of obtaining high precision pose estimates using ArUco markers.

• Implementation of a simple joystick controller for human-in-the-loop testing.

• Extensive simulation and experimental study of the system, including the main observations.

• Several suggestions regarding future work and applications of the system.


1.5 Outline

The thesis is organized in a manner that systematically addresses and motivates the implementation. The text is divided into chapters describing each of the project's main aspects, providing a pleasant reading flow and a look-up structure. The following chapters are included in the thesis:

Chapter 2 introduces the essential preliminaries behind the thesis.

Chapter 3 presents the essential hardware components and how they are modeled, implemented, and calibrated.

Chapter 4 introduces the computer vision approach and gives its advantages and disadvantages based on the relevant literature. Moreover, it presents a simulation study to determine the accuracy of the computer vision system in order to determine its applicability in the navigation system.

Chapter 5 proposes a methodology for implementing a vision-aided inertial navigation system using the Kalman filter.

Chapter 6 describes the simulation and experimental setup, including the test facility and the experimental platform.

Chapter 7 presents the results of the simulation and experimental study described in Chapter 6.

Chapter 8 discusses the main observations and results from Chapter 7.

Chapter 9 concludes the project in the context of the research questions and scope of work. Moreover, it suggests further work and applications of the system.

Appendix A presents the specifications of the BlueROV2.


Chapter 2

Preliminaries

Unlike most surface vehicles, the motion of underwater robots is usually described in six degrees of freedom. In other words, at least six independent coordinates are necessary to determine the position and orientation of the vehicle. In the marine community, the most common notation is adopted from the Society of Naval Architects and Marine Engineers (SNAME) in [38], see Table 2.1.

The geometry of rigid motions and the three-dimensional space plays a central role in robotics. Throughout this thesis, the Lie groups and their associated Lie algebras represent rigid-body transformations in three-dimensional space. See, e.g., [16, 19] for more details.

Table 2.1: The notation adopted from the Society of Naval Architects and Marine Engineers [38].

DOF           Forces and   Linear and           Position and Euler angles
              moments      angular velocities   relative to the inertial frame
1  Surge      X            u                    x
2  Sway       Y            v                    y
3  Heave      Z            w                    z
4  Roll       K            p                    φ
5  Pitch      M            q                    θ
6  Yaw        N            r                    ψ

2.1 Coordinate Frames

A large part of kinematics is concerned with establishing various coordinate frames to represent the position and orientation of rigid objects. In order to describe the position and orientation of the underwater vehicle, it is convenient to define a body-fixed frame, Fb, and an earth-fixed inertial frame. Based on the assumption of local navigation in confined areas, the North-East-Down (NED) frame Fn is used exclusively as the inertial frame in this thesis.

Figure 2.1: Reference frames for underwater vehicles.

Note that the NED frame is, strictly speaking, not inertial as the frame is fixed on the rotating earth. However, for low-speed applications, this is a common and justified simplification [15].

These two coordinate frames are defined as

Fn := (on, in, jn, kn),    Fb := (ob, ib, jb, kb),

where o ∈ R3 defines the origin, and i, j, k ∈ R3 denote the right-handed perpendicular set of unit basis vectors of the respective frame. The first-mentioned frame is inertial and earth-fixed, while the latter is fixed to the rigid body, as shown in Figure 2.1. Furthermore, suppose the UUV is equipped with sensors, such as an inertial measurement unit (IMU) and a vision sensor. In that case, one coordinate frame is assigned to each sensor to easily transform the measurements to Fb, where the motion is expressed. Three additional types of coordinate frames are encountered in the thesis,

Fs := (os, is, js, ks),    Fc := (oc, ic, jc, kc),    Fm := (om, im, jm, km),

where Fs represents the sensor frame with axes aligned with the IMU axes, Fc represents the camera frame, and Fm represents the marker frame.


2.2 Generalized Coordinates

For a marine craft, two common vectors used in defining the underwater vehicle state vector are η and ν [14]. The vector η is called the generalized position, defined as

η= [x y z φ θ ψ]T ∈R6. (2.1)

The generalized position is commonly divided into two parts, p = [x y z]T and Θ = [φ θ ψ]T, where p denotes the vehicle's position relative to the earth-fixed frame and Θ denotes the vehicle's orientation relative to the earth-fixed frame. Moreover, the generalized velocities are the time derivatives of the generalized position of the system:

η̇ = [ẋ ẏ ż φ̇ θ̇ ψ̇]T = [vT ωT]T. (2.2)

These quantities are all expressed in the inertial earth-fixed frame. However, it is advantageous to express the velocities in Fb when deriving the equations of motion. The velocities in Fb are denoted as

ν = [u v w p q r]T ∈R6. (2.3)

Similarly to the generalized position, the velocity vector is divided into two parts, ν1 = [u v w]T and ν2 = [p q r]T, where ν1 is the body-fixed linear velocity vector and ν2 is the body-fixed angular velocity vector.

2.3 Lie Groups

The kinematics of a rigid body can be derived globally in terms of Lie group and Lie algebra structures. More specifically, an element of the Lie group corresponds to a rigid-body configuration, while elements of the Lie algebra express velocity. In other words, the state space may be written in terms of the Lie group and algebra [16].

A valuable property of groups in general is that two elements of a group can be combined to produce an element in the same group. This operation is defined by the group operator, commonly denoted as multiplication. However, it does not necessarily represent multiplication in the ordinary sense, which is the case for the quaternion product introduced in the following subsection. Furthermore, the inverse of an element of the group is also an element of the same group. The general matrix multiplication defines the group operator for matrix groups, while the inverse matrix represents the opposite operation. All these properties make the concept of groups a potent tool for describing rigid body motions [16].

It is desirable to use a smoothly differentiable group for describing rigid motions, i.e., groups in which there are no singularities that can take motions to infinity. This leads to the definition of the Lie group:


Definition 2.3.1 (Lie Group) A Lie Group is a group G which is also a smooth manifold for which the group operator and the inverse are smooth mappings.

A Lie group of immediate interest is the Special Orthogonal Group of order three, denoted SO(3). The group consists of the rigid rotations about a fixed point in R3:

    SO(3) := {R ∈ R3×3 | det(R) = 1, R−1 = RT}.

The rotation matrix R ∈ SO(3) describes the orientation of one frame expressed in another, where the notation Rij denotes the orientation of Fj := (oj, ij, jj, kj) expressed in Fi := (oi, ii, ji, ki). The inverse transformation, (Rij)−1 = Rji, is unique and belongs to the same group; geometrically, it can be interpreted as the opposite rotation. The rotation matrix is found by the projection of the unit vectors of Fj onto Fi:

    Rij = [ ij·ii  jj·ii  kj·ii
            ij·ji  jj·ji  kj·ji
            ij·ki  jj·ki  kj·ki ],    (2.4)

where (·) denotes the dot product. It is convenient to derive the principal rotation matrices (one-axis rotations), Rλ,β, where λ and β denote the axis and angle of rotation, respectively. From (2.4) it is clear that

    Rx,φ = [ 1    0    0
             0   cφ  −sφ
             0   sφ   cφ ],

    Ry,θ = [  cθ   0   sθ
               0   1    0
             −sθ   0   cθ ],

    Rz,ψ = [ cψ  −sψ   0
             sψ   cψ   0
              0    0   1 ],    (2.5)

where cα and sα are short notations for cos(α) and sin(α), respectively. The Lie algebra of the Special Orthogonal Group, SO(3), may be represented as

    so(3) := {Ω ∈ R3×3 | Ω = −ΩT}.

The algebra is identified with R3 through the skew-symmetric representation, which is useful for calculating relative velocity transformations between coordinate frames [42]. The skew-symmetric representation of a vector ω = [ω1 ω2 ω3]T ∈ R3 is defined as

    [ω]× = [  0   −ω3   ω2
              ω3    0   −ω1
             −ω2   ω1    0  ],    (2.6)

where [·]× : R3 −→ so(3). The skew-symmetric representation possesses some useful properties,

[a]×b=a×b (2.7)

[a]×b=−[b]×a, (2.8)
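To make the operator concrete, the properties (2.6)-(2.8) can be verified numerically; the following NumPy sketch uses the illustrative helper name `skew`, which is not part of the thesis implementation:

```python
import numpy as np

def skew(w):
    """Map a vector in R^3 to its skew-symmetric matrix in so(3), cf. (2.6)."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 4.0])

# Property (2.7): [a]x b equals the cross product a x b.
assert np.allclose(skew(a) @ b, np.cross(a, b))
# Property (2.8): [a]x b = -[b]x a.
assert np.allclose(skew(a) @ b, -skew(b) @ a)
# Membership in so(3): the matrix is skew-symmetric.
assert np.allclose(skew(a), -skew(a).T)
```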


where a and b are any vectors belonging to R3. Now consider two rigidly attached frames, where the time-varying rotation matrix Rij represents the rotation of Fj relative to Fi. Assuming Rij is continuously differentiable, one may write the time derivative as

    [ω]× = Ṙij (Rij)T,    (2.9)

where the vector ω ∈ R3 is the time-varying angular velocity of Fj relative to Fi.

Consider now a rigid motion, which is a pure translation together with a pure rotation. Let Rij be the rotation matrix that specifies the orientation of Fj with respect to Fi, and let diij be the vector from the origin of Fi to the origin of Fj, expressed in Fi. Suppose the point p is rigidly attached to Fj, with pj denoting the vector from the origin of Fj to p, expressed in Fj. One may now express the coordinates of p with respect to Fi using

    pi = Rij pj + diij,    (2.10)

where pi denotes the vector from the origin of Fi to p, expressed in Fi. Hence, position and orientation may be expressed through the homogeneous representation

    p̄i = [ Rij   diij
           01×3    1  ] p̄j,    (2.11)

where the bar denotes the homogeneous coordinate representation, i.e., p̄ := [pT 1]T. The matrix encountered in (2.11) is referred to as the homogeneous transformation matrix

    Hij = [ Rij   diij
            01×3    1  ] ∈ R4×4,    (2.12)

with inverse

    Hji = (Hij)−1 = [ Rji   −Rji diij
                      01×3      1     ].    (2.13)

Homogeneous transformations inherit their name from the homogeneous coordinates, which form the transformation. The introduction of translation requires one additional dimension, meaning one can no longer rely on Cartesian coordinates alone. Instead, homogeneous coordinates are used to represent a point in space. The homogeneous coordinates of a point (x, y, z) in R3 are defined as (λx, λy, λz, λ) for a nonzero λ ∈ R. By this definition, multiplication by any nonzero factor λ represents the same point in space, hence the name homogeneous coordinates.

Homogeneous transformation matrices are a way of representing the Special Euclidean Group of dimension three,

    SE(3) := { [ R     d
                01×3   1 ] ∈ R4×4 | R ∈ SO(3), d ∈ R3 },

which is the group of rigid-body transformations on R3.

Furthermore, a useful property of rigid bodies in kinematic chains is the composition of multiple homogeneous transformations,

    Hji = { Hj,j+1 Hj+1,j+2 · · · Hi−1,i   if j < i
          { I4                             if j = i
          { (Hij)−1                        if j > i,    (2.14)

where I4 denotes the identity matrix of dimension 4 [42].
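The homogeneous machinery of (2.12)-(2.14) can be exercised in a short NumPy sketch; the helper names `H` and `H_inv` are illustrative, not from the thesis:

```python
import numpy as np

def H(R, d):
    """Homogeneous transformation matrix in SE(3), cf. (2.12)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = d
    return T

def H_inv(T):
    """Closed-form inverse, cf. (2.13): [R^T, -R^T d; 0, 1]."""
    R, d = T[:3, :3], T[:3, 3]
    return H(R.T, -R.T @ d)

# A 90-degree rotation about z together with a translation.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T01 = H(Rz, np.array([1.0, 0.0, 0.0]))
T12 = H(np.eye(3), np.array([0.0, 2.0, 0.0]))

T02 = T01 @ T12                            # composition in a chain, cf. (2.14)
assert np.allclose(H_inv(T01) @ T01, np.eye(4))

p2_bar = np.array([1.0, 1.0, 0.0, 1.0])    # homogeneous point in frame 2
p0_bar = T02 @ p2_bar                      # same point expressed in frame 0
```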

2.4 Attitude Representation

The attitude of the UUV relative to the inertial frame is given by the rotation matrix Rnb. Because the rotation matrix is less intuitive and most observer designs do not combine vector and matrix state representations, it is convenient to parameterize the rotation matrix. There are various attitude parametrizations in the literature, e.g., the unit-quaternion and Euler angle representations [15, 40]. In this thesis, the unit-quaternion representation is used exclusively in the implementations due to its computational advantages and lack of singularities. However, the Euler angle representation is used for debugging and presenting the results, as it is intuitive and easy to comprehend.

2.4.1 Euler Angles

There are various Euler angle representations, e.g., the zyx convention, defined by the following sequence of rotations

    Rnb(Θ) := Rz,ψ Ry,θ Rx,φ,

where Θ denotes the vector of body Euler-angle coordinates. Expanding the Euler angle representation with (2.5) yields

    Rnb(Θ) = [ cψcθ   −sψcφ + cψsθsφ   sψsφ + cψcφsθ
               sψcθ    cψcφ + sφsθsψ  −cψsφ + sθsψcφ
               −sθ     cθsφ            cθcφ          ].    (2.15)
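As a sanity check, the closed-form matrix (2.15) can be compared with the product of the principal rotations in (2.5); the NumPy sketch below (function names are illustrative) does exactly that:

```python
import numpy as np

def Rx(phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(psi):
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Rnb(phi, theta, psi):
    """Closed-form zyx Euler rotation, cf. (2.15)."""
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(theta), np.sin(theta)
    cps, sps = np.cos(psi), np.sin(psi)
    return np.array([
        [cps*cth, -sps*cph + cps*sth*sph,  sps*sph + cps*cph*sth],
        [sps*cth,  cps*cph + sph*sth*sps, -cps*sph + sth*sps*cph],
        [-sth,     cth*sph,                cth*cph]])

phi, theta, psi = 0.3, -0.2, 1.1
R = Rnb(phi, theta, psi)
assert np.allclose(R, Rz(psi) @ Ry(theta) @ Rx(phi))   # matches the definition
assert np.isclose(np.linalg.det(R), 1.0)               # R belongs to SO(3)
assert np.allclose(R @ R.T, np.eye(3))
```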

The vehicle's motion path relative to the inertial earth-fixed frame, Fn, is given by the 6-DOF kinematic equation as follows [14]:

    η̇ = Jb(Θ)ν,    (2.16)


where η and ν are given in (2.1) and (2.3), and Jb(Θ) : R3 −→ R6×6 is the kinematic transformation matrix, or Jacobian matrix, given as

    Jb(Θ) = [ Rnb(Θ)   03×3
               03×3    Tbn(Θ) ]

          = [ cψcθ   −sψcφ+cψsθsφ   sψsφ+cψcφsθ   0    0       0
              sψcθ    cψcφ+sφsθsψ  −cψsφ+sθsψcφ   0    0       0
              −sθ     cθsφ          cθcφ          0    0       0
               0      0             0             1    sφtθ    cφtθ
               0      0             0             0    cφ     −sφ
               0      0             0             0    sφ/cθ   cφ/cθ ],    (2.17)

where cα, sα and tα are short notations for cos(α), sin(α) and tan(α), respectively. Note that the Euler angles introduce a singularity when mapping from the body velocities to the generalized velocities expressed in the earth-fixed frame. From (2.17), it is clear that the matrix is singular for θ = ±π/2.
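The singularity can also be observed numerically: the sketch below (illustrative names, not thesis code) builds the angular-velocity block of (2.17) and shows that its condition number blows up as θ approaches π/2:

```python
import numpy as np

def T_euler(phi, theta):
    """Angular-velocity transformation block of (2.17); singular at theta = ±pi/2."""
    c, s, t = np.cos, np.sin, np.tan
    return np.array([
        [1.0, s(phi) * t(theta),  c(phi) * t(theta)],
        [0.0, c(phi),            -s(phi)],
        [0.0, s(phi) / c(theta),  c(phi) / c(theta)]])

# Far from the singularity the matrix is well conditioned ...
assert np.linalg.cond(T_euler(0.2, 0.3)) < 10.0
# ... but as theta approaches pi/2 the condition number blows up.
assert np.linalg.cond(T_euler(0.2, np.pi / 2 - 1e-6)) > 1e5
```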

2.4.2 Unit Quaternions

An alternative representation that eliminates the Euler angle singularity is the non-minimal unit-quaternion representation [1]. A quaternion is defined as a complex number with one real part qw and three imaginary parts given by the vector qv = [qx qy qz]T. The set Q of unit quaternions is defined as

    Q := {q | qTq = 1, q = [qw qvT]T, qw ∈ R and qv ∈ R3}.

Due to its non-unique representation, linear addition and subtraction are not preferable, as they may fail to preserve the motion space of unit quaternions. Alternatively, the quaternion product, known as the Hamiltonian product, is often used instead.

The product is defined as

    p ⊗ q = [ pwqw − pvTqv
              pwqv + qwpv + [pv]×qv ],    (2.18)

where p, q ∈ Q. Note that the presence of the cross product reveals that the Hamiltonian product is not commutative in the general case. However, the quaternion product is associative and distributive over the sum [40]:


    p ⊗ q ≠ q ⊗ p    (2.19)
    (p ⊗ q) ⊗ r = p ⊗ (q ⊗ r)    (2.20)
    p ⊗ (q + r) = p ⊗ q + p ⊗ r    (2.21)
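A minimal NumPy implementation of the Hamiltonian product (2.18), with `quat_mul` as an illustrative name, verifies the properties (2.19)-(2.21) numerically:

```python
import numpy as np

def quat_mul(p, q):
    """Hamiltonian product (2.18), with quaternions stored as [qw, qx, qy, qz]."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate((
        [pw * qw - pv @ qv],
        pw * qv + qw * pv + np.cross(pv, qv)))

def normalize(q):
    return q / np.linalg.norm(q)

p = normalize(np.array([1.0, 0.5, -0.3, 0.1]))
q = normalize(np.array([0.2, 0.9, 0.4, -0.7]))
r = normalize(np.array([0.7, 0.1, 0.6, 0.2]))

assert not np.allclose(quat_mul(p, q), quat_mul(q, p))     # (2.19) not commutative
assert np.allclose(quat_mul(quat_mul(p, q), r),
                   quat_mul(p, quat_mul(q, r)))            # (2.20) associative
assert np.allclose(quat_mul(p, q + r),
                   quat_mul(p, q) + quat_mul(p, r))        # (2.21) distributive
assert np.isclose(np.linalg.norm(quat_mul(p, q)), 1.0)     # unit norm preserved
```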

Similarly to the Euler angle representation, the kinematic relationship between the velocities expressed in the body-fixed frame and the generalized velocities expressed in the inertial frame is given through the transformation

    η̇ = Jq(q)ν,    (2.22)

where ν is given in (2.3), η now contains the unit quaternion in place of the Euler angles, and Jq(q) : Q −→ R7×6 denotes the 6-DOF transformation matrix for the unit quaternion, with q = [qw qvT]T being the attitude of Fb expressed in Fn. Note that an additional differential equation is needed because of the unity constraint [15]. The transformation matrix is expressed as

    Jq(q) = [ R(q)   03×3
              04×3   T(q) ],    (2.23)

where

    R(q) = I3 + 2qw[qv]× + 2[qv]×²    (2.24)

    T(q) = (1/2) [ −qvT
                   qwI3 + [qv]× ].    (2.25)

The rotation matrix R ∈ SO(3) and its corresponding quaternion q possess some useful properties [40]:

    R([1 0 0 0]T) = I3    (2.26)
    R(−q) = R(q)    (2.27)
    R(q−1) = R(q)T    (2.28)
    R(q1 ⊗ q2) = R(q1)R(q2)    (2.29)

According to Euler's theorem on finite rotations, one rotation about a certain axis is sufficient to describe any arbitrary sequence of rotations in space. Let u = [u1 u2 u3]T ∈ R3 define the unit vector that represents the axis of rotation, and let φ ∈ R denote the angle of rotation. The unit quaternion may now be represented by an exponential map [40]:

    q := Exp(φu) = e^(φu/2) = cos(φ/2) + u sin(φ/2) = [ cos(φ/2)
                                                        u sin(φ/2) ].    (2.30)

This exponential map is useful in many situations and will be used in the derivation of the unit-quaternion error-state Kalman filter.
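The exponential map (2.30) and the rotation (2.24) can be cross-checked against Rodrigues' classical axis-angle formula; the following sketch uses illustrative helper names:

```python
import numpy as np

def skew(v):
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]])

def quat_exp(phi, u):
    """Unit quaternion from axis u and angle phi, cf. (2.30)."""
    return np.concatenate(([np.cos(phi / 2)], np.sin(phi / 2) * u))

def R_of_q(q):
    """Rotation matrix from a unit quaternion, cf. (2.24)."""
    qw, qv = q[0], q[1:]
    S = skew(qv)
    return np.eye(3) + 2.0 * qw * S + 2.0 * S @ S

u = np.array([0.0, 0.0, 1.0])        # rotation axis (z)
phi = np.pi / 2                      # rotation angle
q = quat_exp(phi, u)

# Rodrigues' formula for the same axis-angle rotation.
R_rodrigues = np.eye(3) + np.sin(phi) * skew(u) + (1 - np.cos(phi)) * skew(u) @ skew(u)
assert np.allclose(R_of_q(q), R_rodrigues)

# Property (2.27): q and -q represent the same rotation.
assert np.allclose(R_of_q(-q), R_of_q(q))
```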


Chapter 3

Hardware

This chapter presents the hardware utilized to test the implementations. Each component is presented systematically, beginning with the mathematical modeling, followed by the experimental implementation and, finally, the calibration.

3.1 Inertial Measurement Unit

The inertial measurement unit (IMU) is a key component in most navigation systems [15]. It consists of a three-axis accelerometer and a three-axis gyroscope, sometimes with additional sensors such as a three-axis magnetometer. The working principle of the accelerometer is a mass-spring-damper system mounted along each of the three axes of the sensor frame Fs, i.e., (is, js, ks), which span the three-dimensional Euclidean space R3. The accelerometer measures the force required to accelerate a proof mass, where force is directly related to acceleration through Newton's second law. One important observation is the case where the accelerometer is attached to a non-accelerating body. Here, gravitation will induce a force on the proof mass, which results in a measured acceleration. Therefore, gravity has to be estimated and subtracted from the measurement in order to get the correct acceleration relative to the inertial frame.

3.1.1 Sensor Model

For any sensor application in motion control systems, it is crucial to have an accurate mathematical description of the measurement. Like most models encountered in control theory, the measurement model of the IMU is described using a linear model [13, 15],

    ωsimu(t) = ωst(t) + bsgyro(t) + wsgyro(t)    (3.1a)
    asimu(t) = Rsn(t)(ant(t) − gn) + bsacc(t) + wsacc(t)    (3.1b)
    msmag(t) = Rsn(t)mnt(t) + bsmag + wsmag(t),    (3.1c)

where ωsimu ∈ R3 represents the measured angular body velocity, ωst ∈ R3 is the true angular velocity, asimu ∈ R3 is the measured body acceleration, ant ∈ R3 is the true body acceleration, msmag ∈ R3 is the measured magnetic field strength, mnt is the true magnetic field strength, gn ∈ R3 is the gravitational vector, and Rsn ∈ SO(3) is the orientation of Fn expressed in Fs. Moreover, superscripts s and n denote quantities expressed in the sensor frame Fs and the North-East-Down (NED) frame Fn, respectively. It should be noted that the assumption of a purely deterministic linear sensor model is an oversimplification; thus, it is necessary to include:

• wsgyro, wsacc, wsmag ∈ R3 - measurement noise modelled as Gaussian white noise.

• bsgyro, bsacc, bsmag ∈ R3 - slowly time-varying biases (drift) modelled as Brownian motion, which is produced by integration of white noise, i.e.,

    ḃsgyro = wsb,gyro    (3.2)
    ḃsacc = wsb,acc    (3.3)
    ḃsmag = wsb,mag.    (3.4)

The Gaussian white noise accounts for disturbances in the sensor measurements that are uncorrelated with the measured state. The bias terms, on the other hand, capture any slowly varying or constant offsets in the model.
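The stochastic model (3.1a)-(3.2) can be simulated in a few lines; the sketch below uses illustrative noise levels, not the identified BNO055 parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

dt = 0.01                      # 100 Hz sample time
n = 10_000                     # number of samples (100 s)
sigma_w = 2.5e-4               # white-noise standard deviation [rad/s]
sigma_b = 1.0e-5               # bias random-walk intensity [rad/s]

omega_true = 0.1 * np.ones(n)  # constant true angular rate [rad/s]

# Bias as Brownian motion: integration of white noise, cf. (3.2).
bias = np.cumsum(sigma_b * rng.standard_normal(n) * np.sqrt(dt))

# Gyro measurement: truth + bias + white noise, cf. (3.1a).
omega_meas = omega_true + bias + sigma_w * rng.standard_normal(n)

# With small noise levels, the sample mean stays close to the true rate.
assert abs(omega_meas.mean() - 0.1) < 1e-2
```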

3.1.2 Implementation

The IMU used for the experiments, the BNO055 delivered by Bosch, consists of a three-axis MEMS-based accelerometer, a three-axis magnetometer, a three-axis gyroscope, and an internal temperature sensor. The high-speed ARM Cortex-M0-based processor abstracts the sensor fusion and ensures a high refresh rate and performance. The output from the IMU is the absolute orientation in quaternion or Euler representation, the angular velocity vector, the acceleration vector, the magnetic field strength, the linear acceleration vector, the gravity vector, and the temperature in degrees Celsius.

Another feature of the BNO055 is the possibility of bypassing the ARM-based sensor fusion for better performance, at the cost of losing the built-in calibration software. However, as dead reckoning was not the focus of this thesis, the high-precision alternative was rejected in favor of the sensor fusion and simple calibration.

An open-source driver1 was used for communication between the ROS network and the IMU, ensuring an update rate of 100 Hz.

Table 3.1: IMU specifications.

Parameter                                  Value       Unit
Sensor                                     BNO055
Measurement frequency, accelerometer       100         Hz
Measurement frequency, gyroscope           100         Hz
Measurement frequency, magnetometer        20          Hz
Bandwidth, accelerometer f−3dB             63          Hz
Bandwidth, gyroscope f−3dB                 47          Hz
Alignment error                            < 0.5       deg
Noise density, accelerometer Nacc          1.86·10−3   (m/s2)/√Hz
Noise density, gyroscope Ngyro             2.55·10−4   (rad/s)/√Hz
Noise, magnetometer σmag                   1.0         µT

3.1.3 Calibration

The IMU BNO055 has software that runs the calibration and sensor fusion of the accelerometer, gyroscope, and magnetometer. At startup, the gyroscope calibrates by keeping the vehicle in a single stable position for a few seconds. Moreover, the accelerometer calibrates by placing the vehicle in six different stable positions. The sensor must lie at least once perpendicular to each of the x, y, and z axes, and there must be slow movement between at least two stable positions. Finally, the magnetometer calibration involves inducing random motions, e.g., drawing the number "8" in the air. During the calibration, one can read a status message that rates the calibration from 1 to 3. However, it was observed that, even with a calibration status of 3, the IMU could still perform poorly.

For low-cost MEMS-based sensors, the magnetometer is usually the least reliable sensor. Inaccurate magnetic field measurements are primarily due to magnetic influences from hard-iron and soft-iron effects, in addition to current-induced magnetic fields. These disturbances may lead to poor heading estimates for cheap MEMS-based magnetometers.

1http://wiki.ros.org/ros_imu_bno055

Figure 3.1: Raw IMU measurements. (a) Raw acceleration measurement. (b) Raw angular velocity measurement.

Further, determining the noise characteristics is convenient for the initial Kalman filter tuning. There are various ways to estimate and verify the noise variances, e.g., the Allan variance experiment [36]. However, this study requires several hours of acceleration and angular rate data in order to compute the root mean square (RMS) random drift error as a function of averaging time. Accurate inertial measurements were not crucial for the performance of the implementations in this thesis, as dead reckoning was not one of the objectives. Thus, a simple variance study was performed in order to determine some of the measurement noise variances.

In order to calculate the measurement noise variances, a ROS bag recorded the output data from the accelerometer and gyroscope for a time interval of one hour (see Figure 3.1). The standard deviations of the output measurement data are given in Table 3.2.

Table 3.2: Standard deviation of the IMU measurements.

           x           y           z           Unit
σacc       1.10·10−2   1.72·10−2   1.36·10−2   m/s2
σgyro      6.64·10−4   8.27·10−4   1.00·10−3   rad/s

The datasheet's measure of noise is the noise density, defined as the power of noise per unit of bandwidth. The BNO055 has a bandwidth of 47 Hz for the gyroscope and 63 Hz for the accelerometer, interpreted as the maximum frequency to which the sensor responds. The noise density is defined as

    N = σ / √f−3dB,    (3.5)

where σ and f−3dB are the signal standard deviation and the frequency bandwidth, respectively.
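Applying (3.5) to the values in Tables 3.1 and 3.2 is a one-line computation; the sketch below reproduces the comparison for two representative axes (the helper name `noise_density` is illustrative):

```python
import math

def noise_density(sigma, f_bw):
    """Noise density N = sigma / sqrt(f_-3dB), cf. (3.5)."""
    return sigma / math.sqrt(f_bw)

# Measured standard deviations (Table 3.2) and datasheet bandwidths (Table 3.1).
N_acc_y = noise_density(1.72e-2, 63.0)    # accelerometer y-axis, (m/s^2)/sqrt(Hz)
N_gyro_x = noise_density(6.64e-4, 47.0)   # gyroscope x-axis, (rad/s)/sqrt(Hz)

# Datasheet values for comparison (Table 3.1).
N_acc_datasheet = 1.86e-3
N_gyro_datasheet = 2.55e-4

# The measured accelerometer density exceeds the datasheet figure on this axis,
# while the measured gyroscope density is well below its datasheet value.
assert N_acc_y > N_acc_datasheet
assert N_gyro_x < N_gyro_datasheet
```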

The comparison of the datasheet and measurement noise densities is shown in Table 3.3. It appears that the accelerometer underperforms, while the gyroscope has better performance
