
In document Remote vessel survey using VR (pages 68-72)

Once all features are created, different measures are taken to simplify the process of implementing new scenes. In order for all implemented features to work in a scene, several GameObjects are required. The required GameObjects are:

• XR Rig: The collection of GameObjects receiving user input from controllers and the VR headset. An XR Rig is required in order for a user to move around and interact with the environment.

• Menu Canvas: The canvas holding all UI elements for the menu. This GameObject is required in order for the menu to be displayed in a scene.

• Measurement Canvas: The canvas holding all UI elements for the measurement tool. This GameObject is required in order to display the result of a distance measurement in a scene.

• Database Manager: The Database Manager GameObject has the DatabaseConnector.cs script as a component, which has the VESSEL_ID as a public variable. This GameObject is required for the scene to be able to connect to a database and retrieve information for all Points of Interest (POIs) in the scene, based on the primary key VESSEL_ID.

• Network Voice: The Network Voice GameObject has a Photon Voice Network component. This GameObject is required in order for users to communicate via voice chat in a scene when network functionality is enabled.

• XR Interaction Manager: The XR Interaction Manager has an XR Interaction Manager Unity script as a component. It is required to enable interactions between all interactors and interactable objects in a scene.

• EventSystem: The EventSystem GameObject has an Event System component and a Unity XR UI Input Module attached. It is required in order to manage user input properly in a scene.

• SceneList: The SceneList GameObject has the SceneList.cs script as a component. It is required in order for the current scene to keep track of the statuses and availability of other scenes.

• Options Information: The GameObject that holds information about a user's turning preferences across scenes. The GameObject is destroyed if another instance of the object is already present in a scene.

A GameObject called Essential Assets is created and turned into a prefab in order to simplify the implementation of new scenes. The Essential Assets GameObject has all of the above required GameObjects as child objects. This means that when a new scene is implemented, Essential Assets is the only GameObject required in the scene in order for all functionality to be enabled.
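The persistence behaviour described for the Options Information object follows a common Unity pattern: keep one instance alive across scene loads and destroy any duplicate found in a newly loaded scene. A minimal sketch of that pattern is shown below; the class name mirrors the GameObject described above, but the field name and all implementation details are illustrative assumptions, not taken from the project's source.

```csharp
using UnityEngine;

// Hypothetical sketch of a settings object that survives scene loads
// and removes itself if a copy is already present in the scene.
public class OptionsInformation : MonoBehaviour
{
    private static OptionsInformation instance;

    // Example turning preference carried across scenes (name is illustrative).
    public bool useSnapTurning = true;

    private void Awake()
    {
        if (instance != null && instance != this)
        {
            // Another instance already exists; destroy this duplicate,
            // matching the behaviour described in the text.
            Destroy(gameObject);
            return;
        }

        instance = this;
        DontDestroyOnLoad(gameObject); // persist across scene transitions
    }
}
```

Because the object lives inside the Essential Assets prefab, every scene can safely contain a copy: the first one loaded wins, and later copies delete themselves on Awake.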

All steps required for a new scene's virtual environment to be fully functional and ready to use are listed below:

1. Create a new Unity Scene and import the 3D model generated from a LiDAR scan into it. Add a mesh collider component to the model to make it behave like a physical object, and disable global illumination and shadows on the model's mesh renderer in order to make it computationally cheaper to render.

2. Drag and drop all desired Image Anchors into the scene and add their corresponding 360° images as textures. Remember to rotate each image texture according to the Image Anchor's surroundings to make the transitions smooth.

3. Drag and drop all desired POIs into the scene. Remember to set each POI object's TYPE variable to decide what type of information the individual POI should display.

4. Add an Essential Assets prefab to the scene. Remember to set the VESSEL_ID of the Database Manager to decide which vessel the scene’s POI information should be retrieved from.

These are all the steps needed to set up a new virtual environment. By following these steps when creating a new scene, all features implemented in this chapter are enabled and fully functional in the new environment.
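Step 4 reduces per-scene database configuration to a single Inspector field. A sketch of how the DatabaseConnector component described above might look is given below; only the script name and the public VESSEL_ID variable come from the text, while the retrieval method is a hypothetical placeholder.

```csharp
using UnityEngine;

// Sketch of the per-scene database hook on the Database Manager
// GameObject. Only the script name and the public VESSEL_ID field
// come from the text; the rest is an illustrative assumption.
public class DatabaseConnector : MonoBehaviour
{
    // Set in the Inspector on the Essential Assets prefab instance.
    // Acts as the primary key selecting which vessel's POI records
    // this scene retrieves.
    public string VESSEL_ID;

    private void Start()
    {
        // Hypothetical retrieval step: fetch all POI rows whose
        // primary key matches VESSEL_ID and pass them to the POIs
        // in the scene.
        // FetchPointsOfInterest(VESSEL_ID);
    }
}
```

Keeping VESSEL_ID as a public field means a new scene needs no code changes at all: dropping in the Essential Assets prefab and editing this one value is enough to point the scene at the correct vessel.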

Chapter 5

Results

In this chapter, material gathered with the utilized 360° camera and LiDAR sensor will be presented, and the resulting application implemented in the Unity engine will be shown. A video demonstrating the material gathering process and the developed application's functionality can be seen by following the link https://youtu.be/XkmnMDrFR0s or by clicking on the following hyperlink: YouTube. Hyperlinks to specific parts of the video demonstrating the different results will also be provided throughout the chapter. These hyperlinks start the video from a designated timestamp, and the duration of each segment is mentioned in the text. It is therefore important to take note of this when clicking the hyperlinks, as the specified video segment is important for the topic providing the link.

Since the developed application is such a visual product, a video is thought to be the best way to demonstrate the achieved results. The video demonstrates the general controls of the application and the implemented tools. In addition, it demonstrates the application's functionality by exploring a scan of an apartment and several scans taken aboard a vessel. The video is considered an essential part of the results chapter.

5.1 Modeling

A GoPro Max camera and an iPad Pro 2020 model were used to gather the material the virtual environments were generated from. The equipment has been tested in several conditions and in different environments. As the application is aimed at vessel surveys, parts of a general cargo vessel were scanned and imported into the application. This section first presents general information about the vessel that was scanned in subsection 5.1.1. The general results from the material gathering process are presented in subsection 5.1.2 and subsection 5.1.3.

5.1.1 MS Kristian With

To test the capabilities of the developed application and the scanning methods used, parts of a vessel were scanned and imported into the application. After a meeting with Stener Olav Stenersen at DNV, contact with Egil Ulvan Rederi was established. Egil Ulvan Rederi is a Norwegian shipping company with a total of 7 vessels. One of their vessels is MS Kristian With, a general cargo ship about 86.5 meters long and 12.8 meters wide, as seen from its general arrangement drawings in Figure 5.1. After contacting Egil Ulvan Rederi, the enquiry about boarding Kristian With was accepted. The process of boarding a ship would normally be simple, but the COVID-19 pandemic made this difficult to arrange. Thankfully, permission to board and scan Kristian With was given. Before the vessel was boarded, a scan and a prototype virtual environment of an apartment were created. Scans of Kristian With were performed on the 6th of May 2021 while the vessel was docked in Trondheim.

In a meeting with Per Haveland and Bjarne Augestad (section B.5), both senior surveyors at Gard, the professional surveyors recommended scanning the vessel's bridge, engine room and parts of the deck in order to test the capabilities of the application, as those areas are important for a surveyor to examine.

(a) Tank plan and capacity plan of Kristian With.

(b) General arrangement drawing of Kristian With.

Figure 5.1: General arrangement drawings of MS Kristian With.

The LiDAR scans and 360° images were taken of the vessel and exported to Unity. The vessel could then be explored in the application. A demonstration of the material gathering process, the application's functionality and the vessel scans being explored can be seen in the attached video.

As seen in the video, all implemented objects and tools work as expected, and the overall quality of most environments is satisfactory.

5.1.2 LiDAR

The LiDAR sensor on an iPad Pro 2020 model was utilized to perform scans of different environments. The '3d Scanner App' by Laan Labs was then used to convert the generated point cloud into a textured 3D model, which could be exported to Unity as a .obj file. The same LiDAR sensor is also available in the iPhone 12 Pro series. The captured results from Kristian With can be seen in Figure 5.2, while the scans taken of the apartment with different resolutions can be seen in Figure 5.3. As seen in the figures, ceilings are not present in the resulting LiDAR models. This is because the ceilings were concluded not to be of interest, and they were therefore not scanned.

5.1.3 360° images

The GoPro Max was used to capture 360° images. The captured photos supplement the LiDAR model in the virtual environment and provide a higher level of detail than what is possible with the LiDAR sensor. After experimenting, it was found that the best results are achieved when the camera is held between 160 and 170 cm above floor level. This camera height gives the most realistic switch between the LiDAR scan and the image when a user is standing while using the application. One instance of a 360° image being triggered in the application can be seen in the following part of the video, from 22:24 - 22:41. 360° images captured from different environments can be seen in Figure 5.4.
