
4.4 User interface


The user interface of the application should allow users to navigate between different environments, toggle network functionality and adjust controller options, and should assist new users by displaying controller hints. The UI should have a simple, intuitive layout and be easily accessible.

4.4.1 Menu

To let users of the application switch between different virtual environments and trigger network functionality, a menu system is implemented. The menu is toggled by the press of a button and is displayed in front of the user. The menu also enables users to change parameters of their VR experience in an options tab.

A Canvas Unity object is the area that contains all UI elements. All UI elements must be children of a Canvas object, and the Canvas object decides whether the UI elements are rendered in screen space or world space. In screen space, the UI is placed on the user's screen in front of the rendered scene, meaning the Canvas has a static position in front of the user's field of view. In world space render mode, the Canvas behaves like any other object in the scene and can be placed at any desired location. A canvas object called Menu canvas is created as a parent object for all menu-related buttons and text. When a user presses the menu button, the menu should appear in front of them. Since the menu should not overlay the user's field of view, the render mode of the Canvas is set to world space. To enable the XR Rig's controller rays to interact with the menu, a Graphic Raycaster component is added to the Menu canvas.
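As a minimal sketch of this setup, assuming the script is attached to the Menu canvas object (the class name MenuCanvasSetup is hypothetical), the render mode and raycaster could be configured from code:

    using UnityEngine;
    using UnityEngine.UI;

    // Hypothetical helper mirroring the canvas setup described above.
    public class MenuCanvasSetup : MonoBehaviour
    {
        void Start()
        {
            Canvas canvas = GetComponent<Canvas>();
            // World space makes the canvas behave like any other scene
            // object, so the menu does not overlay the user's view.
            canvas.renderMode = RenderMode.WorldSpace;

            // A Graphic Raycaster lets the controller rays hit the UI.
            if (GetComponent<GraphicRaycaster>() == null)
                gameObject.AddComponent<GraphicRaycaster>();
        }
    }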

Creating an interactable menu

By creating text objects as children of the menu canvas and adding a Button component to them, a simple interactable menu is created. Each menu view is created as a separate child GameObject of the canvas. The text buttons' OnClick() method is used to detect clicks on different menu items.

When a click on a specific menu item is detected, the necessary menu views are toggled to allow menu navigation. A simple UI displaying the application's main menu can be seen in Figure 4.10.
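A minimal sketch of this wiring, with hypothetical view and button names, could look as follows:

    using UnityEngine;
    using UnityEngine.UI;

    // Hypothetical sketch of menu navigation using Button.onClick.
    public class MenuNavigation : MonoBehaviour
    {
        public GameObject mainView;    // child GameObjects of the canvas,
        public GameObject optionsView; // one per menu view
        public Button optionsButton;

        void Start()
        {
            // Unity invokes the listener when the button is clicked.
            optionsButton.onClick.AddListener(OpenOptions);
        }

        void OpenOptions()
        {
            // Toggle the menu views to navigate between them.
            mainView.SetActive(false);
            optionsView.SetActive(true);
        }
    }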

Figure 4.10: The main menu of the application.

Toggle and position the menu

To toggle the Menu canvas object and position it according to the user's position and orientation, a script called MenuHandler.cs is created. The MenuHandler.cs script's Start() method initializes the Menu canvas object by automatically setting its event camera.

The main logic for toggling and positioning the menu canvas is implemented in the MenuHandler.cs script's Update() method. The Update() method continuously listens for the designated input to toggle the menu. When the input is detected, the menu's position is calculated based on the user's position and orientation. When the calculation is complete, the menu is toggled and appears in front of the user. Once the menu is activated, it changes its orientation to always face the user by using the LookAt() Unity function. When the user presses the designated menu input again, the menu is deactivated.
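The following sketch outlines how MenuHandler.cs could implement this behaviour; the field names, the input mapping and the fixed distance in front of the user are assumptions, not the actual thesis code:

    using UnityEngine;

    // Sketch of MenuHandler.cs as described above.
    public class MenuHandler : MonoBehaviour
    {
        public Canvas menuCanvas;
        public Transform head;      // e.g. the XR Rig's camera transform
        public float distance = 2f; // assumed distance in front of the user

        void Start()
        {
            // Initialize the canvas by assigning its event camera.
            menuCanvas.worldCamera = Camera.main;
        }

        void Update()
        {
            // Continuously listen for the designated menu input
            // ("Menu" is a placeholder input mapping).
            if (Input.GetButtonDown("Menu"))
            {
                bool show = !menuCanvas.gameObject.activeSelf;
                if (show)
                {
                    // Position the menu in front of the user based on
                    // the user's position and orientation.
                    Vector3 forward = head.forward;
                    forward.y = 0f;
                    menuCanvas.transform.position =
                        head.position + forward.normalized * distance;
                }
                menuCanvas.gameObject.SetActive(show);
            }

            // Keep the menu facing the user while it is open.
            if (menuCanvas.gameObject.activeSelf)
                menuCanvas.transform.LookAt(head);
        }
    }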

Scene selection

In order to implement scene selection, a GameObject called SceneList is created. The SceneList object has a custom-written component script called SceneList.cs. The SceneList.cs script contains a list of public serializable C# class objects which store information about all available scenes in the application.
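A sketch of how SceneList.cs could be structured, assuming each entry stores a scene name and a preview image (the field names are hypothetical):

    using System;
    using System.Collections.Generic;
    using UnityEngine;

    // Sketch of SceneList.cs; the SceneInfo fields are assumptions.
    [Serializable]
    public class SceneInfo
    {
        public string sceneName;    // name used when loading the scene
        public Sprite previewImage; // image of the scene's environment
    }

    public class SceneList : MonoBehaviour
    {
        // Serializable entries appear in the Inspector, so the list of
        // available scenes can be maintained without code changes.
        public List<SceneInfo> availableScenes = new List<SceneInfo>();
    }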

In the menu view called "Choose Scan", users should be able to select available scenes and view an image of their environment. If a user has selected a scene and wants to switch to it, they can do so by pressing a designated button in the menu view to load the desired scene. A script called AvailableScenesManager.cs is created to generate a button for each scene in the SceneList object and react to a user's button selection. When activated, the scene selection menu view "Choose Scan" runs a function called GenerateMenuIcons() that automatically instantiates a button for each available scene in the SceneList object and displays them in a scrollable list in the menu view, as seen in Figure 4.11. Each instantiated button has a text label containing the name of the scene it was generated from.

Once one of the instantiated scene buttons in the "Choose Scan" menu view is pressed, a function called TaskOnClick() runs. The TaskOnClick() function uses the string text on the pressed button to call a function called PrepareScene() from a script called SceneLoader.cs. In addition to loading scenes, the SceneLoader.cs script is responsible for keeping track of which scene a user has selected in the menu and displaying an image of it to provide user feedback. To do so, the SceneLoader.cs script has a public list containing images of the available scenes. The PrepareScene() function iterates through the list of scene images and displays the image with the same name as the selected scene in an image container next to the scrollable scene list, as seen in Figure 4.11. When a user has selected a scene and presses the switch scene button, a function in SceneLoader.cs runs and loads the selected scene.
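The selection flow in SceneLoader.cs could look roughly as follows; TaskOnClick() on the pressed button would pass the button's text into PrepareScene(). The field names and the name-based image lookup are assumptions:

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.SceneManagement;
    using UnityEngine.UI;

    // Sketch of the selection and loading logic in SceneLoader.cs.
    public class SceneLoader : MonoBehaviour
    {
        public List<Sprite> sceneImages; // images named after their scenes
        public Image previewContainer;   // image container next to the list

        private string selectedScene;

        // Called by TaskOnClick() with the text of the pressed button.
        public void PrepareScene(string sceneName)
        {
            // Remember the selection and show its preview as feedback.
            selectedScene = sceneName;
            foreach (Sprite image in sceneImages)
            {
                if (image.name == sceneName)
                    previewContainer.sprite = image;
            }
        }

        // Called when the user presses the switch scene button.
        public void LoadSelectedScene()
        {
            if (!string.IsNullOrEmpty(selectedScene))
                SceneManager.LoadScene(selectedScene);
        }
    }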

Figure 4.11: The menu view "Choose Scan" displaying available scenes and a photo of the selected choice. Users can choose to switch to a scene in offline or online mode which enables network functionality.

Options

To avoid VR sickness, users of the application should be able to adjust the controller turning type. There are two main ways of turning in VR using the controllers: continuous turn and snap turn. With the continuous turn option, a user rotates continuously with the joystick, while the snap turn option rotates a user by a specified number of degrees at a time. Users have individual preferences regarding which turn option feels best, and using an option that feels uncomfortable can induce VR sickness. The options tab in the created menu should therefore allow users to choose a turning option and adjust the turn rate for both options.

Since the selected option preferences have to be stored between different Unity scenes, an object called Options Information is created. The Options Information object should store all selected settings.

To implement this, a script called OptionsInformation.cs is created. The script stores the turn option and the desired turn rates for the two options, and has set and get functions for the stored variables. When a new Unity scene is loaded, all objects from the previous scene are destroyed. To avoid this, the Options Information object is passed to the DontDestroyOnLoad() function in the Start() method of the OptionsInformation.cs script. This ensures that the Options Information object is not destroyed between scenes, so its information is kept.
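A sketch of OptionsInformation.cs under these assumptions (the variable and function names are hypothetical):

    using UnityEngine;

    // Sketch of OptionsInformation.cs storing the turn settings.
    public class OptionsInformation : MonoBehaviour
    {
        private bool useSnapTurn;
        private float continuousTurnSpeed;
        private float snapTurnAmount;

        void Start()
        {
            // Keep this object, and the settings it stores,
            // alive across scene loads.
            DontDestroyOnLoad(gameObject);
        }

        // Set and get functions for the stored variables.
        public void SetUseSnapTurn(bool value) { useSnapTurn = value; }
        public bool GetUseSnapTurn() { return useSnapTurn; }

        public void SetContinuousTurnSpeed(float value) { continuousTurnSpeed = value; }
        public float GetContinuousTurnSpeed() { return continuousTurnSpeed; }

        public void SetSnapTurnAmount(float value) { snapTurnAmount = value; }
        public float GetSnapTurnAmount() { return snapTurnAmount; }
    }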

To create the menu view that allows users to adjust turning properties, different UI elements are added. A checkbox is used to toggle between the two turn options, the continuous turn rate is adjusted with a slider, and the snap turn angle is adjusted with a drop-down menu of selectable degrees. To react to inputs to these UI elements, a script called OptionsHandler.cs is created. The OptionsHandler.cs script is responsible for detecting changes to the UI elements in the Options menu view, storing them in the Options Information object and applying the changes to the XR Rig. The OptionsHandler.cs script is added as a component to the main menu canvas. In its Start() method, the OptionsHandler.cs script gets access to the XR Rig and its turn providers. It also gets access to the information stored in the OptionsInformation.cs script on the Options Information object and applies the stored values to the XR Rig. To enable toggling between the two turn modes, both turn modes are added as script components to the XR Rig. Switching between them is accomplished by enabling the component selected in the menu view, disabling the other, and setting the turn rate in the active component.

When a UI turn adjustment element detects a change in value, the changed UI element calls a function from the OptionsHandler.cs script which stores the new value in OptionsInformation.cs and applies the changed value to the XR Rig.
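A sketch of how OptionsHandler.cs could toggle the two turn providers, assuming the XR Interaction Toolkit's SnapTurnProviderBase and ContinuousTurnProviderBase components and hypothetical callback names:

    using UnityEngine;
    using UnityEngine.XR.Interaction.Toolkit;

    // Sketch of the turn mode switching in OptionsHandler.cs.
    public class OptionsHandler : MonoBehaviour
    {
        public SnapTurnProviderBase snapTurn;             // on the XR Rig
        public ContinuousTurnProviderBase continuousTurn; // on the XR Rig
        public OptionsInformation options;                // persistent settings

        void Start()
        {
            // Apply the stored settings to the XR Rig on scene load.
            ApplyOptions();
        }

        // Hooked to the checkbox (Toggle) in the options menu view.
        public void OnTurnModeChanged(bool useSnap)
        {
            options.SetUseSnapTurn(useSnap);
            ApplyOptions();
        }

        void ApplyOptions()
        {
            bool useSnap = options.GetUseSnapTurn();
            // Only one turn provider component is enabled at a time.
            snapTurn.enabled = useSnap;
            continuousTurn.enabled = !useSnap;
            snapTurn.turnAmount = options.GetSnapTurnAmount();
            continuousTurn.turnSpeed = options.GetContinuousTurnSpeed();
        }
    }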

If a user presses the "Quit" button in the main menu view, the Unity function Application.Quit() is called, which closes the application.
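The quit behaviour amounts to a one-line handler hooked to the button's OnClick() event (the class name is hypothetical):

    using UnityEngine;

    public class QuitButton : MonoBehaviour
    {
        // Hooked to the "Quit" button's OnClick() event.
        public void QuitApplication()
        {
            // Closes the built application; has no effect in the editor.
            Application.Quit();
        }
    }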

4.4.2 Controller hints

To let users see what the different buttons on a controller do, controller hints are added. First, a GameObject is added to structure the controller hints. It is placed as the first child of the root controller GameObject, so that it inherits the transform of the controller GameObject and thus follows the position and rotation of the controllers. Then a world canvas, scaled down to an appropriate size and containing the text hints, is added. To visualize for the user which button corresponds to the actions written in the text, lines pointing in the direction of each button are added, as can be seen in Figure 4.12a.

(a) Controller hints with text describing what the different buttons do. (b) The local position of one of the edges describing the line projection.

Figure 4.12: Controller hints.

The lines pointing at the individual buttons are added at runtime. To do this, a specified number of points describing the edges of the line projection are added to a Line Renderer. An example of one of these points can be seen in Figure 4.12b. A script called ControllerInformation.cs is created to add the points to the Line Renderer. First, the points indicating the edges of the line are specified and added. Then, the number of points in the Line Renderer is set to the number of points provided. Finally, the positions of the points are set to the local positions of the points provided. By utilizing local positions, the actual position is decided relative to the text and button instead of the global position. Since the local positions are small, the edges are placed with respect to the controllers instead of at the origin of the global scene, as seen in Figure 4.12b.
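A sketch of ControllerInformation.cs following these steps; the source of the edge points is an assumption:

    using UnityEngine;

    // Sketch of ControllerInformation.cs filling the Line Renderer.
    public class ControllerInformation : MonoBehaviour
    {
        public LineRenderer lineRenderer;
        public Transform[] edgePoints; // edges of the line projection

        void Start()
        {
            // Local space makes the line follow the controller instead
            // of being placed at the origin of the global scene.
            lineRenderer.useWorldSpace = false;

            // Set the number of points, then copy in the local positions.
            lineRenderer.positionCount = edgePoints.Length;
            for (int i = 0; i < edgePoints.Length; i++)
                lineRenderer.SetPosition(i, edgePoints[i].localPosition);
        }
    }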
