
This chapter describes the proposed system design, which conforms to the functional and non-functional requirements and is suitable for further testing and development at the UiT/UNN hospital.

7.1. High-level System Design

The following diagram (Figure 7.1) depicts the high-level system design.

Figure 7.1 High-level system design diagram

The system is logically split into two modules, which makes it possible to alter one module without affecting the other. This way, the stabilizer can be switched out without disassembling the whole system or making excessive changes, as long as the replacement conforms to the defined message formats for both video and stabilization data (FREQ#6).

A video provider is any source of video that can be hooked up to the stabilizer: a camera connected to the system, a file residing in the system’s long-term storage, or a video stream from a remote host (FREQ#2).
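To illustrate, the video provider abstraction could be expressed as a small interface. The Python below is a hypothetical sketch (all class and method names are assumptions, not part of the design): a real implementation would wrap a camera driver, a file decoder, or a network stream behind the same interface.

```python
from abc import ABC, abstractmethod
from typing import Iterator, List

# A frame is modeled here as rows of grayscale pixel values, purely for
# illustration; a real implementation would use a proper image type.
Frame = List[List[int]]

class VideoProvider(ABC):
    """Any source of frames: a camera, a file, or a network stream."""

    @abstractmethod
    def frames(self) -> Iterator[Frame]:
        ...

class SyntheticProvider(VideoProvider):
    """Stand-in provider that yields blank frames, e.g. for pipeline tests."""

    def __init__(self, width: int, height: int, count: int) -> None:
        self.width, self.height, self.count = width, height, count

    def frames(self) -> Iterator[Frame]:
        for _ in range(self.count):
            yield [[0] * self.width for _ in range(self.height)]
```

Because the stabilizer only depends on the `VideoProvider` interface, swapping a file source for a live camera requires no changes downstream.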

After being retrieved from the video provider and processed by the stabilizer, a video stream is then transmitted to the GUI module and is shown to the user (FREQ#1).

Stabilization data (i.e. the estimated camera motion) is transmitted in parallel with the video, but in a separate data stream (FREQ#5). Keeping the two streams separate makes it simpler to change either one independently and makes the system more flexible in general.
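One possible shape for such a stabilization message is sketched below in Python. The field names and the JSON encoding are assumptions for illustration, not the system’s actual message standard.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class StabilizationMessage:
    """Hypothetical per-frame camera motion estimate sent alongside the video."""
    frame_id: int   # frame the estimate refers to
    dx: float       # horizontal camera shift, in pixels
    dy: float       # vertical camera shift, in pixels

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @staticmethod
    def from_json(payload: str) -> "StabilizationMessage":
        return StabilizationMessage(**json.loads(payload))
```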

7.2. GUI Module Design

The GUI module will be responsible for providing an interface for the user to see the system’s output and to interact with the system by telestrating (Figure 7.2).


Figure 7.2 GUI module design diagram

The user input is a component provided by the operating system and standard input devices (e.g. mouse, keyboard), or any means of input that can be mapped to the standard input methods.

The GUI module will maintain two concurrent interprocess communication channels. The video input will provide a sequence of images to be displayed on the video layer, the lowest layer in the visual hierarchy.
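The two channels could be drained independently on the GUI side, so that a stall in one stream does not block the other. The Python sketch below is purely illustrative (queue names and the None-sentinel convention are assumptions):

```python
import queue
import threading

# Two independent input streams: one for video frames, one for motion data.
video_in = queue.Queue()
stab_in = queue.Queue()
received = {"frames": [], "motion": []}

def drain(q, key):
    """Consume items from one stream until a None sentinel arrives."""
    while True:
        item = q.get()
        if item is None:
            break
        received[key].append(item)

threads = [
    threading.Thread(target=drain, args=(video_in, "frames"), daemon=True),
    threading.Thread(target=drain, args=(stab_in, "motion"), daemon=True),
]
for t in threads:
    t.start()
```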

Stabilization input will provide information about camera movement, which will be applied to all the telestrations drawn so far, moving them accordingly (FREQ#5). The telestrations will be displayed on top of the video layer.
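Applying the camera motion to the stored telestrations amounts to shifting every stroke by the estimated offset, as in this minimal sketch (the function name and the point-list representation are assumptions):

```python
from typing import List, Tuple

Point = Tuple[float, float]

def shift_telestrations(telestrations: List[List[Point]],
                        dx: float, dy: float) -> List[List[Point]]:
    """Apply the estimated camera motion to every stored stroke so the
    annotations stay anchored to the moving video content."""
    return [[(x + dx, y + dy) for (x, y) in stroke] for stroke in telestrations]
```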

The actual GUI elements will be displayed at the very top of the visual hierarchy to ensure they remain visible to the user at all times. User input, such as dragging with the left mouse button, will be interpreted as telestrations to be placed on the telestrations layer (FREQ#3). The GUI element positioning is shown in Figure 7.3.


Figure 7.3 UI elements positioning

The undo button provides a solution for FREQ#4: once it is pressed, the latest telestration is removed from the telestrations layer.
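Since strokes are kept in drawing order, undo reduces to removing the most recent one. A minimal sketch (class and method names are hypothetical):

```python
from typing import List, Tuple

Stroke = List[Tuple[float, float]]

class TelestrationLayer:
    """Holds strokes in drawing order; undo removes the most recent one."""

    def __init__(self) -> None:
        self.strokes: List[Stroke] = []

    def add(self, stroke: Stroke) -> None:
        self.strokes.append(stroke)

    def undo(self) -> None:
        # Undoing with no strokes left is a harmless no-op.
        if self.strokes:
            self.strokes.pop()
```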

The sandbox will display the current telestration shape, which is a simple freehand line by default. The sandbox also acts as a button: a left click chooses the telestration shape (FREQ#9), and a right click changes the telestration color (FREQ#8).

This way, less screen space is occupied by the UI elements and a greater portion of the underlying surgical video remains visible.

The brush size slider lets the user alter the line width in a user-friendly way that requires no numerical keyboard input and can be controlled with a simple mouse drag gesture (FREQ#7).

The canvas opacity slider is introduced to provide a way to temporarily hide the telestrations or make them transparent to a chosen degree (from 0% to 100%) if the mentor needs to see through them.
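Rendering the telestration layer at a given opacity is a standard alpha blend of the layer over the video. A per-pixel sketch for 8-bit grayscale values (the function name is an assumption; a real renderer would do this per channel, typically on the GPU):

```python
def blend_pixel(video: int, telestration: int, opacity: float) -> int:
    """Alpha-blend one 8-bit telestration pixel over the video pixel.
    opacity is the canvas opacity in [0.0, 1.0] (the slider's 0-100%)."""
    return round(opacity * telestration + (1.0 - opacity) * video)
```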

A close button gives the user the opportunity to stop the system.

7.3. Stabilizer Module Design

The stabilizer module will perform the most computationally heavy tasks in the system. The overall stabilizer design is depicted in Figure 7.4.


Figure 7.4 Stabilizer module design diagram

The stabilizer’s video input is the part of the stabilizer responsible for providing a stream of frames from the desired source. If a mentoring session features previously recorded surgical procedures, a file can be used as the video source. If a live feed from a laparoscopic/endoscopic camera is needed, the video input can instead be configured to use any camera connected to the computer on which the module runs (FREQ#2).

The decompressed/decoded frames are then passed to a video output thread, which uses an interprocess communication method to transmit the video to the GUI module described previously.

In parallel, the decompressed/decoded frames are optionally passed to an image pre-processing module, where certain adjustments can be made (e.g. brightness/contrast adjustment, sharpening). This step is sometimes necessary to enhance image quality and make the subsequent image analysis more effective.
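A brightness/contrast adjustment of this kind is commonly a linear per-pixel transform, g = gain · f + bias, clipped to the 8-bit range. A minimal sketch over a frame represented as rows of grayscale values (the representation and function name are assumptions):

```python
from typing import List

def adjust(frame: List[List[int]], gain: float = 1.0,
           bias: float = 0.0) -> List[List[int]]:
    """Linear brightness/contrast adjustment: g = gain * f + bias,
    clipped to the 8-bit range [0, 255]."""
    return [[min(255, max(0, round(gain * p + bias))) for p in row]
            for row in frame]
```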

Depending on the circumstances of the experiment or mentoring session, either the pre-processed or the raw decompressed frames are then passed to an image analysis module, which uses computer vision techniques to estimate the camera motion coordinates needed for telestration stabilization.

Note that the image analysis node can be replaced by any other node, including one using different computer vision techniques, as long as it yields correct and sufficient stabilization data. This also means that, if necessary, sensors other than the camera can be used for camera motion estimation (FREQ#6).
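This pluggability can be captured as an interface that any motion-estimation node implements. The Python below is an illustrative sketch, not the system’s actual algorithm: the interface and class names are assumptions, and the estimator shown is a deliberately naive brute-force translation search (sum of absolute differences over every candidate shift), which a real node would replace with a proper computer vision method or sensor fusion.

```python
from typing import List, Protocol, Tuple

Frame = List[List[int]]  # grayscale frame as rows of pixel values

class MotionEstimator(Protocol):
    """Any node that maps two consecutive frames to a motion estimate
    can be plugged into the pipeline (FREQ#6)."""
    def estimate(self, prev: Frame, curr: Frame) -> Tuple[int, int]: ...

class ExhaustiveTranslationEstimator:
    """Naive baseline: try every shift within +/-radius pixels and keep
    the one minimizing the sum of absolute differences (SAD)."""

    def __init__(self, radius: int = 2) -> None:
        self.radius = radius

    def estimate(self, prev: Frame, curr: Frame) -> Tuple[int, int]:
        h, w = len(prev), len(prev[0])
        best, best_shift = None, (0, 0)
        for dy in range(-self.radius, self.radius + 1):
            for dx in range(-self.radius, self.radius + 1):
                sad = 0
                # Compare only the region where the shifted frames overlap.
                for y in range(max(0, dy), min(h, h + dy)):
                    for x in range(max(0, dx), min(w, w + dx)):
                        sad += abs(curr[y][x] - prev[y - dy][x - dx])
                if best is None or sad < best:
                    best, best_shift = sad, (dx, dy)
        return best_shift
```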

The estimated camera motion data is then serialized into an agreed-upon format and passed to a thread that performs interprocess communication with the GUI module (FREQ#5).
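As one possible wire format, the motion estimate could be packed into a fixed-size binary record. The layout below (frame id as an unsigned 32-bit integer followed by dx, dy as little-endian 32-bit floats) is an assumption for illustration, not the format the modules actually agree on.

```python
import struct

# Assumed layout: uint32 frame id, then dx, dy as float32, little-endian.
MOTION_FORMAT = "<Iff"

def serialize_motion(frame_id: int, dx: float, dy: float) -> bytes:
    """Pack one motion estimate into a 12-byte record."""
    return struct.pack(MOTION_FORMAT, frame_id, dx, dy)

def deserialize_motion(payload: bytes):
    """Unpack a record back into (frame_id, dx, dy)."""
    return struct.unpack(MOTION_FORMAT, payload)
```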
