
Figure 9.1: Block scheme of 3D CC modeling using Microsoft Kinect. IIR(i, x, y) and IDM(i, x, y) are provided by the Kinect.

9.2 Data Collection and Methods

The CC modeling is performed using a Microsoft Kinect for Xbox One⁸ and reflective markers to track the shoulder, elbow and wrist points in 3D.

The Kinect device uses IR time-of-flight technology to create depth maps where the pixel values correspond to the distance in millimeters to objects visible to the IR sensor. To capture the IR frames and the depth maps from the Kinect, we have used a modified version of the Kin2 Matlab toolbox created by Terven⁹. The modifications include frame rate control and frame capturing.

⁸ https://www.xbox.com/en-US/xbox-one/accessories/kinect
⁹ https://github.com/jrterven/Kin2
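As an illustration, a minimal Matlab sketch of the capture step is given below. The Kin2 method names (Kin2, updateData, getInfrared, getDepth, delete) follow the toolbox's public examples and are assumptions here; our modified version with frame rate control is not reproduced in this section.

    addpath('Kin2');                      % assumed toolbox location
    k2 = Kin2('depth', 'infrared');       % open the depth and IR streams (assumed Kin2 API)

    nFrames = 300;                        % hypothetical capture length
    IIR = cell(1, nFrames);               % IR frames,  I_IR(i, x, y)
    IDM = cell(1, nFrames);               % depth maps, I_DM(i, x, y), in mm

    i = 0;
    while i < nFrames
        if k2.updateData                  % true when a new frame is available (assumed)
            i = i + 1;
            IIR{i} = k2.getInfrared;      % 512x424 IR image
            IDM{i} = k2.getDepth;         % 512x424 depth map [mm]
        end
    end
    k2.delete;                            % release the sensor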

The setup for the experiment, with an additional block scheme for the main tracking algorithm, is shown in Fig. 9.1. From the captured IR frames and depth maps we track the reflective markers and measure the bystander's CC movement in world coord., Xw, Yw and Zw, for different CC depths.

The ground truth data, CCtrue, is collected by performing the CCs on a Resusci Anne manikin.

9.2.1 Distance, Z, from Kinect camera in world coord.

As can be seen in the depth map, IDM(i, x, y), shown in Fig. 9.1, the markers appear as black spots because the highlights either prevent the infrared reflection from returning to the Kinect sensor or cause the sensor to saturate [118]. To obtain the depth map information in the area around each marker, the following steps, shown in Fig. 9.1, are carried out. In step 1, only the information in bright reflective spots is kept in the IR frames, IIR(i, x, y), where i is the frame number and x, y the image coordinates, by thresholding the frames:

\[
I_{IRT}(i, x, y) =
\begin{cases}
1 & \text{if } I_{IR}(i, x, y) > T_m \\
0 & \text{otherwise}
\end{cases}
\qquad (9.1)
\]

Step 2 removes small bright spots caused by noise by discarding all areas with fewer than Tsbs pixels. Containing only information in the reflective markers, the resulting filtered frames, IIRF(i, x, y), are dilated in step 3 with a 5-by-5 matrix of ones, and all pixel values > 0 in the dilated image, IIRD(i, x, y), are set to one. Next, the Hadamard product between IIRD(i, x, y) and the depth maps, IDM(i, x, y), is computed, resulting in frames, IDMF(i, x, y), with depth information only in the area around each marker.
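A minimal Matlab sketch of steps 1-3 and the Hadamard product for a single frame, continuing the capture sketch above and using standard Image Processing Toolbox functions, could look as follows; the threshold values Tm and Tsbs are placeholders, not the values used in the experiments.

    Tm   = 20000;                          % IR intensity threshold (assumed value)
    Tsbs = 10;                             % minimum blob size in pixels (assumed value)

    IIRT = IIR{i} > Tm;                    % step 1: keep only bright reflective spots (Eq. 9.1)
    IIRF = bwareaopen(IIRT, Tsbs);         % step 2: discard small bright spots caused by noise
    IIRD = imdilate(IIRF, ones(5));        % step 3: dilate with a 5-by-5 matrix of ones
    IDMF = double(IIRD) .* double(IDM{i}); % Hadamard product: depth kept only around the markers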

Further, we define index-sets, Am = {x_l^(m), y_l^(m)}, where x_l^(m) and y_l^(m) represent the pixel positions included in the region of each marker, m, where m ∈ 1:6. The Zw position of each marker and frame can then be found by:

\[
Z_w(i, m) = \frac{1}{n_m} \sum_{l \in A_m} I_{DMF}\!\left(i, x_l^{(m)}, y_l^{(m)}\right),
\qquad I_{DMF}\!\left(i, x_l^{(m)}, y_l^{(m)}\right) > 0
\qquad (9.2)
\]

where nm is the number of pixels in the index-set of marker m.

Figure 9.2: Median movement, MQ(p, d, DG, Q2), [mm] as a function of CC depth [mm] for each test person's (TP) CCs.
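Continuing the sketch above, Eq. 9.2 can be evaluated per frame by averaging the non-zero depth values in each marker region; taking the index-sets Am as the connected components of the dilated mask is an assumption about how the regions are enumerated.

    CC   = bwconncomp(IIRD);              % marker regions (expected: 6 components)
    Zw_i = zeros(1, CC.NumObjects);
    for m = 1:CC.NumObjects
        vals = IDMF(CC.PixelIdxList{m});  % depth values at the pixels of marker m
        vals = vals(vals > 0);            % keep only valid (non-zero) depth readings
        Zw_i(m) = mean(vals);             % Z_w(i, m) in millimeters
    end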

9.2.2 X and Y position in world coord.

In step 4, Fig. 9.1, the markers' centroid coordinates, (xc, yc)i,m, are found in IIRF(i, x, y) by:

\[
(x_c, y_c)_{i,m} = \mathrm{cent}\!\left(I_{IRF}(i, A_m) > 0\right)
\qquad (9.3)
\]

and in step 5 we convert these image coordinates to world coordinates. By calibrating the IR camera, the camera matrix, KIR, can be found, and together with a rotation matrix, Rk2w, a translation vector, Tk2w, and the depth information, Zw, from Section 9.2.1, KIR allows us to convert IR image coordinates (x, y) to world coordinates Xw and Yw. By defining the matrix Ck2w = KIR[Rk2w|Tk2w], and choosing the center of the world coordinate system to be the same as the camera coordinate system, the conversion can be written [119]:

\[
\lambda
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
= C_{k2w}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\qquad
K_{IR} =
\begin{bmatrix}
\alpha & 0 & x_0 \\
0 & \beta & y_0 \\
0 & 0 & 1
\end{bmatrix}
\qquad (9.4)
\]

where λ = Zw, α and β are the focal lengths of the camera, and x0 and y0 the principal point offsets in pixels. From Eq. 9.4 we can find the Xw and Yw coordinates for each m and i:

\[
X_w(i, m) = \frac{\left(x_c^{\,i,m} - x_0\right) Z_w(i, m)}{\alpha}
\qquad (9.5)
\]

\[
Y_w(i, m) = \frac{\left(y_c^{\,i,m} - y_0\right) Z_w(i, m)}{\beta}
\qquad (9.6)
\]
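A Matlab sketch of Eqs. 9.3, 9.5 and 9.6 for one frame, continuing the sketches above, is given below; the intrinsics alpha, beta, x0 and y0 are placeholders rather than the calibrated values, and the region ordering is assumed to match the one used for Zw_i.

    alpha = 366.0;  beta = 366.0;               % focal lengths in pixels (assumed values)
    x0 = 256.0;     y0 = 212.0;                 % principal point offsets in pixels (assumed values)

    stats = regionprops(IIRF, 'Centroid');      % step 4: marker centroids (x_c, y_c) in I_IRF
    Xw_i = zeros(1, numel(stats));
    Yw_i = zeros(1, numel(stats));
    for m = 1:numel(stats)                      % assumed to follow the same ordering as Zw_i
        xc = stats(m).Centroid(1);              % image x coordinate [px]
        yc = stats(m).Centroid(2);              % image y coordinate [px]
        Xw_i(m) = (xc - x0) * Zw_i(m) / alpha;  % Eq. 9.5
        Yw_i(m) = (yc - y0) * Zw_i(m) / beta;   % Eq. 9.6
    end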

Further, for each marker, m ∈ 1:6, and direction, d ∈ {Xw, Yw, Zw}, we define position signals in world coordinates, Sm,d(i), as functions of discrete time, i, e.g. S1,Xw(i) = Xw(i, 1).
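As a sketch, these signals can be assembled into one array by applying the tracking steps to every captured frame; trackFrame below is a hypothetical helper that performs steps 1-5 for a single frame.

    S = zeros(nFrames, 6, 3);                             % i x marker x direction (Xw, Yw, Zw)
    for i = 1:nFrames
        [Xw_i, Yw_i, Zw_i] = trackFrame(IIR{i}, IDM{i});  % hypothetical helper (steps 1-5)
        S(i, :, 1) = Xw_i;                                % S_{m,Xw}(i)
        S(i, :, 2) = Yw_i;                                % S_{m,Yw}(i)
        S(i, :, 3) = Zw_i;                                % S_{m,Zw}(i)
    end
    S1_Xw = S(:, 1, 1);                                   % e.g. S_{1,Xw}(i) = Xw(i, 1)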

9.2.3 Movement analysis

From Sm,d(i) and the reference data, CCtrue, we can measure a person's movement as a function of different CC depths, as summarized in Algorithm 2.

The output is the measured bystander movement, MQ(p, d, DG, Q) [mm], where p ∈ {shoulders, elbows, wrists}, the directions are d ∈ {Xw, Yw, Zw}, DG are the depth groups, and Q the quartile measurements, Q1 (25%), Q2 (median) and Q3 (75%). CCs in the CC rate range of 95-125 cpm are measured and sorted by groupCC according to the reference CC depths. The first group covers 0-15 mm, and the following nine groups divide the range 15-60 mm into depth intervals of 5 mm.

Further, we find the motion vector for the median movement in the Yw and Zw directions for the bystander's shoulders. These vectors are used to estimate how the motion would be observed by a smartphone camera placed on the floor next to the patient, see Fig. 9.4, and to investigate whether it is possible to create a conversion model based on the method for CC depth measurement proposed in Meinich-Bache [93], where we measure the movement's motion band size in the image frames. To convert the movement in world coord. to smartphone camera image coord., we use Eq. 9.4 and substitute KIR, Rk2w and Tk2w with the smartphone-to-world matrices KSP, Rsp2w and Tsp2w. Fig. 9.4 shows the rotation and translation, -555 mm in the YSP-direction and 475 mm in the ZSP-direction, between the coord. systems, and we get:

[Rsp2w|Tsp2w] =
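A Matlab sketch of this conversion is shown below. Only the translation, -555 mm in the YSP-direction and 475 mm in the ZSP-direction, is taken from Fig. 9.4; the smartphone intrinsics KSP and the rotation Rsp2w are placeholders, not the calibrated values.

    K_SP   = [1500 0 540; 0 1500 960; 0 0 1];   % assumed smartphone intrinsics [px]
    R_sp2w = [1 0 0; 0 0 -1; 0 1 0];            % assumed rotation between the coord. systems
    T_sp2w = [0; -555; 475];                    % translation [mm], as stated for Fig. 9.4
    C_sp2w = K_SP * [R_sp2w, T_sp2w];           % projection matrix, cf. Eq. 9.4

    Pw = [Xw_i(1); Yw_i(1); Zw_i(1); 1];        % a shoulder position in world coord. (homogeneous)
    p  = C_sp2w * Pw;                           % lambda * [x; y; 1]
    xy_sp = p(1:2) / p(3);                      % smartphone image coordinates [px]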

Algorithm 2 Movement measurement of shoulders, elbows, and wrists for different CC depths.

Input: Sm,d(i), CCtrue
Output: MQ(p, d, DG, Q)

Detecting CCs using the Yw signal of the left wrist:
  [pks, lcs] ← findpeaks(S5,Yw(i))

Measuring movement for each CC, marker and direction:
for m = 1:6 do
  for d = Xw, Yw, Zw do
    o = 1
    for j = length(lcs):-1:2 do
      if 95 [cpm] < CCtrue(j) < 125 [cpm] then
        Mm,d(o) ← |max(Sm,d(lcs(j)) : Sm,d(lcs(j−1))) − min(Sm,d(lcs(j)) : Sm,d(lcs(j−1)))|
        o = o + 1
      end
    end
    Grouping CCs in CC depth groups (DG):
    M(m, d, DG) ← groupCC(Mm,d(o), CCtrue)
  end
end

for shoulders, elbows and wrists in all directions do
  Combining L&R measurements:
  ML&R(p, d, DG) ← [M(Left, d, DG), M(Right, d, DG)]
  Estimating Q1, Q2, Q3:
  MQ(p, d, DG, Q) ← Q(ML&R(p, d, DG))
end
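A Matlab sketch of the core of Algorithm 2 for one marker and one direction is given below, continuing the sketches above; CC_rate is a hypothetical vector holding the reference rate of each detected compression, derived from CCtrue, and prctile (Statistics Toolbox) is used for the quartiles.

    [~, lcs] = findpeaks(S(:, 5, 2));          % CC detection on Yw of the left wrist (marker 5)

    m = 1; d = 3;                              % e.g. marker 1 (a shoulder), Zw direction
    M_md = zeros(1, numel(lcs) - 1);
    o = 0;
    for j = numel(lcs):-1:2
        if CC_rate(j) > 95 && CC_rate(j) < 125 % keep CCs in the 95-125 cpm range
            o = o + 1;
            seg = S(lcs(j-1):lcs(j), m, d);    % samples of one compression cycle
            M_md(o) = abs(max(seg) - min(seg));% peak-to-peak movement [mm]
        end
    end
    M_md = M_md(1:o);                          % keep only the CCs within the rate range

    Q = prctile(M_md, [25 50 75]);             % Q1, Q2 (median), Q3 of the movement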