
(1)

Tutorial: Tensor Approximation in Visualization and Computer Graphics

Tensor Decomposition Models

Renato Pajarola, Susanne K. Suter, and Roland Ruiters

(2)

Data Reduction and Approximation

A fundamental concept of data reduction is to remove redundant and irrelevant information while preserving the relevant features

e.g. through frequency analysis by projection onto pre-defined bases, or extraction of data-intrinsic principal components

identify spatio-temporal and frequency redundancies

maintain the strongest and most significant signal components

Data reduction is linked to the concepts and techniques of data compression and noise reduction, as well as feature extraction and recognition

8

(3)

Data Approximation using SVD

Singular Value Decomposition (SVD): the standard tool for matrices, i.e., 2D input datasets

see also principal component analysis (PCA)

[Figure: economy-size SVD of an M×N matrix A, A = U Σ Vᵀ, where U (M×N) holds the left singular vectors (column space), Σ (N×N) is the diagonal matrix of singular values, and Vᵀ (N×N) holds the right singular vectors (row space).]

9

(4–5)

Low-rank Approximation

Exploit the ordered singular values: s1 ≥ s2 ≥ … ≥ sN

Select the first R singular values (rank reduction)

use only the bases (singular vectors) of the corresponding subspace

[Figure: truncated SVD, A ≈ Ã = U Σ Vᵀ, keeping only the first R columns of U, the leading R×R block of Σ, and the first R rows of Vᵀ.]

10
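As a concrete illustration, a minimal NumPy sketch of this rank-R truncation (the array names A, R and the helper low_rank_approx are illustrative, not from the slides):

```python
import numpy as np

def low_rank_approx(A, R):
    """Rank-R approximation of a matrix A via the truncated SVD
    (the best rank-R fit in the least-squares sense, by the
    Eckart-Young theorem)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # economy-size SVD
    # keep only the R largest singular values and their singular vectors
    return U[:, :R] * s[:R] @ Vt[:R, :]

# example: compress a random 64x32 matrix to rank 8
A = np.random.rand(64, 32)
A_tilde = low_rank_approx(A, 8)
print(np.linalg.matrix_rank(A_tilde))  # -> 8
```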

(6)

Matrix SVD Properties

rank reducibility

orthonormal row/column matrices

[Figure: SVD diagram as before, A (M×N) = U (M×N) · Σ (N×N, diagonal) · Vᵀ (N×N), with orthonormal U and V.]

11

(7)

What is a Tensor?

Data sets are often multidimensional arrays (tensors)

images, image collections, video, volume data etc.

[Figure: tensors of increasing order: a 0th-order tensor is a scalar a, a 1st-order tensor a vector a ∈ ℝ^{I1}, a 2nd-order tensor a matrix A ∈ ℝ^{I1×I2}, and a 3rd-order tensor a volume A ∈ ℝ^{I1×I2×I3}, with indices in = 1, …, In.]

12

(8)

Fibers and Slices

Individual elements of a vector a are given by a_{i1}, of a matrix A by a_{i1,i2}, and of a 3rd-order tensor A by a_{i1,i2,i3}

The generalization of rows and columns (and tubes) is a fiber along a particular mode

Two-dimensional sections of a tensor are called slices: frontal, horizontal, and lateral for a 3rd-order tensor A

13
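For a 3rd-order NumPy array, fibers and slices are plain index expressions; a small sketch (the array A and the fixed indices are illustrative):

```python
import numpy as np

A = np.arange(2 * 3 * 4).reshape(2, 3, 4)  # 3rd-order tensor, I1=2, I2=3, I3=4

# fibers: fix all indices but one
column_fiber = A[:, 1, 2]   # mode-1 fiber (generalizes a column)
row_fiber    = A[0, :, 2]   # mode-2 fiber (generalizes a row)
tube_fiber   = A[0, 1, :]   # mode-3 fiber (a tube)

# slices: fix one index, keep a two-dimensional section
horizontal = A[0, :, :]     # horizontal slice (i1 fixed)
lateral    = A[:, 1, :]     # lateral slice (i2 fixed)
frontal    = A[:, :, 2]     # frontal slice (i3 fixed)
```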

(9)

Unfolding and Ranks

Operations on tensors are often performed as matrix operations on unfolded tensor representations

different tensor unfolding strategies are possible

Forward cyclic unfolding A(n) of a 3rd-order tensor A (or 3D volume)

The n-rank of a tensor is typically defined on an unfolding

n-rank Rn = rankn(A) = rank(A(n))

multilinear rank-(R1, R2, …, RN) of A

[Figure: the three forward cyclic unfoldings of a 3rd-order tensor A: A(1) ∈ ℝ^{I1×(I3·I2)}, A(2) ∈ ℝ^{I2×(I1·I3)}, A(3) ∈ ℝ^{I3×(I2·I1)}.]

14
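A sketch of the forward cyclic unfolding and the resulting n-rank in NumPy (the function unfold is illustrative, and the exact column ordering is an assumption matching the convention above):

```python
import numpy as np

def unfold(A, n):
    """Forward cyclic mode-n unfolding A_(n): mode n becomes the rows,
    and the remaining modes, taken in backward cyclic order, are
    flattened into the columns."""
    N = A.ndim
    rest = [(n - k) % N for k in range(1, N)]  # e.g. n=0, N=3 -> [2, 1]
    return np.transpose(A, [n] + rest).reshape(A.shape[n], -1)

A = np.random.rand(4, 5, 6)
A1 = unfold(A, 0)                # shape (4, 30) = I1 x (I3*I2)
R1 = np.linalg.matrix_rank(A1)   # n-rank R1 = rank(A_(1))
print(A1.shape, R1)
```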

(10)

Rank-one Tensor

An N-mode tensor A ∈ ℝ^{I1×…×IN} that can be expressed as the outer product of N vectors:

A = b(1) ∘ b(2) ∘ ⋯ ∘ b(N)

Kruskal tensor

useful to understand the principles of rank-reduced tensor reconstruction

a linear combination of rank-one tensors

[Figure: a 3rd-order rank-one tensor A ∈ ℝ^{I1×I2×I3} as the outer product b(1) ∘ b(2) ∘ b(3).]

15
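A rank-one 3rd-order tensor is a single outer product, which NumPy expresses directly with einsum (the vector names b1, b2, b3 are illustrative):

```python
import numpy as np

b1, b2, b3 = np.random.rand(4), np.random.rand(5), np.random.rand(6)

# outer product of three vectors: a_ijk = b1_i * b2_j * b3_k
A = np.einsum('i,j,k->ijk', b1, b2, b3)   # shape (4, 5, 6)

# every unfolding of a rank-one tensor has matrix rank 1
print(np.linalg.matrix_rank(A.reshape(4, -1)))  # -> 1
```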

(11)

Tensor Decomposition Models

Tucker

• Three-mode factor analysis (3MFA/Tucker3) [Tucker, 1964+1966]

• Higher-order SVD (HOSVD) [De Lathauwer et al., 2000a]

[Figure: Tucker model: a 3rd-order tensor A ∈ ℝ^{I1×I2×I3} decomposed into a core tensor B ∈ ℝ^{R1×R2×R3} and basis matrices U(1) ∈ ℝ^{I1×R1}, U(2) ∈ ℝ^{I2×R2}, U(3) ∈ ℝ^{I3×R3}.]

CP

• PARAFAC (parallel factors) [Harshman, 1970]

• CANDECOMP (CAND) (canonical decomposition) [Carroll & Chang, 1970]

[Figure: CP model: A decomposed into factor matrices U(1) ∈ ℝ^{I1×R}, U(2) ∈ ℝ^{I2×R}, U(3) ∈ ℝ^{I3×R} and coefficients λ1, …, λR.]

16

(12)

Tucker Model

A higher-order tensor A ∈ ℝ^{I1×…×IN} is represented as the product of a core tensor B ∈ ℝ^{R1×…×RN} and N factor matrices U(n) ∈ ℝ^{In×Rn}, using n-mode products ×n:

A = B ×1 U(1) ×2 U(2) ×3 ⋯ ×N U(N) + e

[Figure: 3rd-order Tucker decomposition: A (I1×I2×I3) represented by the core B (R1×R2×R3) multiplied in each mode by U(1), U(2), U(3), plus an error term e.]

17
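Reconstruction from a Tucker model is a chain of n-mode products, which for a 3rd-order tensor collapses into a single contraction; a minimal sketch under the notation above (B, U1, U2, U3 are illustrative names):

```python
import numpy as np

I1, I2, I3 = 10, 12, 14
R1, R2, R3 = 4, 5, 6

B  = np.random.rand(R1, R2, R3)   # core tensor
U1 = np.random.rand(I1, R1)       # factor (basis) matrices
U2 = np.random.rand(I2, R2)
U3 = np.random.rand(I3, R3)

# A = B x1 U(1) x2 U(2) x3 U(3), all n-mode products in one einsum
A = np.einsum('abc,ia,jb,kc->ijk', B, U1, U2, U3)
print(A.shape)  # (10, 12, 14)
```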

(13)

CANDECOMP-PARAFAC Model

Canonical decomposition / parallel factor analysis model (CP)

A higher-order tensor A is factorized into a sum of rank-one tensors

normalized column vectors u_r(n) define the factor matrices U(n) ∈ ℝ^{In×R}, with weighting factors λr

A = ∑_{r=1}^{R} λr · u_r(1) ∘ u_r(2) ∘ ⋯ ∘ u_r(N) + e

[Figure: 3rd-order CP decomposition: A represented by factor matrices U(1), U(2), U(3) combined through superdiagonal weights λ1, …, λR, plus an error term e.]

18
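The matching CP reconstruction weights each rank-one term by λr and sums over r, again as one einsum (lam, U1, U2, U3 are illustrative names):

```python
import numpy as np

I1, I2, I3, R = 10, 12, 14, 5

lam = np.random.rand(R)        # weights lambda_1 .. lambda_R
U1  = np.random.rand(I1, R)    # factor matrices (columns assumed
U2  = np.random.rand(I2, R)    # normalized in a real CP model)
U3  = np.random.rand(I3, R)

# A = sum_r lambda_r * u_r(1) o u_r(2) o u_r(3)
A = np.einsum('r,ir,jr,kr->ijk', lam, U1, U2, U3)
print(A.shape)  # (10, 12, 14)
```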

(14–15)

Linear Combination of Rank-one Tensors

The CP model is defined as a linear combination of rank-one tensors:

A = ∑_{r=1}^{R} λr · u_r(1) ∘ u_r(2) ∘ ⋯ ∘ u_r(N) + e

[Figure: A = λ1 · u_1(1) ∘ u_1(2) ∘ u_1(3) + λ2 · … + λR · u_R(1) ∘ u_R(2) ∘ u_R(3) + e.]

The Tucker model can likewise be interpreted as a linear combination of rank-one tensors, with one term per core entry:

A = ∑_{r1=1}^{R1} ∑_{r2=1}^{R2} ⋯ ∑_{rN=1}^{RN} b_{r1 r2 … rN} · u_{r1}(1) ∘ u_{r2}(2) ∘ ⋯ ∘ u_{rN}(N) + e

[Figure: A = b_{r1 r2 r3} · u_{r1}(1) ∘ u_{r2}(2) ∘ u_{r3}(3) + … + b_{R1 R2 R3} · u_{R1}(1) ∘ u_{R2}(2) ∘ u_{R3}(3) + e.]

19

(16)

CP a Special Case of Tucker

The CP model corresponds to a Tucker model with equal ranks R1 = R2 = R3 = R and a superdiagonal core tensor B that carries the weights λ1, …, λR (all off-diagonal core entries are zero).

[Figure: side-by-side diagrams: the CP factorization with factor matrices U(1), U(2), U(3) and weights λ1, …, λR, and the equivalent Tucker factorization with core B (R1×R2×R3) and factor matrices U(1), U(2), U(3).]

20
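To make the correspondence concrete, a small sketch that embeds the CP weights into a superdiagonal Tucker core and checks that both reconstructions agree (all names illustrative):

```python
import numpy as np

I1, I2, I3, R = 6, 7, 8, 3
lam = np.random.rand(R)
U1, U2, U3 = (np.random.rand(I, R) for I in (I1, I2, I3))

# CP reconstruction
A_cp = np.einsum('r,ir,jr,kr->ijk', lam, U1, U2, U3)

# equivalent Tucker core: lambdas on the superdiagonal, zeros elsewhere
B = np.zeros((R, R, R))
B[np.arange(R), np.arange(R), np.arange(R)] = lam
A_tucker = np.einsum('abc,ia,jb,kc->ijk', B, U1, U2, U3)

print(np.allclose(A_cp, A_tucker))  # -> True
```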

(17)

Generalizations

Any special form of the core tensor and corresponding factor matrices can be used

e.g. blocks B1, …, BP along the diagonal of the core, each with its own factor matrices U(n)p

[Figure: A = block-diagonal core with blocks B1, …, BP (all other entries zero), combined with factor matrices U(1)1 … U(1)P, U(2)1 … U(2)P, U(3)1 … U(3)P, plus an error term e.]

21

(18)

Reduced Rank Approximation

Full reconstruction with a Tucker or CP model may require excessively many coefficients and wide factor matrices

large rank values R (CP) or R1, R2, …, RN (Tucker)

The quality of the approximation increases with the rank, i.e., with the number of column vectors in the factor matrices

the best possible fit of these basis matrices is discussed later

[Figure: Tucker factors U(1), U(2), U(3) with core B, and CP factors with weights λ1, …, λR.]

22

(19)

Rank-R Approximation

Approximation of a tensor as a linear combination of rank-one tensors, using a limited number R of terms

a CP model of limited rank R:

Ã = ∑_{r=1}^{R} λr · u_r(1) ∘ u_r(2) ∘ ⋯ ∘ u_r(N)

[Figure: Ã reconstructed from R-column factor matrices U(1), U(2), U(3) and weights λ1, …, λR.]

23

(20)

Rank-(R1, R2, …, RN) Approximation

Decomposition into a tensor of reduced, lower multilinear rank-(R1, R2, …, RN)

n-mode products of the factor matrices and the core tensor in a given reduced rank space

a Tucker model with limited ranks Rn:

Ã = B ×1 U(1) ×2 U(2) ×3 ⋯ ×N U(N)

rankn(Ã) = Rn ≤ rankn(A) = rank(A(n))

[Figure: Ã reconstructed from the core B (R1×R2×R3) and factor matrices U(1), U(2), U(3).]

24
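One standard way to compute such a rank-(R1, R2, R3) approximation is the truncated HOSVD: take the leading Rn left singular vectors of each unfolding as U(n) and project A onto them. A sketch reusing the unfold convention from above (hosvd and the variable names are illustrative; this yields a good but generally not optimal fit, see the ALS refinement on the next slide):

```python
import numpy as np

def unfold(A, n):
    N = A.ndim
    rest = [(n - k) % N for k in range(1, N)]
    return np.transpose(A, [n] + rest).reshape(A.shape[n], -1)

def hosvd(A, ranks):
    """Truncated HOSVD: a rank-(R1, R2, R3) Tucker approximation."""
    # leading R_n left singular vectors of each unfolding
    U = [np.linalg.svd(unfold(A, n), full_matrices=False)[0][:, :R]
         for n, R in enumerate(ranks)]
    # core B = A x1 U(1)^T x2 U(2)^T x3 U(3)^T (3rd-order case)
    B = np.einsum('ijk,ia,jb,kc->abc', A, U[0], U[1], U[2])
    return B, U

A = np.random.rand(10, 12, 14)
B, (U1, U2, U3) = hosvd(A, (4, 5, 6))
A_tilde = np.einsum('abc,ia,jb,kc->ijk', B, U1, U2, U3)
print(np.linalg.norm(A - A_tilde) / np.linalg.norm(A))  # relative error
```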

(21)

Best Rank Approximation

Rank-reduced approximation that minimizes the least-squares cost in the Frobenius norm ‖…‖F:

Ã = argmin_Ã ‖A − Ã‖F²

Alternating least squares (ALS): an iterative algorithm that converges to a minimum of the approximation error

rotation of the components in the basis matrices

Ã = B ×1 U(1) ×2 U(2) ×3 U(3)

typical high-quality data reduction: Rk ≤ Ik / 2

[Figure: Ã = B ×1 U(1) ×2 U(2) ×3 U(3), with core B (R1×R2×R3) and factor matrices U(1), U(2), U(3).]

25
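A minimal sketch of the alternating scheme for the Tucker case (the variant often called HOOI; initialization from the truncated HOSVD, a fixed sweep count instead of a convergence test, and all names are illustrative):

```python
import numpy as np

def unfold(A, n):
    N = A.ndim
    rest = [(n - k) % N for k in range(1, N)]
    return np.transpose(A, [n] + rest).reshape(A.shape[n], -1)

def hooi(A, ranks, sweeps=10):
    """ALS-style rank-(R1, R2, R3) Tucker fit for a 3rd-order tensor."""
    # initialize the factor matrices from the truncated HOSVD
    U = [np.linalg.svd(unfold(A, n), full_matrices=False)[0][:, :R]
         for n, R in enumerate(ranks)]
    specs = ['ijk,jb,kc->ibc', 'ijk,ia,kc->jac', 'ijk,ia,jb->kab']
    for _ in range(sweeps):
        for n in range(3):
            # project A onto the other two subspaces, holding them fixed,
            # then refit U[n] from the projected tensor's mode-n unfolding
            others = [U[m] for m in range(3) if m != n]
            Y = np.einsum(specs[n], A, *others)
            U[n] = np.linalg.svd(unfold(Y, 0),
                                 full_matrices=False)[0][:, :ranks[n]]
    B = np.einsum('ijk,ia,jb,kc->abc', A, *U)  # final core tensor
    return B, U
```

Each mode update is optimal with the other factor matrices held fixed, which is what makes every sweep an alternating least-squares step in the Frobenius norm.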

(22)

Generalization of the Matrix SVD

[Figure: the matrix SVD A = U Σ Vᵀ generalizes to higher orders in two ways: the CP model (factor matrices U(1), U(2), U(3) with weights λ1, …, λR plus an error term e) and the Tucker model (core B with factor matrices U(1), U(2), U(3) plus an error term e).]

26
