Deep Hybrid Real and Synthetic Training for Intrinsic Decomposition

Supplementary Results

EGSR 2018

Results

We include the full images for all comparisons in the paper. We place our results between those of the other methods so that comparisons can be made by flipping back and forth between them. Each result page shows the estimated reflectance and shading components.
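The numeric scores below are WHDR values (Weighted Human Disagreement Rate), the standard metric from Bell et al.'s Intrinsic Images in the Wild benchmark: the confidence-weighted fraction of sparse human relative-reflectance judgements ("point 1 darker", "point 2 darker", or "about equal") that a predicted reflectance image contradicts. A minimal sketch, assuming a simple (row, col)-indexed comparison format and the conventional 10% equality threshold; the data layout and names here are illustrative, not the evaluation code used for these figures:

```python
import numpy as np

def whdr(reflectance, comparisons, delta=0.10):
    """Weighted Human Disagreement Rate over sparse pairwise judgements.

    reflectance : 2-D array of predicted reflectance intensities.
    comparisons : iterable of (p1, p2, darker, weight) tuples, where p1/p2
        are (row, col) pixel coordinates, darker is '1', '2', or 'E'
        (point 1 darker, point 2 darker, or roughly equal), and weight
        is the confidence of the human judgement.
    delta : relative threshold below which two points count as equal.
    """
    wrong = 0.0
    total = 0.0
    for p1, p2, darker, weight in comparisons:
        r1 = float(reflectance[p1])
        r2 = float(reflectance[p2])
        # Predicted relation: point 1 is "darker" if point 2 is more than
        # (1 + delta) times brighter, and vice versa; otherwise "equal".
        if r2 / max(r1, 1e-10) > 1.0 + delta:
            pred = '1'
        elif r1 / max(r2, 1e-10) > 1.0 + delta:
            pred = '2'
        else:
            pred = 'E'
        if pred != darker:
            wrong += weight
        total += weight
    return wrong / max(total, 1e-10)
```

Lower is better; a WHDR of 6.61% means the prediction contradicts judgements carrying 6.61% of the total confidence weight.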

Figure 1 (reflectance and shading shown for each method)

Input

Shi et al. [2017], WHDR = 75.70%
Narihira et al. [2015], WHDR = 36.03%
Zhou et al. [2015], WHDR = 11.48%
Nestmeyer et al. [2017], WHDR = 7.35%
Ours, WHDR = 6.61%
Figure 3 (reflectance and shading shown for each variant)

Input

Only synthetic
Our full method
Only real
Figure 6 (reflectance and shading shown for each variant)

Input

Without bilateral
With bilateral
Figure 7 (reflectance and shading shown for each method)

Input

Zhou et al. [2015]
Ours
Figure 9 (reflectance and shading shown for each method)

SOFA

Input
Bi et al. [2015], WHDR = 18.11%
Zhou et al. [2015], WHDR = 11.22%
Narihira et al. [2015], WHDR = 47.98%
Shi et al. [2017], WHDR = 55.88%
Nestmeyer et al. [2017], WHDR = 11.01%
Ours, WHDR = 9.62%
KITCHEN

Input
Bi et al. [2015], WHDR = 6.11%
Zhou et al. [2015], WHDR = 11.27%
Narihira et al. [2015], WHDR = 46.50%
Shi et al. [2017], WHDR = 53.50%
Nestmeyer et al. [2017], WHDR = 7.27%
Ours, WHDR = 4.41%
CUPBOARD

Input
Bi et al. [2015], WHDR = 24.49%
Zhou et al. [2015], WHDR = 23.59%
Narihira et al. [2015], WHDR = 51.23%
Shi et al. [2017], WHDR = 31.79%
Nestmeyer et al. [2017], WHDR = 17.37%
Ours, WHDR = 10.45%
LIVING ROOM

Input
Bi et al. [2015], WHDR = 17.33%
Zhou et al. [2015], WHDR = 37.26%
Narihira et al. [2015], WHDR = 47.57%
Shi et al. [2017], WHDR = 49.70%
Nestmeyer et al. [2017], WHDR = 20.05%
Ours, WHDR = 24.70%
OFFICE

Input
Bi et al. [2015], WHDR = 15.96%
Zhou et al. [2015], WHDR = 17.39%
Narihira et al. [2015], WHDR = 38.69%
Shi et al. [2017], WHDR = 46.60%
Nestmeyer et al. [2017], WHDR = 14.24%
Ours, WHDR = 17.48%
Figure 10 (reflectance and shading shown for each method)

BEDROOM

Input
Zoran et al. [2015], WHDR = 16.10%
Ours, WHDR = 6.94%

LOUNGE

Input
Zoran et al. [2015], WHDR = 28.99%
Ours, WHDR = 24.36%
Figure 11

Input: image 1 and image 2

Reconstruction error, MPRE (×10⁻²), for image 1 / image 2:

Bi et al. [2015]: 2.89 / 5.45
Zhou et al. [2015]: 0.73 / 1.62
Zoran et al. [2015]: 1.19 / 3.04
Nestmeyer et al. [2017]: 0.50 / 1.13
Ours: 0.41 / 0.85

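Figure 11 evaluates how faithfully each decomposition reproduces its input once the predicted reflectance and shading are multiplied back together, reported as MPRE scaled by 10⁻². The exact MPRE definition is given in the paper; purely as an illustrative stand-in, a generic mean per-pixel reconstruction error under the intrinsic model image ≈ reflectance × shading could look like:

```python
import numpy as np

def mean_pixel_reconstruction_error(image, reflectance, shading):
    # Reconstruct the image under the intrinsic model I = R * S and
    # report the mean absolute per-pixel deviation. This is a generic,
    # hypothetical error measure, not the paper's exact MPRE formula.
    reconstruction = reflectance * shading
    return float(np.mean(np.abs(reconstruction - image)))
```

A perfect decomposition of the input reconstructs it exactly and scores zero under any such measure.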
Figure 12

Input
Bi et al. [2015]
Zhou et al. [2015]
Narihira et al. [2015]
Shi et al. [2017]
Nestmeyer et al. [2017]
Ours
Figure 13

Input
Ground truth (reflectance and shading)

si-MSE (×10⁻²), reflectance / shading:

Zoran et al. [2015]: 1.39 / 3.08
Narihira et al. [2015]: 1.05 / 3.10
Nestmeyer et al. [2017]: 1.04 / 3.22
Ours: 0.95 / 2.17

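The si-MSE numbers in Figure 13 are scale-invariant mean squared errors: because an intrinsic decomposition is only recoverable up to a global scale, each prediction is first rescaled by the single factor that best aligns it with the ground truth before the MSE is computed. A minimal sketch using the standard closed-form least-squares scale (this is the conventional formulation, not necessarily the exact evaluation code behind these numbers):

```python
import numpy as np

def si_mse(pred, gt):
    # Scale-invariant MSE: fit the single least-squares scale factor
    # alpha = <pred, gt> / <pred, pred> that best aligns the prediction
    # with the ground truth, then take the ordinary MSE of the rescaled
    # prediction.
    pred = pred.ravel().astype(float)
    gt = gt.ravel().astype(float)
    alpha = pred.dot(gt) / max(pred.dot(pred), 1e-10)
    return float(np.mean((alpha * pred - gt) ** 2))
```

Any prediction that differs from the ground truth only by a global scale scores zero, which is the invariance the metric is designed to provide.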
END
