
Solid Modeling 2004

ACM Symposium on Solid Modeling and Applications

Genoa, Italy June 9–11, 2004

Symposium and Program Co-Chairs: Gershon Elber, Technion, Israel; Nicholas Patrikalakis, MIT, USA; Pere Brunet, UPC, Spain

Proceedings Production Editor: Dieter Fellner, TU Braunschweig

In cooperation with the Eurographics Association and ACM SIGGRAPH


This work is subject to copyright.

All rights reserved, whether the whole or part of the material is concerned, specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machines or similar means, and storage in data banks.

Copyright © 2004 by the Eurographics Association, PO Box 16, CH-1288 Aire-la-Ville, Switzerland.
Published by the Eurographics Association, PO Box 16, CH-1288 Aire-la-Ville, Switzerland.
Printed in Germany.

Cover design by Stefanie Behnke
ISBN 3-905673-55-X

ISSN 1811-7783

The electronic version of the proceedings is available from the Eurographics Digital Library at

http://diglib.eg.org


Table of Contents

Preface . . . 7

Invited Talk 1

Residual Iteration and Accurate Polynomial Evaluation for Shape-Interrogation Applications . . . 9

C. Hoffmann, with G. Park, J.-R. Simard, N. F. Stewart

SESSION 1: Medial Axis Representations

Efficient and Robust Computation of an Approximated Medial Axis . . . 15

Y. Yang, O. Brock, R. N. Moll

Medial Axis Extraction and Shape Manipulation of Solid Objects Using Parabolic PDEs . . . 25

H. Du, H. Qin

Medial-Axis Based Solid Representation . . . 37

A. Shaham, A. Shamir, D. Cohen-Or

Invited Talk 2

From Computer Geometry to Manufacturing Algorithms . . . 45

E. Cohen

SESSION 2: Geological and Volumetric Representations

Multiresolution Heterogeneous Solid Modeling and Visualization Using Trivariate Simplex Splines . . . 47

J. Hua, Y. He, H. Qin

Automatic Building of Structured Geological Models . . . 59

S. Brandel, S. Schneider, M. Perrin, N. Guiard, J. F. Rainaud, P. Lienhardt, Y. Bertrand

Spline Approximation of General Volumetric Data . . . 71

C. Roessl, F. Zeilfelder, G. Nuernberger, H. P. Seidel

SESSION 3: Surface Parametrization and Approximation

Planar Parameterization for Closed Manifold Genus-1 Meshes . . . 83

D. Steiner, A. Fischer

A Condition for Isotopic Approximation . . . 93

F. Chazal, D. Cohen-Steiner

An Effective Condition for Sampling Surfaces with Guarantees . . . 101

J. D. Boissonnat, S. Oudot

SESSION 4: Subdivision Schemes

Optimization Techniques for Approximation with Subdivision Surfaces . . . 113

M. Marinov, L. Kobbelt


A Framework for Multiresolution Adaptive Solid Objects . . . 123

Y.- S. Chang, H. Qin

SESSION 5: Tolerancing and Collision Detection

Tolerance Envelopes of Planar Parametric Part Models . . . 135

Y. Ostrovsky-Berman, L. Joskowicz

Fast Continuous Collision Detection for Articulated Models . . . 145

S. Redon, M. C. Lin, D. Manocha

SESSION 6: Simplicial Geometric Representations

B-rep SE: Simplicially Enhanced Boundary Representation . . . 157

M. Freytag, V. Shapiro

Update Operations on 3D Simplicial Decompositions of Non-manifold Objects . . . 169

L. De Floriani, A. Hui

Invited Talk 3

Efficient Processing of 3D Scanned Models . . . 181

R. Scopigno

SESSION 7: Engineering Drawings and CAD Data

Integration of Parametric and Geometric CAD Data Exchange . . . 183

S. N. Spitz, A. Rappoport

Making the Most of Using Depth Reasoning to Label Line Drawings of Engineering Objects . . . 191

P. A. C. Varley, R. R. Martin, H. Suzuki

SESSION 8: Boolean Operations and Design

Progressive Dimension-Independent Boolean Operations . . . 203

A. Paoluzzi, V. Pascucci, G. Scorzelli

Constraint-based Design of B-spline Surfaces from Curves . . . 213

P. Michalik, B. D. Bruderlin

POSTERS SESSION

Reconstruction with 3D Geometric Bilateral Filter . . . 225

A. Miropolsky, A. Fischer

Developability-preserved Free-form Deformation of Assembled Patches . . . 231

C. C. L. Wang, K. Tang

Implicit Curve and Surface Design Using Smooth Unit Step Functions . . . 237

Q. Li


Stability and Homotopy of a Subset of the Medial Axis . . . 243

F. Chazal, A. Lieutier

Tracing Surface Intersection with a Validated ODE System Solver . . . 249

H. Mukundan, K. H. Ko, T. Maekawa, T. Sakkalis, N. M. Patrikalakis

Topological and Geometric Beautification of Reverse Engineered Geometric Models . . . 255

F. C. Langbein, A. D. Marshall, R. R. Martin, B. I. Mills, C. H. Gao

Connected and Manifold Sierpinski Polyhedra . . . 261

E. Akleman and V. Srinivasan

Physics-based Modelling and Simulation of Functional Cloth for Virtual Prototyping Applications . . . 267

M. Fontana, C. Rizzi, U. Cugini

Image Based Bio-CAD Modeling and Its Applications to Biomedical and Tissue Engineering . . . 273

B. Starly, A. Darling, C. Gomez, J. Nam, W. Sun, A. Shokoufandeh, W. Regli

Shape Similarity Measurement Using Ray Distances for Mass Customization . . . 279

T. J. Hwang, K. Lee, J. H. Jeong, H. Y. Oh

Using Cayley-Menger Determinants . . . 285

D. Michelucci, S. Foufou

History Based Reactive Objects for Immersive CAD . . . 291

T. Convard, P. Bourdot

Shortest Circuits with Given Homotopy in a Constellation . . . 297

D. Michelucci, M. Neveu

Contour Interpolation with Bounded Dihedral Angles . . . 303

S. Bereg, M. Jiang, B. Zhu

Actual Morphing: A Physical-Based Approach for Blending Two 2D/3D Shapes . . . 309

S. M. Hu, C. F. Li, H. Zhang

Euler Operators for Stratified Objects with Incomplete Boundaries . . . 315

A. J. P. Gomes

Handling Degeneracies in Exact Boundary Evaluation . . . 321

K. Ouchi, J. Keyser

3D Discrete Skeleton Generation by Wave Propagation on PR-Octree for Finite Element Mesh Sizing . . . 327

W. R. Quadros, K. Shimada, S. J. Owen

Compression, Segmentation, and Modeling of Filamentary Volumetric Data . . . 333

B. McCormick, B. Busse, P. Doddapaneni, Z. Melek, J. Keyser

Plumber: A Multi-scale Decomposition of 3D Shapes into Tubular Primitives and Bodies . . . 339

M. Mortara, G. Patane, M. Spagnuolo, B. Falcidieno, J. Rossignac

Cover Image Credits . . . 345
Committees . . . 346


Preface

The ACM Symposium on Solid Modeling and Applications is an annual international forum for the exchange of recent research in and applications of spatial modeling and computations in design, analysis and manufacturing, as well as in emerging biomedical, geophysical and other areas. Previous symposia in this series were held in Austin, Texas, 1991; Montreal, Canada, 1993; Salt Lake City, Utah, 1995; Atlanta, Georgia, 1997; Ann Arbor, Michigan, 1999 and 2001; Saarbrücken, Germany, 2002; and Seattle, Washington, 2003. For additional information, visit www.solidmodeling.org, the home page of the Solid Modeling Association that oversees the symposia series.

The Ninth Symposium on Solid Modeling and Applications was held in Genoa, Italy on June 9-11, 2004. This year's conference was held jointly with SMI04, with a one-day shared program. Additional symposium activities included courses and tutorials before and after the plenary session. Sixty papers were reviewed by an international program committee and reviewers from around the world. At least three external reviewers and members of the program committee reviewed each paper. A total of 20 refereed papers were selected for plenary presentation and full-paper publication in the proceedings. In addition, 20 papers were selected for poster presentation and six-page paper publication in the proceedings.

The symposium program also included three keynote presentations invited from leading authorities in industry and academia.

We would like to acknowledge the efforts and contributions of the many people who helped to make this conference a success. Members of the program committee and volunteer reviewers spent scores of hours carefully reading and reviewing the submitted papers. We would especially like to thank the local organizing committee of the joint SMI/SM 04 event, Bianca Falcidieno, Franca Giannini, and Michela Spagnuolo. We also thank all the authors who submitted papers to this conference; their ongoing support made this event possible. Last but not least, we thank the Institute of Applied Mathematics and Information Technology, National Research Council, Genoa, Italy and Eurographics for sponsoring the symposium in cooperation with ACM.

Gershon Elber, Nick Patrikalakis and Pere Brunet

Conference Chairs, Solid Modeling 2004


In cooperation with:

Eurographics Association


G. Elber, N. Patrikalakis, P. Brunet (Editors)

Residual iteration and accurate polynomial evaluation for shape-interrogation applications

C. M. Hoffmann, G. Park, J.-R. Simard§ and N. F. Stewart

Abstract

Surface interrogation and intersection depend crucially on good root-finding algorithms, which in turn depend on accurate polynomial evaluation. Conventional algorithms for evaluation typically encounter difficulties near multiple roots, or roots that are very close, and this may lead to gross errors in the geometric computation, or even catastrophic failure. In this paper we study the cost and accuracy of several approaches to polynomial evaluation, explaining the reasons for non-convergence of certain methods, and supporting our subsequent conclusions with the results of benchmarking experiments.

Categories and Subject Descriptors (according to ACM CCS): G.1 [Numerical Analysis]: I.3.5 [Computational Geometry and Object Modeling]: J.6 [Computer-Aided Engineering]:

1. Introduction

Many important problems in CAD and CAGD reduce directly to solving systems of nonlinear polynomial equations of the form

p(x) = 0,

where p and x are in Euclidean spaces of possibly different dimension. Such problems include the calculation of intersections, distance functions, and curvature extrema [PMS 02].

The usual difficulties of solving nonlinear polynomial equations are exacerbated when dealing with multiple roots. Such cases arise at surface intersections that are tangential or singular. Even the evaluation of polynomials near multiple roots turns out to be of reduced accuracy. Inaccuracies in these fundamental tasks can lead to gross errors in the geometric computation, or even catastrophic failure [Hof 01].

As a part of a project which includes finding more robust surface

Department of Computer Science, Purdue University; partially supported by NSF grants DMS-0138098 and CCR-9902025.

Department of Computer Science, Purdue University, supported by NSF grants DMS-0138098 and CCR-9902025.

§ Département IRO, Université de Montréal, CP 6128, Succ. Centre-Ville, H3C 3J7, Canada.

Département IRO, Université de Montréal, CP 6128, Succ. Centre-Ville, H3C 3J7, Canada. The research of the fourth author was supported in part by a grant from the Natural Sciences and Engineering Research Council of Canada and by NSF grant DMS-0138098.

intersection evaluators [CARGO] we are re-examining these issues from the ground up. Arguing from first principles, it is clear that accurate root finding cannot succeed without accurate polynomial evaluation. Focusing specifically on this subproblem, we study in this paper the cost and feasibility of accurate floating-point evaluation using various approaches that have been proposed before.

Tangential or singular surface intersections arise often by design in fairing operations and in blending operations. Curvature-continuous surface intersections are not uncommon in ship-hull design. In such demanding situations, some authors attempt to increase the certainty of the numerical evaluation of the geometry by employing interval-arithmetic enclosure methods, e.g. [Pat 03].

Here, a common problem near multiple roots is that the enclosure algorithm cannot exclude the presence of a root in many intervals near the true root because the polynomial evaluation is not accurate enough. Thus, accurate polynomial evaluation has significant impact on CAD in a number of critical situations.

CAD systems create complex geometry with tangential and curvature-continuous contacts based on user specifications and proprietary algorithms. As long as the created geometry remains under the control of the CAD system, the difficulties of tangential and near-tangential intersection computations can be controlled by retaining additional information such as the rail curves of blending surfaces. This may motivate a perception that increasing the accuracy of intersection computations is a minor matter in practice.

However, when the geometry is exported to other CAD systems or analysis programs for additional manipulation, then the additional information that simplified difficult intersections for the original CAD system is removed. Thus the problem we sketched has a practical dimension.

Böhm [Boe 83] and others [Ham+95] have described an algorithm for the evaluation of univariate polynomials

p(t) = \sum_{i=0}^{n} p_i t^i,    (1)

as well as [Kra 91] bivariate polynomials

p(x, y) = \sum_{j=0}^{m} \sum_{i=0}^{n} p_{ij} x^i y^j,

where the coefficients p_i or p_{ij} are assumed to be exactly representable in the computer. Kramer's extension can be used for polynomials in more than two variables as well. Although the matrices generated can be large, they are highly structured and do not have to be represented explicitly.

Böhm's algorithm, which will be described in the next section, is based on Horner's method. Part of the algorithm is closely related to the classical method of iterative improvement, which is used to improve computed solutions to systems of linear equations [Wil 65]. The method of iterative improvement generally performs very well: indeed, Wilkinson states [Wil 65], p. 256, that ". . . the performance of the . . . process does not fall far short of that corresponding to exact iteration provided only that inner-products are accumulated [in double precision] in the computation of the residuals." Nevertheless, there are difficulties with the method in the context of certain of the problems considered here. These difficulties can be explained by careful examination of the analysis of iterative improvement given in [Wil 63]. Our purpose is to provide this explanation, and to describe experimental results obtained with the iterative-improvement method and, more generally, with the method of Böhm. Moreover, we contrast the residual-iteration approach with Priest's multi-precision arithmetic approach [Pri 92].

2. Residual iteration in polynomial evaluation

2.1. The method of Böhm for evaluating polynomials

We restrict our attention to the case of the single-variable polynomial, as in equation (1). Rewriting p(t) as in reference [Ham+95], in the Horner nested-multiplication form

p(t) = (\dots(p_n t + p_{n-1}) t + \dots + p_1) t + p_0,

and defining

x_n = p_n,
x_i = x_{i+1} t + p_i,   i = n-1, \dots, 0,

we obtain the linear system

A x = p    (2)

where

A = \begin{pmatrix} 1 & & & \\ -t & 1 & & \\ & \ddots & \ddots & \\ & & -t & 1 \end{pmatrix},
\qquad
x = \begin{pmatrix} x_n \\ x_{n-1} \\ \vdots \\ x_0 \end{pmatrix},
\qquad
p = \begin{pmatrix} p_n \\ p_{n-1} \\ \vdots \\ p_0 \end{pmatrix},

with A of size (n+1) \times (n+1).

The desired solution to the problem of evaluating p(t) can be obtained by taking the last component of x: x_0 = p(t). Indeed, the process of computing an initial approximation x^{(1)} to this linear system by direct forward substitution gives

x^{(1)}_n = p_n,
x^{(1)}_i = x^{(1)}_{i+1} t + p_i,   i = n-1, \dots, 0,

which is exactly Horner's method.
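The correspondence between Horner's method and forward substitution can be written out directly; the following is our own Python sketch (the function name and list layout are not from the paper):

```python
# Horner evaluation of p(t), phrased as forward substitution on the
# lower-bidiagonal system A x = p described above (a sketch, not the
# authors' code). Coefficients are ordered p_n, p_{n-1}, ..., p_0.
def horner_forward_substitution(coeffs, t):
    """Return the full solution vector x = [x_n, ..., x_0]; x[-1] = p(t)."""
    x = [coeffs[0]]                    # x_n = p_n
    for p_i in coeffs[1:]:
        x.append(x[-1] * t + p_i)      # x_i = x_{i+1} * t + p_i
    return x

# Example: p(t) = t^2 - 2t + 1 = (t - 1)^2 at t = 3 gives 4.
print(horner_forward_substitution([1.0, -2.0, 1.0], 3.0)[-1])  # 4.0
```

The full vector x, not just x_0, is returned because the iterative improvement described next needs every component.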

2.2. Iterative improvement

Given the initial approximation x^{(1)} to the solution of (2), the classical method of iterative improvement [Wil 65, Wil 63] can be applied to obtain a sequence of vectors x^{(s)} which converge, under certain conditions, to the exact solution x of (2). Since in our special case the matrix A is lower triangular, the process [Wil 63], p. 121, simplifies: for s ≥ 1, we first compute the residual r^{(s)} from

r^{(s)} = p - A x^{(s)},    (3)

and then the correction x^{(s+1)} - x^{(s)} from

x^{(s+1)} - x^{(s)} = A^{-1} r^{(s)};    (4)

this last is computed by means of the same forward-substitution process that was used above to compute A^{-1} p, with p replaced by r^{(s)}. If guaranteed solutions are required, then the calculation of


the residual in (3), and the correction vector in (4), can be computed in interval arithmetic [Ham+95]. Alternatively, initial iterations x^{(1)}, x^{(2)}, \dots, x^{(s_0)} can be computed using ordinary floating-point arithmetic, with double-precision calculation of residuals, until convergence, followed by a final iteration x^{(s_0+1)}, using interval arithmetic, to obtain an interval bound.

It is essential, however, that the residual in (3) be computed to high accuracy [Wil 63], p. 126. If this is done, then under normal conditions the iterative-improvement procedure provides excellent results [Wil 65], p. 256. On the other hand, there are exceptional cases, one of which arises in the use of this procedure for the evaluation of polynomials with multiple roots.
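To make the role of the high-accuracy residual concrete, one improvement step can be sketched as follows (our own Python construction, not the authors' implementation; exact rationals stand in for the extended-precision residual and correction):

```python
from fractions import Fraction

# One step of iterative improvement (eqs. (3) and (4)) for the
# bidiagonal system A x = p. Vectors are ordered [x_n, ..., x_0].
# A sketch, not the paper's code.
def forward_subst(p, t):
    x = [p[0]]
    for p_i in p[1:]:
        x.append(x[-1] * t + p_i)
    return x

def improve(p, t, x):
    pF = [Fraction(v) for v in p]
    tF = Fraction(t)
    xF = [Fraction(v) for v in x]
    # r = p - A x: r_n = p_n - x_n, and r_i = p_i - x_i + t * x_{i+1}.
    r = [pF[0] - xF[0]]
    for i in range(1, len(p)):
        r.append(pF[i] - xF[i] + tF * xF[i - 1])
    d = forward_subst(r, tF)            # correction A^{-1} r, exactly
    return [float(xi + di) for xi, di in zip(xF, d)]

# (t-1)^4 at t = 1 + 2^-20: plain Horner loses the tiny value
# p(t) = 2^-80; one exact-residual step recovers the correctly
# rounded double.
p, t = [1.0, -4.0, 6.0, -4.0, 1.0], 1.0 + 2.0 ** -20
x = improve(p, t, forward_subst(p, t))
print(x[-1] == 2.0 ** -80)  # True
```

With a fully exact residual and correction, a single step already yields the correctly rounded solution; the failure mode discussed below arises precisely because, in practice, the residual is rounded.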

Consider for example the polynomial

p(t) = \sum_{i=0}^{n} p_i t^i

where

p_i = \binom{n}{i} (-1)^{n-i},   i = 0, \dots, n,

so that

p(t) = \sum_{i=0}^{n} \binom{n}{i} t^i (-1)^{n-i} = (t-1)^n.

We might take n = 12 and consider evaluating the polynomial at t = 1 + 10^{-6}, so that p(t) = 10^{-72}. Since

A^{-1} = \begin{pmatrix} 1 & & & & \\ t & 1 & & & \\ t^2 & t & 1 & & \\ \vdots & \vdots & & \ddots & \\ t^{12} & t^{11} & \cdots & t & 1 \end{pmatrix},

we see that x_0, which is equal to the last component of A^{-1} p, is equal to

\sum_{i=0}^{12} t^i p_i = \sum_{i=0}^{12} t^i \binom{12}{i} (-1)^{12-i} = (t-1)^{12},

which is indeed p(t).

The matrix A has \|A^{-1}\| = \sum_{i=0}^{12} t^i = (t^{13} - 1)/(t - 1) \approx 13, and \|A\| = 2 + 10^{-6}, so that its condition number \|A^{-1}\| \cdot \|A\| is approximately 26. This is well within the bound [Wil 63], (38.1), required for convergence using "block floating" arithmetic. The iterative-improvement procedure fails to converge using floating-point arithmetic, however, even when the residual is computed in double precision [Wil 63], p. 126. (This fact is easily verified empirically by writing a short computer program.)
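The verification suggested in parentheses takes only a few lines; the sketch below is our own (not the authors' program) and runs the iteration with residuals computed in ordinary double precision:

```python
from math import comb

# Empirical check: residual iteration on p(t) = (t-1)^12 at
# t = 1 + 1e-6, with the residual computed in ordinary double
# precision, does not approach the true value p(t) ~ 1e-72.
# A sketch, not the authors' benchmark code.
n, t = 12, 1.0 + 1e-6
p = [comb(n, i) * (-1.0) ** (n - i) for i in range(n, -1, -1)]  # p_n..p_0

x = [p[0]]                             # Horner start x^(1)
for p_i in p[1:]:
    x.append(x[-1] * t + p_i)

for _ in range(50):                    # eqs. (3) and (4), all in doubles
    r = [p[0] - x[0]]
    for i in range(1, n + 1):
        r.append(p[i] - x[i] + t * x[i - 1])
    d = [r[0]]                         # forward-substitute A^{-1} r
    for r_i in r[1:]:
        d.append(d[-1] * t + r_i)
    x = [xi + di for xi, di in zip(x, d)]

print(x[-1])   # stuck at rounding-noise level, nowhere near 1e-72
```

The iterate hovers at the noise level of double-precision residuals instead of descending toward 10^{-72}, matching the failure mechanism explained next.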

The reason for this failure is as follows. Suppose that |x_0^{(s)}| has been reduced to a value below 2^{-53} \approx 10^{-16}; then, using IEEE double-precision floating-point arithmetic [IEEE], the result of calculating

r_0^{(s)} = ((p_0 - x_0^{(s)}) + x_1^{(s)} t)

in double precision gives a result that does not depend on x_0^{(s)}, since p_0 = 1, and subtracting x_0^{(s)} from 1 in double precision produces exactly 1. Since this is the only place at which x_0^{(s)} enters the calculation, it can be seen that once |x_0^{(s)}| is below the threshold mentioned, subsequent iterations do not even depend on x_0^{(s)}.

The convergence analysis given in [Wil 63] is for block-floating arithmetic; floating-point arithmetic is discussed only informally [Wil 63], p. 126. If we try to imitate the analysis, to obtain an analysis for floating-point arithmetic, we encounter a problem at a fairly early stage. Block-floating arithmetic produces an exact representation of the residual r^{(s)} which can be converted to a standardized block-floating vector with small relative error [Wil 63], (38.11), and this is a crucial step in the convergence proof. Although double-precision calculation of the residual will usually produce an estimate with small relative error, this cannot be guaranteed, as the example described above shows. In such cases we are therefore forced to compute the residual using extended precision, as is done for the inner-product operator ⊙ in [Ham+95], and to use this throughout the process of iterative improvement.

2.3. The Inner-Product Operator ⊙

Given floating-point numbers a_k and b_k, k = 1, \dots, n, the operator ⊙ delivers as result a floating-point number \tilde{S} such that there is no representable floating-point number properly between S = \sum_{k=1}^{n} a_k b_k and \tilde{S}. The operation can be implemented in several ways.

Hammer et al. [Ham+95] implement an accumulator that holds, as a fixed-point number, the mantissa of products a_k b_k and their summations. At a size of 544 bytes, the accumulator is large enough to represent all partial sums that do not overflow or underflow with the a_k and b_k in double precision. Thus, an accumulator can be used to compute accurate summations S and extract the leading bits as \tilde{S}. Hence, \tilde{S} is accurate to within less than one unit of least precision (ulp).
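The long-accumulator principle can be emulated compactly in software; the Python sketch below is our own illustration of the idea (not Hammer et al.'s C++ code). Every finite double is an integer multiple of 2^{-1074}, so products of doubles are integer multiples of 2^{-2148} and can be accumulated in an arbitrary-precision integer:

```python
from fractions import Fraction

# Long-accumulator sketch: products of doubles are exact integer
# multiples of 2^-2148, so the whole inner product can be accumulated
# without rounding and rounded only once at the end.
SCALE = 2 ** 2148

def exact_dot(a, b):
    acc = 0
    for a_k, b_k in zip(a, b):
        acc += int(Fraction(a_k) * Fraction(b_k) * SCALE)  # exact term
    return float(Fraction(acc, SCALE))   # single, correctly rounded result

# Catastrophic cancellation that a naive float loop gets wrong:
print(exact_dot([1e100, 1.0, -1e100], [1.0, 1.0, 1.0]))  # 1.0
```

A naive loop computing `1e100 * 1.0 + 1.0 * 1.0 - 1e100 * 1.0` in doubles returns 0.0; the exact accumulator keeps the 1.0.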

Kobbelt [Kob 97] implements an accumulator as an array of floating-point numbers indexed by their exponents. For each exponent value, there is an array element. Kobbelt gives an algorithm for summations using the array with appropriate carry. The final result can be extracted by combining partial sums recursively.

Because Kobbelt's summation delays the construction of the leading bits of \tilde{S}, his implementation does better when large summations are needed with few result extractions. Hammer's implementation is better suited for general-purpose situations.

2.4. Distillation

Studied by Priest [Pri 92], several algorithms have been developed that effectively carry out multiple-precision floating-point computations without explicitly manipulating extended mantissae. These techniques rest on the concepts of compensating summation and distillation. Intuitively, a distillation of a sum

S = \sum_{k=1}^{n} a_k

of n floating-point numbers calculates p floating-point numbers b_k such that

\sum_{k=1}^{n} a_k = \sum_{k=1}^{p} b_k

exactly and the mantissae of the b_k are disjoint, i.e.,

|b_{k+1}| < \mathrm{ulp}(|b_k|).

It is then clear that b_1 is an approximation to S that is accurate to within 1 ulp. By rounding according to the leading bit of b_2, S can be accurate to within 1/2 ulp.

Distillation can be used to implement floating-point summation exactly. Additionally, products of distillations can again be represented exactly as sums of partial products, so implementing floating-point products exactly. This provides a third way for implementing the inner-product operator ⊙.

Since Priest's algorithms can compute multiplication and addition directly, it is also possible to use them to evaluate polynomials directly, without iteration. We include below data to show how efficient the direct evaluation is.

3. Experimental Assessment of Evaluation Methods

We have implemented polynomial evaluation in the following ways:

1. Simple Horner: The polynomial is evaluated using the usual Horner method. This provides us a baseline datum for assessing accuracy and speed.

2. Residual iteration using the Hammer et al. implementation of ⊙.

3. Direct evaluation of the polynomial using distillation.

4. Residual iteration using distillation of sums and products.

All evaluations were done in the fully expanded power-base form; no factorization is considered. The polynomials used to benchmark polynomial evaluation are the following:

p_1(x) = (x-2)^4,   x = 2 + 10^{-8}

p_2(x) = (x-2)^6,   x = 2 + 10^{-8}

p_3(x) = (x-2)^8,   x = 2 + 10^{-8}

p_4(x) = 0.886339x^6 - 0.163394x^5 + 1.38273x^4 + 4.28066x^3 - 0.390834x^2 - 1.38172x - 0.00221897,   x = 0.00221897

p_5(x) = 6.18121x^7 - 4.04466x^6 - 0.353717x^5 + 1.557x^4 + 0.846107x^3 + 4.10773x^2 + 0.307834x + 1.03226,   x = -1.03226

p_6(x) = 1.41423x^8 - 0.70459x^7 - 0.156364x^6 + 3.41544x^5 + 7.70459x^4 - 2.04128x^3 - 0.522173x^2 - 0.529087x - 1.5827,   x = 1.06724

p_7(x) = 0.7934x^9 - 0.257333x^8 - 0.402707x^7 + 1.67564x^6 + 0.442567x^5 + 1.58694x^4 + 0.110229x^3 - 0.777947x^2 - 0.440497x + 1.5179,   x = -0.257333

Polynomials p_1 through p_3 have been chosen to test the evaluation near a multiple root. Despite the high multiplicity, the coefficients of these polynomials have low bit complexity. The remaining four polynomials have simple roots but have coefficients of high bit complexity. They were generated randomly.
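The gap between conventional and exact evaluation for a polynomial like p_3 can be reproduced in a few lines. The Python sketch below is ours, in the spirit of the experiments rather than the benchmark code itself; exact rationals stand in for the verified evaluation:

```python
from fractions import Fraction
from math import comb

# Evaluate p3(x) = (x-2)^8 at x = 2 + 1e-8: plain double-precision
# Horner vs. the exact rational value. A sketch, not the paper's code.
x = 2.0 + 1e-8
coeffs = [comb(8, k) * (-2.0) ** k for k in range(9)]   # p_8 .. p_0

h = 0.0
for c in coeffs:
    h = h * x + c                    # conventional Horner in doubles

exact = (Fraction(x) - 2) ** 8       # true value, about 1e-64
print(abs(h), float(exact))          # Horner noise vs. the tiny true value
```

Horner returns rounding noise many orders of magnitude larger than the true value, which is the behavior quantified in the accuracy summary further below.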

The table shows observed evaluation times for the four methods. All computations were done using 64-bit double-precision arithmetic on a Pentium 4 PC. In each case, the CPU time of a total of 3000 (repeated) evaluations was measured. All times are in milliseconds.

Polyn.      p1     p2     p3     p4     p5     p6     p7
Degree       4      6      8      6      7      8      9
Iterat'ns    3      4      5      1      1      1      1
Horner      63     78     94     78     79     94     94
Hammer     390   1125   1938    328    375    406    453
Direct    1906   5563  10720   7137  10360  13938  17517
Distill   1531  11922  32062    609    672    766    859

Depending on the number of iterations, Hammer's method costs a factor of between 1:4 and 1:20 compared to Horner, with the number of iterations the main cost parameter. For multiple roots, more than one iteration is needed when evaluating in their vicinity. Direct evaluation, using Priest's method, is unattractive, with cost factors as high as 1:180. Here, the bit complexity of the coefficients is the main factor leading to large vectors in the distillations. The cost can be reduced by using residual iteration with sum distillation, yet the computing times observed are always higher than using Hammer's inner-product implementation. Here, the number of iterations has a big impact.

The accuracies that are achieved over Horner's method are summarized as follows:

p1: Horner evaluates to exactly zero. The other evaluation methods obtain the value 10^{-32} (hex 3949 f623 cb12 02b2), with an interval width of 3 ulp(10^{-32}) after 3 iterations.

p2: Horner evaluates 8.53·10^{-14}, whereas the accurate evaluation is 10^{-48}, with an enclosure width of 1 ulp after 4 iterations.

p3: Horner evaluation yields 1.02·10^{-12}, accurate evaluation 10^{-64}, with an enclosure width of 1 ulp.

p4: The Horner value is 2.64044, as is the accurate value. The distance between the two, observing the binary mantissae, is 1 ulp (4005 1f9d 29c7 a5da vs 4005 1f9d 29c7 a5db).

p5–p7: The Horner value differs from the exact value by no more than 2 ulp.

The accuracy of evaluating p_4 through p_7 reflects the fact that the argument and the coefficients do not generate sums with terms of vastly different magnitude. Thus, digit cancellation does not create large errors.

4. Conclusions

Accurate polynomial evaluation is a key prerequisite for good root-finding algorithms which, in turn, are fundamental to surface interrogation and intersection. These operations typically encounter difficulties near multiple roots or near roots that are very close. Conventional evaluation algorithms are then a fundamental obstacle since they are inherently imprecise in such cases. Accurate evaluation is possible, but the experiments show that the algorithms available to us today are expensive. Residual iteration in particular is an algorithm capable of delivering accurate values, but the implementation of the ⊙ operator is expensive.

As we have seen, the difference between the accurately rounded value and the value delivered by Horner can be more than 30 orders of magnitude. In such situations, conventional evaluation delivers no information near the root. This has led to inefficiencies in shape-interrogation algorithms [Pat 03].

Against this backdrop, we have demonstrated that several techniques for accurate evaluation are rather costly and should not be used to simply displace conventional evaluation. Conventional evaluation using classical iterative improvement, conventionally implemented, may lead to non-convergence in cases of importance. These insights provide strong justification for the certificate approach of computational geometry pioneered by Fortune [FVW 93]. Indeed, as shown by evaluations of p_4 through p_7, simple evaluation can be efficient and accurate.

It remains to be seen whether novel hardware algorithms such as the fused floating-point multiply and add (fma) instruction on the Intel Itanium can significantly lower the cost of accurate polynomial evaluation; see also [Nie 03]. The fma instruction computes the expression d = a*b + c with all intermediate bits, and thus can deliver the floating-point value for d to the result precision rather than to the precision of max(|a*b|, |c|). It is conceivable that it allows us to push the boundary of meaningful routine calculation closer to the vicinity of multiple roots.

B-rep solid representations of curved solids contain geometric data that is intrinsically inexact. This raises the question of how useful it is to improve the accuracy of evaluation and intersection computations. It seems to us, in this context, that precise evaluation must be combined with a semantic conceptualization of what the imprecise input data means in a mathematical sense. Clearly, without such a semantics, highly precise evaluation is a luxury with unclear purpose. But conversely, given such a semantics, the inability to evaluate geometry with high precision at reasonable cost may be equally crippling.

The algorithms we have examined evaluate polynomials given in power-base representation. Corresponding techniques that work with the Bernstein-Bézier basis or with a B-spline representation would be desirable.

References

[PMS 02] Patrikalakis, N. M. and Maekawa, T. Shape Interrogation for Computer Aided Design and Manufacturing. Springer, 2002.

[Hof 01] Hoffmann, C. M. Robustness in geometric computations. J. Computing and Information Science in Engineering (1), 143-156, 2001.

[CARGO] CARGO Project "ITANGO", www.cse.uconn.edu/~biscegli/itango/, 2001.

[Pat 03] Patrikalakis, N. M. Shape Interrogation. Keynote Address, Proc. 8th ACM Symposium on Solid Modeling and Applications, 2003.

[Boe 83] Böhm, W. Berechnung von Polynomnullstellen und Auswertung arithmetischer Ausdrücke. PhD Thesis, Universität Karlsruhe, 1983.

[Ham+95] Hammer, R., et al. Chapter 4, "Evaluation of polynomials". In C++ Toolbox for Verified Computing, Springer, 1995.

[Kra 91] Krämer, W. Evaluation of polynomials in several variables with high accuracy. In Computer Arithmetic, Scientific Computation and Mathematical Modelling, E. Kaucher, S. M. Markov and G. Mayer (editors), J. C. Baltzer AG, Scientific Publishing Co., IMACS, 239-249, 1991.

[Wil 65] Wilkinson, J. H. The Algebraic Eigenvalue Problem. Oxford, 1965.

[Wil 63] Wilkinson, J. H. Rounding Errors in Algebraic Processes. Prentice-Hall, 1963.

[IEEE] IEEE Standard for Binary Floating-Point Arithmetic. ANSI/IEEE Std 754-1985, IEEE, New York, 1985.

[Kob 97] Kobbelt, L. Robust and efficient evaluation of functionals on parametric surfaces. 13th ACM Symposium on Computational Geometry, ACM Press, 376-378, 1997.

[Pri 92] Priest, D. On properties of floating point arithmetics: numerical stability and the cost of accurate computations. PhD Thesis, UC Berkeley, 1992.

[FVW 93] Fortune, S. J. and Van Wyk, C. Efficient Exact Arithmetic for Computational Geometry. Proc. Ninth Annual Symposium on Computational Geometry, 163-172, 1993.

[Nie 03] Nievergelt, Y. Scalar fused multiply-add instructions produce floating-point matrix arithmetic provably accurate to the penultimate digit. ACM Trans. on Math. Software (29), No. 1, 27-48, 2003.



ACM Symposium on Solid Modeling and Applications (2004) G. Elber, N. Patrikalakis, P. Brunet (Editors)

Efficient and Robust Computation of an Approximated Medial Axis

Yuandong Yang, Oliver Brock, Robert N. Moll

Laboratory for Perceptual Robotics, Department of Computer Science, University of Massachusetts Amherst. Email: {yuandong, oli, moll}@cs.umass.edu

Abstract

The medial axis can be viewed as a compact representation for an arbitrary model; it is an essential geometric structure in many applications. A number of practical algorithms for its computation have been aimed at speeding up its computation and at addressing its instabilities. In this paper we propose a new algorithm to compute the medial axis with arbitrary precision. It exhibits several desirable properties not previously combined in a practical and efficient algorithm. First, it allows for a trade-off between computation time and accuracy, making it well-suited for applications in which an approximation of the medial axis suffices, but computational efficiency is of particular concern. Second, it is output sensitive: the computational complexity of the algorithm does not depend on the size of the representation of a model, but on the size of the representation of the resulting medial axis. Third, the densities of the approximated medial axis points in different areas are adaptive to local free-space volumes, based on the assumption that a coarser approximation in wide open areas can still satisfy the requirements of the applications. We present theoretical results bounding the error introduced by the approximation process. The algorithm has been implemented, and experimental results are presented that illustrate its computational efficiency and robustness.

Categories and Subject Descriptors (according to ACM CCS): I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling

1. Introduction

The medial axis of a solid D is the locus of points inside D which lie at the centers of all non-intersecting closed discs or balls in D of maximum radius that have at least two contact points with D [5, 28]. Given the medial axis of D and its associated radius function, it is easy to reconstruct the surface of D. Thus the medial axis provides a compact representation of D. The medial axis is helpful in many applications, such as shape analysis [24, 36], robot motion planning [13, 18], image processing [22], computer vision [7, 23], solid modeling [14, 17, 31] and mesh generation [1, 27]. In this paper we present a novel approach to compute an approximated medial axis based on the aforementioned definition.

While various algorithms exist to compute the exact medial axis of simple polyhedra composed of a few hundred triangles [21, 26, 28], it is non-trivial to scale them to more complicated models. This is a consequence of the instability and the algebraic complexity of the medial axis as a mathematical structure. A small change of the surface can cause relatively large changes in the medial axis; furthermore, the algebraic representation of the medial axis is usually of higher degree than the surface of the solid. In order to address the instability problem and reduce the computation time, algorithms to approximate the medial axis have been proposed. These will be discussed in detail in Section 2.

There are a number of applications for which computation time is of critical concern, but an exact representation of the medial axis is not needed. For example, robotic motion planning techniques based on the medial axis have been devised [13, 18]. These methods exploit the fact that the medial axis captures points at a maximal distance from obstacles and use it as a heuristic to find positions of the robot that do not collide with the environment [18]. For this application an approximated medial axis, which can be computed efficiently, is very desirable. Furthermore, in narrow and geometrically complex regions, a more detailed representation of the medial axis is of interest, whereas in wide open regions the danger of collision is low and a coarse representation suffices.

In this paper, we propose a novel algorithm to approximate the medial axis. Motivated by applications such as robot motion planning, the proposed algorithm has the following characteristics:

1. The computation of the approximated medial axis can be performed very efficiently. As shown in Section 7, the approximated medial axis of the model of the Stanford Bunny with 69,451 triangles can be computed in only 5.6 seconds on a standard PC. The approximated medial axis of the Happy Buddha model, consisting of over one million triangles, can be computed in 13.4 seconds. This is much faster than existing algorithms with the same input.


2. The algorithm allows the user to trade off speed of computation for accuracy: the more accurate the approximation, the more computation time is required.

3. The algorithm adapts the density of points used to represent the approximated medial axis based on local properties of the solid. Few points are necessary in open areas, and a denser set of points is used to represent the approximated medial axis in confined areas.

As a consequence, the proposed approach is well suited for applications in which an efficiently computed, approximated medial axis is of value. This has been validated experimentally in the domain of sampling-based motion planning in robotics [34].

2. Related Work

Given the geometric description of a solid, there are various algorithms to compute its medial axis. We classify the approaches found in the literature based on their method of exploring the interior of the solid.

The algorithms in the first category use a tracing approach [21, 26, 28] to compute the medial axis. The algorithm starts from a junction point, which is the point where multiple facets of the medial axis intersect, and traces the seams from the junction point until they meet another junction point or an end point. This procedure is repeated recursively until all parts of the medial axis have been traced. In addition, Culver [9] also uses a spatial subdivision technique to reduce the running time of the algorithm. Choset [8] applies this approach to sensor-based motion planning. His algorithm only uses local distance information gained from the sensors to incrementally build a medial axis representation as the robot explores its environment. The tracing steps are small and computation speed is limited. Holleman [18] uses a similar approach to determine the medial axis, which is then used as a heuristic for sampling free configuration space in the probabilistic roadmap approach to robot motion planning. Computational considerations limit the applicability of these approaches to polyhedra composed of only a few thousand faces.

The algorithms in the second category represent the free space as voxels. Lam [19] gives a survey of thinning algorithms based on voxel representations, and Zhang [35] provides a performance evaluation. The thinning algorithms perform erosion in order to compute an approximated medial axis. Siddiqi [29] assigns each voxel a vector field value corresponding to the vector pointing to the closest point on the surface. A voxel is considered to be on the medial axis if the mean flux of the vector field entering the neighboring voxels is positive. Ragnemalm [25] assigns to each voxel the Euclidean distance to its nearest voxel on the boundary and computes the local directional maxima to determine the approximated medial axis, while Hoff [16] utilizes hardware to compute these distances, thus speeding up the computation. Vleugels [33] divides voxels recursively if they contain a portion of the medial axis until they reach a minimum size. Foskey [11] combines the advantages of [16] and [33], using hardware to compute distances while adaptively and recursively dividing the voxels. This method computes a simplified medial axis [11], which is a subset of the actual medial axis. This new data structure is more stable, but it does not necessarily maintain the connectivity of the medial axis, thereby limiting its applicability.

The algorithms in the third category divide the free space into Voronoi regions based on sample points on the surface of the solid. The resulting Voronoi regions are tiny because a dense sampling of the surface is required for an accurate computation of the medial axis. Both [3] and [10] provide good surveys of the literature on the algorithms in this category. These algorithms are applicable to complicated models if an appropriate set of sample points can easily be determined.

3. Sampling the Approximated Medial Axis

The main idea of our algorithm is to efficiently generate a small set of partially overlapping maximal spheres to cover almost the entire free space within the solid. These spheres are constructed to intersect features of the medial axis. By sampling points on the surface of the spheres and determining their closest feature in the environment, a set of points on the surface of each sphere that are in proximity to the medial axis can be identified. These points serve as an approximation to the medial axis. The union of these points comprises the approximated medial axis, abbreviated as aMA in the remainder of the paper. Because large open areas can be covered by large spheres, the aMA consists of few points in wide open areas and is denser in geometrically complex regions of the solid. The precision with which the aMA approximates the medial axis can be specified as a parameter of the algorithm. This allows users to consciously trade accuracy for computational efficiency.

3.1. Description of the Algorithm

Each point inside the solid has at least one closest feature on the surface of the solid. The direction vector ~v of a point p in the solid D is the unit vector pointing from p to the closest feature on the surface of D. The distance δ(p) associated with a point p in the solid D is the distance from p to the closest feature on the surface of D. Note that points on the medial axis must have at least two direction vectors.

The description of the algorithm relies on two primitive operations. The first identifies an initial point m and associated distance δ(m) such that the resulting sphere of radius δ(m) around m intersects the medial axis. Given a solid D, a set of points P in the interior of D and their direction vectors, the second primitive identifies, for each point p ∈ P, the point on the medial axis of D which is closest to p. These primitives will be described after we have detailed the algorithm for computing the aMA.

Assume point m lies on the medial axis and is distance δ(m) away from the closest obstacle. This point is determined using the first primitive. A priority queue Q is initialized to contain the sphere described by point m and radius δ(m). The set S of spheres describing the free space inside the solid D is initialized to be the empty set.

The largest sphere s is extracted from Q, and a set U of uniformly distributed samples is generated on its surface. Points in U that are contained in one of the spheres in S are discarded. The second aforementioned primitive is used to determine those points in U that lie closest to the medial axis. These points pi are added to the aMA and, along with their distances δ(pi), to the priority queue Q. The sphere s is added to S. To bound the exploration of free space, we introduce an additional requirement for insertion into Q: only those spheres with radii larger than the expansion threshold Ke can be added. We can control computation time and the number of aMA points by changing Ke. These steps are repeated until Q is empty (see also Figure 1).

1. Find point m inside D such that δ(m) > Ke and the medial axis intersects the sphere of radius δ(m) around m (see Section 3.2).
2. Sphere set S := ∅
3. Medial axis point set M := ∅
4. Priority queue Q := {(m, δ(m))}
5. While Q is not empty:
   a. Extract sphere s = (p, δ(p)) from Q.
   b. Generate n uniformly distributed samples U = {u1, ..., un} on the surface of s. Discard all ui ∈ U for which ∃ sj ∈ S such that ui ∈ sj.
   c. Using U and the direction vectors associated with the ui ∈ U, determine approximated medial axis points A = {a1, ..., ak} (see Section 3.3).
   d. Q := Q ∪ {(ai, δ(ai)) | ai ∈ A and δ(ai) > Ke}
   e. M := M ∪ A
   f. S := S ∪ {(p, δ(p))}
6. Connect points in M to generate the aMA (see Section 5).

Figure 1: Pseudocode of the algorithm. S is the set of spheres describing the interior of the solid D, M is the set of points describing the approximated medial axis, and Q is the priority queue of spheres, ordered by radius.
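The expansion loop of Figure 1 can be sketched in code. The following is a minimal illustrative Python sketch, not the authors' implementation: the clearance function δ and the direction vectors are hard-coded for a simple slab solid 0 ≤ z ≤ 2 (whose medial axis is the plane z = 1), the aMA test uses a naive O(n²) neighbor search, and a cap on the number of expanded spheres is added so the sketch terminates, which the paper's algorithm does not need.

```python
import heapq
import numpy as np

# Illustrative solid: the slab 0 <= z <= 2; its medial axis is the plane z = 1.
def delta(p):                      # clearance: distance to the nearest surface
    return min(p[2], 2.0 - p[2])

def direction(p):                  # unit vector toward the closest feature
    return np.array([0.0, 0.0, -1.0 if p[2] <= 1.0 else 1.0])

def ama_points(samples, d_n=0.5, theta_t=np.pi / 3):
    """Midpoints of neighboring samples whose direction vectors span an
    angle larger than theta_t (the separation-angle test of Section 3.3)."""
    out = []
    for i, a in enumerate(samples):
        for b in samples[i + 1:]:
            if np.linalg.norm(a - b) < d_n:
                cos = float(np.clip(direction(a) @ direction(b), -1.0, 1.0))
                if np.arccos(cos) > theta_t:
                    out.append((a + b) / 2.0)
    return out

def approximate_medial_axis(m, K_e=0.2, n_samples=200, max_spheres=10):
    rng = np.random.default_rng(0)
    Q = [(-delta(m), tuple(m))]    # max-heap on radius (heapq negation trick)
    S, M = [], []                  # explored spheres; aMA point set
    while Q and len(S) < max_spheres:
        neg_r, p = heapq.heappop(Q)
        p, r = np.asarray(p), -neg_r
        u = rng.normal(size=(n_samples, 3))           # uniform sphere samples
        u = p + r * u / np.linalg.norm(u, axis=1, keepdims=True)
        u = [x for x in u                             # drop covered samples
             if not any(np.linalg.norm(x - c) < rad for c, rad in S)]
        for a in ama_points(u):
            M.append(a)
            if delta(a) > K_e:                        # expansion threshold
                heapq.heappush(Q, (-delta(a), tuple(a)))
        S.append((p, r))
    return M
```

For the slab, every returned point lies within dn/2 = 0.25 of the true medial plane z = 1, matching the εa = dn/2 bound derived in Section 4.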

Figure 2 illustrates our method in a two-dimensional case. Assume o1 is the first element in Q. A maximal circle centered at o1 is generated, and n samples p1, p2, ..., pn are generated on its circumference (not all samples are shown in the figure). The point pairs (p1, p2), (p3, p4) and (p5, p6) have different direction vectors, and the midpoints of these pairs, q1, q2 and q3, are considered to be on the medial axis; they are added to the aMA and to the queue. Since q3 has the largest radius, it is expanded next and the procedure repeats.

Figure 2: An illustration of the algorithm. The dashed lines represent the medial axis of the rectangle.

We now discuss the two primitives used in the description of the algorithm.

3.2. Identifying the Initial Approximated Medial Axis Point

In the description of the algorithm it was assumed that the queue Q is initialized with a point m on the medial axis of the solid D. We use an expansion algorithm similar to the one described above to find m. Starting from a random point p inside D, we generate the maximal sphere with its center at p. If we cannot find a medial axis point on the surface of the sphere (how medial axis points are identified is described in Section 3.3), we take the point p′ with the biggest radius as the center of the next sphere. This process is repeated until the sphere of radius δ(p′) around p′ intersects the medial axis. Since this procedure converges towards a sphere of locally maximum radius, its center converges towards a point p′ on the medial axis and the surrounding sphere thus must intersect the medial axis.
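This hill-climbing search can be sketched as follows. This is an illustrative sketch under stated assumptions: the clearance function δ and the medial axis detector are supplied by the caller (the detector applies the Section 3.3 criterion to samples on a sphere), and the hard iteration limit is our addition, not part of the paper's primitive.

```python
import numpy as np

def find_initial_point(p, delta, detect_ma_points, n_samples=64, max_iter=50):
    """Section 3.2 primitive (sketch): climb toward larger clearance until
    the maximal sphere around the current point intersects the medial axis.
    `detect_ma_points` is a hypothetical callable that applies the
    separation-angle test to sphere samples and returns detected points."""
    rng = np.random.default_rng(1)
    for _ in range(max_iter):
        r = delta(p)
        u = rng.normal(size=(n_samples, 3))
        samples = p + r * u / np.linalg.norm(u, axis=1, keepdims=True)
        if detect_ma_points(samples):
            return p          # the sphere around p intersects the medial axis
        p = max(samples, key=delta)   # move to the sample with most clearance
    return p
```

With a clearance function for the slab 0 ≤ z ≤ 2 and a detector that looks for neighboring samples straddling the medial plane, the search climbs from a point near the floor toward z = 1 and stops once the maximal sphere reaches the plane.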

3.3. Identifying Approximated Medial Axis Points

Given a set of uniformly distributed sample points U on the surface of a sphere, we apply the separation angle criterion [3, 4, 6, 11] to determine the set containing points of the aMA. If the direction vectors of two adjacent sample points span an angle larger than a threshold θt, we take the samples' midpoint as an aMA point. In Section 4 we will bound the error of this approximation.

The separation angle criterion can erroneously place a point on the medial axis (see Figure 3 for an example) [11]. If a reflex vertex on the boundary is the nearest neighbor to both sample points, both direction vectors will point toward that vertex. To identify these cases, we apply the divergence criterion: the direction vectors must point away from each other.

Figure 3: False positives for the separation angle criterion.

If a sphere intersects only one facet of the aMA, there will be two sets of sample points, each set with a different direction vector, based on classification by the angle criterion. In this case we simply insert the center of the sphere into the aMA; the samples on the surface are superfluous. If a sphere intersects multiple facets of the medial axis, however, we identify adjacent samples with three or more distinct direction vectors and add their midpoint to the aMA.

These points are called critical points; they designate an edge or a vertex between multiple facets of the aMA. Critical points can be used to approximate the hierarchical generalized Voronoi graph [8].
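For a single pair of samples, the two tests might be combined as follows. This is an illustrative sketch: the particular dot-product inequality used to express "point away from each other" is our assumption, not a formula from the paper.

```python
import numpy as np

def is_medial_pair(p1, v1, p2, v2, theta_t=np.pi / 3):
    """Separation-angle test plus divergence check for two neighboring
    samples p1, p2 with unit direction vectors v1, v2 (Section 3.3)."""
    cos = float(np.clip(v1 @ v2, -1.0, 1.0))
    if np.arccos(cos) <= theta_t:
        return False               # separation angle too small
    w = p2 - p1
    # Divergence: v1 must point away from p2 and v2 away from p1.
    # If both vectors aim at a shared reflex vertex (Figure 3), v1 points
    # toward p2's side instead and the pair is rejected.
    return bool(v1 @ w <= 0.0) and bool(v2 @ w >= 0.0)

# True medial pair: samples on opposite sides of a medial plane z = 1,
# direction vectors pointing to opposite walls.
up = np.array([0.0, 0.0, 1.0])
print(is_medial_pair(np.array([0.0, 0.0, 0.9]), -up,
                     np.array([0.0, 0.0, 1.1]), up))        # True

# False positive: both vectors point at a reflex vertex at the origin.
v1 = np.array([0.2, -0.1, 0.0]); v1 /= np.linalg.norm(v1)
v2 = np.array([-0.2, -0.1, 0.0]); v2 /= np.linalg.norm(v2)
print(is_medial_pair(np.array([-0.2, 0.1, 0.0]), v1,
                     np.array([0.2, 0.1, 0.0]), v2))        # False
```

In the false-positive case the two direction vectors span a wide angle, so the separation-angle test alone would accept the pair; the divergence check rejects it because v1 points toward p2's side of the segment.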

3.4. Increasing the Accuracy of the Approximation

Our basic algorithm does not sample the interior of the spheres. By restricting sampling to the surface of the sphere, important features of the aMA might be missed. We provide an optional refinement algorithm which can locate aMA points with a maximal number of adjacent aMA facets on the inside of a sphere. Starting from one aMA point, we use a tracing algorithm similar to the one proposed in [18] to look for a point where several aMA facets touch. Our method differs from [18] in that it performs the tracing in 3D rather than in a plane. The method uniformly samples n points nearby and selects a point with equally many or more adjacent aMA facets as the next tracing point. The refinement algorithm stops if the tracing point is outside of the sphere or all points nearby have fewer medial axis facets crossing. If the refinement point is close to another critical point, we discard it and stop the algorithm. The aMA points found in this manner more accurately describe the characteristics of the solid. This refinement considerably increases the computation time of the overall algorithm.

3.5. Discussion

In Section 2, we differentiated three categories of approaches to medial axis computation. These methods were classified according to their methods of free space exploration. All of them generate the (approximated) medial axis either during or after the exploration process. The algorithm proposed here can also be regarded as a tracing approach: it traces the medial axis by sampling on the spheres during expansion. However, the algorithm differs importantly from previous approaches: during the tracing process, the step size is adjusted based on the local geometrical properties of the free space. By construction, the centers of the spheres lie close to the medial axis, and thus free space is explored with near-optimal step sizes. To ensure this property, all possible tracing directions are explored at every step. The near-optimal step size is the main reason for the computational efficiency of the proposed approach.

4. Approximation Error

In this section we bound the error made by the proposed method of approximating the medial axis of a solid. We differentiate quantitative errors, which result from the finite sampling density of our algorithm, and qualitative errors. The latter are a consequence of the sphere expansion algorithm used to explore the free space inside the solid.

When considering the quantitative error, two cases have to be distinguished. If the sampling density on the surface is kept constant, larger spheres will be covered by a larger number of samples than smaller spheres. In this case we compute the absolute error as the distance of a point on the aMA to the closest point on the true medial axis. If the number of samples is kept constant, on the other hand, the sampling density is lower on large spheres, which means that fewer samples are computed in wide open areas. In other words, the computational expense is proportional to the difficulty of the region – a desirable property for an approximation algorithm. For this case we will consider a relative error, which relates the absolute error to the amount of local free space.

4.1. Sample Point Accuracy

Let M be the set of approximated medial axis points for a given solid D attained by our algorithm. For each point pi in M, there exists a medial axis point ti which is closest to pi. The absolute and relative errors for the sample points M of the approximated medial axis, relative to the true medial axis T, are

• Absolute error εa: max_{pi ∈ M} |pi − ti|
• Relative error εr: εa / δ(ti)

Given a set of uniform samples on a unit sphere (circle), we define two points as neighbors of each other if the distance between them is smaller than the neighbor threshold dn. If the closest features of two neighboring points p1 and p2 are different, there is a point between them which is closest to both of those features and thus lies on the medial axis. We will consider the midpoint m of the line segment between p1 and p2 as a point on the approximated medial axis. Consequently, the maximum distance between points of the approximated medial axis and the real medial axis is given by εa = dn/2. This equation holds both in 2D for circles and in 3D for spheres. The resolution of the algorithm is then defined by rmin·dn/2, where rmin is the radius of the smallest sphere generated by the algorithm described in Section 3.

It is obvious that there is a relation between the number of samples placed on a sphere and the quality of the approximated medial axis. In the following sections we establish how many samples are needed in order to achieve a desired absolute or relative error in 2D or 3D.

4.1.1. 2D

It is relatively easy to compute the number N of samples needed for a given absolute error εa. During the exploration of free space, the algorithm uniformly samples N points on a circle (refer to Figure 2). The angle Δθ between two neighboring sample points is 2π/N, and the distance between them is given by

|p1p2| = 2r sin(Δθ/2) = 2r sin(π/N).

Let dn equal |p1p2|. The absolute error is then given by:

εa = dn/2 = r sin(π/N).

Consequently, given an absolute error εa, the number of samples needed is given as a function of the radius r of the sphere:

N = π / arcsin(εa/r)    (1)

To determine the relative error εr, we have to estimate the radius of a nearby point m on the true medial axis. In Figure 2, assume the closest medial axis point to q1 is m. The r in equation (1) is the distance δ(q1) and not δ(m). Although m and δ(m) are unknown, according to the definition of the medial axis and the triangle inequality, we can estimate δ(q1) ≤ δ(m) < δ(q1) + εa under the condition that m has the same direction vectors as q1. Thus

εa / (δ(q1) + εa) < εa / δ(m) ≤ εa / δ(q1).

If r ≤ δ(q1), then εa/δ(m) ≤ εa/δ(q1) ≤ εa/r, and we can select N based on εa/r:

N = π / arcsin(εr)

Should r ≥ δ(q1), we determine N based on the following equation:

N = π / arcsin(εr·δ(q1)/r)

Given the number N of desired sample points pi and the origin o of a sphere with radius r, it is easy to determine their positions:

pi = o + r·(cos(2πi/N), sin(2πi/N))ᵀ    for i = 1, ..., N.
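Equation (1) and the sample placement formula can be checked numerically. A small sketch (the function names are ours):

```python
import math

def samples_for_absolute_error(r, eps_a):
    """Equation (1): N = pi / arcsin(eps_a / r), rounded up."""
    return math.ceil(math.pi / math.asin(eps_a / r))

def circle_samples(o, r, N):
    """p_i = o + r (cos 2*pi*i/N, sin 2*pi*i/N) for i = 1..N."""
    return [(o[0] + r * math.cos(2 * math.pi * i / N),
             o[1] + r * math.sin(2 * math.pi * i / N)) for i in range(1, N + 1)]

r, eps_a = 1.0, 0.1
N = samples_for_absolute_error(r, eps_a)
pts = circle_samples((0.0, 0.0), r, N)
half_chord = math.dist(pts[0], pts[1]) / 2   # d_n / 2 = r sin(pi / N)
print(N, half_chord <= eps_a)                # 32 True
```

For r = 1 and εa = 0.1, 32 samples suffice, and the half-chord between adjacent samples, sin(π/32) ≈ 0.098, indeed stays below the requested absolute error.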

4.1.2. 3D

Before we discuss the accuracy of the algorithm applied to a three-dimensional solid, we introduce the sphere covering problem [15, 30]: How can n points be placed on a unit sphere so as to minimize the maximum distance of any point on the sphere to the other n − 1 points? Alternatively, the problem can be defined in
