
Fractal arrays: design and analysis


Contents

1 Introduction

2 Arrays and beamforming
   2.1 Coordinate Systems
   2.2 Wave Equation
      2.2.1 Solutions to the wave equation
   2.3 Sampling in the Spatial and Time Domains
   2.4 Beamforming
      2.4.1 Far field and near field
      2.4.2 Narrowband and Broadband Beamformers
      2.4.3 Delay and sum beamforming
      2.4.4 The array pattern
      2.4.5 The angular array pattern
      2.4.6 The beampattern
      2.4.7 The mainlobe and sidelobes
   2.5 Arrays
      2.5.1 Linear arrays
      2.5.2 Planar arrays
      2.5.3 Array thinning

3 Fractals
   3.1 Fractals
      3.1.1 Examples of fractal geometry
      3.1.2 Self-similarity
      3.1.3 Fractal dimension
   3.2 Fractal applications

4 Fractal linear arrays
   4.1 Cantor linear array
      4.1.1 Cantor linear array construction
      4.1.2 Cantor array factor
   4.2 The binomial array
   4.3 Near binomial array
   4.4 The uniform array
   4.5 Triadic Cantor array
   4.6 Summary
   4.7 Koch array
      4.7.1 Koch-pattern construction algorithm
      4.7.2 Array pattern synthesis and corresponding current distribution
      4.7.3 Fractal radiation patterns
      4.7.4 Simplification and reduction of the Koch array
   4.8 The Blackman-Koch array
   4.9 Koch-Kaiser array
   4.10 Summary

5 Fractal planar arrays
   5.1 Fractal planar arrays construction
      5.1.1 The concentric circular ring subarray generator
   5.2 Sierpinski arrays
      5.2.1 Sierpinski carpet array
      5.2.2 Sierpinski gasket array
   5.3 Triangular fractal array
      5.3.1 Array construction
      5.3.2 Fractal triangular array factor
   5.4 Summary

6 Fractal antennas and arrays applications
   6.1 Antennas

7 Conclusion


Chapter 1 Introduction

Fractal geometries have found an intricate place in science as a representation of some of the unique geometrical features occurring in nature. Fractal geometry was introduced by Benoit Mandelbrot as a way to mathematically define structures whose dimension cannot be limited to whole numbers. These geometries have been used to characterize structures in nature that were difficult to define with Euclidean geometry, such as the length of a coastline, the density of clouds, and the branching of trees. Just as nature is not confined to Euclidean geometries, array and antenna designs need not be confined to them either.

In addition to having non-integer dimension, fractals usually exhibit some form of self-similarity, which means that they are composed of multiple copies of themselves at several scales. These properties can be used to develop new configurations for antenna arrays; it may even be possible to discover structures that give better performance than any Euclidean geometry could provide.

This thesis combines overlapping elements of array theory, fractal geometry, and numerical calculations; its goal is to design and investigate arrays using fractal geometries. Fractals, as used in this work, are structures of infinite complexity with a self-similar nature: as the structure is zoomed in upon, it repeats itself. This property is used to design arrays that can operate at several frequencies. Another aspect of using fractals as arrays is that the fractional dimension of their structures may lead to the discovery of arrays with improved characteristics. This property is used to design low-sidelobe arrays.

In this thesis, various fractal geometries are generated and studied as linear and planar arrays. There is an infinite number of possible geometries available to try; the ones tried here provide a broad overview of several different classes and their various characteristics. The fractals that have been investigated are all deterministic.

This thesis contains seven chapters. Here is an overview of the rest of the thesis:

Chapter 2: This chapter describes the principles of array theory and presents the basic concepts governing array beamforming. We begin by representing a space-time signal in a coordinate system. We then consider the equation governing wave propagation, establish solutions to the wave equation, and use these to find a representation for beampatterns. The principles of linear and planar arrays are also discussed.

Chapter 3: This chapter introduces the basic concepts of fractals and the related geometric concepts. Several examples of fractal geometry are shown. Finally, we briefly review some applications of fractals and how they can be applied to antennas and arrays.

Chapter 4: In this chapter two types of deterministic fractal linear arrays known as the Cantor linear array and the Koch array are generated and analyzed.

Chapter 5: In this chapter special types of fractal planar arrays are designed and their radiation characteristics are investigated.

Chapter 6: In this chapter we briefly review antenna theory related to antenna size and discuss how antennas benefit from the size reduction offered by fractal design. Applications of fractal antennas and arrays are presented, and some practical examples of fractal antennas are shown.

Chapter 7: This chapter gives the final conclusion.


Chapter 2

Arrays and beamforming

This chapter describes the principles of array theory and presents the basic concepts governing array beamforming. We begin by representing a space-time signal in a coordinate system. We then consider the equation governing wave propagation, establish solutions to the wave equation, and use these to find a representation for beampatterns. The principles of linear and planar arrays will also be discussed.

2.1 Coordinate Systems

In most situations, a three-dimensional Cartesian coordinate system is used to represent space, with time as the fourth dimension. A space-time signal is then written as s(x, y, z, t), with x, y, and z being the three spatial variables in a right-handed orthogonal coordinate system as shown in Figure 2.1. We use the position vector x to denote the triple of spatial variables (x, y, z); using this notation we can write a space-time signal as s(x, t). Other coordinate systems may be defined. For certain problems, spherical coordinates represent space most appropriately. Here a point in space is represented by its distance r from the origin, its elevation θ from the vertical axis, and its azimuth φ within an equatorial plane containing the origin (Figure 2.1), and a space-time signal is written as s(r, θ, φ, t).

The relation between the spherical coordinates (r, θ, φ) and the right-handed Cartesian coordinates (x, y, z) is given by the simple trigonometric formulas

r = \sqrt{x^2 + y^2 + z^2}, \qquad \theta = \cos^{-1}\!\left(\frac{z}{\sqrt{x^2 + y^2 + z^2}}\right), \qquad \phi = \sin^{-1}\!\left(\frac{y}{\sqrt{x^2 + y^2}}\right)

x = r \sin\theta \cos\phi, \qquad y = r \sin\theta \sin\phi, \qquad z = r \cos\theta   (2.1)
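As a quick numerical check of Equation (2.1), the following sketch (Python/NumPy; the function names are illustrative and not taken from the thesis) converts between the two coordinate systems and verifies that a round trip reproduces the original point.

```python
import numpy as np

def cart2sph(x, y, z):
    """Cartesian (x, y, z) -> spherical (r, theta, phi), theta measured from the z axis."""
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(z / r)     # elevation from the vertical axis
    phi = np.arctan2(y, x)       # azimuth; equal to sin^-1(y / sqrt(x^2 + y^2)) with quadrant handling
    return r, theta, phi

def sph2cart(r, theta, phi):
    """Spherical (r, theta, phi) -> Cartesian (x, y, z), as in Equation (2.1)."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z

# Round-trip check for an arbitrary point
p = (1.0, -2.0, 0.5)
print(np.allclose(sph2cart(*cart2sph(*p)), p))   # True
```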

There are various other coordinate systems, such as cylindrical and elliptical coordinate systems, which may be convenient choices under different circumstances.

However, the spherical coordinate system has a special place in this thesis since much of the theory used is heavily dependent on it.


Figure 2.1: Spherical coordinate system. The angle φ is known as the azimuth and θ the elevation.

2.2 Wave Equation

Background

The classical wave equation is one of the fundamental equations in array signal processing. It governs how signals pass from a source radiating energy to a sensor.

Array processing algorithms attempt to extract information such as source location from propagating waves. To do this, they rely on an accurate characterization, through the wave equation, of how the medium affects propagation for a given source-medium-sensor situation. Thus, array processing or beamforming algorithms are characterized by the wave equation at the physical level.

The physics of sound propagation is described by the scalar wave equation

\nabla^2 s = \frac{1}{c^2} \frac{\partial^2 s}{\partial t^2}   (2.2)

where \nabla^2 is the Laplacian, which can be expressed in a chosen coordinate system, c is the speed of wave propagation, and s \equiv s(\vec{x}, t) is the scalar wave field (such as sound pressure) at a point \vec{x} and time t.

2.2.1 Solutions to the wave equation

Equation (2.2) can be expressed in Cartesian coordinates as

\frac{\partial^2 s}{\partial x^2} + \frac{\partial^2 s}{\partial y^2} + \frac{\partial^2 s}{\partial z^2} = \frac{1}{c^2} \frac{\partial^2 s}{\partial t^2}   (2.3)

Equation (2.3) has many solutions. It is assumed that the wavefield s(\vec{x}, t) has a time-harmonic dependence [2]. When solved in Cartesian coordinates, the harmonic solution can be written in the form

s(\vec{x}, t) = A e^{j(\omega t - \vec{k} \cdot \vec{x})}   (2.4)

where A is a complex constant, \vec{k} = (k_x, k_y, k_z) is the wavenumber vector, and \omega is the radian frequency. Substituting Equation (2.4) into the wave equation (2.3), we obtain

|\vec{k}|^2 = \frac{\omega^2}{c^2}   (2.5)

As long as this constraint is satisfied, signals with the form of Equation (2.4) satisfy the wave equation.

The solution given by Equation (2.4) may be interpreted as a monochromatic plane wave. Monochromatic means one color; here it refers to a wave with a single temporal frequency ω. If we place a sensor at some fixed position \vec{x}_0 = (x_0, y_0, z_0) to observe the signal, the received signal has the form

s(\vec{x}_0, t) = A e^{j(\omega t - \vec{k} \cdot \vec{x}_0)}   (2.6)

The linearity of the wave equation implies that many plane waves propagating in different directions can exist simultaneously. Any signal can then be expressed as a weighted superposition of complex exponentials.

In spherical coordinates, Equation (2.2) can be written as

\frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2 \frac{\partial s}{\partial r}\right) + \frac{1}{r^2 \sin\phi}\frac{\partial}{\partial \phi}\!\left(\sin\phi \frac{\partial s}{\partial \phi}\right) + \frac{1}{r^2 \sin^2\phi}\frac{\partial^2 s}{\partial \theta^2} = \frac{1}{c^2}\frac{\partial^2 s}{\partial t^2}   (2.7)

This equation can be solved by using the method of separation of variables. However, the spherical coordinate wave equation is generally used in situations where spherical symmetry is evident [1]. The monochromatic solution of s(r, θ, φ, t) is

s(r, t) = \frac{A}{r} e^{j\omega(t - r/c)}   (2.8)

This solution can be interpreted as a spherical wave propagating outward from the origin with a temporal frequency ω.

2.3 Sampling in the Spatial and Time Domains

Sampling a temporal signal requires a sampling frequency of at least twice the highest frequency component in order to retain nearly all signal information (the Nyquist criterion). If the signals are bandlimited and the highest frequency component is ω_0, the sampling frequency ω_s must satisfy

\omega_s \geq 2\omega_0   (2.9)

If a sampling frequency ω_s < 2ω_0 is chosen, the signal is undersampled and aliasing occurs; the original signal cannot be recovered from its samples.


The Nyquist criterion also applies to spatial sampling. This implies that in order to avoid spatial aliasing, the spatial sampling frequency k_s must be at least twice the highest wavenumber component k_0 of the space-time signal s(\vec{x}, t), i.e. k_s \geq 2k_0. In space we sample with sensors separated by a distance d. Consequently, the spatial sampling interval must satisfy

d = \frac{2\pi}{k_s} \leq \frac{\pi}{k_0} = \frac{\lambda_0}{2}   (2.10)

where λ_0 is the wavelength. The array elements should then be separated by λ_0/2 or less to avoid spatial undersampling. Undersampling in the spatial domain gives grating lobes in the visible region. Signals propagating in directions corresponding to the grating lobes cannot be distinguished from the signal direction corresponding to the mainlobe in the beampattern, as illustrated in Figure 2.2. The relation between the wavenumber component in the x-direction and the azimuth angle φ for a linear array like the one in Figure 2.3 is

k_x = -k \sin\phi = -\frac{2\pi}{\lambda} \sin\phi   (2.11)

[Plot: two beampattern panels, magnitude (dB) versus angle (degrees), for element spacings d = λ/2 and d = 1.25λ; the mainlobe, sidelobes, and grating lobes are indicated.]

Figure 2.2: Beampattern of an 11-element linear array with d = λ_0/2 (left) and d = 1.25λ_0 (right).

Since |sinφ| ≤ 1, k_x can only take values between ±2π/λ. The region that k_x spans when |k_x| ≤ 2π/λ is called the visible region, and corresponds to angles between ±π/2 for a given value of λ.
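To illustrate the grating-lobe effect numerically, the sketch below (my own illustration in Python/NumPy, not code from the thesis) evaluates the beampattern of an 11-element, uniformly weighted linear array for the two spacings of Figure 2.2; with d = 1.25λ the visible region contains grating lobes.

```python
import numpy as np

def linear_beampattern(n_elem, d_over_lambda, angles_deg):
    """Beampattern (dB) of a uniformly weighted linear array steered to broadside."""
    angles = np.deg2rad(angles_deg)
    m = np.arange(n_elem) - (n_elem - 1) / 2          # element indices centred on the origin
    kx = 2 * np.pi * d_over_lambda * np.sin(angles)   # phase increment per element index
    W = np.exp(1j * np.outer(kx, m)).sum(axis=1)      # array pattern with uniform weights
    G = np.abs(W) / n_elem                            # normalised magnitude
    return 20 * np.log10(np.maximum(G, 1e-6))

angles = np.linspace(-90, 90, 721)
bp_half = linear_beampattern(11, 0.5, angles)    # d = lambda/2: no grating lobes
bp_wide = linear_beampattern(11, 1.25, angles)   # d = 1.25*lambda: grating lobes near +/-53 degrees
print(bp_half.max(), bp_wide.max())              # both patterns peak at 0 dB at broadside
```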



Figure 2.3: Illustration of a propagating signal impinging on a linear array at an azimuth angle φ.

2.4 Beamforming

Beamforming is the name given to a wide variety of array processing techniques that, by some means, focus the array's signal-capturing (spatial filtering) capabilities in a particular direction or location [1]. This means that signals from a given spatial region are amplified and signals from other regions are attenuated, with the usual objective of estimating a desired signal in the presence of noise and interfering signals. A processor that performs beamforming operations is called a beamformer. Thus, a beamformer is used in conjunction with an array of sensors to provide spatial filtering, and usually consists of filters or complex weights to combine the sensor signals. Typically a beamformer linearly combines the spatially sampled signal from each sensor to obtain an output signal, in the same manner as a FIR (finite impulse response) filter linearly combines temporally sampled data [3]. A beamformer can be characterized by its spatial response in the same way a linear time-invariant system is characterized by its frequency response. Sometimes the spatial response is expressed only in terms of angular variables; it is then called the angular response. Beamforming is commonly used in many applications such as antenna arrays, medical imaging, radar, seismics, and many others [1].

2.4.1 Far field and near field

Beamforming algorithms vary according to whether the sources are located in the near field or the far field [1]. If the source is located close to the array, in the near field, the wavefront of the propagating wave is perceptibly curved with respect to the dimensions of the array, and the wave propagation direction depends on sensor location. If the direction of propagation is approximately equal at each sensor, then the source is located in the array's far field and the propagating field within the array aperture consists of plane waves, as illustrated in Figure 2.4. Defining the borderline between the near field and the far field is application dependent. The common rule of thumb for the approximate distance at which the far-field approximation begins to be valid is r = 2D^2/λ (known as the Rayleigh distance), where r is the distance from an arbitrary array origin, D is the largest array dimension, and λ is the operating wavelength [4]. In the antenna literature, the far field is called the Fraunhofer zone and the near field is called the Fresnel zone [5], and they are treated differently.


Figure 2.4: Illustration of near and far-field and the wave characteristics.

2.4.2 Narrowband and Broadband Beamformers

Depending on the bandwidth of the signal environment, a beamformer can be classified as either narrowband or broadband. If the signal bandwidth is more than a significant fraction (say 0.1) of the mid-band frequency, then the signal is said to be broadband. There is no fixed definition of a broadband signal [6], since whether one can sufficiently treat the signal as monochromatic depends on a range of other factors associated with the problem.


Narrowband Beamformers

In a conventional narrowband beamformer, the output of each sensor is weighted by a complex scalar. All such weighted outputs are summed together to give the beamformer output. The output of a narrowband beamformer with 2M + 1 sensors at time t is given by

z(t) = \sum_{m=-M}^{M} w_m s_m(t)   (2.12)

where w_m is the complex weight applied to the m-th sensor and s_m(t) is the signal received at the m-th sensor at time t. The basic idea of a narrowband beamformer is to add the outputs of the sensors with appropriate weights so that a signal arriving from a desired direction adds constructively and signals arriving from other directions add destructively (on average) [6].
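A minimal sketch of Equation (2.12), assuming a uniform line array, half-wavelength spacing, and a single plane-wave snapshot (all parameter values are invented for the example):

```python
import numpy as np

M = 5                                   # 2M + 1 = 11 sensors
m = np.arange(-M, M + 1)
d_over_lambda = 0.5                     # half-wavelength spacing
phi_src = np.deg2rad(20.0)              # direction of the incoming plane wave
phi_look = np.deg2rad(20.0)             # direction the beamformer is steered to

# Narrowband snapshot s_m(t) at one instant for a unit-amplitude plane wave
s_m = np.exp(-1j * 2 * np.pi * d_over_lambda * m * np.sin(phi_src))

# Complex weights w_m that compensate the phase for the look direction
w_m = np.exp(1j * 2 * np.pi * d_over_lambda * m * np.sin(phi_look)) / (2 * M + 1)

z = np.sum(w_m * s_m)                   # Equation (2.12)
print(abs(z))                           # ~1.0 when the look direction matches the source
```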

Broadband Beamformers

Narrowband beamforming methods assume that the signal bandwidth is sufficiently narrow to consider only a single frequency. However, the frequency spectrum of many signals of interest has more than one frequency component, and the use of a narrowband beamformer for a broadband signal degrades the performance of the beamformer. One common approach to this problem is to attach a temporal filter, instead of a single complex weight, to each sensor output before summing. Combining these filtered outputs to form a beam is known as filter-and-sum beamforming [6]. Broadband beamforming is said to entail spatio-temporal filtering, or a space-time signal processing problem, since the signal wavefield is sampled and processed in both the space and time domains.

2.4.3 Delay and sum beamforming

One method used to perform beamforming is delay and sum beamforming [1], which is one of the oldest and simplest beamforming techniques. In this method, the spatial filtering results from the coherent summing of the signals received by the sensors in the array. A signal's propagation time between sensors in the medium can be calculated using knowledge of the signal's propagation speed through the medium, the distance between sensors, and the signal's direction of arrival. With this information, the signals received by the array are added in phase by taking appropriately delayed samples from each sensor. Signals approaching from directions other than the direction of interest are not coherently summed and are thus attenuated compared to signals arriving from the direction of interest.

To see this, let the wavefield f(\vec{x}, t) be sampled at the spatial location \vec{x}_m \in \mathbb{R}^3 by the m-th sensor. The measured waveform at this sensor is then y_m(t) = f(\vec{x}_m, t).

The delay and sum beamformer combines each of these M sensor outputs, applying a fixed temporal delay \Delta_m \in \mathbb{R} and an amplitude weight w_m \in \mathbb{R}. The output signal z(t) from the delay and sum beamformer is then defined as

z(t) = \sum_{m=1}^{M} w_m y_m(t - \Delta_m)   (2.13)

where the delays \Delta_m are adjusted to focus the array in different spatial directions, which is also called phase steering. The delay and sum beamformer's response to a monochromatic wave is often called the array pattern.
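A small time-domain sketch of Equation (2.13), assuming a far-field plane wave on a uniform line array and using linear interpolation for the fractional delays (the speed, sampling rate, and pulse are illustrative only):

```python
import numpy as np

c = 1500.0                          # propagation speed (e.g. m/s in water); assumed value
fs = 50e3                           # sampling rate
d = 0.015                           # sensor spacing along the x axis
M = 8
x_m = np.arange(M) * d              # sensor positions
t = np.arange(2048) / fs

phi = np.deg2rad(30.0)              # direction of arrival
tau_m = x_m * np.sin(phi) / c       # far-field arrival-time differences across the array

# Simulate y_m(t): the same pulse arriving with a different delay at each sensor
pulse = lambda tt: np.sin(2 * np.pi * 5e3 * tt) * np.exp(-((tt - 0.01) / 2e-3) ** 2)
y = np.array([pulse(t - tau) for tau in tau_m])

# Delay and sum, Eq. (2.13): with Delta_m = -tau_m, y_m(t - Delta_m) = y_m(t + tau_m)
w = np.ones(M) / M
z = sum(w[m] * np.interp(t + tau_m[m], t, y[m]) for m in range(M))
print(np.max(np.abs(z)))            # close to the single-channel pulse amplitude when steered correctly
```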

2.4.4 The array pattern

To characterize the delay and sum beamformer's directivity, the array pattern is examined. The array pattern is simply the delay and sum beamformer's response to a monochromatic plane wave [1] impinging on the array from different directions. Since a superposition of plane waves can express an arbitrary wavefield, the plane wave response determines the beamformer's output in the general case.

Assume that the wavefield f(\vec{x}, t) consists of a monochromatic wave with temporal frequency ω_0 propagating with direction and spatial frequency given by the slowness vector \vec{\delta}_0 \in \mathbb{R}^3; the wavefield is then

f(\vec{x}, t) = e^{j\omega_0 (t - \vec{\delta}_0 \cdot \vec{x})}   (2.14)

The wavefield measured at the m-th sensor, y_m(t), is

y_m(t) = f(\vec{x}_m, t) = e^{j\omega_0 (t - \vec{\delta}_0 \cdot \vec{x}_m)}   (2.15)

Let the beamformer be steered to look for plane waves with slowness vector \vec{\delta} \in \mathbb{R}^3. This is attained by choosing the set of delays \Delta_m as

\Delta_m = -\vec{\delta} \cdot \vec{x}_m   (2.16)

Substituting this into (2.13), we can write the delay and sum beamformer's output z(t) \in \mathbb{C} to the monochromatic wave as

z(t) = \sum_{m=1}^{M} w_m e^{j\omega_0 (t + (\vec{\delta} - \vec{\delta}_0) \cdot \vec{x}_m)}   (2.17)

The temporal content e^{j\omega_0 t} may be extracted from this equation:

z(t) = \left[ \sum_{m=1}^{M} w_m e^{j\omega_0 (\vec{\delta} - \vec{\delta}_0) \cdot \vec{x}_m} \right] e^{j\omega_0 t}   (2.18)

By introducing the wavenumber vector \vec{k}_0 = \omega_0 \vec{\delta}_0 \in \mathbb{R}^3 for the propagating wave, we finally get the delay and sum beamformer's output z(t) in the monochromatic case:

z(t) = W(\omega_0 \vec{\delta} - \vec{k}_0)\, e^{j\omega_0 t}   (2.19)

where W(\cdot) denotes the Fourier transform of the sensor weights,

W(\vec{k}) = \sum_{m=1}^{M} w_m e^{j \vec{k} \cdot \vec{x}_m}   (2.20)

which is also the array pattern evaluated at the wavenumber vector \vec{k} \in \mathbb{R}^3. Note the resemblance with the frequency response used in digital signal processing: the frequency response H(e^{j\omega T}) characterizes a linear time-invariant system by examining its output for sinusoidal inputs. The array pattern is used in a similar way in array signal processing. Through the quantity W(\omega_0\vec{\delta} - \vec{k}_0) in (2.19), it determines the amplitude and phase of the beamformed monochromatic plane wave impinging on the array from different directions expressed in \vec{\delta}. The array element weights w_m correspond to a window filter's taps. Thus the array pattern (2.20) determines the array's directivity characteristics.
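The wavenumber array pattern (2.20) is straightforward to evaluate numerically for an arbitrary element layout; a sketch (illustrative names, NumPy assumed):

```python
import numpy as np

def array_pattern(weights, positions, k_vectors):
    """Evaluate W(k) = sum_m w_m exp(j k . x_m) for each row of k_vectors.

    weights   : (M,) complex element weights
    positions : (M, 3) element positions x_m in metres
    k_vectors : (K, 3) wavenumber vectors at which to evaluate the pattern
    """
    phases = k_vectors @ positions.T           # (K, M) values of k . x_m
    return np.exp(1j * phases) @ weights       # (K,) array pattern W(k)

# Example: 8-element line array along x, half-wavelength spacing at lambda = 0.1 m
lam = 0.1
pos = np.zeros((8, 3))
pos[:, 0] = (np.arange(8) - 3.5) * lam / 2
w = np.ones(8) / 8
k = np.array([[2 * np.pi / lam * np.sin(np.deg2rad(a)), 0.0, 0.0] for a in (-30, 0, 30)])
print(np.abs(array_pattern(w, pos, k)))        # maximum response (1.0) at broadside, smaller off-axis
```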



2.4.5 The angular array pattern

The angular array pattern represents a more practical definition for many applications than the previously defined wavenumber array pattern given in (2.20). It gives the array's response to waves from different spherical directions, explicitly expressed with the angles φ and θ. The geometric interpretation of the angular array pattern is evident in Figure 2.5, where a plane wave is about to impinge on the array. To deduce the angular array pattern, it is convenient to introduce the unit direction vector in spherical coordinates \vec{s}_{\phi,\theta} \in \mathbb{R}^3 as

\vec{s}_{\phi,\theta} = \frac{\vec{k}}{|\vec{k}|}   (2.21)

where the length of the wavenumber vector is taken to be |\vec{k}| = 2\pi/\lambda for a fixed wavelength λ. The unit direction vector \vec{s}_{\phi,\theta} may be interpreted as the direction in which the array looks. Substituting this for \vec{k} in (2.20), we obtain the general angular array pattern W(φ, θ):

W(\phi, \theta) = \sum_{m=1}^{M} w_m e^{j\frac{2\pi}{\lambda} \vec{s}_{\phi,\theta} \cdot \vec{x}_m}   (2.22)

The unit direction vector \vec{s}_{\phi,\theta} may be expressed in rectangular coordinates as \vec{s}_{\phi,\theta} = (\sin\phi\cos\theta, \sin\phi\sin\theta, \cos\phi). Let the m-th array element be located at \vec{x}_m = (x_m, y_m, z_m) in space. The dot product \vec{s}_{\phi,\theta} \cdot \vec{x}_m is then

\vec{s}_{\phi,\theta} \cdot \vec{x}_m = (\sin\phi\cos\theta)\, x_m + (\sin\phi\sin\theta)\, y_m + (\cos\phi)\, z_m   (2.23)

which is valid for any array configuration.
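A sketch of the angular array pattern (2.22), using the direction vector of Equation (2.23); the 4 x 4 planar-array example is invented purely for illustration:

```python
import numpy as np

def angular_array_pattern(weights, positions, lam, phi, theta):
    """W(phi, theta) of Eq. (2.22): phi measured from the z axis, theta the azimuth."""
    s = np.array([np.sin(phi) * np.cos(theta),
                  np.sin(phi) * np.sin(theta),
                  np.cos(phi)])                       # unit direction vector, Eq. (2.23)
    return np.sum(weights * np.exp(1j * 2 * np.pi / lam * positions @ s))

# 4 x 4 planar array in the xy-plane with half-wavelength spacing
lam = 1.0
ix, iy = np.meshgrid(np.arange(4) - 1.5, np.arange(4) - 1.5)
pos = np.column_stack([ix.ravel() * lam / 2, iy.ravel() * lam / 2, np.zeros(16)])
w = np.ones(16) / 16

print(abs(angular_array_pattern(w, pos, lam, np.deg2rad(0), 0.0)))    # broadside: 1.0
print(abs(angular_array_pattern(w, pos, lam, np.deg2rad(40), 0.0)))   # off broadside: < 1
```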

2.4.6 The beampattern

The beampattern is defined as the magnitude of the array pattern (2.22) [3], and is given by

G(\phi, \theta) = |W(\phi, \theta)|   (2.24)



Figure 2.5: Illustration of a propagating wave impinging on a planar array, in spherical coordinates, with azimuth angle θ and elevation angle φ.

Using G(φ, θ), we may define the normalized beamformer response

AF(\phi, \theta) = \frac{W(\phi, \theta)}{\max[G(\phi, \theta)]}   (2.25)

where AF(φ, θ) is also known as the normalized radiation pattern or array factor of the array. Using the array factor, we can analyze the output of the beamformer for propagating waves from any direction. The array factor also describes the quality of a beam, often measured by the mainlobe width and sidelobe levels.

2.4.7 The mainlobe and sidelobes

In the beampattern of Figure 2.2, the highest peak is the mainlobe, while the smaller peaks are sidelobes. The beampattern may be interpreted as the spatial filter response of an array. Thus the mainlobe is similar to the passband of a spatial bandpass filter, which only passes signals from these directions. Similar to filter design in digital signal processing, we would like the beam to approach a delta pulse, or equivalently an infinitely thin beam, but from array processing theory this is impossible using an array with finite spatial extension.

The location of the mainlobe peak tells in which direction the array has its maximum response. Another measure used to characterize the mainlobe is the mainlobe width or beamwidth. Here we define it to be the full width of the mainlobe at 6 dB below the mainlobe peak on the beampattern. From the angular array pattern, we can measure at which angle φ the mainlobe has dropped 6 dB; the beamwidth is then 2φ and is usually measured in degrees.

The sidelobes in the beampattern correspond to the stopband of a bandpass filter. As is known from window filter design, the sidelobes cannot be completely rejected using a finite aperture, but they can be suppressed to a certain degree by adjusting the amplitude weights and element positions. The sidelobe region, or equivalently the stopband, is conveniently defined as the area in the φθ-plane outside the first zero crossing of the mainlobe. The sidelobe level is a measure of the height of the highest sidelobe peak in the sidelobe region and is usually given in dB. The height of the highest sidelobe relative to the mainlobe measures an array's ability to reject unwanted noise and signals and to focus on particular propagating signals.
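The −6 dB beamwidth and the sidelobe level defined above can be read off a sampled beampattern. A rough helper (my own, assuming the pattern is sampled on a fine symmetric angle grid, normalised to 0 dB, with the mainlobe at broadside):

```python
import numpy as np

def mainlobe_sidelobe_metrics(angles_deg, pattern_db):
    """Return (-6 dB beamwidth in degrees, highest sidelobe level in dB)."""
    peak = np.argmax(pattern_db)
    # Beamwidth: distance between the first -6 dB crossings on either side of the peak
    right = peak + np.argmax(pattern_db[peak:] <= -6.0)
    left = peak - np.argmax(pattern_db[peak::-1] <= -6.0)
    beamwidth = angles_deg[right] - angles_deg[left]
    # Sidelobe level: highest value outside the mainlobe (beyond the first local minima)
    troughs = np.where(np.diff(np.sign(np.diff(pattern_db))) > 0)[0] + 1
    lo = troughs[troughs < peak].max() if np.any(troughs < peak) else 0
    hi = troughs[troughs > peak].min() if np.any(troughs > peak) else len(pattern_db) - 1
    outside = np.concatenate([pattern_db[:lo + 1], pattern_db[hi:]])
    return beamwidth, outside.max()

# Example with an 11-element uniform array at d = lambda/2
angles = np.linspace(-90, 90, 3601)
m = np.arange(11) - 5
W = np.exp(1j * np.pi * np.outer(np.sin(np.deg2rad(angles)), m)).sum(axis=1)
pattern = 20 * np.log10(np.abs(W) / 11 + 1e-12)
print(mainlobe_sidelobe_metrics(angles, pattern))   # roughly (12.6 deg, -13.2 dB) for this array
```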

2.5 Arrays

An array composed of individual sensors samples the wavefield at discrete spatial locations. Each sensor occupies a single point in space, and the sensors are assumed to be equally sensitive to signals propagating in all directions (omnidirectional). Sensors can be positioned on a regular or an irregular grid, and array elements can be arranged in various geometries, such as linear, circular and planar arrays, depending on the dimension of the space one wants to access. In this section, the principles of linear and planar arrays are discussed.

2.5.1 Linear arrays

One of the simplest geometries for an array is the linear array, in which the elements are aligned along a straight line (i.e. along one axis) as in Figure 2.3. Other possibilities are non-uniformly spaced and randomly spaced arrays, depending on the particular application the array is designed for. Since linear arrays only have a single element in the elevation dimension, they cannot be steered in this direction; steering with linear arrays can only be performed in the azimuth dimension.

Discrete arrays of sensors are similar to FIR filters: the FIR filter has finite temporal extent and the array has finite spatial extent. The theory and methods of filter design by the window method can therefore be applied to arrays as well. However, for arrays, the weights are applied at different sensor locations and thus form a spatial window. There are numerous windows with various properties [7] that have been proposed for filter design. When choosing a window function, the bandwidth and sidelobe levels should be considered. A feasible way of reducing the sidelobe level is to choose a window that weights the array; this weighting is often called shading or tapering. The cost of this reduction in sidelobe level is, as in signal processing, a degraded resolution. In other words, there is a tradeoff between the goal of high resolution and the wish for noise suppression. The maximum achievable spatial resolution in the lateral dimension is fixed once the array's geometry is chosen. Therefore it is only possible to lower the sidelobes at the cost of broadening the mainlobe of the beam when weighting the array. In order to obtain optimal weights, the array must be uniformly filled and have uniformly spaced elements symmetrically positioned around the array's origin.

2.5.2 Planar arrays

The 2D planar arrays have elements placed in both the x and y directions, or equivalently in both the azimuth and elevation directions; the placement can be either on a regular or an irregular grid. Planar arrays provide focusing and steering in both the elevation and azimuth directions. This reduces the thickness of the scan slices and therefore gives better lateral resolution. As for linear arrays, weighting can be applied to reduce the sidelobe level; in this case the tapering is applied in both the elevation and azimuth directions. 2D arrays increase the complexity of analysis and synthesis considerably, which makes it more difficult to calculate an optimal tapering for the array. For linear arrays it is often desirable to minimize the total area under the sidelobes; this measures the energy contribution from signals outside the mainlobe. For planar arrays the analogue is to minimize the volume under the sidelobes.

Planar arrays turn out to be important for improving imaging quality; they provide the possibility of three-dimensional (3D) electronic focusing and beam steering in ultrasonic systems [8]. Unfortunately, there are fundamental problems in both constructing and using 2D arrays. In many applications there is a limit on the size of an array, and this gives difficulties when connecting the tiny array elements. A filled two-dimensional array will require an extremely large number of elements, and the processing will be time consuming with this number of elements; compared to a linear array, the increase in the number of operations is a power of two. Due to these problems one has to reduce the number of elements by thinning, i.e. removing some elements from the array. As a result, the resolution is reduced.


2.5.3 Array thinning

When producing an array, the large number of elements obviously adds complexity to the design. This becomes a significant problem, especially with large arrays.

An interesting approach to the array design problem is to minimize the number of array elements under constraints on the array pattern. Thinning an array will often result in a high sidelobe level in the beampattern; to overcome this problem, amplitude weight optimization of the elements in the thinned pattern must be considered.

The weight optimization method yields the lowest sidelobe level for a particular element configuration. Algorithmic optimization methods are elaborated in many articles: in [9] a dynamic programming method is proposed, a simulated annealing algorithm is given in [10], and in [11] a genetic thinning algorithm is suggested. In [12] rule-based thinning and element placement methods are discussed, and an approach to generating arrays based on the theory of fractals is proposed as a method for generating a thinned regular array. The main work in this thesis is to generate and analyze various linear and planar arrays based on fractal theory.


Chapter 3 Fractals

In this chapter we introduce the basic concepts of fractals. The idea of a fractal and the related geometry concepts will be described, and several examples of fractal geometry will be shown; these and the related terms will be used in later chapters. Finally, we will briefly review some applications of fractals and how fractals can be applied to antennas and arrays.

3.1 Fractals

The term fractal, meaning broken or fragmented, was coined by one of history's most creative mathematicians, Benoit Mandelbrot, in his seminal work The Fractal Geometry of Nature [13]. Mandelbrot showed that many fractals exist in nature and that fractals can accurately model certain irregularly shaped objects or spatially nonuniform phenomena, such as trees or mountains, that cannot be accommodated by Euclidean geometry; such structures effectively operate in non-integer dimensions. By furthering the idea of a fractional dimension, he coined the term fractal. Mandelbrot defined a fractal as a rough or fragmented geometric shape that can be subdivided into parts, each of which is (at least approximately) a reduced-size copy of the whole [13]. In mathematics, fractals are a class of complex geometric shapes that commonly exhibit the property of self-similarity, such that a small portion can be viewed as a reduced-scale replica of the whole.

Fractals can be either random or deterministic. Most fractal objects found in nature are random, produced by a set of non-deterministic steps. Fractals that are produced by an iterative algorithm, generated by successive dilations and translations of an initial set, are deterministic. Several examples of deterministic fractal geometry are described in the next section.

3.1.1 Examples of fractal geometry

The Cantor set

We begin our fractal examples with the Cantor set [14], shown in Figure 3.1, which is a simple example of a fractal. A common construction is the middle-third Cantor set, generated by an iterative process. We begin with the interval [0, 1] as the set C_0; then we remove the open interval (1/3, 2/3) from C_0 to obtain C_1 = [0, 1/3] ∪ [2/3, 1]. To get the second stage, we remove the middle third of each interval in C_1. This gives C_2 = [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪ [8/9, 1]. We continue this process, removing the middle third of each interval in C_n to obtain C_{n+1}. Repeating this process n times results in 2^n line segments of total length (2/3)^n. Through infinite iteration, we end up with the set of points that remain: the Cantor set.


Figure 3.1: The first four stages in the construction of the Cantor set
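The middle-third construction translates directly into a few lines of code; a sketch that tracks the interval endpoints exactly as fractions:

```python
from fractions import Fraction

def cantor_stage(n):
    """Return the list of closed intervals [a, b] making up C_n of the middle-third Cantor set."""
    intervals = [(Fraction(0), Fraction(1))]          # C_0 = [0, 1]
    for _ in range(n):
        nxt = []
        for a, b in intervals:
            third = (b - a) / 3
            nxt.append((a, a + third))                # keep the left third
            nxt.append((b - third, b))                # keep the right third
        intervals = nxt
    return intervals

c2 = cantor_stage(2)
print(len(c2), sum(b - a for a, b in c2))             # 4 intervals, total length (2/3)^2 = 4/9
```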

The Koch curve

Another example of a fractal is the Koch curve [14], shown in Figure 3.2. It is constructed by beginning with a straight line, dividing it into three equal segments, and replacing the middle segment by the two sides of an equilateral triangle of the same length as the segment being removed. We repeat, taking each of the four resulting segments, dividing them into three equal parts, and replacing each of the middle segments by two sides of an equilateral triangle. This procedure is applied repeatedly to the remaining lines. In the limit there is a strictly self-similar structure: each fourth of the structure is a rescaled copy of the entire structure.


Figure 3.2: The first four stages in the construction of the Koch Curve
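The same replacement rule can be coded compactly by treating the curve's vertices as complex numbers; a sketch (the direction of the triangular bump is an arbitrary choice):

```python
import numpy as np

def koch_curve(n_iter):
    """Vertices of the Koch curve after n_iter iterations, starting from the unit segment."""
    pts = np.array([0.0 + 0j, 1.0 + 0j])
    rot = np.exp(1j * np.pi / 3)                  # 60-degree rotation for the triangular bump
    for _ in range(n_iter):
        new_pts = [pts[0]]
        for a, b in zip(pts[:-1], pts[1:]):
            seg = (b - a) / 3
            p1, p3 = a + seg, a + 2 * seg
            p2 = p1 + seg * rot                   # apex of the equilateral triangle
            new_pts.extend([p1, p2, p3, b])
        pts = np.array(new_pts)
    return pts

pts = koch_curve(3)
print(len(pts))                                   # 4^3 segments -> 4^3 + 1 = 65 vertices
```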

The Sierpinski gasket

Sierpinski’s gasket triangle is a deterministic fractal [14], [37]. It has the basic properties that are common to all fractals like recurrence, which mean whatever part of the triangle we take, if we magnify it, we will find exactly the same triangle in it. The deterministic construction algorithm for the Sierpinski gasket are shown in Figure 3.3. We begin with an equilateral triangle. Then use the midpoints of each side as the vertices of a new triangle, which we then remove from the original. We continue this process, from each remaining triangle we remove the

"middle" leaving behind three smaller triangles.

Figure 3.3: The first four stages in the construction of the Sierpinski gasket

The Sierpinski carpet

The Sierpinski carpet, Figure 3.4, is a deterministic fractal which is a generalization of the Cantor set into two dimensions [14], [34]. In order to construct this fractal, we begin with a square in the plane, subdivide it into nine smaller congruent squares, and drop the open central one; we then subdivide each of the eight remaining squares into nine smaller congruent squares, in each of which we drop the open central one. We continue this process infinitely often, obtaining a limiting configuration which can be seen as a generalization of the Cantor set.


Figure 3.4: The first four stages in the construction of the Sierpinski carpet
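Because each stage replaces every filled cell by the same 3 x 3 generator, the carpet's occupancy grid can be generated with repeated Kronecker products; a sketch:

```python
import numpy as np

def sierpinski_carpet(stage):
    """0/1 occupancy matrix of the Sierpinski carpet after `stage` subdivisions."""
    generator = np.array([[1, 1, 1],
                          [1, 0, 1],
                          [1, 1, 1]])              # 3x3 square with the open centre removed
    carpet = np.array([[1]])
    for _ in range(stage):
        carpet = np.kron(carpet, generator)        # replace every filled cell by the generator
    return carpet

c = sierpinski_carpet(3)
print(c.shape, int(c.sum()))                       # (27, 27), 8^3 = 512 filled cells
```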


3.1.2 Self-similarity

The concept of a fractal is most often associated with geometrical objects satisfying the criterion of self-similarity. Self-similarity means that an object is composed of sub-units and sub-sub-units on multiple levels that (statistically) resemble the structure of the whole object [14]. Mathematically, this property should hold on all scales: any portion of a self-similar fractal curve, if blown up in scale, would appear identical to the whole curve. In other words, if we shrink or enlarge a fractal pattern, its appearance remains unchanged. However, this property does not hold indefinitely for real phenomena, and there are necessarily lower and upper bounds between which such self-similar behaviour applies. Self-similarity can therefore be associated with fractals, which are objects with unchanged appearance over different scales. The Sierpinski gasket is a good illustration of this feature. The Sierpinski triangle in Figure 3.5 may be decomposed into 3 congruent figures; if we scale any of the 3 pieces of the triangle, we obtain an exact replica of the whole. We can look deeper into the Sierpinski triangle: for each small triangle, if we scale it up, we see a further copy of the original.

Figure 3.5: The self-similarity of the Sierpinski triangle

3.1.3 Fractal dimension

The second defining concept for a fractal is fractional dimension. This requirement distinguishes fractals from Euclidean geometry, which has integer dimensions. The common intuitive idea of dimension is referred to as the topological dimension. A point, a line segment, a square and a cube have topological dimensions zero, one, two and three, respectively. This intuitive dimension is always expressed as an integer. In [13] the Hausdorff-Besicovitch dimension is referred to as the fractional dimension, defined as a real number that precisely measures the object's complexity. Mandelbrot defines a fractal as a set for which the fractional dimension or Hausdorff-Besicovitch dimension strictly exceeds the topological dimension. He refers to fractional dimension as the fractal dimension of a set. Fractal dimension has been defined in many ways, depending on the application.

Fractional dimension is related to self-similarity in that the easiest way to create a figure with fractional dimension is through self-similarity. From the properties of self-similarity, the fractal dimension D of a set A is defined as [14]

D = \frac{\log(N)}{\log(1/r)}   (3.1)

where N is the total number of distinct copies similar to A, and 1/r is the ratio by which A is scaled down.

In order to calculate the fractal dimension of any self-similar object, we can split the object into N parts, where the size of each part is related to the size of the original by the similarity ratio r. The fractal dimension D is the power of 1/r that gives N, that is, (1/r)^D = N. Solving for D yields Equation (3.1). Using a similarity ratio of 1/2 to divide a line segment produces two copies of the original; thus the dimension of a segment is log(2)/log(2) = 1. Dividing the edges of a square at their midpoints gives four copies of the original square; its dimension is log(4)/log(2) = 2. Similarly, the dimension of a cube is log(8)/log(2) = 3. The fractal dimension D equals the topological dimension for any Euclidean shape, but D can also be evaluated for non-standard figures that exhibit self-similarity. For the Cantor set, where each successive iteration removes the middle third from each line, N = 2 and r = 1/3, so the fractal dimension is D = log(2)/log(3) = 0.63; in some sense it fills less space than a one-dimensional object such as a line segment. For the Koch curve, each segment is replaced by 4 segments, each 1/3 of the original length, so N = 4 and r = 1/3, and the fractal dimension is D = log(4)/log(3) = 1.26. The Sierpinski triangle consists of 3 self-similar triangles, each with similarity factor 2 (r = 1/2), so the fractal dimension is D = log(3)/log(2) = 1.58. For the Sierpinski carpet the fractal dimension is log(8)/log(3) = 1.89.
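The dimensions quoted above follow directly from Equation (3.1); a one-line helper reproduces them:

```python
import math

def fractal_dimension(n_copies, ratio):
    """Similarity dimension D = log(N) / log(1/r) from Equation (3.1)."""
    return math.log(n_copies) / math.log(1 / ratio)

for name, n, r in [("Cantor set", 2, 1/3), ("Koch curve", 4, 1/3),
                   ("Sierpinski gasket", 3, 1/2), ("Sierpinski carpet", 8, 1/3)]:
    print(f"{name}: D = {fractal_dimension(n, r):.2f}")
# Cantor set: 0.63, Koch curve: 1.26, Sierpinski gasket: 1.58, Sierpinski carpet: 1.89
```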

Unlike in Euclidean geometry, the fractal dimension does not have to be an integer. The Koch curve, for example, has a fractal dimension of 1.26; it lies between one and two dimensions. The curve fills more space than a one-dimensional curve, but inhabits less space than a Euclidean area of the plane. The non-integer character of the fractal dimension makes it useful for the measurement, analysis and classification of many fractal shapes; for example, the fractal dimension provides a way to measure how rough fractal curves are. The more irregular a curve is, the higher its fractal dimension, a value between one and two. In this thesis we will examine how an array's fractal dimension affects the beampattern.

3.2 Fractal applications

The interest in fractals has grown tremendously in recent years. Fractal geometry is proving to be an exciting field of study. Fractals have been used to describe many aspects of nature. Mathematicians have used fractals to simulate the effect of shoreline decay on fisheries. Biochemists have investigated the influence of irregular protein surfaces on molecular interactions with fractals [14]. Climate and other apparently chaotic phenomena can be modeled and even predicted with fractal methods [15]. Studies of topics as diverse as fluid turbulence and bone structure have benefited from the use of fractal structures. Fractal geometry has provided the computer graphics artist with an exciting new palette of intriguing shapes and surfaces [16].

The concepts of fractals have also been applied to develop various antenna elements and arrays. Fractal arrays are concerned with folding large arrays into small geometric regions using fractal geometry, so that multiple resonances corresponding to each scale length in the antenna overlap and provide wideband or multiband operation [17]. The first application of fractals to antenna design was thinned fractal linear and planar arrays [17-19], where the elements are arranged in a fractal pattern to reduce the number of elements in the array and obtain wideband or multiband performance. The fact that most fractals have infinite complexity and detail can be used to reduce antenna size and develop low-profile antennas [20-21]. When antenna elements or arrays are designed with the concept of self-similarity, they can achieve multiple frequency bands because different parts of the antenna are similar to each other at different scales. Application of the fractional dimension of fractal structures leads to gain optimization of wire antennas. The combination of infinite complexity and detail with self-similarity makes it possible to design antennas with very wideband performance [17].


Another concept concerns the relation between the symmetry of the fractal array and the symmetry of its radiation pattern: self-similar arrays produce self-similar array factors or radiation patterns, in accordance with the symmetry of the function and the theory of Fourier transforms [17]. Another advantage of these fractal arrays [36] is that the self-similarity in their geometric structure may be exploited in order to develop algorithms for rapid computation of their radiation patterns. These algorithms are based on convenient product representations for the array factors and are much quicker than the discrete Fourier transform approach.

In the next chapters we use the concepts of fractal geometry to design and analyze linear and planar arrays. We will produce fractal arrays by placing array elements in arrangements that describe a fractal shape, such as the Cantor linear array, the Koch array, the Sierpinski carpet planar array, and the Sierpinski gasket planar array.


Chapter 4

Fractal linear arrays

The construction of many ideal fractal shapes is usually carried out by applying an iterative algorithm an infinite number of times. In such a procedure, an initial structure called a generator is replicated many times at different scales, positions and directions to grow into the final fractal structure.

In this chapter two types of deterministic fractal linear arrays known as the Cantor linear array and the Koch array will be generated and analyzed.

4.1 Cantor linear array

The Cantor linear array is a deterministic fractal array based on the Cantor set (Figure 3.1). The array can be obtained by placing the array elements at discrete points on the x axis that follow the fractal structure of the Cantor set.

4.1.1 Cantor linear array construction

As a procedure to generate the Cantor set, we start by taking two delta functions spaced a distance d apart along the x axis as the basic structure, known as the generator in fractal terminology [23]. The generator is scaled by a factor δ to obtain another structure composed of two delta functions spaced d/δ apart. If we convolve these two structures, a set of four delta functions will be placed at the points of a Cantor set constructed with only two iterations. This convolution can be iterated an infinite number of times to obtain the complete set. If we call f(x) the two-delta-function generator, a Cantor linear array c(x) aligned along the x direction can be described by a multiple convolution of several delta function sets f(x), each one scaled by an arbitrary scale factor δ; the whole Cantor set c(x) can be written as [22]

c(x) = \cdots f(x) * f(\delta x) * f(\delta^2 x) * \cdots * f(\delta^n x) \cdots = \mathop{\ast}_{n=-\infty}^{\infty} f(\delta^n x)   (4.1)

where δ is an arbitrary scale factor and * denotes the convolution operator.

Cantor Array Generator

A generator consisting of a set of N delta functions of equal amplitude, spaced a distance d apart and centered at the origin, can be written as

f(x) = \sum_{n=0}^{N-1} \Delta\!\left(x - nd + \frac{N-1}{2} d\right)   (4.2)

Examples of generating impulses are illustrated in Figure 4.1 for N = 2 with different δ and with a finite number of iterations, M = 4.
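The convolution construction of Equations (4.1) and (4.2) can be mimicked on a discrete grid. The sketch below (grid resolution and parameter values chosen only for illustration, and restricted to integer δ) builds the element amplitude distribution for an N-element generator after M iterations; with δ = 1 it reproduces the binomial weights of Section 4.2, and with δ = 3 a Cantor-like thinned pattern.

```python
import numpy as np

def cantor_array_weights(N=2, delta=3, M=4):
    """Element weights of the Cantor array of Eq. (4.1) on an integer grid.

    The generator has N unit impulses spaced d apart; each iteration convolves in a copy
    whose spacing is divided by delta. d is chosen as delta**(M-1) grid cells so that all
    scaled spacings stay on the grid (delta must be a positive integer here).
    """
    d = delta ** (M - 1)
    dist = np.array([1.0])
    for n in range(M):
        spacing = d // delta ** n                 # spacing of the n-th scaled generator
        gen = np.zeros((N - 1) * spacing + 1)
        gen[::spacing] = 1.0                      # N impulses, `spacing` cells apart
        dist = np.convolve(dist, gen)             # Eq. (4.1): fold in one more scale
    return dist

w = cantor_array_weights(N=2, delta=3, M=3)
print(np.nonzero(w)[0])                           # positions 0 1 3 4 9 10 12 13: a Cantor-like pattern
print(cantor_array_weights(N=2, delta=1, M=4))    # delta = 1: binomial weights [1 4 6 4 1]
```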

4.1.2 Cantor array factor

The array factor corresponding to c(x) can be written in terms of the Fourier transform F(ψ) of the generator; the corresponding array factor C(ψ) is an infinite succession of products, given in Equation (4.3) below (following Figure 4.1).

[Plot: for each value of δ, the left panel shows the generating impulses after each iteration M = 0, ..., 4 with N = 2, and the right panel shows the resulting Cantor array element distribution after convolution, for M = 4.]

Figure 4.1: The impulse placements for different δ after each iteration (left), and the resulting arrays after convolution (right). From the top: δ = 1.1, δ = 2, δ = 3.

C(\psi) = \cdots \delta^2 F(\delta^2\psi) \cdot \delta F(\delta\psi) \cdot F(\psi) \cdot \frac{F(\psi/\delta)}{\delta} \cdot \frac{F(\psi/\delta^2)}{\delta^2} \cdots = \prod_{n=-\infty}^{\infty} F\!\left(\frac{\psi}{\delta^n}\right)   (4.3)


where ψ is defined as ψ = kd cos(θ) + β, d being the spacing of the generator array, θ the angle between the direction of propagation and the axis of the array, β the progressive phase shift of the generator array, and k = 2π/λ the wavenumber. The resulting array factor (4.3) can be obtained by repeatedly multiplying the array factor of the generator with scaled versions of itself.

It can be seen that a frequency change by a factor of r implies a proportional scaling of both the parameter ψ and the array factor C(ψ). That is, C(ψ) becomes

C(r\psi) = \cdots F(r\delta\psi) \cdot F(r\psi) \cdot F\!\left(\frac{r\psi}{\delta}\right) \cdot F\!\left(\frac{r\psi}{\delta^2}\right) \cdots F\!\left(\frac{r\psi}{\delta^n}\right) \cdots = \prod_{n=-\infty}^{\infty} F\!\left(\frac{r\psi}{\delta^n}\right)   (4.4)

If the frequency shift r is taken to be δ^p, then the array factor will be [22]

C(\delta^p\psi) = \prod_{n=-\infty}^{\infty} F\!\left(\frac{\delta^p\psi}{\delta^n}\right) = \prod_{n=-\infty}^{\infty} F\!\left(\frac{\psi}{\delta^{n-p}}\right) = \prod_{m=-\infty}^{\infty} F\!\left(\frac{\psi}{\delta^m}\right) = C(\psi)   (4.5)

This implies that the array factor is self-scalable when scaled by a factor of δ^p, which means that the array factor is held constant for any set of frequencies f_n = f_0 · δ^n, since ψ is proportional to frequency. Such a property is only strictly held by the infinite array constructed by an infinite succession of convolutions.

The Cantor array factor generated by the generator (4.2) after M iterations can be written as

C(\psi) = \prod_{n=0}^{M-1} \frac{\sin\!\left(\frac{N\psi}{2\delta^n}\right)}{N \sin\!\left(\frac{\psi}{2\delta^n}\right)}   (4.6)


where N is the number of elements of the generator and δ is the scaling factor. Various linear arrays can be generated, depending on the choice of δ and the number of elements N in the generator. We will consider the family of arrays that are generated when N = 2 and δ = 1, δ = 1.1, δ = 2, δ = 3.
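Equation (4.6) is a finite product and is easy to evaluate directly; a sketch (care is needed near the zeros of the denominator, where each factor has a finite limit):

```python
import numpy as np

def cantor_array_factor(psi, N=2, delta=3, M=4):
    """C(psi) of Eq. (4.6): product of M scaled generator array factors."""
    psi = np.asarray(psi, dtype=float)
    C = np.ones_like(psi)
    for n in range(M):
        x = psi / (2.0 * delta ** n)
        num, den = np.sin(N * x), N * np.sin(x)
        with np.errstate(invalid="ignore", divide="ignore"):
            factor = np.where(np.abs(den) < 1e-12, 1.0, num / den)   # limit value 1 at x = 0
        C *= factor
    return C

psi = np.linspace(-np.pi, np.pi, 1001)
for delta in (1, 2, 3):
    C = cantor_array_factor(psi, N=2, delta=delta, M=4)
    print(delta, np.round(np.abs(C).max(), 3))       # each pattern peaks at |C| = 1 at psi = 0
```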

4.2 The binomial array

If we choose a scaling factor δ = 1, then Equation (4.6) may be written as

C(\psi) = \left[ \frac{\sin\!\left(\frac{N\psi}{2}\right)}{N \sin\!\left(\frac{\psi}{2}\right)} \right]^M   (4.7)

This represents the array factor for a uniformly spaced linear array with a binomial current distribution (Figure 4.2). Thus the binomial distribution has been obtained by taking a generator with N = 2 and δ = 1. The total number of elements N_M for a given number of iterations M is N_M = M + 1. The array factor for these arrays has no sidelobes, which is a characteristic feature of the binomial array.

If we take the number of elements in the generator to be N = 3, the resulting array will have a triangular current distribution for M = 2, and tend towards a Gaussian shape for increasing values of M. Figure 4.3 shows the arrays' current distributions and corresponding array factors. The total number of elements after M iterations is N_M = 2M + N. The sidelobe levels of these arrays are given in Table 4.1; it is clear that the sidelobe level in dB decreases linearly with the number of iterations, which is a convenient method for designing low-sidelobe arrays, while the mainlobe width decreases exponentially as the number of elements increases.

This method has a great inconvenience: the dynamic range of element amplitudes within the array is so large that small errors in the feeding network change the weights of the smallest elements, distorting the final pattern [22].
