
Drum Analysis

Cand.Scient thesis by

Espen Riskedal (espenr@ii.uib.no)

Department of Informatics University of Bergen

Universitas Bergensis

February 11, 2002


Abstract

This thesis studies drumloops and the possibility of separating the individual drumsounds from each other. Drum onsets and their detection are discussed thoroughly. Different approaches to detection and separation are discussed, and a measure of correctness is presented.


Acknowledgments

This thesis could not have been made without the help from the following people:

Olli Niemitalo, without whom I’d never have understood anything about DSP; his IRC sessions taught me a LOT!

Thanks Olli.

Mum for reading the thesis! and encouraging me. Dad for telling me to get it done and buying me my first SoundBlaster! (and a lot of dinners) :) Rune for forcing me to become ambidextrous, and finally Svanhild for being great.

Petter B. for being patient and giving me advice.

Stig and KarlTK and generally #klumpfot for giving me pointers on programming.

#musicdsp for giving hints and help on DSP.

Anssi Klapuri for discussing onset extraction and measure of correctness with me.

Core Convergence AS for being understanding in my finishing period.

Boards of Canada, Biosphere, Brothomstates, Haujobb, Thievery Corporation, Jaga Jazzist etc. which all got me into music and DSP coding in the first place.

Florian Bomers for help on wavelets.

Henrik Sundt at NOTAM for discussing the problem and giving hints.

DjFriendly and CSL for providing me with the correct inspirational material (ie. music :D) God, and all the others I forgot :)


Contents

Acknowledgments i

Introduction 1

Goal . . . . 1

Problem . . . . 1

1 Psychoacoustics and drum basics 3

1.1 Drum types . . . . 3

1.2 Frequency assumptions . . . . 3

1.3 Attack . . . . 4

1.3.1 Perceptual Attack Time . . . . 4

1.4 Decay . . . . 4

1.5 Specific drumsounds . . . . 5

1.6 Limits of temporal discrimination . . . . 5

1.6.1 Separating close events . . . . 5

1.6.2 Weber law . . . . 5

2 Digital sound basics 7

2.1 LTI systems . . . . 7

2.2 Waveform . . . . 7

2.3 FFT and the power spectrum . . . . 8

2.4 STFT and the sonogram . . . . 10

2.4.1 STFT calculation synopsis . . . . 11

2.5 Real-world signals and windowing . . . . 11

2.5.1 Amplitude response . . . . 11

2.5.2 Windowing . . . . 13

2.5.3 Window function tables of merits . . . . 15

2.6 Wavelet . . . . 16

2.6.1 DWT - discrete wavelet transform . . . . 17

3 Testing 18

3.1 Overlapping drumsounds . . . . 18

3.1.1 Overlapping in time . . . . 18

3.1.2 Overlapping in frequency . . . . 19

3.1.3 Overlapping in both time and frequency . . . . 20

3.2 Levels . . . . 20

3.2.1 Level One - no overlapping in time . . . . 20

3.2.2 Level Two - no overlapping in frequency . . . . 21

3.2.3 Level Three - weak overlapping in time and frequency . . . . 21

3.2.4 Level Four - true overlapping in time and frequency . . . . 22

3.3 Measure of correctness for onset extraction . . . . 22


3.3.1 What is a correct onset . . . . 23

3.3.2 Klapuri correctness . . . . 23

3.3.3 Error rate, derived measure correctness . . . . 23

3.3.4 Suggested correctness . . . . 24

3.3.5 Table of behaviors . . . . 24

4 Algorithms and their applications 25

4.1 Finding and defining areas of a drumsound . . . . 25

4.1.1 RMS/MAX analysis . . . . 25

4.1.2 Frequency-band decibel-threshold analysis . . . . 26

4.1.3 Mean decibel-threshold analysis . . . . 27

4.2 Separating drums . . . . 29

4.3 Finding and separating a drumsound . . . . 29

4.3.1 ICA - Independent component analysis . . . . 29

4.4 Finding, separating and match finding . . . . 30

4.4.1 Cross-correlation . . . . 31

4.5 Beat Tracking / Onset extraction algorithms . . . . 33

4.5.1 Different approaches . . . . 33

4.6 BTS . . . . 33

4.6.1 System Description . . . . 34

4.6.2 Frequency Analysis . . . . 35

4.6.3 Finding onset times . . . . 35

4.6.4 Detecting BD and SD . . . . 36

4.6.5 Beat prediction . . . . 36

4.7 Onset detection using psychoacoustic knowledge . . . . 36

4.7.1 System overview . . . . 37

4.7.2 Filtering of the bands . . . . 37

4.7.3 Onset component detection . . . . 38

4.7.4 Intensity of onset components . . . . 39

4.7.5 Combining the results . . . . 39

4.7.6 Conclusion . . . . 39

4.8 Onset detection by wavelet analysis . . . . 40

4.8.1 Wavelet analysis . . . . 40

4.8.2 Transformations of the modulus plane . . . . 41

4.8.3 Highlighting onsets . . . . 42

4.8.4 Conclusion . . . . 43

5 The analysis program 44

5.1 System overview . . . . 44

5.1.1 System flow . . . . 44

5.1.2 Class diagram . . . . 45

5.2 Key classes . . . . 46

5.2.1 DrumSequence . . . . 46

5.2.2 Spectrogram . . . . 46

5.2.3 Sample . . . . 46

5.2.4 DrumToc . . . . 46

5.2.5 DrumData . . . . 47

5.2.6 DSPTools . . . . 47

5.2.7 Error . . . . 47

5.2.8 Exception . . . . 47

5.2.9 GUIMainWindow . . . . 47

5.3 Using the program . . . . 47

5.3.1 Loading sample . . . . 48


5.3.2 Setting preferences . . . . 49

5.3.3 Calculating a spectrogram . . . . 49

5.3.4 Viewing frequency bands . . . . 50

5.3.5 Calculating onsets . . . . 50

5.3.6 The result output . . . . 51

5.4 The programming language . . . . 51

5.4.1 C++ . . . . 51

5.4.2 Java . . . . 52

5.5 Libraries . . . . 52

5.5.1 STL - template library . . . . 52

5.5.2 FFTW - FFT library . . . . 53

5.5.3 expat - XML parser toolkit . . . . 54

5.5.4 Numerical Recipes in C - math functions . . . . 55

5.5.5 Qt - GUI toolkit . . . . 56

5.5.6 QWT - graph visualization package . . . . 56

5.6 Tools . . . . 57

5.6.1 Doxygen - documentation system . . . . 57

5.6.2 CVS - Concurrent Version control System . . . . 57

5.6.3 ElectricFence - memory debugging . . . . 58

5.6.4 Qt tools . . . . 58

5.7 Experiences . . . . 60

5.7.1 Soundfile library . . . . 60

5.7.2 DSP library . . . . 60

5.7.3 Scientific visualization library . . . . 61

5.7.4 Garbage collector . . . . 61

6 Results 63

6.1 Onset extraction . . . . 63

6.1.1 Time-domain . . . . 63

6.1.2 Time-frequency domain . . . . 63

6.1.3 Final onset extraction with detailed results . . . . 69

6.2 Drumsound separation . . . . 80

6.2.1 Blind-Source Separation . . . . 80

7 Conclusion 83

7.1 Future work . . . . 84

A Data format for testcases 85

A.1 XML DTD . . . . 85

A.1.1 drumtoc element . . . . 85

A.1.2 source element . . . . 86

A.1.3 drum element . . . . 86

A.2 Example of testcase . . . . 87

B Glossary 89

Bibliography 97


List of Figures

2.1 Waveform of a drumsequence. . . . 8

2.2 Power spectrum of a drumsequence. . . . 9

2.3 Calculation of the power-density spectrum. . . . . 9

2.4 Sonogram of the same drumsequence. . . . 10

2.5 DFT positive frequency response due to an N-point input sequence containing k cycles of a real cosine: (a) amplitude response as a function of bin index m; (b) magnitude response as a function of frequency in Hz. Figure from [Lyo01]:75. . . . . 12

2.6 Shows a signal before and after being applied a window function. . . . . 13

2.7 Shows the Hanning window function. . . . 14

2.8 Shows the Hamming window function. . . . . 15

2.9 Shows the Welch window function. . . . . 15

2.10 Shows the difference from using CWT and DWT. Image from www.wavelet.org. . . . . 17

3.1 Overlapping in time. FFT binsize 256, linear energy plot, Hanning window. . . . 19

3.2 Overlapping in frequency. FFT binsize 256, linear energy plot, Hanning window. . . . 19

3.3 Overlapping in both time and frequency. FFT binsize 256, linear energy plot, Hanning window. . . . 20

4.1 RMS and MAX analysis. RMS is blue, MAX is the dotted red. . . . . 26

4.2 Pseudo implementation of the Frequency-band decibel-threshold algorithm. . . . 26

4.3 Shows the results from the Frequency-band decibel-threshold analysis. The yellow lines signify the estimated onsets of the drums in the drumsequence. . . . . 27

4.4 Shows the results from the Mean decibel-threshold analysis. The added red lines show the detected frequency boundaries of the drumsounds detected. . . . . 28

4.5 Pseudo implementation of the Mean decibel-threshold algorithm. . . . 28

4.6 Waveform of hihat-sample with zero-padding. . . . . 31

4.7 Auto-correlation of hihat-sample. . . . . 32

4.8 Cross-correlation between a hihat and a bassdrum. . . . . 32

4.9 The figure shows an overview of the BTS system. Image from [GM95]. . . . 34

4.10 Explains extracting onset components. Image from [GM95]. . . . 35

4.11 Shows detection of BD and SD. Image from [GM95]. . . . . 36

4.12 System overview. Image from [Kla99]. . . . . 37

4.13 Processing at each frequency band. Image from [Kla99]. . . . 38

4.14 Onset of a piano sound. First order absolute (dashed) and relative (solid) difference functions of the amplitude envelopes of six different frequency bands. Image from [Kla99]. . . . 39

4.15 Modulus values from clarinet solo. Image from [TF95]. . . . . 41

4.16 Detected onsets on modulus plane of clarinet piece. Image from [TF95]. . . . . 42

4.17 Detected onsets of footsteps with background music. Image from [TF95]. . . . 43

5.1 Shows the system flow of the analysis program. The input is an audiofile, the end result is text with the estimated drum onsets (the start of the drums). . . . 44

5.2 The figure shows the inheritance and collaboration of the main (digital signal processing) classes of the analysis program. . . . . 45


5.3 The figure shows the inheritance and collaboration of the complete program. The “DSP” classes have all been put in one package. . . . . 46

5.4 Main window, also shows the current settings for FFT and smoothing. . . . 48

5.5 File->Load, loads a .wav file into the program. . . . 48

5.6 File->FFT Preferences, sets the preferences for FFT calculation and the optional smoothing of the frequency bands. . . . . 49

5.7 Display->Spectrogram, calculates a spectrogram. . . . . 49

5.8 Display->Waveform, displays a single frequency band in the spectrogram. . . . . 50

5.9 Analyse->Find Drums, extracted onset components. . . . 50

5.10 The result output. . . . 51

5.11 The figure shows how Iterators glue together Containers and Algorithms . . . . 53

5.12 1D Real Transforms, Powers of Two UltraSPARC I 167MHz, SunOS 5.6 Image from www.fftw.org. . . . . 54

5.13 Comparison of six XML parsers processing each test file. Image from [Coo99]. . . . 55

5.14 The project’s “meta” makefile. . . . . 59

6.1 Sonogram of a drumsequence. . . . 64

6.2 Results from the analysis. The yellow lines are detected drum onsets. The circles are drum onsets that went undetected by the algorithm. . . . 65

6.3 Shows the fluctuations of the powerspectrum in a sonogram. The figure shows band 3, around 260 Hz. . . . 66

6.4 The blue graph is the smoothed one using Holt’s method, the green is the original unsmoothed one. . . . 67

6.5 The blue graph is the smoothed one using Savitsky-Golay smoothing filters, the green is the original unsmoothed one. . . . 68

6.6 The yellow lines are the detected onsets, the red lines are the estimated frequency-limits. . . . 69

6.7 Shows the two original drums. . . . 80

6.8 Shows the different mixes of the two drums. . . . . 80

6.9 Shows the unmixed drums. . . . 81

6.10 Shows a sonogram of the two original drums. FFT binsize 64, Hanning window, logarithmic power plot, 80dB resolution. . . . 81

6.11 Shows a sonogram of the different mixes of the two drums. FFT binsize 64, Hanning window, logarithmic power plot, 80dB resolution. . . . . 81

6.12 Shows a sonogram of the two unmixed drums. FFT binsize 64, Hanning window, logarithmic power plot, 80dB resolution. . . . 82

A.1 DTD for the testcases (drumtoc.dtd). . . . . 85

A.2 Example of a drumtoc XML file (hovedfag03_03.dtc). . . . . 87

A.3 Example of drums starting at the same time. . . . . 87


List of Tables

1.1 Overview of traditional modern drums. . . . 6

2.1 Sidelobe, fall off and other descriptive values for window functions used in the thesis. All the values are from [Bor98] except for Welch window function. . . . . 16

2.2 Recommended usage and quality of window function where signal is arbitrary non-periodic. The table taken from [Tecxx]. . . . . 16

3.1 Table of behaviors. . . . 24

6.1 Onset extraction where no windowing function was used. Average correctness: 18.1%. . . . . 70

6.2 Onset extraction using Hamming window. Average correctness: 44.7%. . . . 71

6.3 Onset extraction using Hanning window. Average correctness: 71.1%. . . . 71

6.4 Onset extraction using 50% overlapping in the STFT. Average correctness: 46.0%. . . . . 72

6.5 Onset extraction using no overlapping. . . . . 72

6.6 Onset extraction with no smoothing functions. Average correctness: 73.3%. . . . 73

6.7 Onset extraction using Savitzky-Golay smoothing. Average correctness: 9.5%. . . . . 73

6.8 Onset extraction Savitzky-Golay smoothing and overlapping. Average correctness: 23.4%. . . 73

6.9 Onset extraction where binsize is 128. Average correctness: 50.4%. . . . . 74

6.10 Onset extraction where binsize is 256. Average correctness: 71.1%. . . . . 74

6.11 Onset extraction where binsize is 512. Average correctness: 29.8%. . . . . 75

6.12 Onset extraction including multiple onsets and a 5ms correctness threshold. Average correctness: 60.2%. . . . . 76

6.13 Onset extraction ignoring multiple onsets and with a 5ms correctness threshold. Average correctness: 70.2%. . . . . 77

6.14 Onset extraction including multiple onsets and with a correctness threshold of 10ms. Average correctness: 66.5%. . . . . 78

6.15 Onset extraction ignoring multiple onsets and with a correctness threshold of 10ms. Average correctness: 77.5%. . . . . 79


Introduction

Goal

The goal of this thesis is to study the possibility of developing a system that enables the user to compose a new drumsequence by using an original drumloop as a template to work from. The following functions must be present:

1. loading/saving samples (drumloops)

2. extracting the onset of each drumsound

3. extracting the frequency boundaries a drumsound resides in

4. separating the drumsounds from each other

5. finding replacement drumsounds from a database of samples if needed

6. rebuilding a new drumloop as close as possible to the original one

7. a basic editor for composing new beats

The purpose of such a system is to change an original drumloop in a way not previously possible.

You can change the speed1 of the drumloop without changing its pitch2, in other words the BPM3. You could mute selected drums in the loop, or change their position, or even swap a drum altogether.

The system is a composition tool where you can use an existing drumloop, separate its elements (the individual drumsounds, e.g. bassdrum/hihat/snare) and re-compose the drumloop until a desired result is achieved. The system can also be used to study at leisure the composition of a drumsequence from for example an educational perspective.

Problem

The areas of research that need the closest attention in order to make this system work can be defined by these three main problems:

1. extracting drumsound onsets and defining the frequency boundaries for a drumsound

1This effect is also called time-stretching or pitch-shifting. There exist good algorithms for doing this, but the algorithms work on the drumloop as a sample (which it is) and not as a composition of different drumsounds.

2The pitch is the key of a sound. Even though drumsounds are not normally stationary signals, they often have a perceived pitch.

3BPM - beats per minute.


2. separating the drumsounds

3. finding matches of similar sounds in a database.

The first task, finding the onset and frequency boundaries of a drumsound, was at first taken lightly. We thought that a relatively simple algorithm working in the frequency domain would be sufficient, and we also originally combined the onset/frequency problem with the separation problem. As this thesis will show, the first task was not as easily solved as first anticipated. In addition, many of the methods discussed in the thesis show that a clean separation of the problems as outlined above is not always the best approach towards a solution.


Chapter 1

Psychoacoustics and drum basics

1.1 Drum types

The standard classification of instruments following the Sachs-Hornbostel scheme (mentioned in the Ph.D. thesis by W. Andrew Schloss [Sch85]) divides instruments into four main categories:

1. Idiophones where the vibrating material is the same object that is played (free of any applied tension), e.g. woodblocks, gongs, etc.

2. Membranophones where the vibrating material is a stretched membrane, e.g. drums

3. Chordophones where the vibrating material is one or more stretched strings e.g. lutes, zithers, etc.

4. Aerophones where the vibrating material is a column of air, e.g. flutes, oboes, etc.

We will be looking at sounds produced by only two of the main categories, namely idiophones and membranophones. Mallet instruments (marimba, xylophone, vibraphone, glockenspiel), which might be considered drums, are closer to the piano in character and, although idiophones, will not be included here.

1.2 Frequency assumptions

The assumption that would theoretically make this composing system possible is that one can find some key attributes that define a drumsound and make it unique.

Initially we were hoping that the frequency distribution would be sparser for the different drumsounds, i.e. bassdrums occurring just in the lower parts of the frequency spectrum, hihats just in the upper parts. This, however, turned out not to be entirely true, as a drumsound usually spreads over a relatively large frequency area, including of course the main area in which it resides. As Schloss says, “The fact is that drums differ enormously in timbre and pitch clarity. Minute differences in shell size and shape, and the thickness and type of membrane, contribute a great deal to the unique sound of different drums” [Sch85].

When describing a particular recording of Brazilian percussive music, Schloss, in his article on the automatic transcription of percussive music, notes “... the extremely diverse percussion instruments cover an enormous bandwidth, from below 40 Hz to the limit of hearing1. Each instrument has a wide spectrum, overlapping the others both in spectrum and in rhythmic patterns.” [Sch85].

One reason for our assumption of a stricter frequency distribution may be the perceived pitch of certain drums. For example: “The acoustics of the timpani have been described briefly by Thomas Rossing, including the reasons these drums elicit a reasonable clear pitch, when in fact they should be quite inharmonic [Ros82]” [Sch85].

The frequency distribution is still an attribute that can be used for describing a drumsound.

1.3 Attack

Another attribute that is normally present in a drumsound is a fast attack. Attack is the time from the start of the sound to the time of peak intensity. Drumsounds are normally generated by hitting surfaces, and the sound produced usually reaches its peak intensity in a short period of time. Sounds that have been extensively changed by signal-processing tools (e.g. reversed) may lose this attribute and can be harder to detect.

1.3.1 Perceptual Attack Time

Still there is a problem in musical timing context that should be mentioned - the problem of Perceptual Attack Time (PAT). As Schloss writes “the first moment of disturbance of air pressure is not the same instant as the first percept of the sound, which in turn may not necessarily coincide with the time the sound is perceived as a rhythmic event. There inevitably will be some delay before the sound is registered as a new event, after it is physically in evidence.” [Sch85]:23.

We will not delve any deeper into this problem in this thesis as it actually can be largely ignored.

“It turns out that the actual ’delay’ caused by PAT in the case of drumsounds is quite small, because the slope is typically rather steep.” [Sch85]:24. The test data used for our experiments with different onset algorithms is subjectively crafted using our own perception of the drumsamples. “In trying to define limits on temporal discrimination, researchers have not always agreed on their results; one fact stands out, however - the ’ear’ is the most accurate sensory system in the domain of duration analysis.” [Sch85]:20. Hence, if the system corresponds with the test data, it will also correlate with our perception of the drumsamples and their onsets.

1.4 Decay

Also, drumsounds normally have a "natural" decay time, unless reverb has been added, the instrument is a special one, or the sound has been altered in a sample-editor. By decay we mean the time it takes from peak intensity until the drumsound disappears, not the decay we know from ADSR envelopes in synthesizers or samplers. It is not, however, as easy to give a general rule for the rate of decay of drumsounds as it is for attack. Therefore, making any rules based on this attribute would make the system less general.

1about 22 kHz when we are young


1.5 Specific drumsounds

In this thesis some specific drumsounds will be mentioned, and in order to make it possible for the reader to know what kind of drums they are, here is a brief description with images.

1.6 Limits of temporal discrimination

When studying and trying to mimic the human auditory system it is important to understand the limits of our perception of sound and temporal changes.

1.6.1 Separating close events

First of all, since this thesis focuses on rhythm and drum onsets in particular, it is important to find out what the limits of discriminability are in terms of time. What is the minimum duration between two events such that they can still be perceived by our ear as two distinct events? This is interesting because we want to know what ’resolution’ our system for onset detection should operate at.

In a classical study by Ira Hirsch [Hir59] it was found that “it was possible to separate perceptually two brief sounds with as little as 2 msec. between them; but in order to determine the order of the stimulus pair, about 15-20 msec. was needed” [Sch85]:22.

In this thesis 5 ms will be used as the lower limit for error in the system, for as Schloss claims: “In the normal musical range, it is likely that to be within 5 msec. in determining attack times is adequate to capture essential (intentional) timing information” [Sch85]:22.

1.6.2 Weber law

Weber’s law states that “the perceptual discriminability of a subject with respect to a physical attribute is proportional to its magnitude, that is δx/x = k, where x is the attribute being measured, and δx is the smallest perceptual change that can be detected. k is called the Weber ratio, a dimensionless quantity.” [Sch85]:20.

In other words, if you examine something of a certain magnitude, your discriminability will vary proportionally with the magnitude.
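As a small worked example (the Weber ratio k = 0.05 below is an arbitrary illustrative value, not a figure from [Sch85]):

```cpp
#include <cassert>
#include <cmath>

// Weber's law: the smallest detectable change dx = k * x, so discriminability
// scales with the magnitude x of the attribute (here: a duration in ms).
double smallestDetectableChange(double x, double k) {
    return k * x;
}
```

With k = 0.05 a listener would need a 10 ms change to notice a deviation in a 200 ms interval, but a 100 ms change in a 2000 ms interval.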

It would be interesting to see if this also applies to our hearing. “It turns out that, for very long or very short durations [pauses between rhythmic events], Weber’s law fails, but in the area from 200 milliseconds to 2 seconds, a modified version of Weber’s law seems to hold, according to Getty [Get75].” [Sch85]:20.

To conclude, the human auditory system works best in the ’musical range’, with intervals of 100 - 2000 ms, and people normally have the most accurate sense of temporal acuity in the range of 500 - 800 ms [Sch85]:21-22.


Name Description Image

Snare drum sharp short sound

Bass drum deep bass sound

Hihat thin light sound

Cymbal broad light sound

Tomtom different “pitched” sounds

Table 1.1: Overview of traditional modern drums.


Chapter 2

Digital sound basics

This chapter is a brief introduction to digital signal processing. We give a thorough explanation of the different DSP terms used in this thesis, and we explain and discuss the transformations and their advantages and disadvantages.

2.1 LTI systems

In this thesis we only focus on discrete-time systems, specifically linear time-invariant (LTI) systems.

"A linear system has the property that the output signal due to a linear combination of two or more input signals can be obtained by forming the same linear combination of the individual outputs. That is, if y1(n) and y2(n) are the outputs due to the inputs x1(n) and x2(n), then the output due to the linear combination of inputs

x(n) = a1 x1(n) + a2 x2(n)    (2.1)

is given by the linear combination of outputs

y(n) = a1 y1(n) + a2 y2(n)    (2.2)"

[Orf96]:103.

"A time-invariant system is a system that remains unchanged over time. This implies that if an input is applied to the system today causing a certain output to be produced, then the same output will also be produced tomorrow if the same input is applied." [Orf96]:104.
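The linearity property can be verified numerically for a concrete system. The following sketch is a hypothetical illustration (not code from the analysis program): it checks that a three-point moving-average filter satisfies Equations 2.1 and 2.2 for arbitrary inputs and coefficients.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// A simple LTI system: three-point moving average y(n) = (x(n) + x(n-1) + x(n-2)) / 3.
std::vector<double> movingAverage(const std::vector<double>& x) {
    std::vector<double> y(x.size(), 0.0);
    for (std::size_t n = 0; n < x.size(); ++n) {
        double sum = x[n];
        if (n >= 1) sum += x[n - 1];
        if (n >= 2) sum += x[n - 2];
        y[n] = sum / 3.0;
    }
    return y;
}

// Checks Equations 2.1/2.2: the response to a1*x1 + a2*x2 must equal a1*y1 + a2*y2.
bool isLinearFor(const std::vector<double>& x1, const std::vector<double>& x2,
                 double a1, double a2) {
    std::vector<double> combined(x1.size());
    for (std::size_t n = 0; n < x1.size(); ++n)
        combined[n] = a1 * x1[n] + a2 * x2[n];
    std::vector<double> lhs = movingAverage(combined);
    std::vector<double> y1 = movingAverage(x1);
    std::vector<double> y2 = movingAverage(x2);
    for (std::size_t n = 0; n < lhs.size(); ++n)
        if (std::fabs(lhs[n] - (a1 * y1[n] + a2 * y2[n])) > 1e-12) return false;
    return true;
}
```

Any FIR filter of this kind passes the check; a system with a nonlinear step, e.g. y(n) = x(n)², would fail it.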

2.2 Waveform

A sound presented in the digital domain is called a sample. With this we mean an array of numbers representing the form of the sound. A samplepoint is one such number in a sample array. The numbers in the array describe the amplitude of the sound. Figure 2.1 shows how this looks graphically, and it is called a waveform.

By amplitude we mean a positive or negative value describing the position of the samplepoint relative to zero. Notice that magnitude is very similar, but by magnitude we mean a positive value describing the amplitude of a signal. In other words, magnitude is the amplitude of the wave irrespective of the phase.


Figure 2.1: Waveform of a drumsequence.

When we look at a waveform we are studying the sample in the time-domain. Time is on the horizontal axis and the amplitude is along the vertical axis.

2.3 FFT and the power spectrum

“In 19th century (1822 to be exact), the French mathematician J. Fourier, showed that any periodic function can be expressed as an infinite sum of periodic complex exponential functions. Many years after he had discovered this remarkable property of (periodic) functions, his ideas were generalized to first non-periodic functions, and then periodic or non-periodic discrete time signals.” [Pol01].

The DFT’s (discrete Fourier transform) origin is the CFT (continuous Fourier transform), but since our focus is on discrete-time systems we only discuss the DFT. The DFT equation in exponential form is:

X(m) = Σ_{n=0}^{N−1} x(n) e^{−i2πnm/N}    (2.3)

where x(n) is a discrete sequence of time-domain sampled values (basically a sample) of the continuous variable x(t), e is the base of natural logarithms and i is √−1. Based on Euler’s relationship e^{−iθ} = cos(θ) − i·sin(θ), Equation 2.3 can be changed into this rectangular form:

X(m) = Σ_{n=0}^{N−1} x(n)[cos(2πnm/N) − i·sin(2πnm/N)]    (2.4)

X(m): the mth DFT output

m: the index of the DFT output in the frequency-domain

x(n): the sample

n: the time-domain index of the sample

i: √−1

N: the number of samplepoints in the sample and the number of frequency points in the DFT output [Lyo01]:49-51.
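Equation 2.4 translates almost directly into code. The following naive O(N²) implementation is an illustrative sketch only (the analysis program uses the FFTW library, discussed in Chapter 5, which computes the same result in O(N log N)):

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

// Naive DFT straight from Equation 2.4:
// X(m) = sum_n x(n) [cos(2*pi*n*m/N) - i*sin(2*pi*n*m/N)].
std::vector<std::complex<double>> dft(const std::vector<double>& x) {
    const std::size_t N = x.size();
    const double pi = 3.14159265358979323846;
    std::vector<std::complex<double>> X(N);
    for (std::size_t m = 0; m < N; ++m) {
        double re = 0.0, im = 0.0;
        for (std::size_t n = 0; n < N; ++n) {
            const double angle = 2.0 * pi * n * m / N;
            re += x[n] * std::cos(angle);   // real part of bin m
            im -= x[n] * std::sin(angle);   // imaginary part of bin m
        }
        X[m] = {re, im};
    }
    return X;
}
```

A real cosine with exactly k whole cycles in the analysis window shows up with magnitude N/2 in bin m = k (and in its mirror bin N − k), the ideal no-leakage case returned to in Section 2.5.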

The point of the Fourier transform is to determine the phase and amplitude of the frequencies in a signal. The result of a DFT is a sequence of complex numbers where the modulus describes the amplitude and the argument describes the phase. If we want to study a sample in the frequency-domain this can be done by performing the DFT (discrete Fourier transform), or its fast implementation FFT (fast Fourier transform), on the sample and by using the result from the transform calculate a power spectrum of the sample, see Figure 2.2.

Figure 2.2: Power spectrum of a drumsequence.

In Figure 2.2 the vertical-axis shows the power of the signal in dB and the horizontal-axis shows the frequency in Hz. As we can see the sample has a peak in the lower parts of the spectrum and gradually the power declines towards the upper frequencies. This presentation of the frequency parts of the drumsequence does not tell us anything about their relation in time. If the sample being analyzed was a frequency sweep the power spectrum would not tell us anything but the total amount of power the different frequency components in the sample had.

Ignoring the negative frequencies returned from a DFT, calculating a power density-spectrum with results from the discrete Fourier transform applied on a real (as opposed to complex) signal could be implemented as shown in Figure 2.3.

// assuming FFT by four1(buffer-1, N, 1); from Numerical Recipes: www.nr.com
for (int k = 0; k <= N/2; k++) {
    real = buffer[2*k];
    imag = buffer[2*k+1];
    ps.freq[k] = sampleRate*(double)k/(double)N;
    // note the parentheses around N*N: writing /(double)N*N would divide by N
    // and then multiply by N again
    ps.db[k] = 10. * log10( 4.*(real*real + imag*imag) / ((double)N*(double)N) );
}

Figure 2.3: Calculation of the power-density spectrum.


In Figure 2.3, ps.db[] is the vector of calculated dB values, N is the FFT binsize1 and buffer[] is the vector the power-spectrum calculation is performed on (the result returned from the FFT algorithm).

2.4 STFT and the sonogram

STFT (short-time Fourier transform) is based upon a series of segmented and overlapped FFTs applied along the sample. These segments are often referred to as windows. In the STFT, the individual FFTs from these multiple windows are rendered as a 2D plot where the color or intensity represents the power. This is known as a sonogram or spectrogram.

The purpose of using overlapping windows is to produce a time-frequency representation of the data. A high degree of overlapping of the windows can result in a more accurate time-frequency spectrum. Since the FFT binsize decides the frequency resolution, the STFT has a fixed frequency resolution, set mainly by the size of the windows2. The time resolution is also set by the size of the windows. By using small windows one gets good time resolution but poorer frequency resolution. Likewise, using larger windows gives better frequency resolution but poorer time resolution. Overlapping can help to produce better time resolution, but only to a certain degree. The window-size thus controls the tradeoff between frequency resolution and time resolution, and it will be constant everywhere in the time-frequency spectrum. The STFT is therefore classified as a fixed or single-resolution method for time-frequency analysis.

Figure 2.4 shows the result of the STFT applied to a drumsequence. The color represents the power of the signal, the horizontal axis represents time and the vertical axis the frequency. Thus we are looking at a time-frequency representation of a sample. With this representation we can decide, among other things, whether a signal is stationary or not. A stationary signal is a signal whose average statistical properties over a time interval of interest are constant.

Figure 2.4: Sonogram of the same drumsequence.

1If the binsize used is say 512 then one gets a resolution of 256, or N/2, frequency-bands.

2Zero-padding the windows can change the resolution somewhat.


2.4.1 STFT calculation synopsis

A sonogram is thus calculated as follows: block the signal into smaller parts, window these, and perform the DFT/FFT on them. The number of bins decides the frequency resolution of the sonogram, and the amount of overlapping decides the time resolution. The blocking of the signal can be overlapped, so that the second block spans parts of the first block, and the third block spans parts of the second. A usual amount of overlapping is 50%, which means that the blocking advances in steps half the size of the bins.
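The blocking step above can be sketched as follows. This is an illustrative helper, not code from the analysis program; it assumes a Hanning window and 50% overlap, and leaves out the per-frame DFT/FFT step:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Blocks a signal into windowed, 50%-overlapped frames of `binsize` samplepoints,
// ready to be handed to a DFT/FFT one frame at a time (the step left out here).
std::vector<std::vector<double>> blockSignal(const std::vector<double>& x,
                                             std::size_t binsize) {
    const double pi = 3.14159265358979323846;
    const std::size_t hop = binsize / 2;  // 50% overlap: advance half a bin per frame
    std::vector<std::vector<double>> frames;
    for (std::size_t start = 0; start + binsize <= x.size(); start += hop) {
        std::vector<double> frame(binsize);
        for (std::size_t n = 0; n < binsize; ++n) {
            double hanning = 0.5 - 0.5 * std::cos(2.0 * pi * n / (binsize - 1));
            frame[n] = x[start + n] * hanning;
        }
        frames.push_back(frame);
    }
    return frames;
}
```

With a samplerate of 44100 Hz and binsize 256, each frequency band is 44100/256 ≈ 172 Hz wide and a new spectrum is produced every 128 samples, i.e. roughly every 2.9 ms.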

2.5 Real-world signals and windowing

The DFT of sampled real-world signals gives frequency-domain results that can be misleading. “A characteristic, known as leakage, causes our DFT results to be only an approximation of the true spectra of the original input signal prior to digital sampling.” [Lyo01]:71. There are several reasons for this, one being that the input data is not periodic, an actual requirement for the Fourier transform to return the correct frequency components in the signal. Another reason is the fact that the DFT will only produce correct results when the input data contains energy precisely at integral multiples of the fundamental frequency fs/N, where fs is the samplerate and N is the DFT binsize, or size of the window if you will. This means that any sample containing intermediate frequencies, for example 1.5·fs/N, will produce incorrect results. Actually, “this input signal will show up to some degree in all of the N output analysis frequencies of our DFT!” [Lyo01]:73. This characteristic is unavoidable when we perform the DFT on real-world finite-length samples.

2.5.1 Amplitude response

If we look at the amplitude response of an N-sized DFT for one specific bin, X(m), given a real cosine input having k cycles, it can be approximated by the sinc function:

X(m) ≈ (N/2) · sin[π(k−m)] / [π(k−m)]   (2.5)

In Equation 2.5, m is the bin index. We can use Equation 2.5 to determine how much leakage occurs when using the DFT.
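The leakage that Equation 2.5 predicts can also be checked numerically. This small numpy sketch (the DFT size, the 1% threshold and the test frequencies are arbitrary choices of ours) compares a cosine with an integral number of cycles against one with 8.5 cycles:

```python
import numpy as np

N = 64
n = np.arange(N)
leaked_bins = {}

for k in (8.0, 8.5):                       # integral vs. non-integral cycles
    x = np.cos(2 * np.pi * k * n / N)      # k cycles of a real cosine
    X = np.abs(np.fft.fft(x))
    # count bins (away from k itself) that pick up noticeable energy,
    # i.e. more than 1% of the expected peak magnitude N/2
    leaked_bins[k] = sum(1 for m in range(N // 2)
                         if abs(m - k) > 1 and X[m] > 0.01 * N / 2)

print(leaked_bins)
```

With k = 8.0 the energy sits exactly in the m = k bin and no other bin registers anything; with k = 8.5 the input "shows up to some degree" in a large number of the analysis bins, just as the text describes.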


Figure 2.5: DFT positive frequency response due to an N-point input sequence containing k cycles of a real cosine: (a) amplitude response as a function of bin index m; (b) magnitude response as a function of frequency in Hz. Figure from [Lyo01]:75.

Study Figure 2.5 and notice the main lobe and sidelobes of the curve. Since the DFT is only a sampled version of the continuous spectral curve, the DFT will only give correct analysis "when the input sequence has exactly an integral k number of cycles (centered exactly in the m=k bin)" [Lyo01]:74. When this is the case no leakage occurs, i.e. the sidelobes are zero.

Another characteristic of the DFT is that leakage also wraps around: "The DFT exhibits leakage wraparound about the m=0 and m=N/2 bins." Finally, an effect known as scalloping also contributes to the non-linear output of the DFT. Consider Figure 2.5 and picture the amplitude responses for all the bins superimposed on the same graph at the same time. The rippled curve, almost like a picket fence, illustrates the loss the DFT suffers because some (most) frequencies fall between the bin frequency centers.


2.5.2 Windowing

DFT leakage is troublesome because it can corrupt low-level signals in neighbouring bins: they will drown in the leakage and not be registered as 'interesting' frequency components.

Windowing is the process of altering the input data in a sequence, a window. In other words, windowing is applying a window function to a sequence of input data. The purpose of windowing is to reduce the sidelobes we saw in Figure 2.5. If these sidelobes are reduced, the DFT leakage is also reduced.

The benefits of windowing the DFT input are:

• reduce leakage

• reduce scalloping loss

The disadvantages of windowing are:

• broader main lobe

• main lobe peak value is reduced (frequency resolution reduced)

However as Lyons puts it “the important benefits of leakage reduction usually outweigh the loss in DFT frequency resolution.” [Lyo01]:83.

Figure 2.6 shows a signal before and after a window function is applied. The signal is not periodic, and this produces glitches, which again result in frequency leakage in the spectrum.

The glitches can be reduced by shaping the signal so that the ends match smoothly. By multiplying the signal with the window function we force the ends of the signal to zero, making them fit together.

Starting and ending with the same value is not enough to make the signal repeat smoothly; the slope also has to be the same. The easiest way of achieving this is to make the slope of the signal zero at the ends. The window function has the property that its value and all its derivatives are zero at the ends.
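A quick numpy illustration of this point (the test signal, 3.7 cycles of a sine over 512 samples, is an arbitrary non-periodic choice of ours): multiplying by a raised-cosine window forces both endpoints of the segment to zero, so the repeated signal fits together without a glitch.

```python
import numpy as np

# a non-periodic segment: 3.7 cycles of a sine across 512 samples
N = 512
n = np.arange(N)
x = np.sin(2 * np.pi * 3.7 * n / N)

w = np.hanning(N)          # raised-cosine window, zero at both ends
xw = x * w

print(x[0], x[-1])         # the ends of the raw segment do not match up
print(xw[0], xw[-1])       # the windowed ends are forced to zero
```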


Figure 2.6: Shows a signal before and after a window function is applied.

2.5.2.1 Rectangular window

A rectangular window is a window with no attenuation, i.e. no distortion of the segment it is applied to. It is the window function all other window functions are compared to. The main lobe of the rectangular window is the narrowest, but the first sidelobe is only -13 dB below the main peak lobe. The rectangular window is also known as the uniform or boxcar window.

2.5.2.2 Hanning window

The Hanning window, Equation 2.6, is an excellent general-purpose window [Tecxx]. It has a reduced first sidelobe, -32 dB below the main peak lobe, and its roll-off (or fall-off), the amount of reduction in dB per octave, is -18 dB [Lyo01], [Bor98]. The Hanning window is also known as the raised cosine, Hann or von Hann window and can be seen in Figure 2.7.

w(n) = 0.5 − 0.5 cos(2πn/N), 0 ≤ n ≤ N−1   (2.6)


Figure 2.7: Shows the Hanning window function.

2.5.2.3 Hamming window

The Hamming window, Equation 2.7, has an even lower first sidelobe than the Hanning window, at -43 dB. The fall-off, on the other hand, is only -6 dB, the same as for a rectangular window. "This means that leakage three or four bins away from the center bin is lower for the Hamming window than for the Hanning, and leakage a half dozen or so bins away from the center bin is lower for the Hanning window than for the Hamming window" [Lyo01]:84. Figure 2.8 shows the Hamming window; notice how the endpoints do not quite reach zero.

w(n) = 0.54 − 0.46 cos(2πn/N), 0 ≤ n ≤ N−1   (2.7)

(24)


Figure 2.8: Shows the Hamming window function.

2.5.2.4 Welch window

The Welch window, Equation 2.8, is also known as the parabolic window and can be seen in Figure 2.9.

w(n) = 1 − ((n − N/2) / (N/2))², 0 ≤ n ≤ N−1   (2.8)


Figure 2.9: Shows the Welch window function.

2.5.3 Window function tables of merits

Table 2.1 shows a comparison between the different window functions used in the analysis program in this thesis.

• Sidelobe: the attenuation to the top of the highest sidelobe

• Fall off: the rate, in dB per octave, at which the sidelobes fall off

• Coherent Power Gain: the normalised DC gain

• Worst Case Processing Loss: the ratio of input signal-to-noise to output signal-to-noise, including scalloping loss for the worst case
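The coherent power gain column of Table 2.1 can be reproduced directly, since the normalised DC gain is simply sum(w)/N. A small numpy check (window length chosen arbitrarily):

```python
import numpy as np

N = 4096
n = np.arange(N)
gains = {}
for name, w in [("Rectangular", np.ones(N)),
                ("Hanning", 0.5 - 0.5 * np.cos(2 * np.pi * n / N)),
                ("Hamming", 0.54 - 0.46 * np.cos(2 * np.pi * n / N))]:
    # coherent power gain = the normalised DC gain, sum(w)/N
    gains[name] = np.sum(w) / N
    print(name, round(gains[name], 2))
```

This yields 1.00, 0.50 and 0.54 respectively, matching the values quoted from [Bor98] in Table 2.1.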


Window function   Sidelobe (dB)   Fall off (dB/octave)   Coherent power gain   Worst case processing loss (dB)
Rectangular       -13             -6                     1.00                  3.92
Hanning           -32             -18                    0.50                  3.18
Hamming           -43             -6                     0.54                  3.10
Welch             -26             -                      -                     -

Table 2.1: Sidelobe, fall off and other descriptive values for window functions used in the thesis. All the values are from [Bor98] except for the Welch window function.

Window function   Best for these signal types         Frequency resolution   Spectral leakage   Amplitude accuracy
Rectangular       Transients & Synchronous Sampling   Best                   Poor               Poor
Hanning           Random                              Good                   Good               Fair
Hamming           Random                              Good                   Fair               Fair
Welch             Random                              Good                   Good               Fair

Table 2.2: Recommended usage and quality of window functions for arbitrary non-periodic signals. Table taken from [Tecxx].

Table 2.2 shows the recommended usage of the same window functions as in Table 2.1. As a note, the best window function for preventing spectral leakage is the Blackman window, and the best for amplitude accuracy is a Flat Top window [Tecxx]; these were not included in the table since they are not used in the analysis program.

2.6 Wavelet

The problem with the STFT is finding the correct window function and FFT binsize to use: the binsize controls both the time and frequency resolution for the whole analysis.

"The problem with STFT is the fact whose roots go back to what is known as the Heisenberg Uncertainty Principle. This principle originally applied to the momentum and location of moving particles, can be applied to time-frequency information of a signal. Simply, this principle states that one cannot know the exact time-frequency representation of a signal, i.e., one cannot know what spectral components exist at what instances of times. What one can know are the time intervals in which certain band of frequencies exist, which is a resolution problem." [Pol01].

Choosing the correct binsize and window function is application specific; there are no magic solutions. And as Polikar notes, finding a good binsize and window function "could be more difficult than finding a good stock to invest in" [Pol01]. The wavelet transform solves this dilemma of resolution to some extent.

The biggest difference between the wavelet transform and the sliding FFT is that the resolution of the time and frequency axes changes for wavelets, whereas with the sliding FFT it stays the same. Generally speaking we can say that, using the wavelet transform, the lower frequencies of a sample have poor time resolution but accurate frequency resolution. High frequencies have accurate time resolution but poor frequency resolution. The wavelet transform is therefore called a multiresolution analysis.

“Multiresolution analysis is designed to give good time resolution and poor frequency resolution at high frequencies and good frequency resolution and poor time resolution at low frequencies. This approach makes sense especially when the signal at hand has high frequency components for short durations and low frequency components for long durations. Fortunately, the signals that are encoun- tered in practical applications are often of this type.” [Pol01].

2.6.1 DWT - discrete wavelet transform

The DWT (discrete wavelet transform) has its origin in the CWT (continuous wavelet transform). In the CWT, Equation 2.9, the transformed signal is a function of two variables, τ and s, the translation and scale parameters. ψ is the wavelet basis function, or mother wavelet, and ∗ denotes complex conjugation.

CWT_x^ψ(τ, s) = Ψ_x^ψ(τ, s) = (1/√|s|) ∫ x(t) ψ*((t − τ)/s) dt   (2.9)

“The wavelet analysis is a measure of the similarity between the basis functions (wavelets) and the signal itself. Here the similarity is in the sense of similar frequency content. The calculated CWT coefficients refer to the closeness of the signal to the wavelet at the current scale.” [Pol01]. Figure 2.10 shows the results from a CWT and DWT analysis.

Figure 2.10: Shows the difference between using CWT and DWT. Image from www.wavelet.org.
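Equation 2.9 can be approximated numerically for a single (τ, s) pair. In the sketch below the Mexican-hat (Ricker) mother wavelet is our own choice for illustration, the thesis does not prescribe one; for a real-valued wavelet the complex conjugation in Equation 2.9 is a no-op.

```python
import numpy as np

def mexican_hat(t):
    """Mexican-hat (Ricker) mother wavelet, a common real-valued choice."""
    return (1 - t**2) * np.exp(-t**2 / 2)

def cwt_coeff(x, t, tau, s):
    """Riemann-sum approximation of Equation 2.9 for one (tau, s) pair."""
    psi = mexican_hat((t - tau) / s)
    dt = t[1] - t[0]
    return np.sum(x * psi) * dt / np.sqrt(abs(s))

t = np.linspace(-5, 5, 2001)
x = mexican_hat(t)    # a signal that happens to equal the wavelet itself

# the coefficient measures similarity: it peaks where the translated,
# scaled wavelet matches the signal best, here at tau=0, s=1
best = cwt_coeff(x, t, 0.0, 1.0)
off = cwt_coeff(x, t, 2.0, 1.0)
print(best > off)   # True
```

This illustrates the quoted point: the calculated coefficient refers to the closeness of the signal to the wavelet at the current translation and scale.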


Chapter 3

Testing

To solve the problem of analyzing and separating a drumsequence, we need to experiment with different algorithms. In order to choose one algorithm or method over another, we need some kind of measure that tells us which is best.

One way of doing this is to run a number of different drumsequences (custom made, so that the correct separation is known beforehand) through the system, and compare the results from the system with the correct ones. This is a legitimate way of performing this kind of measure, but it would probably yield even better results if a set of questions were defined that determine the effectiveness of the different algorithms. By classifying the test data into different levels and measuring the correctness of each algorithm on the different data, we can achieve more finely grained test results. One type of leveling is to use the complexity of overlapping as a distribution indicator.

3.1 Overlapping drumsounds

A drumsequence consists of different drumsounds mixed together. When combined, these drumsounds can overlap each other in various ways:

1. overlapping in time
2. overlapping in frequency
3. overlapping in time and frequency.

3.1.1 Overlapping in time

Overlapping in time means that more than one drumsound is played simultaneously (e.g. a bassdrum and a hihat). Separating drumsounds that only overlap in time could then be a filtering problem: one would have to define the different frequency areas the drumsounds reside in, and then use the proper filters. Figure 3.1 gives an example of two drumsounds overlapping in time.


Figure 3.1: Overlapping in time. FFT binsize 256, linear energy plot, Hanning window.
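One crude way to realise such filtering is to zero the FFT bins outside the frequency band a drumsound occupies. In this numpy sketch the two tones standing in for a "bassdrum" and a "hihat", and the band limits, are invented for illustration; real filter design is subtler, as discussed under level two below.

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Crude brick-wall bandpass: zero every FFT bin outside [lo, hi] Hz."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(X, n=len(x))

fs = 44100
t = np.arange(fs // 10) / fs                               # 0.1 s of signal
mix = np.sin(2*np.pi*80*t) + 0.5*np.sin(2*np.pi*8000*t)    # "bassdrum" + "hihat"

low = bandpass_fft(mix, fs, 20, 200)   # keep only the bassdrum band
print(np.allclose(low, np.sin(2*np.pi*80*t), atol=1e-6))   # True
```

The recovery is exact here only because both test tones contain an integral number of cycles; for real drumsounds, leakage and overlapping bands make the filtering far less clean.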

3.1.2 Overlapping in frequency

Overlapping in frequency means that two drumsounds inhabit the same frequencies. These drumsounds are not played simultaneously but exist in the same drumsequence (e.g. the same drumsound played twice, with a pause between). To separate drumsounds that only overlap in frequency, one solution is to find the start and stop samplepoints of the drumsounds and simply cut them out of the original drumsequence. Figure 3.2 gives an example of two drums overlapping in frequency.

Figure 3.2: Overlapping in frequency. FFT binsize 256, linear energy plot, Hanning window.
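This separation strategy amounts to simple slicing once the samplepoints are known; a minimal sketch with invented samplepoints:

```python
import numpy as np

def cut(sequence, regions):
    """Separate non-time-overlapping drumsounds by slicing the sequence
    at known (start, stop) samplepoints."""
    return [sequence[start:stop] for start, stop in regions]

seq = np.zeros(44100)          # one second of "silence" at 44.1 kHz
seq[1000:3000] = 1.0           # a drumsound...
seq[20000:22000] = 1.0         # ...and the same sound again, later

drums = cut(seq, [(1000, 3000), (20000, 22000)])
print(len(drums), len(drums[0]))   # 2 2000
```

The hard part, of course, is finding those (start, stop) points automatically, which is exactly what the onset extraction discussed in Section 3.3 measures.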


3.1.3 Overlapping in both time and frequency

The last kind of overlapping is in both time and frequency. This is the most common form of overlapping in our problem domain (drumsequences). An example of this is two drumsounds played simultaneously with overlapping frequencies, like a snare-drum and a hihat; both of these drums are in the higher frequency area. A separation of this kind is not easily solved by either filtering or finding the start and stop samplepoints of the drumsounds involved. Figure 3.3 gives an example of two drumsounds overlapping in both time and frequency.

Figure 3.3: Overlapping in both time and frequency. FFT binsize 256, linear energy plot, Hanning window.

3.2 Levels

By using overlapping as a measure of complexity the following levels can be defined.

3.2.1 Level One - no overlapping in time

At this level there is a pause between each drumsound. This means that there might be overlapping frequencies, but this is not an issue since the different drums are in different time-areas of the drumsequence. Questions asked to determine the effectiveness of an algorithm at this level could be:

1. how many drumsounds are separated
2. how accurately is the onset of each drumsound detected
3. how accurately is the end of each drumsound detected
4. how does it perform on weak drumsounds.

Typical problem drumsequences could include weak hihats and reversed hihats1, etc.

1 To detect drumsounds of this type accurately is not expected of this system.


3.2.2 Level Two - no overlapping in frequency

On the second level we have no overlapping in frequency. This means two drumsounds do not share the same frequency-area of a drumsequence, but they could be played simultaneously (i.e. overlapping in time). It should be noted, however, that this scenario is quite rare, as drumsounds typically occupy a broad frequency-area.

Questions asked to determine the effectiveness of an algorithm at this level could be:

1. how many drumsounds are separated
2. how accurately is the onset of each drumsound detected
3. how accurately is the end of each drumsound detected
4. how accurately are the highest frequency-limits of a drum detected
5. how accurately are the lowest frequency-limits of a drum detected
6. how accurately are the drums separated.

One might think that finding the start and end points of a drumsound on level two would be the same task as on level one; this is not necessarily so. On level two we can have drumsounds overlapping in time, meaning two sounds are played simultaneously, and an accurate algorithm would distinguish them as separate drums with separate start times.

Since we have overlapping in time, meaning we have mixed signals, the detection of frequency limits is also measured. We look upon a drumsound here as a rectangle (viewed in a multirate-view) which has an upper and lower frequency limit, and an onset and end point. The detection of the limits might be a separate problem from the actual separation of the different drumsounds. Therefore the quality of the separation of the drumsounds is also questioned; this is in effect a test of the accuracy of the filters used in extracting the different drums, and it is no simple matter to make ideal digital filters.

The reason why the accuracy (or quality, if you like) of the separation is not tested on level one is that it is covered by the start/stop test: separation on level one is just to extract the data found from drum onset to end, without any filtering.

Typical problem drumsequences on level two could include drumsounds that reside in very near frequency areas, or drumsounds that have weaker frequency-parts that are still crucial to the overall "feel" of the drumsound. Also, drumsounds that have weaker frequency-parts in the middle of their total frequency-span might prove very difficult to separate correctly and should be included in the test set (they might be taken for two drums instead of one).

3.2.3 Level Three - weak overlapping in time and frequency

On the third level we have what we call weak overlapping in time and frequency. By this we mean that we can "block" the different drumsounds, i.e. draw unique polygons along the time and frequency-areas a drumsound occupies, and the polygons will not share the same areas. Here we look on the drumsounds as polygons rather than rectangles. Questions asked to determine the effectiveness of an algorithm at this level could be:

1. how many drumsounds are separated
2. how accurately are the higher frequency-limits of a drum detected
3. how accurately are the lower frequency-limits of a drum detected
4. how accurately are the drums separated.

Testing for the start and end points of a drumsound is not that interesting at this level. What is interesting is to measure the quality of the detection of the frequency-limits, and the separation of the drumsound. What makes this different from level two is that the detection and separation have to handle changing frequency-limits.

Typical problem drumsequences could be the same as used in level two, but with weak overlapping.

3.2.4 Level Four - true overlapping in time and frequency

On the fourth level we have true overlapping. By this we mean overlapping in both frequency and time, where it is not possible to "block" the drumsounds into unique polygons. An example of this could be two similar drums played at the same time. Questions asked to determine the effectiveness of an algorithm at this level could be:

1. how many drumsounds are separated
2. how accurately is the onset of each drumsound detected
3. how accurately is the end of each drumsound detected
4. how accurately are the higher frequency-limits of a drum detected
5. how accurately are the lower frequency-limits of a drum detected
6. how accurately are the drums separated.

At this level it becomes interesting again to look at the start and end points detected. Since we have true overlapping we can have two very similar sounds played almost at the same time. These should be detected as different drums with different start points. This is different from level two, because in level two we didn’t have overlapping in frequency (but possibly in time), and the overlapping in frequency adds new complexity to the problem.

Detection of the limits of a drumsound in the frequency area is also highly interesting to measure at this level. Since we have true overlapping, the drumsounds will be mangled together, and trying to define the silhouette that marks the boundaries of each drumsound will be difficult.

The separation of truly overlapping drumsounds will also be very interesting to measure at this level. A goal here is of course to include as little as possible of the other drumsounds occupying the same time and frequency-area.

Typical problem drumsequences will, as mentioned above, be similar or even identical drumsounds played at the same time. They could also include weak drumsounds mixed on top of strong ones (e.g. a weak hihat played together with a strong snare).

3.3 Measure of correctness for onset extraction

At the moment there is no standard for measuring the correctness of an onset extraction algorithm.

Different methods have been proposed and it is natural to use these and, if possible, extend and enhance them to better suit our needs.


In the article "Issues in Evaluating Beat Tracking Systems", Goto and Muraoka [GM97] state: "Because most studies have dealt with MIDI signals or used onset times as their input, the evaluation of audio-based beat tracking has not been discussed enough".

Rosenthal also mentions in his paper "Emulation of human rhythm perception" [Ros92a] that evaluating the ideas behind many of the previous systems is not a straightforward task, and that different researchers have worked on different aspects of the problem.

One such aspect is the measure of correctness for onset extraction from audio material.

3.3.1 What is a correct onset

A drum onset is defined as the time where the drumsound starts (or where the magnitude reaches a certain peak). A correctly extracted onset has a time equal to the original onset, within a certain threshold. For my tests this threshold has been subjectively set to +/- 5 ms unless noted otherwise in the test results.
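Such a matching can be sketched as follows; the function name and the greedy matching strategy are our own, assuming onset times in seconds and the +/- 5 ms threshold above:

```python
def classify_onsets(reference, detected, threshold=0.005):
    """Match detected onsets (in seconds) against reference onsets.
    A detection is correct if it lies within +/- threshold of a
    still-unmatched reference onset; remaining detections are 'extra',
    remaining reference onsets are 'undetected'."""
    unmatched = sorted(reference)
    extra = 0
    for d in sorted(detected):
        hit = next((r for r in unmatched if abs(d - r) <= threshold), None)
        if hit is None:
            extra += 1
        else:
            unmatched.remove(hit)
    return len(reference), len(unmatched), extra   # total, undetected, extra

ref = [0.0, 0.5, 1.0, 1.5]
det = [0.001, 0.499, 1.2]            # two good hits, one spurious
print(classify_onsets(ref, det))     # (4, 2, 1)
```

The three returned counts are exactly the quantities the correctness measures below operate on.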

3.3.2 Klapuri correctness

The measure of correctness used in Anssi Klapuri's paper [Kla99] works by computing a percentage based upon the total number of onsets, the number of undetected onsets, and the number of erroneous extra onsets detected:

correct = (total − undetected − extra) / total · 100%   (3.1)

Where total is the number of onsets in the original sample, undetected is the number of missed detections, and extra is the number of erroneous detections.

This seems like a reasonable measure, but on closer inspection we find that it can produce negative percentages, even below -100%. As an example, imagine an algorithm tried on a drumsequence with 6 onsets. The algorithm extracts 3 correct onsets, misses 3 and also detects 4 extra onsets. The calculated correctness would in this case be:

−16.7% = (6 − 3 − 4) / 6 · 100%   (3.2)

Not a really good measure for this example: the onset extraction algorithm does get half of the onsets correct, so a negative percentage for its correctness seems rather harsh.

3.3.3 Error rate, derived measure correctness

A measure of error instead of a measure of correctness could be better in certain situations. Error rates are usually allowed to exceed 100%, and when the error rates get small enough it is more convenient to use them, since small numbers are more intuitive, for humans, than numbers very close to 100.

error rate = (undetected + extra) / total · 100%   (3.3)

Where total is the number of onsets in the original sample, undetected is the number of missed detections and extra is the number of erroneous detections.
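Equations 3.1 and 3.3 are trivial to compute; this sketch reproduces the −16.7% example above and the corresponding error rate:

```python
def klapuri_correct(total, undetected, extra):
    return (total - undetected - extra) / total * 100.0   # Equation 3.1

def error_rate(total, undetected, extra):
    return (undetected + extra) / total * 100.0           # Equation 3.3

# the example from the text: 6 onsets, 3 missed, 4 extra detections
print(round(klapuri_correct(6, 3, 4), 1))   # -16.7, below zero
print(round(error_rate(6, 3, 4), 1))        # 116.7, above 100% but readable
```

The same run that scores a harsh −16.7% correctness gets a 116.7% error rate, which at least reads naturally as "more errors than onsets".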
