Artificial Finger Control - Inverse kinematics in soft robotics

Academic year: 2022

Artificial Finger Control

Inverse kinematics in soft robotics

Andreas Thoresen

Thesis submitted for the degree of Master in Nanoelectronics and Robotics

60 credits

Department of Informatics

Faculty of Mathematics and Natural Sciences

UNIVERSITY OF OSLO


Artificial Finger Control

Inverse kinematics in soft robotics

Andreas Thoresen


© 2019 Andreas Thoresen

Artificial Finger Control

http://www.duo.uio.no/

Printed: Reprosentralen, University of Oslo


Abstract

More compliant robots have certain benefits when it comes to cooperating with humans. Standard industrial robots today require many safety measures to be legal and safe to operate. Soft robots can therefore be safer, as they naturally have more compliance to external forces. Positioning and controlling a robot can be a difficult and computationally heavy task. In this thesis, we explore the possibility of controlling a soft artificial finger with heuristic methods such as neural networks.

The finger is compliant and in no way endangers humans or its environment. Due to the compliance of the materials, the servos cannot be broken as a result of misuse or external forces applied to the finger.

The design of the finger will be discussed and motivated. Inspiration is taken from human anatomy, but simplifications are made for machine learning purposes.

Datasets will be generated using the model made and a webcam tracking the fingertip. The collection of data and the impact the data have on the algorithms will be discussed. We will test different datasets with different numbers of joints.

Distal supervised learning will be used as an approach to solve the inverse kinematics of the soft finger, to see if a neural net can control it. We focus on being accurate enough to position the finger in space, but also on learning fast enough to adapt to external changes that may affect the finger.

The thesis will also apply a smaller regression method to the dataset: KNN-regression will be tested and compared to the results of the distal supervised learning.

Both methods were able to learn the inverse kinematics of the soft robot.


Acknowledgements

I want to thank my supervisor Mats Høvin at the University of Oslo for supervising me for this thesis. The engagement and many interesting discussions have been of great value, and the thesis would not have been the same without it.

I would also like to thank Vegard Søyseth for his help with the practical work and the use of the workshops at the University of Oslo.

Lastly, I would like to thank my family, friends, and fellow students for their continuous support throughout this thesis. You have always encouraged me to do my best.

My sincerest thanks - Andreas Thoresen


Contents

1 Introduction 1

1.1 Introduction . . . 1

1.2 Problem . . . 1

1.3 Hypothesis . . . 2

1.4 Method . . . 2

1.5 Summary . . . 2

1.6 Structure of the Thesis . . . 3

2 Background 7

2.1 Physical Robot Design . . . 8

2.2 Kinematics . . . 10

2.2.1 Forward Kinematics . . . 10

2.2.2 Inverse Kinematics . . . 10

2.3 Machine Learning . . . 10

2.3.1 Supervised Learning . . . 10

2.3.2 Overfitting . . . 11

2.3.3 Local Optima . . . 11

2.3.4 Backpropagation . . . 11

2.3.5 Distal Supervised Learning . . . 12

2.3.6 Why Distal Supervised Learning? . . . 13

2.4 KNN-Regression . . . 14

2.4.1 KNN-regression . . . 14

2.4.2 Dataset Quality . . . 14

2.4.3 Why KNN-regression? . . . 15

2.5 Comparison between Distal Supervised Learning and KNN-regression . . . 15

2.6 Previous work . . . 16

3 Tools and engineering processes 19

3.1 Tensorflow and Keras . . . 19

3.2 3D-modelling software . . . 20

3.3 3D-Printer and Slicer . . . 20

3.4 Dynamixel Servos . . . 21

3.5 OpenCV . . . 21

3.6 Silicone Elastosil . . . 22

4 Implementation 23


4.1 Design of Finger . . . 23

4.1.1 Choice of lengths, mounting points, and reduction of friction . . . 23

4.1.2 Platform/palm . . . 24

4.2 Design of Tendons . . . 27

4.2.1 Circular Tendons . . . 27

4.2.2 Non-Circular Tendons . . . 28

4.2.3 Square Tendons . . . 28

4.2.4 Tendon in Tendon . . . 29

4.3 Finger-tip Tracker . . . 29

4.3.1 Camera detection algorithm . . . 29

4.3.2 Requirements for the tracking . . . 30

4.4 Finger angle calculation . . . 31

4.4.1 Parameters for tracking . . . 31

4.4.2 Need for frames per second and stability . . . 32

4.5 Collection of Datasets . . . 32

4.5.1 Dataset generation processes . . . 32

4.5.2 Dataset for 1 Joint . . . 33

4.5.3 Dataset for 2 joints . . . 34

4.5.4 3 Joints Challenges . . . 34

4.5.5 Collection of all data points . . . 36

4.6 Distal Supervised Learning . . . 36

4.6.1 Forward model . . . 36

4.6.2 Inverse model . . . 36

4.6.3 Architecture Choice . . . 36

4.7 KNN-Regression . . . 37

4.7.1 K . . . 37

5 Experiments 39

5.1 Fingertip tracking and dataset generation . . . 39

5.1.1 Inverse kinematics . . . 39

5.2 Finger Design and System Properties . . . 40

5.2.1 Linear result with tendon setup . . . 40

5.2.2 Getting to a non-linear system . . . 42

5.2.3 Finger hanging straight down . . . 43

5.2.4 Silicone tendon benchmarking . . . 43

6 Results 47

6.1 Distal Supervised Learning . . . 47

6.1.1 Parameter Search . . . 47

6.1.2 Setup and parameters for all results below . . . 49

6.1.3 1 Joint attached with thread . . . 50

6.1.4 1 Joint attached with silicone tendon . . . 50

6.1.5 1 Joint attached with silicone and reversed dataset . . 54

6.1.6 2 Joints attached with silicone tendons . . . 54

6.2 KNN-Regression . . . 57

6.2.1 No normalization . . . 57

6.2.2 With normalization . . . 61


6.3 Comparison between Distal Supervised Learning and KNN 63

7 Discussion 65

7.1 Distal supervised learning . . . 65

7.1.1 1 Joint with thread . . . 65

7.1.2 1 Joint with silicone tendon . . . 65

7.1.3 2 Joints with silicone tendons . . . 65

7.2 KNN-regression . . . 66

7.2.1 2 Joints with silicone tendons . . . 66

7.3 Comparison of Distal Supervised Learning and KNN-regression . . . 66

8 Conclusion 67

9 Future Work 69

9.1 KNN-regression . . . 69

9.2 Tendon design . . . 69

9.3 Supervised Learning . . . 69

9.4 Dataset generation . . . 70

9.4.1 Number of points . . . 70

9.4.2 Work area . . . 70

9.4.3 3 joints . . . 70


List of Figures

1.1 Assembled finger with servos, fishing line, silicone tendons and weight at the fingertip. Palm version 2 . . . 4

2.1 Human finger joints and bones. Image taken from https://www.fyzical.com/plymouth/Injuries-Conditions/Hand/Hand-Issues/Swan-Neck-Deformity-of-the-Finger/a 289/article.html . . . 7

2.2 Extensor hood. Image taken from https://eorthopod.com/hand-anatomy/ . . . 8

2.3 Central Slip. Image taken from https://eorthopod.com/hand-anatomy/ . . . 9

2.4 Dexterous Hand from Shadow Robot Company (https://www.shadowrobot.com/products/dexterous-hand/) and Spotmini from Boston Dynamics (https://www.bostondynamics.com/spot) . . . 9

2.5 3D fitness landscape with multiple local optima; here, higher values mean better results. Image is taken from Nygaard [22], on page 32 . . . 11

2.6 Inverse Network with 2 servos for controlling 2 joints . . . . 12

2.7 Finger with 2 joints, receiving 2 servo angles and moving the finger to an XY-position . . . 13

2.8 KNN-Regression example for K = 3 in a 2D space. The illustration is taken from Hu[14], on page 3 . . . 14

2.9 Example of two curves of coordinates for the KNN-regression. The KNN calculates the nearest neighbour and gets an old neighbour. . . 16

4.1 3d model of the palm version 1 from Fusion360 . . . 25

4.2 Range of motion for the AX-18A Dynamixel servo. Image is taken from http://emanual.robotis.com/docs/en/dxl/ax/ax-18a/ . . . 25

4.3 3d model of the palm version 2 from Fusion360 . . . 26

4.4 . . . 27

4.5 Tendon mold for silicone casting of circular tendons . . . 28

4.6 Mold for square tendons and tendon with a hole, for tendon in tendon setup. . . 29

4.7 A square tendon and a tendon allowing for the tendon in tendon design. A ring was inserted to expand the hole and let the tendon move more freely. . . 29

4.8 Tracking of fingertip . . . 30

4.9 Angle Calculations . . . 31


4.10 N=3 for joint 1; joint 2 cannot move from 0-90 degrees for the last N, as it would result in the fingertip being outside the camera frame. . . 33

4.11 Joint 2 has flipped over; θ2 has a negative value. . . 35

4.12 Two manipulators with different configurations that reach the same XY-coordinate. . . 35

5.1 Inverse kinematic illustration . . . 39

5.2 Plotting of f(ϕ) = θ . . . 41

5.3 Actual test of finger with fishing line . . . 41

5.4 Angle of the finger with 1 joint, plotted against the servo angle . . . 42

5.5 Angle of the finger with 1 joint, plotted against the servo angle . . . 43

5.6 Tendon benchmark setup . . . 45

5.7 Servo angle plotted along the x-axis (length of tendon), with the current required to pull on the y-axis . . . 46

6.1 Loss for the distal supervised learning, with batch sizes varying from 1-32 . . . 48

6.2 Loss for the distal supervised learning, with number of hidden nodes varying from 1-64 . . . 48

6.3 Loss for the distal supervised learning, with learning rates varying from 1e-01 - 1e-06 . . . 49

6.4 1 joint attached with thread, results for the forward network. . . 51

6.5 1 joint attached with thread, results for the inverse network. . . 51

6.6 1 joint attached with thread, loss for the forward network. . . 52

6.7 1 joint attached with thread, loss for the inverse network. . . 52

6.8 1 joint attached with silicone tendon, results for the forward network. . . 53

6.9 1 joint attached with silicone tendon, results for the inverse network. . . 53

6.10 1 joint attached with silicone tendon, loss for the forward network. . . 54

6.11 1 joint attached with silicone tendon, loss for the inverse network. . . 55

6.12 1 joint attached silicone, results for forward network with reversed dataset. . . 55

6.13 1 joint attached silicone, results for inverse network with reversed dataset. . . 56

6.14 1 joint attached with thread, loss for the forward network with reversed dataset. . . 56

6.15 1 joint attached with thread, loss for the inverse network with reversed dataset. . . 57

6.16 Setup for the tests with 2 joints attached with silicone tendons. Weight is attached with a fishing line from the fingertip. . . 58

6.17 2 joints attached with silicone tendons, results for the forward network. . . 59

6.18 2 joints attached with silicone tendons, results for the inverse network. . . 59


6.19 2 joints attached with silicone tendons, loss for the forward network. . . 60

6.20 2 joints attached with silicone tendons, loss for the inverse network. . . 60

6.21 Test set loss for K in range 1 to 15 with no normalization of the data . . . 61

6.22 Test set loss with standard deviation with no normalization of the data . . . 62

6.23 Test set loss for K in range 1 to 15 with normalization of the data . . . 62

6.24 Test set loss with standard deviation with normalization of the data . . . 63


List of Tables

2.1 Names for bones and joints . . . 8

6.1 Hyper-parameters set for parameter search in distal supervised learning . . . 47

6.2 "Optimal" parameter set found by our parameter search. Used for all results below. . . 50

6.3 Test results from running tests on actual finger . . . 63


Chapter 1

Introduction

1.1 Introduction

Robot grippers today are accurate and offer a strong grip, but when working with humans this strong grip may be too much. More compliant robots are safer and easier for humans to work with, and also more suitable as grippers. This is where soft robotics comes in: soft robots are much more compliant than conventional robots and are better able to give a human-like grip. Soft robotics has seen increasing development over the last decades because of these capabilities. A soft robot is built from one or more soft materials that make the robot more compliant [36]. This could be pneumatic artificial muscles for actuators, tendons of silicone, or a robot made entirely of silicone. A soft, compliant finger could be used in prosthetics, where the compliance would provide safety for humans. The finger could be built up with soft tendons that stretch when the finger is pulled, much like a human tendon [28]. The compliance of the material would give the finger an inherent force regulation, meaning a softer grip for gripping fragile items.

1.2 Problem

There are certain issues with soft robotics: the kinematics can be very complex due to the non-linear behavior of the soft parts of the robot. The elasticity of the material makes numerical solutions very large, and analytical methods heavy to compute. The differences in materials between robots also mean that each setup needs its own calculations.


1.3 Hypothesis

To solve the problem without heavy computation, and without calculations that depend on how each robot is built, we propose the following solution. Using machine learning on a gathered dataset, we believe that the algorithms can achieve high enough accuracy to position the soft robot in space, while adapting to environmental changes and external forces, such as temperature changes, added weight, and changes to the robot.

1.4 Method

The objective of this thesis is to find and evaluate different heuristic models that can control an artificial human finger. The finger will be constructed using 3D printing and silicone casting and will be considered a soft robot.

The finger consists of 3D-printed parts with a mounting system for the tendons. The tendons are cast in silicone and therefore elastic, making the finger compliant to external forces. We made two palm versions; experiments with the first palm led to a new design. Distal supervised learning and KNN-regression are the two algorithms that will be tested in this thesis, as they are suited to finding the inverse kinematics needed to control the finger.

1.5 Summary

1. CAD modeling of a suitable finger bone/joint structure and tendon configuration. This will be designed in Autodesk Fusion 360.

2. 3D printing of a test finger (bones/joints). Parts are printed on the Fortus 250mc 3D-printer.

3. Silicone casting of tendons to attach to the finger.

4. Assembly of the complete system with linear actuators. Dynamixel servos will be used in this thesis.

5. Dataset generation: linear actuator positions / fingertip XY position, with camera-based XY feedback. OpenCV with a simple webcam will be used to collect data points while controlling the servos through the DynamixelSDK library.

6. Learning inverse kinematics by artificial neural networks, using distal supervised learning and KNN-regression.

The focus of the first design is a finger and tendon configuration that is more optimal for the heuristic model to learn. To this end, the finger will have ball bearings to remove friction from moving the joints. A second, more human-like finger is considered if the heuristic model can learn the first one and we have a baseline for tasks 5 and 6.

We will focus on tasks 5 and 6, as these are the focus of this thesis. The dataset generation will be done by tracking the tip of the finger with a webcam and storing the values of the servos and the XY-position of the fingertip in the plane. By reducing friction with ball bearings, the dataset will capture how the tendons behave, so that the artificial neural network can learn the inverse kinematics. The main challenges are to position the finger accurately and to make the algorithms learn fast enough to adapt to model or environmental changes.

The first version is as follows. A 3D-printed palm has mounted electric servos, tendons, and the finger. Half of the tendons are attached on one side of the finger and the palm; these tendons are static and therefore give a constant pull in one direction. The other half of the tendons are attached on the other side of the finger and run to the servos. With these servos and tendons, we can apply forces to the finger to make it move. The tendons roll up on a wheel as the servo pulls the tendon.

Tasks 1-4 were repeated to create the changes for version 2. The second version does not have the static tendons running along the side of the finger. These were replaced with a weight at the fingertip. The angle of the finger is now also altered, as it will be lifting the bottle. This is further explained in section 4.1.2 on page 24. An image of the complete setup with palm version 2 can be seen in figure 1.1 on the next page.

1.6 Structure of the Thesis

We have split the thesis into the following parts: Background, Tools and Engineering Processes, Implementation, Experiments, Results, Discussion, Conclusion, and, at last, Future Work. In the background we discuss the different methods used, their origin, and their use today. The design process and inspiration for our robot design are also covered there; we generally aim to give insight into which methods we are using and why. Tools and Engineering Processes explains the software and hardware used to prototype and program the machine learning, giving insight into the versions, developers, and advantages of the equipment, software, and libraries used. The larger Implementation chapter goes through the steps of creating and implementing the project: choices of lengths and parameters, simplifications that were made, why they were made, and problems we encountered. Experiments and Results show the results and compare the different methods for solving the inverse kinematics. We will see how the two algorithms work and how changing the parameters affects the performance. The conclusion sums up how well the solution worked for our problem and discusses what could have been done differently or improved. The thesis ends with future work, looking at what one could investigate further: problems we encountered that can be solved, and things that can be improved.

Figure 1.1: Assembled finger with servos, fishing line, silicone tendons and weight at the fingertip. Palm version 2


Chapter 2

Background

Figure 2.1: Human finger joints and bones. Image taken from https://www.fyzical.com/plymouth/Injuries-Conditions/Hand/Hand-Issues/Swan-Neck-Deformity-of-the-Finger/a 289/article.html

Knowledge of human hand anatomy is necessary to make a finger that is inspired by it and human-like. Figure 2.1 shows the names of the different bones and joints in the human finger; for simplicity, we will refer to the bones and joints as listed in Table 2.1. The metacarpal is the bone in our palm connecting to our finger, creating the joint that is the knuckle [31]. In the knuckle, bones glide over each other with low friction: the ends that are in contact with another bone are covered in articular cartilage, which serves as lubrication and also absorbs shock. The bones are held together by ligaments, which connect directly to the bone. Ligaments also restrict sideways bending of a finger, or bending too far back (hyperextension) [1]. Ligaments are attached to all bones in the finger to keep them together. Tendons run from our muscles to our fingers; a tendon acts like a "rope", pulling the finger to either contract or extend. Tendons that pull the finger back out again, straightening it, are called extensor tendons. They transition into the extensor hood, as seen in Figure 2.2, which connects at the fingertip and the middle phalanx. The connection point at the middle phalanx is called the central slip, Figure 2.3. We do not have any muscles in our fingers; most of the muscles for our hands start at the elbow and forearm. Muscles contract and pull on the tendons, either curling up the finger or extending it. For our fine motor skills we have intrinsic muscles [1]; these keep our fingers steady during fine motoric tasks. Lastly, we have nerves and blood vessels; these will not be explained, as they are not needed for this study.

Named                            Referred to as
Distal phalanx                   Fingertip
Middle phalanx                   Middle phalanx
Proximal phalanx                 Proximal phalanx
Metacarpal                       Palm
Distal interphalangeal joint     Joint 3
Proximal interphalangeal joint   Joint 2
Metacarpophalangeal joint        Joint 1

Table 2.1: Names for bones and joints

Figure 2.2: Extensor hood. Image taken from https://eorthopod.com/hand-anatomy/

2.1 Physical Robot Design

Robot design is often inspired by nature, from our own bodies, as in prosthetics and humanoid robots, to animals, where we replicate a unique trait they possess [22]. Robots are often challenged with a task, and their performance is measured by how well they can perform this one task.

Making robot designs functional is a priority; this means that replicating directly from nature is not always the best idea, as simplifications often have to be made. In figures 2.4a and 2.4b we can see a robot design inspired by the human hand, made to fulfill the purpose of a hand, and a legged robot inspired by four-legged creatures.


Figure 2.3: Central Slip. Image taken from https://eorthopod.com/hand-anatomy/

(a) Dexterous Hand (b) Spotmini

Figure 2.4: Dexterous Hand from Shadow Robot Company (https://www.shadowrobot.com/products/dexterous-hand/) and Spotmini from Boston Dynamics (https://www.bostondynamics.com/spot)


2.2 Kinematics

2.2.1 Forward Kinematics

Forward kinematics defines the relationship between a robot's joint angles and the position of the end effector. The end effector in our case is the fingertip. One popular convention for defining the forward kinematics of a manipulator is the Denavit-Hartenberg convention [12]. Presented by Denavit and Hartenberg in 1955, it is still popular today.
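For a rigid-link simplification, the forward kinematics of a planar two-joint finger reduces to a few lines of trigonometry. The sketch below is illustrative only, not code from the thesis, and the soft finger studied later does not obey it exactly; the link lengths `l1` and `l2` are made-up values:

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """Fingertip (x, y) for a rigid planar 2-joint manipulator.

    theta1, theta2 are joint angles in radians; l1, l2 are link lengths.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# A fully stretched finger (both angles zero) reaches x = l1 + l2, y = 0.
print(forward_kinematics(0.0, 0.0))  # -> (2.0, 0.0)
```

Every pair of joint angles maps to exactly one fingertip position, which is why the forward direction is the easy one.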

2.2.2 Inverse Kinematics

Inverse kinematics is the opposite of forward kinematics: given the position of the end effector in space, find the joint angles of the manipulator required to achieve that position. Inverse kinematics can be found by multiple methods, among them geometric approaches and neural nets, as we will explore in this thesis. [4] [7] [11] [35] [37]
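As a contrast to the learned approaches explored later, the geometric method for a rigid planar two-link manipulator has a textbook closed-form solution. This sketch is a standard derivation via the law of cosines, not code from the thesis, and it is exactly the kind of closed form that breaks down once the links and tendons are soft:

```python
import math

def inverse_kinematics(x, y, l1=1.0, l2=1.0):
    """One geometric IK solution for a rigid planar 2-joint manipulator.

    Returns (theta1, theta2) in radians for one of the two mirror-image
    configurations; raises ValueError if (x, y) is out of reach.
    """
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # law of cosines
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

Note that the same XY target generally has two joint-angle solutions (elbow up or down), the redundancy issue discussed for distal supervised learning later in this chapter.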

2.3 Machine Learning

Machine learning, or artificial intelligence, has been around since the 1940s but has become more and more popular in recent decades with more computational power, new techniques, and new fields of application [15] [21]. The possibilities for AI have increased, and frameworks such as Tensorflow have been made to ease prototyping and further development; Tensorflow is explained later in section 3.1 on page 19.

2.3.1 Supervised Learning

Supervised learning is one variant of machine learning, based on labeled data. Training is based on calculating an error between what the model proposes as an answer and the true label of the data. With techniques such as backpropagation (section 2.3.4 on the next page) we can minimize this error and have a higher probability of producing the correct answer in the next iteration. This is done with neural networks, also called multilayer perceptrons [6]. A neural network is built up of nodes connected by edges; each node in layer H is connected to all nodes in the preceding layer H-1. A figure is shown later in the thesis, in figure 2.6 on page 12. The network takes an input of a chosen size, multiplies it by the weight matrix on the edges, adds a bias, and applies an activation function. The activation function is what makes a neural network able to learn non-linear functions. Finally, the output layer produces a chosen number of outputs. [20]
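The forward pass described above can be sketched in a few lines of numpy. The layer sizes and random weights here are arbitrary illustrations, not the networks used later in the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    """Activation function; the non-linearity that lets the net learn non-linear maps."""
    return np.maximum(0.0, z)

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden layer: h = relu(x @ W1 + b1), output = h @ W2 + b2."""
    h = relu(x @ W1 + b1)
    return h @ W2 + b2

# 2 inputs (e.g. an XY target), 8 hidden nodes, 2 outputs (e.g. two servo angles)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
out = mlp_forward(np.array([[0.5, -0.2]]), W1, b1, W2, b2)
print(out.shape)  # (1, 2)
```

Training then consists of adjusting W1, b1, W2, b2 so that the output matches the labels, which is the job of backpropagation below.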


Figure 2.5: 3D fitness landscape with multiple local optima; here, higher values mean better results. Image is taken from Nygaard [22], on page 32

2.3.2 Overfitting

Overfitting is when we train the weights in the neural network too much. Too much means they start specializing on the training dataset and losing the general solution to the problem. The training loss will keep getting smaller, but validation and test losses will grow [19]. The training set is the data presented to the network during training of the weights; the validation set is the data on which performance is checked at regular intervals to see whether the network is overfitting or generalizing. Finally, the test set is used to see if the network is general or specialized, as this data has never been presented to the network before; a test set is not always used. [19]

2.3.3 Local Optima

Machine learning algorithms are heuristic, which means they are not guaranteed to find the optimal solution; they will more than likely settle on a local optimum. In many problem cases, or for computationally heavy problems, they can do this very quickly. What one can do is 'ensure', by parameter tuning, that we have found a good local optimum. A local optimum is illustrated in figure 2.5. The figure shows a landscape with fitness values; in supervised learning, this would be the equivalent of loss.

2.3.4 Backpropagation

Backpropagation is a standardized and optimized way of updating the values for the weights in a neural network. After each forward pass of


Figure 2.6: Forward network mapping servo angles (S1, S2) through a hidden layer to (x, y), and inverse network mapping (x, y) to (S1, S2); 2 servos for controlling 2 joints

a network we can calculate our loss, and based on this loss we can use backpropagation to change the weights in the direction that minimizes it. [27]
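As a minimal numeric illustration of "step the weights against the gradient of the loss", the sketch below uses a single linear layer rather than a full network, with made-up data, a made-up learning rate, and a noiseless target map:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))        # inputs
true_W = np.array([[1.5], [-0.7]])
y = X @ true_W                       # targets from a known linear map (no noise)

W = np.zeros((2, 1))                 # weights to be learned
lr = 0.1
for _ in range(200):
    err = X @ W - y                  # forward pass and error
    grad = X.T @ err / len(X)        # gradient of the mean squared error wrt W
    W -= lr * grad                   # step against the gradient to reduce the loss

print(np.round(W.ravel(), 3))        # converges close to [1.5, -0.7]
```

In a multilayer network the gradient for each layer is obtained by the chain rule, propagated backwards from the loss, but the update rule per weight matrix is the same.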

2.3.5 Distal Supervised Learning

A problem with supervised learning is that to backpropagate based on what the neural network proposes, one needs to know the correct answer. In our problem, we will have a neural network that produces the correct angles for the servos to achieve a position of the fingertip in 2D space. When we are training the network, we do not have servo angles for all possible XY coordinates, only for a set of them, so we do not know the correct servo angles needed to backpropagate based on the error. In offline training this could be avoided by only using the coordinates where we know what servo angle we would like to achieve, but this would not work for an application that learns online. Instead, with distal supervised learning, we train a forward neural network, seen in figure 2.6, to be equal to the actual finger, as illustrated in figure 2.7 on the next page. This network can then always produce XY coordinates from any servo angles we provide it. We take the output from the inverse network, which produces servo angles based on XY coordinates (figure 2.6), and feed it to the forward network to get XY coordinates again; the error can then be calculated from the suggested XY coordinate and the desired XY coordinate, assuming the forward network is correct. The backpropagation goes back through the forward neural network with locked weights, not allowing changes, then through the inverse neural network, changing the weights in the direction that produces the right servo angles, giving the right coordinate. [18] [9]
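The scheme can be sketched end to end with deliberately simplified linear "networks" in numpy. The thesis itself uses multilayer perceptrons in Keras, and the finger stand-in `A_true` below is invented, but the structure is the same: fit a forward model, freeze it, and backpropagate the coordinate-space error through it into the inverse model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "plant": servo angles -> fingertip XY (a linear map playing the finger's role)
A_true = np.array([[0.8, 0.1], [-0.2, 0.9]])
def finger(servos):
    return servos @ A_true.T

# 1) Fit the forward model from (servo, XY) samples; plain least squares here.
servo_data = rng.normal(size=(200, 2))
xy_data = finger(servo_data)
A_fwd, *_ = np.linalg.lstsq(servo_data, xy_data, rcond=None)  # xy ≈ servos @ A_fwd

# 2) Train the inverse model through the *frozen* forward model:
#    minimize ||(targets @ W_inv) @ A_fwd - targets||^2 over W_inv only.
W_inv = np.zeros((2, 2))
lr = 0.5
targets = rng.normal(size=(200, 2))
for _ in range(500):
    servos = targets @ W_inv                           # inverse net proposes servo angles
    err = servos @ A_fwd - targets                     # distal error, in coordinate space
    grad = targets.T @ (err @ A_fwd.T) / len(targets)  # backprop through frozen A_fwd
    W_inv -= lr * grad

# The composed mapping plant(inverse(xy)) should now be near the identity:
xy = np.array([[0.3, -0.4]])
print(np.round(finger(xy @ W_inv), 3))  # ≈ [[0.3, -0.4]]
```

Note that no "correct" servo angles are ever supplied to the inverse model; the loss lives entirely in XY space, which is the point of the distal formulation.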


Figure 2.7: Finger with 2 joints, receiving 2 servo angles and moving the finger to an XY-position

2.3.6 Why Distal Supervised Learning?

We chose distal supervised learning because the function of the model is unknown. It also makes it possible to generalize to different setups with the human-like finger or other manipulators. This gives us the freedom to experiment with more biologically correct, simpler, and more functional designs. It also makes the model more robust, with the possibility of absorbing measurement errors from the camera and friction between joints, tendons, and 3D-printed parts. Neural networks have been shown to be a possible solution to complex inverse kinematics problems. [7] [11] [4]

Supervised learning also produces output based on input. This is suitable for inverse kinematics, as we want to know the servo angles needed to achieve a desired position. Distal supervised learning, as explained in section 2.3.5 on the preceding page, is suited to online use, where the environment may change and affect how the finger behaves, be it due to changes in weight, temperature, or the model itself. The main advantage of distal supervised learning appears when the finger can have multiple solutions for one coordinate, as many manipulators have.

Training a forward network is no problem, as every set of servo angles maps to only one XY coordinate. Then, connecting the inverse network and calculating a loss based on the XY-coordinate error, we do not have to think about which servo angles are produced. Michele Giorelli [11] found supervised methods promising and showed results with high accuracy, which is something we are looking to replicate or improve on. Together with the fact that the model can keep learning live as long as we feed it coordinates from the camera, this makes the method applicable for updating live, should a change to the model make the function different, but still similar to the one already learned.


Figure 2.8: KNN-Regression example for K = 3 in a 2D space. The illustration is taken from Hu[14], on page 3

2.4 KNN-Regression

2.4.1 KNN-regression

K-nearest neighbors regression (KNN-regression) gives an approximation based on similar data. Given a new data point to approximate, the algorithm looks at the closest neighbors, given by a distance measure, and calculates the result as the mean of the K nearest neighbors [14] [29]. The distance measure should have a meaning in space; in our case it is the distance in pixels between coordinates. The algorithm can be explained with these steps:

1. Given a test point, calculate the distance from the test point to all known points using the distance measure.

2. Sort by distance.

3. Given the K nearest data points, take the average of the value one wants.

In figure 2.8 we can see an example in a 2D space of how we choose the K nearest neighbors for one test point.

2.4.2 Dataset Quality

Dataset quality is important for both algorithms, as for any machine learning. Machine learning algorithms are often limited in their performance not by their parameters or use, but by the data itself [3]. With KNN-regression, accuracy increases as the number of data points increases: the algorithm makes its approximation from previously known points, and the closer these are to the target point, the better the regression.

KNN-regression is sensitive to outliers; if counted, they can push the mean value the algorithm produces away from the general path of the function we are approximating. [29]

2.4.3 Why KNN-regression?

Pros

Chen J. [4] showed good results using KNN-regression. KNN-regression is one of the simplest forms of regression; its simplicity makes the algorithm easy to test and experiment with. It has only one hyper-parameter: K, the number of neighbors to average over. KNN needs a distance measure to be able to tell what the K nearest neighbors are. For our inverse kinematics, the input will be x and y coordinates, making the problem well suited for KNN-regression, as Euclidean distance is a meaningful measure for the data we have.

Cons

The computational time of the algorithm increases as the number of data points increases, we also have to keep all data points in memory. As for classification with KNN, the regression variant also suffer from the curse of dimensionality and outliers in the dataset. The dimension here is the finger operating in a 2d-space with the 2 servos for 2 joints. Outliers in the data can be generated by measuring error, friction or other errors.

The KNN algorithm is affected by outliers, which shift the mean. [29] When the manipulator or the environment changes, the KNN cannot differentiate between old and new data. It will take the K nearest points regardless of whether the data point was recorded with a different weight.

The more data points we collect, the more accurate the KNN will be, and it will be able to give accurate answers for multiple weights. However, if data points from different setups lie close together, they may interfere with each other and decrease performance, something we do not have control over. As illustrated in figure 2.9 on the next page, the nearest neighbor can come from a different setup. The dimensionality and outlier problems discussed above could be mitigated by techniques for filtering the data, dimensionality reduction or outlier detection. [3] [38] [29]
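As an illustration of the outlier-detection idea mentioned above, one simple heuristic (an assumption on our part, not necessarily the technique used in the cited works) is to drop points whose distance to their k-th nearest neighbour is unusually large compared to the rest of the dataset:

```python
import numpy as np

def filter_outliers(points, k=3, factor=2.0):
    """Keep points whose distance to their k-th nearest neighbour is
    at most `factor` times the median such distance (simple heuristic)."""
    # Pairwise Euclidean distances between all points.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)              # column 0 is the distance to the point itself (0)
    kth = d[:, k]               # distance to the k-th nearest neighbour
    return points[kth <= factor * np.median(kth)]
```

A point recorded far from the rest of a sweep (e.g. from a tracking glitch) has a much larger k-th neighbour distance than points on the curve, so it gets filtered out.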

2.5 Comparison between Distal Supervised learning and KNN-regression

Distal supervised learning has a training phase to get the weights correct and adjusted to the current setup. Upon initialization, the network is not good at all, but offline training can be done to train it. If we then attach the distal supervised network to the finger and, for example, the weight is changed, the network would be off; to what degree would depend on the weight change. However, with enough iterations, the network would learn the new setup and forget the old one, as backpropagation adjusts the weights to new values. Comparing this to the KNN, we start by giving it a set of data points of known values; this is the "training" stage of the algorithm, even though there are no weights to be adjusted. The KNN bases its answers on known data, and would now have two different curves to calculate from. This is where the KNN might struggle, as mentioned in the previous section.

Figure 2.9: Example of two curves of coordinates for the KNN-regression. The KNN calculates the nearest neighbour and gets an old neighbour.

2.6 Previous work

Controlling soft robots with heuristic methods such as neural nets has been done before, but with different setups and different heuristic methods to solve the non-linear problems of soft robotics. Michele Giorelli [11] has looked at solving the inverse statics of a soft arm. The robot consisted of a soft arm with cables in the structure. This made the robot soft, but unlike conventional robots there are no joints, making the mathematics behind the inverse kinematics very complex. In the paper, Giorelli used a neural network and a Jacobian method to solve the problem. He concludes that supervised learning methods look promising for controlling the soft robot, with an accuracy off by less than a millimeter.

Jie Chen and Henry Y. K. Lau [4] have looked at the same problem as Giorelli, learning the inverse kinematics of a tendon-driven soft manipulator. They used different algorithms, namely K-nearest neighbor regression and Gaussian mixture regression. They also found that both heuristic methods worked for learning the inverse kinematics of the tendon-driven soft robot. In their research, they found


that the K-nearest neighbor regression was the best approach for learning the kinematics, outperforming the Gaussian mixture regression in accuracy.

The different methods for learning the inverse kinematics of a soft robot are exciting and serve as a proof of concept. The results that Giorelli and Chen present show that it is promising to apply a heuristic method to learn the inverse kinematics of a soft finger. Even though in our case the finger itself is not soft, only the tendons pulling it, the heuristic method should be the same and should give results similar to what Giorelli [11] and Chen [4] found. Transferring to a different setup should not be a problem either, as the setup does not affect the heuristic method, as also stated by Chen [4]. Pneumatic artificial muscles (PAMs) have also been used as actuators to make conventional robots soft. M.A. Oliver-Salazar [23] made a pneumatic finger, controlled with pairwise McKibben muscles. Salazar looked into how the muscles behaved with regard to retraction, force and air pressure. He was able to use the information he gathered to make a muscle that behaved the way he wanted and to control the mechatronic finger.


Chapter 3

Tools and engineering processes

3.1 Tensorflow and Keras

Tensorflow1 is Google's open-sourced machine learning library. It allows for fast prototyping and experimenting with neural networks. Backpropagation and many more machine learning techniques are implemented and optimized for performance. Tensorflow also supports GPUs, allowing for much faster training if one should require it. Tensorflow offers low-level to high-level APIs, depending on what one would like to make. In addition to the machine learning framework, they have a visualization tool, called Tensorboard, effective for displaying results and debugging.[32]

Keras2 is a high-level API that can run on top of several machine learning frameworks, including Tensorflow. It offers increased simplicity for rapid prototyping and testing of ideas, through its user-friendly API, modularity of graphs, examples in python code and, importantly for this thesis, its ease of extensibility, which mattered when experimenting with the distal supervised network.[13]

We chose to work with Keras to simplify and speed up the prototyping and to use already implemented functions, as coding the needed machine learning from scratch is not efficient: it would give a less effective implementation that is more likely to contain errors. Keras also has good documentation on its website, and there is a wide range of users and forums to draw on. It is written in python, which is our programming language of choice due to previous experience with the language as well as its helpful libraries.
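As a small illustration of the rapid prototyping Keras enables, a feedforward network mapping a fingertip target to servo angles can be defined in a few lines. The layer sizes and activations below are illustrative assumptions, not the architecture used in this thesis:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Minimal feedforward net mapping a fingertip (x, y) target
# to two servo angles; layer sizes here are illustrative only.
model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(2,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(2),               # one output per servo
])
model.compile(optimizer="adam", loss="mse")
```

Training then reduces to a single `model.fit(xy_targets, servo_angles)` call on a recorded dataset.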

1https://www.tensorflow.org/overview

2https://keras.io/


3.2 3D-modelling software

3D-modeling of the finger and the parts I needed was done with Fusion 360 from Autodesk3. Models are made from 2D-sketches and extruded into 3D-models. Fusion 360 is free for students and offers all the tools one can need. It supports simulation, assembling models, creating animations, and rendering parts for high-quality visualization. Multiple of its services are cloud based, meaning that one can run rendering and other simulations in the cloud. Fusion 360 also supports parametric design, giving the ability to resize a design frequently and easily, combined with an interactive timeline that shows all the steps taken to get to the current state, with the possibility of inserting and removing steps at specified points in the timeline and rolling back to inspect them.

For a more biologically inspired design, Fusion 360 offers a sculpting mode, where one can create 3D objects directly and manipulate their shapes by pushing and dragging. This allows for easier prototyping of shapes that would be quite complex to model in a 2D-sketch.[5]

We chose to work with Fusion 360 as this was a tool we were already familiar with from previous courses at the University of Oslo. The capabilities and workflow of the tool were sufficient to model the parts we wanted.

3.3 3D-Printer and Slicer

We used the Fortus 250mc 3D-printer4 for printing the parts used in this thesis. The printer is highly reliable, offers a build space of 25.4 x 25.4 x 30.5 cm, and prints in ABS plastic. The workspace gives freedom to rotate and scale 3D-models, allowing parts to be printed in the optimal orientation.[30]

The slicer for the 3D-printer, which takes the 3D-model and translates it into paths for the printer, is called Insight5. Here we can change the support style, the degree of self-supporting material, the rotation of the part, and the printing density. The ability to manually remove unwanted support allows more complex structures to be built without much work once the part is finished printing.[17]

The Fortus is a printer at the University of Oslo, giving us access to print when needed and rapidly prototype parts. It is reliable and capable of printing parts in the sizes we wanted, and it provides print quality sufficient for the project. If needed, we have access to more advanced 3D-printers with higher resolution and greater print quality, but did not find this necessary for the project.

3https://www.autodesk.com/products/fusion-360/overview

4https://support.stratasys.com/products/fdm-platforms/fortus-250mc

5https://www.smg3d.co.uk/3d_design_software/insight_software


3.4 Dynamixel Servos

For version 1 of the palm, we used Dynamixel AX-18A servos from robotis6. These servos have a resolution down to 0.293 degrees in a 300-degree arc. They can also be used in a wheel mode, allowing for endless turning, but with no position data. They are easy to mount as they have multiple ways to screw them down, and measurements are specified on their website, making 3D-designing parts simple.

In version 2 we switched to the MX-64AT, which, with a resolution of 0.0879 degrees over a 360-degree turn, offers high-precision steering. Like the AX-18A it can be controlled in wheel mode, allowing an endless amount of turns, but the MX-64AT also has a multi-turn mode, allowing multiple turns with position feedback. This was the main reason for switching servos: we needed a greater range of motion while still being able to read off a position. It also made the setup a lot easier to work with, as fine-tuning could be done with an offset on the servo angle rather than on the physical robot.

Both servos can be controlled from python code. The servos are register based, so by writing bytes over a connection, the servo moves according to the values in its registers. They are connected to the computer through a USB dongle.[10]

Dynamixels were chosen for this project for the ease of connecting and talking to the servo; being able to communicate with the servo over USB through python suited the project well, as the machine learning framework is also in python. This made connecting the reading of values and the communication between the machine learning and the servos simple. Robotis have released a library for communicating with the servos, DynamixelSDK7.[26] The servos also provide high enough accuracy for positioning along with the needed speed and torque. We were also prepared for the possibility that the servos in version 1 might not be sufficient when it came to range of motion and that we might need an upgrade. The dynamixel servos all communicate the same way, and the code could be reused with a different servo, making dynamixel the ideal servo for figuring out the requirements of the setup.
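As a sketch of working with servo positions in python, the MX-64AT's position register can be converted to and from degrees using its 0.0879-degree resolution (4096 units per 360-degree turn). The helper names below are our own, and the actual register reads and writes via DynamixelSDK are omitted:

```python
MX_DEG_PER_UNIT = 360.0 / 4096   # ≈ 0.0879 degrees per register unit

def deg_to_units(angle_deg):
    """Convert a target angle in degrees to an MX-series register value."""
    return int(round(angle_deg / MX_DEG_PER_UNIT))

def units_to_deg(units):
    """Convert a position register value back to degrees."""
    return units * MX_DEG_PER_UNIT
```

In multi-turn mode the same conversion applies, with register values simply continuing past one full revolution.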

3.5 OpenCV

OpenCV8 is a library for computer vision and is free for academic and commercial use. It offers a big API with many pre-implemented methods and techniques. It also comes with multiple trackers for tracking objects in a video stream. It is widely used, making documentation and examples easy to find, which makes it easier to learn.[24] We chose it for the same reasons as for the machine learning library: the ability to use already

6http://www.robotis.us/dynamixel/

7https://github.com/ROBOTIS-GIT/DynamixelSDK

8https://opencv.org/


implemented functions that are efficiently written. OpenCV also has a good API reference to read up on the functions, often with examples. Along with a big community, it was simple to find the functions needed and good suggestions on how to use the library, as well as how to implement a simple tracker.

3.6 Silicone Elastosil

The silicone used is Elastosil M 4601 A/B RTV-29; it is two-component and vulcanizes at room temperature in 24 hours.[2] The process can be drastically shortened by applying heat. The silicone is tough and elastic, with an increasing force needed to pull it as it stretches.

This is the silicone that the University of Oslo has in stock, and we have used it in a previous course and in research. It is easy to work with and vulcanizes in 3D-prints of PLA and ABS, which meant that we could make molds by 3D-printing.

9https://www.wacker.com/cms/en/products/product/product.jsp?product=9125


Chapter 4

Implementation

4.1 Design of Finger

4.1.1 Choice of lengths, mounting points, and reduction of friction

As we started the design of the finger, we looked at several prosthetics: the Shadow Dexterous1, Hy52, bebionic3, and Johns Hopkins University Applied Physics Lab's next-generation prosthetic4. [8] [16] [33] [25] We used these prosthetics as inspiration when designing the finger.

We want to remove as much friction as we can by design, since friction causes the recording of the dataset to differ from run to run. To create consistency and be able to replicate results, we do several things. We change the design from the biologically correct knuckle sliding on knuckle to a more practical joint with a fixed rotation point, removing the friction between the bones. In addition, we add ball bearings, one on each side of the finger, to reduce the friction even further. We then add a plastic sheet between the printed parts, allowing them to slide freely when the finger is assembled.

The sizing was chosen for the convenience of assembling and attaching tendons. We also design each joint to have more than 180 degrees of movement, and nothing in the design of the actual finger stops it from moving the wrong way. As with many of the other choices, the machine learning should be able to handle an arbitrary placement of the mounting points for the tendons. However, we chose to put the mounting hole for the tendons as far up on the finger as possible. This way we can control the finger more accurately, and therefore position it more accurately. Mounting points further down would move the

1https://www.shadowrobot.com/products/dexterous-hand/

2https://www.hy5.no/

3http://bebionic.com/

4https://www.jhuapl.edu/prosthetics/


finger more with less movement of the servo, but at the cost of accuracy, so for simplicity and as a proof of concept the mounting points were not placed there.

We have previously done work on soft robotic fingers and experimented with different shapes and setups, in connection with a course running at the University of Oslo. The design made in this thesis was not one of the designs previously experimented with, as those fingers were designed to be more biologically correct and human-like. During that work, we gained experience with the friction created between the joints, how to reduce it and what problems it caused. This is why we have chosen to go with a lower-friction and less human-like solution. [34](Unpublished)

4.1.2 Platform/palm

The palm is where we mount our servos and line guide, attach static tendons and mount the finger. We created two versions as the thesis went on and experiments required us to expand and change the palm. Version-specific differences are explained in the following subsections.

Version 1 of the palm

Version 1 was made to stand on the table, letting the finger go from a horizontal position to standing upright vertically. We made four mounting points for AX-18A servos; only three were intended to be mounted at once.

The fourth was there to leave room to experiment with positioning. The mount for the finger was the same design as the finger, with ball bearings on each side and room to put plastic sheets in between the plastic for lower friction. The mount is raised so there is space for tendons to run under it and attach to the palm. This was intended for tendons that would pull the finger out, the opposite way of the servos' pull.

This would make for an antagonist system, returning the finger when the servos roll back again. Version 1 of the palm can be seen in figure 4.1 on the facing page.

Version 2 of the palm

The AX-18A did not have the range of movement we required, as it only supports 300 degrees while allowing the position to be read, illustrated in figure 4.2 on the next page. We changed it for the MX-64AT dynamixel servo, which supports a multi-turn mode, meaning the servo can do multiple 360-degree rotations while still being able to read off the position. This simplifies the setup of experiments and the attaching of the right lengths of tendons, and gives the possibility of a full range of motion on the finger. The MX-64AT has a different hole pattern for screwing it in place compared to the AX-18A, so we redesigned the platform to a full grid format, allowing custom servo mounts to be printed that were compatible with the grid on the palm. We


Figure 4.1: 3d model of the palm version 1 from Fusion360

Figure 4.2: Range of motion for the AX-18A Dynamixel servo. Image is taken from http://emanual.robotis.com/docs/en/dxl/ax/ax-18a/


Figure 4.3: 3d model of the palm version 2 from Fusion360

also increased the size of the palm, allowing the servos to be placed more freely on the platform, and also because the MX-64AT servos are slightly bigger than the AX-18A servos. We also needed more space to set the servos a bit apart, a problem discovered with the first version: when placing multiple servos, the tendons would cross into one another.

With the new setup and spacing of the servos, we added something we call a line guider. Before the tendons go into the finger they pass through this "tunnel", making the tendon go straight into the finger, but at an angle to the servo. The reels on the servos were designed with a chamfer so that they allowed the tendon to come in at an angle. The biggest change from version 1 to version 2 is that the finger now goes from a vertical position, hanging down, to a horizontal position. This was done due to experiments with the exponential property of the silicone tendons, which is explained in chapter 5.2 on page 40. Along with this, the static tendons underneath the finger were removed and replaced with a weight to be lifted. The tendons were also cut short, and fishing line was attached to the servos, running through the line guider and attaching to the tendons, also explained in chapter 5.2 on page 40.


(a) Line guider for tendons or fishing line. Mounts in between servos and the finger, on the palm.

(b) Reel for the MX servo, with a chamfer added on the inside to more easily roll on the fishing line at an angle.

Figure 4.4

4.2 Design of Tendons

4.2.1 Circular Tendons

The molds for casting silicone were printed on the Fortus mc250 3D-printer in ABS plastic. For casting circular tendons, pipes were printed at the desired diameter; we printed molds for 3 mm and 6 mm tendons. The molds are long straight pipes, the longer the better, as the tendon can easily be cut to the desired length. Printing the molds lying down in the printer makes the layers go along the tendon, making it easy to pull the tendon out after it is cast. After printing, the silicone is injected into a pipe with a syringe; due to the high viscosity of the silicone, this is a practical way of filling the pipe. Then the pipe is placed upright in a holder, and a hot air gun is used to harden the silicone in the bottom to make it tight. A smaller piece with a bigger diameter than the tendon is placed on top of the pipe, and this is used after the silicone is hardened to pull the tendon out. The final result has the lines from the layers of the 3D-print and also a rough surface, due to printing an overhang without support. The problem with casting tendons this way, especially at 3 mm, is that air bubbles in the silicone make the tendon snap where the air bubble was created. An image of the mold can be seen in figure 4.5 on the following page.

Instead of 3D-printing the molds, straws were bought and tried as well. The process is the same, and the result is a tendon which is smooth, since the imperfections of the 3D-print are removed. Using straws is cheap and certainly makes mass production easy, but it removes the possibilities that 3D-printing gives: printing the longest possible mold, and controlling the dimensions and shape of the tendon. Straws are widely available in almost all lengths and dimensions, so with enough searching, appropriate straws could be found.


Figure 4.5: Tendon mold for silicone casting of circular tendons

4.2.2 Non-Circular Tendons

There is no requirement for the tendons to be circular, and other shapes could have other advantages and disadvantages. The biggest problem with circular tendons is that a mold with openings in each end is the only possible way to achieve the shape. With, for example, a square tendon, the mold can be open on the side, allowing air bubbles to escape and simplifying the casting process. Fewer air bubbles in the tendons result in better tendons that are more equal to each other. It would also allow creating a tendon that splits in two and joins together again later on.

4.2.3 Square Tendons

Casting a square tendon is a lot easier, as discussed in the last section, and is what we ended up choosing. The mold, as seen in figure 4.6 on the next page, allowed fast production of square tendons with a reliable success rate. Tendons ended up with a 3x3 mm cross-section, as this fit the current sizing of the finger. The machine learning should be able to adjust to different sizing and different types of silicone, so the size was arbitrary and solely chosen for ease of production and use.


Figure 4.6: Mold for square tendons and tendon with a hole, for tendon in tendon setup.

Figure 4.7: A square tendon and a tendon allowing for the tendon in tendon design. A ring was inserted to expand the hole and let the tendon move more freely.

4.2.4 Tendon in Tendon

With the square tendons, the mold could easily be changed to create tendons that split and join again, as seen in figure 4.6. The human finger has tendons that go through one another, creating a point where the force is redirected during a pull, making it possible to move one joint more independently of another. The current design was a bit tight, creating too much friction between the tendons; the hole should be bigger. We also inserted a

"ring" into the hole, expanding it and making it more static, which helped a bit, but the silicone had too much friction against the new material. Grease was also applied to reduce friction and to see if the tendons would then slide within one another easily. The difference was drastic and certainly reduced the friction.

4.3 Finger-tip Tracker

4.3.1 Camera detection algorithm

Using OpenCV, we have managed to track the XY-position in the image effectively at around 40 frames per second. Using a web-camera, we turn the images into grayscale. From there we threshold on a low value, targeting dark objects in the image. All pixels that are dark enough are marked white, and all others are marked black. This thresholded binary image is sent to OpenCV's findContours, where we find the contours in the image; due to varying light, some of the shiny bolts come off as black, and we get more than one contour. These extra contours are small and usually consist of just a few pixels, so we can pick the contour with the biggest contour area and always get the bolt. Given that contour, we calculate its center, and that gives us the XY value for our fingertip.

A few error checks have been added to make everything run smoothly, at


Figure 4.8: Tracking of fingertip

startup, while the web-camera is focusing, the brightness can be way off, giving us zero contours; we check for that before trying to find the biggest contour. The moments of a contour, the measurements used to calculate its center, do sometimes return 0, so a check has been added to avoid dividing by zero. During collection of a dataset, this check skips the data point that the frame represented, leaving holes in the dataset. We discuss this more in section 4.5 on data collection.

4.3.2 Requirements for the tracking

A few things need to be in place for the finger detection to work.

The point to be tracked, in this case the fingertip, needs to have a black object attached to it; we used a black bolt. Optimally, it should be the only black object in the picture frame, and it has to be the biggest black object; therefore, a white background is placed behind the finger. In figure 4.8 we can see the path the finger made moving up. The finger is designed so that the black bolt can be attached to the end of each joint, making it easy to switch between recording datasets for 1-3 joints. The image is blurred, since the webcam cannot focus on objects this close; the blur is acceptable, as it removes noise, and a blur would have been applied anyway had the webcam been in focus.


Figure 4.9: Angle Calculations

4.4 Finger angle calculation

A calculation of the angle of the finger with 1 joint was made for checking the fingertip tracker and for understanding the function between the servo angle and the finger angle. Given a setup as seen in figure 4.9, with the points (x0, y0) and (x1, y1) representing the rotation point and fingertip respectively, the length f given by (4.1) can be used to calculate the angle with (4.2).

f = √((x0 − x1)² + (y0 − y1)²)   (4.1)

h = y0 − y1

arcsin(h / f) = θ   (4.2)
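Equations (4.1) and (4.2) translate directly into a small helper. This is a sketch with our own naming; coordinate conventions follow figure 4.9:

```python
import math

def finger_angle(x0, y0, x1, y1):
    """Angle theta (degrees) of the finger from the rotation point
    (x0, y0) to the fingertip (x1, y1), per equations (4.1) and (4.2)."""
    f = math.hypot(x0 - x1, y0 - y1)   # length between the two points, (4.1)
    h = y0 - y1                        # vertical component
    return math.degrees(math.asin(h / f))   # (4.2)
```

Note that in image coordinates the y-axis grows downward, so a fingertip above the rotation point gives y1 < y0 and hence a positive angle.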

4.4.1 Parameters for tracking

The parameter to adjust in the tracker is the pixel-intensity threshold for a pixel to be considered part of the bolt. The bolt should be the darkest part of the


image, and therefore all pixels under a certain threshold are classified as the bolt. As mentioned earlier, we take the biggest contour to get the bolt, because some pixels will be misclassified. The threshold value depends on the lighting in the setup, but for our setup the value used was 50, where the intensity of a pixel in the image ranges from 0 to 255 (2^8). This results in marking the wanted pixels with 1 when src(x,y) < threshold and 0 otherwise.

4.4.2 Need for frames per second and stability

Having the tracking process go faster means more data points when collecting the dataset, but we also do not want to collect any points that are wrong or misclassified. Possible causes of this are the lighting, a bad threshold value, and not covering the entire frame with a white background, potentially leaving black objects visible in the background. The more frames we can collect during a movement of the finger, the more data points we have to work with. When collecting data points, we want as many as possible, because we can always limit the number of data points during training.

4.5 Collection of Datasets

4.5.1 Dataset generation process

The dataset generation process is in general the same for any number of joints. We want to capture as many data points as possible while moving the servos through all possible positions, recording the tip of the finger, excluding positions the finger cannot reach, such as bending a joint backward. The result is a long CSV file with as many servo angle and XY-coordinate pairs as possible. For any number of joints, we set the interval, the min and max, on the servo and move the servo to one end of it. From there we set the servo to move to the opposite side of the interval and start the recording. We read the position of the servo and get the XY-coordinate with the algorithm explained in section 4.3.1 on page 29; if the servo position or XY-coordinate fails to return a value, we skip to the next frame. Otherwise, we append the servo angle and XY-position to the list. The skipping of frames leaves "holes" in the dataset, and this may affect our learning. However, the skipping of frames rarely happens, and the density of data points is high; it would therefore take multiple skips in a row to affect the dataset in any significant way. After the servo reaches its position, the recording stops. For 1 joint we can calculate the angle of the finger and compare this to the servo angle, going from 3D data down to 2D. Then we can see whether the relationship is linear or non-linear. We did this as we inspected the properties of the tendon and its relationship to the servo


Figure 4.10: N=3 for joint1; joint2 cannot move from 0-90 degrees for the last N, as that would place the fingertip outside the camera frame.

angle, as further explained in chapter 5 on page 39. Since the finger is inspired by the human finger, we have put some limitations on it that are not physically modeled in the 3D-printed parts. A human finger can only curl one way, and the 3D-printed finger has no such restriction. We have set the range of motion of joint1 to be between 0-90 degrees; this limitation comes from our current fingertip tracker. Joint2, however, can move from 0 degrees to an angle greater than 90 degrees, as long as it stays in the frame of the camera.

Also, due to the previously discussed problem of joints affecting each other, joint2 cannot reach the same range of motion for all angles of joint1. When joint1 is at 0 degrees we can pull joint2 as far as we want, but when joint1 is at 90 degrees, joint2 needs to be at 0 degrees to stay in frame, as shown in figure 4.10.
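The recording loop described above might be sketched as follows; `read_servo_angle` and `read_fingertip_xy` are hypothetical stand-ins for the servo position read and the tracker from section 4.3.1, and the frame-skipping matches the error handling discussed there:

```python
import csv

def record_sweep(read_servo_angle, read_fingertip_xy, done, path="dataset.csv"):
    """Append (servo angle, x, y) rows while the servo moves to its target.
    Frames where either reading fails (returns None) are skipped,
    which may leave small holes in the dataset."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        while not done():
            angle = read_servo_angle()
            xy = read_fingertip_xy()
            if angle is None or xy is None:
                continue               # skip this frame
            writer.writerow([angle, xy[0], xy[1]])
```

The `done` callback would return True once the servo has reached the opposite end of its interval, stopping the recording.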

4.5.2 Dataset for 1 Joint

Setting up the dataset generation for 1 joint is done the following way.

We find the servo angles that give 0-90 degrees of joint movement during the setup process and then hit record. The finger moves through all possible positions, and the only limit on how many data points we get along the curve is the camera tracking. For our dataset, the python script recorded 430 points. An image from recording with one joint can be seen in figure 4.8 on page 30, showing one red dot for each recorded point; as we can see from the image, it is one solid line, meaning there are no holes significant enough that we have lost data.


4.5.3 Dataset for 2 joints

Collecting a dataset for 2 joints proved more difficult than previously thought. We have a far larger number of possible positions and configurations, as every angle for joint1 gives a new 0-90 degree range for joint2. To collect a dataset, we pick N angles evenly spread between 0-90 degrees for joint1. For every such angle we move joint1 to, we move joint2 from 0 degrees relative to joint1 up to a max point at the top of the camera frame. The way this was recorded is what makes the jumps in the graph, and this could be altered to avoid the jumps and get a continuous take of the position. The challenge with this approach is knowing what servo angle joint2 can be at for each of the N angles of joint1.

We chose a safe value, making the joint2 angle greater than 0 degrees at the start; this reduces the number of data points we have to learn from, and does not show the minimum of what a neural net can accept. A safe value here means that we do not risk the joint flipping over, since no physical 3D-printed part is holding it back. Choosing a value closer to 0 degrees runs the risk of getting a negative value for joint2; if we are wrong, the dataset is useless, and the finger ends up in a configuration as seen in figure 4.11 on the next page. So for each of the N angles of joint1, we need a min and max for joint2. If joint2 has flipped over, it either has to reel back down to a free-hanging position or be helped out. With 2 joints the manipulator is also able to reach the same fingertip position with more than one configuration. We illustrate this in figure 4.12 on the facing page, where the two manipulators reach the same XY-coordinate. However, the dashed one has a joint that is flipped over, something a finger would not be able to do, so we will not look further into this.
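The sweep over N joint1 angles with a per-angle safe range for joint2 could be sketched as below; `joint2_max_for` is a hypothetical lookup of the safe maximum found during setup, and the function names are our own:

```python
def sweep_angles(n_outer, joint2_max_for, steps=5):
    """Return (joint1, joint2) target angles for the recording sweep.
    joint2_max_for(a1) gives the max safe joint2 angle when joint1 is at a1."""
    targets = []
    for i in range(n_outer):
        a1 = 90.0 * i / (n_outer - 1) if n_outer > 1 else 0.0
        hi = joint2_max_for(a1)          # per-angle safe range for joint2
        for j in range(steps):
            a2 = hi * j / (steps - 1) if steps > 1 else 0.0
            targets.append((a1, a2))
    return targets
```

With a safe-range function like `lambda a1: 90.0 - a1` (matching figure 4.10, where joint2 must be at 0 degrees when joint1 is at 90), joint2's range shrinks as joint1 bends.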

4.5.4 3 Joints Challenges

The main problem with adding the last joint is finding a good number of positions for the first two joints to go through, and also all the different intervals joint 3 would need. If joint 1 has 4 angles and joint 2 has 4 as well, we are looking at 16 configurations, which means 16 joint-3 intervals must be found before we can record the dataset.

The 3-joint problem would require a smarter way of moving the finger around to gather the possible positions and how to reach them, as the flexibility of recording with a different number of angles for joints 1 and 2 would quickly be narrowed down by the amount of work needed to find intervals for joint 3. This is the same problem we had for 2 joints, but the number of angle-to-range mappings is far greater.

With the additional joint, we also have another joint that can flip over and get stuck in a position we do not allow our finger to be in. In general, we would have the same problems as for the 2-joint dataset.
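The combinatorial growth of the manual tuning work can be made explicit with a small sketch (the function name is ours, purely for illustration):

```python
from itertools import product

def joint3_intervals_needed(n1, n2):
    """Each (joint 1, joint 2) pair needs its own hand-tuned min-max
    interval for joint 3 before recording can even start."""
    return len(list(product(range(n1), range(n2))))

# 4 angles per joint already demands 16 joint-3 intervals;
# doubling the resolution quadruples the manual work.
print(joint3_intervals_needed(4, 4), joint3_intervals_needed(8, 8))
```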


Figure 4.11: Joint 2 has flipped over; θ2 has a negative value.

Figure 4.12: Two manipulators with different configurations that reach the same XY-coordinate.


4.5.5 Collection of all data points

The biggest challenge with a dataset for more than 1 joint is capturing the whole dataset, making the finger go through all possible positions. Going through all positions would give a massive amount of data. We can approach a complete dataset by increasing the number of angles joint 1 visits; with a high enough value we would see "all" positions the finger can reach. Our collection of the dataset for just 1 joint gave 470 data points, which suggests a good number of angles for joint 1 to visit could be up towards 500, assuming the network got sufficient data from the 1-joint dataset with 470 points. Gathering such a complete dataset would give us a better benchmark of how well our algorithm approximates with fewer data points, since we would then have the full solution available for comparison.
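A back-of-the-envelope estimate shows the scale of such a complete dataset, assuming the 1-joint sweep density is also wanted along joint 2 (both figures are the ones mentioned above, not measured totals):

```python
# Rough size of an exhaustive 2-joint dataset: one joint-2 sweep at the
# 1-joint density (~470 points) for each of ~500 joint-1 angles.
joint1_angles = 500
points_per_sweep = 470
total_rows = joint1_angles * points_per_sweep  # (theta1, theta2, x, y) rows
print(total_rows)
```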

4.6 Distal Supervised Learning

4.6.1 Forward model

To create our forward model, we feed it servo angles and it produces coordinates. Backpropagation is then done on the error between the suggested coordinates and the actual solution. This error, our loss, is calculated as the MSE (mean squared error). This process runs for a set number of epochs before we evaluate the performance of the forward network.
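The training loop described above can be sketched in a few lines of numpy. This is not the thesis code: the toy dataset, the layer width, the learning rate, and the epoch count are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the recorded dataset: servo angle -> fingertip (x, y)
# on a unit-length link.  The real data comes from the webcam tracker.
angles = rng.uniform(0, 90, size=(200, 1))
coords = np.hstack([np.cos(np.radians(angles)), np.sin(np.radians(angles))])

# One hidden layer; the width of 16 is illustrative.
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)

losses = []
for epoch in range(2000):                 # fixed number of epochs
    h = np.tanh((angles / 90.0) @ W1 + b1)
    pred = h @ W2 + b2                    # suggested coordinates
    err = pred - coords                   # error against the camera data
    losses.append(np.mean(err ** 2))      # MSE loss
    # Backpropagation of the MSE through both layers.
    dpred = 2 * err / len(angles)
    gW2 = h.T @ dpred; gb2 = dpred.sum(0)
    dh = (dpred @ W2.T) * (1 - h ** 2)
    gW1 = (angles / 90.0).T @ dh; gb1 = dh.sum(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= 0.2 * g
```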

The forward network is considered a replica of the model and should produce the same results as the physical finger. This is the assumption we need to make in order to create an inverse network; any error in the forward network would therefore transfer to the inverse network.

4.6.2 Inverse model

The inverse model takes XY-coordinates and outputs servo angles. The network is connected to the forward model, which transforms these angles back into coordinates.

The error is calculated the same way as in the forward network, with MSE, and backpropagated. The inverse network runs for the same number of epochs as the forward network before being evaluated.
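The distal step, backpropagating the coordinate error through the frozen forward model into the inverse model, can be sketched as follows. For the sake of a short, self-contained example, the trained forward network is replaced by an analytic one-joint forward model with a known Jacobian, and the inverse "network" is reduced to a linear map; all names and hyperparameters here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(theta_deg):
    """Stand-in for the trained forward network: servo angle -> (x, y)."""
    t = np.radians(theta_deg)
    return np.stack([np.cos(t), np.sin(t)], axis=1)

def forward_jacobian(theta_deg):
    """d(x, y)/d(theta): how the coordinate error is pushed back
    through the frozen forward model into the inverse model."""
    t = np.radians(theta_deg)
    return np.stack([-np.sin(t), np.cos(t)], axis=1) * (np.pi / 180)

# Desired fingertip positions, all reachable by construction.
targets = forward_model(rng.uniform(0, 90, 300))

# Minimal linear inverse "network": (x, y) -> servo angle.
w = rng.normal(0, 0.1, 2); b = 0.0

losses = []
for epoch in range(5000):
    theta = targets @ w + b              # inverse model suggests angles
    pred = forward_model(theta)          # frozen forward model maps back
    err = pred - targets                 # distal error, in XY space
    losses.append(np.mean(err ** 2))
    # Chain rule: coordinate error -> angle error -> inverse weights.
    dtheta = 2 * np.sum(err * forward_jacobian(theta), axis=1) / len(targets)
    w -= 500.0 * (targets.T @ dtheta)
    b -= 500.0 * dtheta.sum()
```

The design point this illustrates is that the forward model's weights are never updated here; it only supplies gradients, so the inverse model is trained purely on errors measured in coordinate space.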

The network can be seen back in figure 2.6 on page 12.

4.6.3 Architecture Choice

We made the architectures of the two networks the same, just inverted, as one takes as input what the other gives as output. The idea is that if one network can go from servo angles to coordinates with a certain architecture, the same architecture flipped around should be able to reproduce the results. For the number of hidden layers, we went with 1
