
Methodology and Implementation

4.1 Software, Hardware and Sensor Integration

4.1.1 Software

ROS

Robot Operating System (ROS) [39] is an open source project for developing robotics software.

Contrary to its name, it is not an operating system, but a framework consisting of several tools and libraries. It supports C++ and Python, among other languages. ROS provides a communication infrastructure, a set of standard message types, and ROS-specific tools for visualization and data logging.

The communication infrastructure consists of nodes connected in a network coordinated by a master node. Communication itself is peer-to-peer, so information flows directly between the relevant nodes. A node is defined as a process that performs a certain task. A node can listen to certain messages, or in ROS terms, it can subscribe to certain topics. It can also publish topics that other nodes can subscribe to. For instance, the driver for a lidar is a node that publishes point cloud messages on a topic that the detection node subscribes to. Several nodes can publish and subscribe to the same topic.
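To make the publish/subscribe pattern concrete, the following is a minimal sketch of a ROS node written with rospy. The node and topic names are illustrative assumptions, not names taken from the project.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS node that subscribes to one topic and
# publishes another; all names are hypothetical.
import rospy
from sensor_msgs.msg import PointCloud2
from std_msgs.msg import Int32

def cloud_callback(msg):
    # Called once for every incoming point cloud message.
    count_pub.publish(Int32(msg.width * msg.height))

rospy.init_node('detection_node')
rospy.Subscriber('/lidar/points', PointCloud2, cloud_callback)
count_pub = rospy.Publisher('/detection/point_count', Int32, queue_size=1)
rospy.spin()  # hand control to ROS until the node is shut down
```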

It is possible to define your own message types in ROS, but it also contains many standard message types, for instance point cloud messages and navigation messages that can be used for state estimation. The standard point cloud message can be visualized in rviz, the integrated visualization tool in ROS. It is also possible to store data in ROS bags, a built-in tool that subscribes to topics and stores the data in .bag files. This allows the user to collect sensor data that can be processed and analyzed at a later time.
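Recording is typically done from the command line with rosbag record, and the data can later be read back programmatically. Below is a small sketch using the rosbag Python API; the file and topic names are assumptions.

```python
# Sketch: reading recorded messages back from a .bag file.
# File name and topic are hypothetical.
import rosbag

with rosbag.Bag('test_run.bag') as bag:
    for topic, msg, t in bag.read_messages(topics=['/lidar/points']):
        # msg is the recorded message; t is the time it was recorded.
        print(t.to_sec(), topic)
```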

PCL

Point Cloud Library (PCL) [48] is a large-scale open source project for image and point cloud processing. It is a framework that consists of a broad range of algorithms, including segmentation, filtering, and registration, and it includes visualization tools. It is integrated as a toolkit in ROS, but can also be used as a standalone library in C++ or Python. Most of the point cloud operations in this project, including filtering, transformations, calibration, and clustering, are done with PCL.
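As a rough illustration of these operations, the sketch below downsamples a cloud and extracts Euclidean clusters using the python-pcl bindings. The input file and all parameter values are assumptions, not the settings used in this project.

```python
# Sketch of PCL filtering and clustering via the python-pcl bindings.
import pcl

cloud = pcl.load('scan.pcd')  # hypothetical input file

# Downsample with a voxel grid filter (10 cm leaf size, assumed).
vg = cloud.make_voxel_grid_filter()
vg.set_leaf_size(0.1, 0.1, 0.1)
downsampled = vg.filter()

# Euclidean cluster extraction, e.g. to find cone candidates.
tree = downsampled.make_kdtree()
ec = downsampled.make_EuclideanClusterExtraction()
ec.set_ClusterTolerance(0.2)  # max point-to-point distance in a cluster
ec.set_MinClusterSize(5)
ec.set_MaxClusterSize(500)
ec.set_SearchMethod(tree)
cluster_indices = ec.Extract()
print('found %d clusters' % len(cluster_indices))
```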

TensorFlow

TensorFlow [3] is an open source framework for machine learning. It consists of various tools and libraries for deploying and evaluating machine learning methods. It supports Python, JavaScript, C++ and Java, where the Python and C++ APIs are relevant for this project.

For classification, it contains methods for building neural networks, and the user can define the network structure: for instance, the fully connected or convolutional layers, the type of pooling, and the activation functions used. It also includes support for GPUs and CUDA.
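As an example of defining such a network structure, the sketch below builds a small convolutional classifier with the TensorFlow Keras API. The architecture, input shape and number of classes are assumptions chosen for illustration, not the network used in this project.

```python
# Sketch: a small classifier showing the layer types mentioned above.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                           input_shape=(32, 32, 3)),   # convolutional layer
    tf.keras.layers.MaxPooling2D((2, 2)),              # type of pooling
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),      # fully connected layer
    tf.keras.layers.Dense(3, activation='softmax'),    # e.g. 3 cone classes
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```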


4.1.2 Integration of Sensors and Hardware

Atmos, the trolley and the cones

Atmos is the electric race car that Revolve NTNU will use for the 2020 season of the Formula Student competition. The vehicle was built as an electric race car for drivers in the 2018 season of the competition, but was adapted the following year to allow for autonomous driving. The vehicle is equipped with a variety of sensors and configurations for autonomous racing, including sensors for detection such as lidars and cameras, sensors for state estimation, and a processing unit. Atmos can accelerate from 0-100 km/h in 2.2 s with a top speed of 110 km/h. The overall goal for Atmos in the 2020 season is to drive at 17 m/s on average on an already mapped track. This indicates the performance required of the autonomous pipeline. A CAD representation of Atmos is given in figure 4.3.

For testing purposes, a trolley was fitted with mounts for the lidars and cameras, along with the INS. It can be equipped with a 12 V car battery and a DC-AC inverter, making it mobile. Since Atmos is under maintenance, and for ease of testing, the trolley is used for the experiments.

There are mainly three types of cones that define the track. The first is the large orange cone, placed before and after the start and finish lines; these can be used to indicate and count the number of laps the vehicle has completed. The other two types are the small yellow and blue cones.

The left border of the track is marked with small blue cones, and the right border is marked with small yellow cones. Their dimensions are 228 mm × 228 mm × 325 mm, as illustrated in figure 4.2. Since the orange cones are taller than the small cones, clustering finds them without many false positives; they are therefore not a focus area in this project.

Figure 4.2: Yellow cone with measurements

Lidars

For this project, two lidars were chosen, since this is assumed to give a longer detection range, which is important for perceiving and planning sufficiently far ahead when driving fast. The lidars used are the Hesai Pandar40 and the Hesai Pandar20B, abbreviated as Pandar40 or P40, and Pandar20B or P20, in this thesis. Both are made by Hesai Technologies for autonomous driving. They are mechanical rotating lidars that spin at frequencies up to 20 Hz and provide x, y and z coordinates along with intensity information. The Hesai Pandar40 has 40 vertical laser receivers and emitters, while the Hesai Pandar20B has 20. For further specifications, see table 4.1. Hesai provides an interface for changing parameters on the lidars, such as the address the data is sent to and narrowing of the field of view, both horizontally and vertically. Both lidars have their own ROS driver that processes the raw sensor data into the correct coordinate system and publishes it as a point cloud message, making it compatible with the ROS environment.
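As an illustration, a node consuming the driver's output could unpack the fields directly from the PointCloud2 message. The sketch below uses the sensor_msgs point_cloud2 helper in Python; the topic name is an assumption, and the field layout follows the x, y, z and intensity information described above.

```python
# Sketch: unpacking points from the lidar driver's PointCloud2 output.
# Topic name is hypothetical.
import rospy
from sensor_msgs.msg import PointCloud2
import sensor_msgs.point_cloud2 as pc2

def callback(msg):
    for x, y, z, intensity in pc2.read_points(
            msg, field_names=('x', 'y', 'z', 'intensity'), skip_nans=True):
        pass  # feed each point into the detection pipeline

rospy.init_node('pandar_listener')
rospy.Subscriber('/pandar40/points', PointCloud2, callback)
rospy.spin()
```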

Figure 4.3: Render of Atmos

                                  Pandar40                  Pandar20B
Channels                          40                        20
Measurement Range                 0.3 m to 200 m            0.3 m to 200 m
Measurement Accuracy              ±2 cm (0.5 m to 200 m)    ±2 cm (0.5 m to 200 m)
FOV (Horizontal)                  360°                      360°
Angular Resolution (Horizontal)   0.4°                      0.4°
FOV (Vertical)                    −16° to 7°                −19° to 3°
Angular Resolution (Vertical)     0.33° to 1° (nonlinear)   0.33° to 5° (nonlinear)
Wavelength                        905 nm                    905 nm

Table 4.1: Hesai Pandar40 [57] and Hesai Pandar20B [56] specifications

VectorNav VN-300

The state estimation system on Atmos is a complex system, combining several sensors to output the most probable state of the vehicle. On the trolley, on the other hand, a simpler state estimation system is integrated: an INS combined with dual GNSS antennas, with an integrated EKF that outputs the states. The sensor used is the VectorNav VN-300.

Relevant specifications for the sensor are given in table 4.2. It has a ROS driver configured to publish nav_msgs, a common ROS message type that is also used on Atmos; a sketch of subscribing to these messages is given after table 4.2.

Horizontal Position Accuracy     2.5 m RMS
Position Resolution              1 mm
Velocity Accuracy                ±0.05 m/s
Velocity Resolution              1 mm/s
Output Rate (Navigation Data)    400 Hz

Table 4.2: VN-300 specifications
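As referenced above, the following is a sketch of consuming the VN-300 state estimate in a ROS node, assuming the driver publishes nav_msgs/Odometry; the topic name is also an assumption.

```python
# Sketch: subscribing to the INS state estimate as nav_msgs/Odometry.
# Topic name is hypothetical.
import rospy
from nav_msgs.msg import Odometry

def odom_callback(msg):
    p = msg.pose.pose.position    # estimated position
    v = msg.twist.twist.linear    # estimated linear velocity
    rospy.loginfo('pos=(%.2f, %.2f), vx=%.2f m/s', p.x, p.y, v.x)

rospy.init_node('state_listener')
rospy.Subscriber('/vectornav/odom', Odometry, odom_callback)
rospy.spin()
```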