The future of autonomous driving: multi-sensor fusion


Sensors are the hardware foundation of a car's perception of its surroundings, and they are essential at every stage on the road to autonomous driving.

Autonomous driving depends on the cooperation of the perception layer, the control layer, and the execution layer. Cameras, radar, and other sensors acquire images, distances, speeds, and other information, playing the role of the eyes and ears.


The control module analyzes and processes that information, makes judgments, and issues commands, playing the role of the brain. The body's actuators carry out the commands, playing the role of the hands and feet. Environmental perception is the foundation of all of this, which is why sensors are indispensable to autonomous driving.



Three important sensors

Camera: the eyes of intelligent driving
The automotive camera is the basis of many early-warning and recognition functions. Among ADAS functions, the visual image-processing system is the most fundamental and the most intuitive for the driver, and the camera is the foundation of that system, so automotive cameras are essential to autonomous driving.


Many of the functions above can be achieved with a camera, and some can only be achieved with a camera. Automotive camera prices keep falling, making more cameras per vehicle the future trend. Cameras are relatively cheap: the price of a single camera fell from more than 300 yuan in 2010 to about 200 yuan by 2014, easing widespread adoption. The installation location differs with the ADAS function: by position, automotive cameras divide into front-view, side-view, rear-view, and built-in. To deliver a full set of ADAS functions, a single vehicle will need at least five cameras.


Front view camera

The front-view camera is used most often, and a single unit can implement several functions, such as driving recording, lane departure warning, forward collision warning, and pedestrian detection. It generally uses a wide-angle lens and is mounted high on the interior rearview mirror or the front windshield to obtain a long effective range. Side-view cameras replacing rearview mirrors will become a trend: because a mirror's field of view is limited, a car approaching from behind can "disappear" into the blind spot, greatly increasing the chance of an accident. Side-view cameras mounted on both sides of the vehicle can essentially cover the blind spots, and when another vehicle enters one, the system automatically alerts the driver.
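As a rough illustration of how a front-view camera can support lane departure warning, the sketch below estimates the lane center with classical edge and line detection (OpenCV). The thresholds and the offset test are illustrative assumptions, not any vendor's algorithm.

```python
import cv2
import numpy as np

def lane_offset(frame_bgr):
    """Signed pixel offset of the detected lane center from the image center."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)              # edge map
    h, w = edges.shape
    lines = cv2.HoughLinesP(edges[h // 2:, :],    # search the lower (road) half
                            1, np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=20)
    if lines is None:
        return None                               # no lane markings found
    left, right = [], []                          # split segments by slope sign
    for x1, y1, x2, y2 in lines[:, 0]:
        if x1 == x2:
            continue
        slope = (y2 - y1) / (x2 - x1)
        if slope < -0.3:
            left += [x1, x2]
        elif slope > 0.3:
            right += [x1, x2]
    if not left or not right:
        return None
    return (np.mean(left) + np.mean(right)) / 2 - w / 2

# A warning would fire when the offset drifts past a calibrated threshold, e.g.:
# offset = lane_offset(frame); if offset is not None and abs(offset) > 60: warn()
```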


Panoramic parking system

A panoramic (surround-view) parking system installs several ultra-wide-angle cameras around the body to capture images all around the vehicle simultaneously; after correction and stitching by an image-processing unit, a panoramic top view of the vehicle's surroundings is formed and transmitted in real time to the center-console display. The driver, sitting in the car, gets an intuitive "bird's-eye view" of the vehicle's position and the obstacles around it. Automotive cameras are widely used and relatively cheap, making them the most basic and most common sensor. Compared with phone cameras, automotive cameras operate in much harsher conditions and must meet demanding requirements for shock, magnetic interference, water, and high temperature; the manufacturing process is complex and technically difficult. This is especially true of front-view ADAS cameras, which bear on traffic safety and must be extremely reliable, so their manufacturing process is all the more complex.
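The geometric core of the stitching step is a perspective (homography) warp of each camera image onto a common ground plane. The sketch below shows that single step for one camera, using made-up calibration points; a real system would first correct the fisheye distortion and blend the four warped views at the seams.

```python
import cv2
import numpy as np

def to_birds_eye(frame, src_pts, dst_pts, canvas_size=(600, 600)):
    """Warp ground-plane points src_pts (in the camera image) to dst_pts (on the canvas)."""
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(frame, H, canvas_size)

# Synthetic stand-in for one front-camera frame; a real pipeline would first
# undo the fisheye distortion (e.g. with cv2.fisheye.undistortImage).
frame = np.full((480, 640, 3), 128, np.uint8)
src = [(100, 470), (540, 470), (420, 300), (220, 300)]   # trapezoid on the road
dst = [(200, 600), (400, 600), (400, 300), (200, 300)]   # rectangle on the canvas
top_view = to_birds_eye(frame, src, dst)
# A full system repeats this for front/rear/left/right and blends the seams.
```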

The automotive camera industry chain

Before becoming a Tier 1 supplier to a vehicle manufacturer, a camera maker must pass a large number of rigorous tests of many kinds. But once a supplier has entered a manufacturer's Tier 1 system, a high barrier forms and it is hard to displace, because the cost of switching is too high: replacing a supplier means the manufacturer must run the whole complex testing process again. Mobileye, the global leader in vision-based ADAS, had been developing its visual processing system since its founding in 1999, yet its products only reached the market in 2007; going from R&D to formal entry into the pre-installed (OEM) market took eight years. Having become a Tier 1 supplier to a large number of vehicle manufacturers, Mobileye is now the absolute oligopolist in this field. Since its listing in 2014, its success rate in tenders for major automakers' intelligent-vehicle safety equipment, against all competitors, has been close to one hundred percent.


Millimeter-wave radar: the core ADAS sensor


The wavelength of millimeter waves lies between centimeter waves and light waves, so millimeter-wave radar combines the advantages of microwave guidance and photoelectric guidance:
1) Compared with a centimeter-wave seeker, a millimeter-wave seeker is small, light, and offers high spatial resolution;
2) Compared with optical seekers such as infrared and laser, a millimeter-wave seeker penetrates fog, smoke, and dust, transmits over long distances, and works in all weather, around the clock;
3) Its performance is stable and unaffected by the target's shape, color, or other interference. Millimeter-wave radar thus covers automotive scenarios that sensors such as infrared, laser, ultrasonic, and cameras cannot handle.

Millimeter-wave radar detection range is generally 150-250 m, and some high-performance units can even reach 300 m, enough to detect over a wide range while the car moves at high speed. Its detection accuracy is also high.
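To make the numbers concrete, the sketch below shows how a 77 GHz FMCW radar (wavelength about 3.9 mm, hence "millimeter wave") converts a measured beat frequency into range and a Doppler shift into relative speed. The chirp parameters are illustrative assumptions, not any product's specification.

```python
c   = 3.0e8          # speed of light, m/s
f_c = 77e9           # carrier frequency, Hz (wavelength ~3.9 mm)
B   = 300e6          # chirp bandwidth, Hz (assumed)
T   = 50e-6          # chirp duration, s (assumed)

def range_from_beat(f_beat):
    """Range R from the beat frequency of one up-chirp: f_beat = 2*R*B/(c*T)."""
    return f_beat * c * T / (2 * B)

def speed_from_doppler(f_doppler):
    """Relative radial speed v from the Doppler shift: f_d = 2*v*f_c/c."""
    return f_doppler * c / (2 * f_c)

print(range_from_beat(2.0e6))       # ~50 m for a 2 MHz beat
print(speed_from_doppler(5.13e3))   # ~10 m/s (36 km/h) for a ~5.13 kHz shift
```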


Millimeter-wave radar in adaptive cruise control

These characteristics let millimeter-wave radar monitor vehicles over a wide area and measure the speed, acceleration, and distance of the vehicle ahead with good accuracy, making it the preferred sensor for adaptive cruise control (ACC) and automatic emergency braking (AEB). At present a 77 GHz millimeter-wave radar unit costs about 250 euros, and the high price limits its automotive adoption.


Lidar: powerful performance
Lidar performance is excellent, and it is the preferred technical route for driverless cars. Compared with other autonomous-driving sensors, lidar offers superior performance:
1) High resolution. Lidar achieves extremely high angular, range, and velocity resolution, which means it can use Doppler imaging to obtain very clear images.
2) High precision. Laser light travels in straight lines with good directivity; the beam is very narrow and dispersion very low, so lidar precision is very high.
3) Strong resistance to active jamming. Unlike microwave and millimeter-wave radar, which are susceptible to the electromagnetic waves widespread in nature, few natural signal sources can interfere with lidar, so it resists active jamming well.

Space modeling of lidar

A 3D lidar is generally mounted on the roof, where it rotates at high speed to collect point-cloud data of the surrounding space and build a real-time 3D map of the vehicle's surroundings. Lidar can also measure the distance, speed, acceleration, and angular velocity of other vehicles in every direction, and, combined with GPS maps, compute the vehicle's own position. This large, rich stream of data is passed to the ECU for analysis and processing so the vehicle can make fast decisions.
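The first step in this spatial modeling is turning raw rotational returns into Cartesian points. The toy sketch below does that conversion and picks out the nearest above-ground obstacle; the mounting height and the 30 cm ground threshold are illustrative assumptions.

```python
import numpy as np

def returns_to_xyz(azimuth, elevation, rng):
    """Spherical lidar returns (radians, meters) -> Nx3 Cartesian points."""
    x = rng * np.cos(elevation) * np.cos(azimuth)
    y = rng * np.cos(elevation) * np.sin(azimuth)
    z = rng * np.sin(elevation)
    return np.stack([x, y, z], axis=1)

# Fake scan: 1000 returns from a sensor assumed 1.8 m above the road (roof-mounted).
az  = np.random.uniform(-np.pi, np.pi, 1000)
el  = np.random.uniform(-0.4, 0.05, 1000)
rng = np.random.uniform(2.0, 100.0, 1000)
pts = returns_to_xyz(az, el, rng)
pts[:, 2] += 1.8                      # shift so z = 0 is the road surface

obstacles = pts[pts[:, 2] > 0.3]      # crude ground removal: keep points >30 cm up
if len(obstacles):
    nearest = np.linalg.norm(obstacles[:, :2], axis=1).min()
    print(f"nearest obstacle at {nearest:.1f} m")
```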


Two camps in automotive lidar
Map-centric: the driverless programs of internet companies such as Google and Baidu are map-centric, mainly because lidar can draw high-precision maps for them.
Vehicle-centric: most automakers instead want lidar products tailored to the vehicle itself.

The lidar on Baidu's autonomous car

First, compared with the heavy "big flower pot" lidars used for surveying and mapping, small lidars suit cars better. For the sake of styling and drag coefficient, an autonomous car should look no different from an ordinary car, so the lidar should be made as small as possible and embedded directly in the body, which means the mechanical rotating parts must be minimized or even eliminated. Automotive lidars therefore avoid a large external rotating structure and instead move the rotating parts inside the product. Ibeo's LUX, for example, fixes the laser source and changes the beam direction by rotating an internal mirror, meeting the need for multi-angle detection.
Quanergy's S3, by contrast, is a solid-state product built on new optical phased-array technology, with no rotating parts inside at all. But good things are expensive: lidar unit prices run into the tens of thousands, and the high price keeps it far from the mass market.


A change caused by an accident
In May 2016, in Florida in the United States, a Tesla running in Autopilot mode collided with a white semi-trailer truck, killing the Tesla's owner.

Known as "the world's first autopilot fatality," the accident led many people to worry about the safety of automated driving, and it cast a pall over Tesla.

Tesla
After the accident, Tesla ended its cooperation with Mobileye, the supplier of its visual recognition system, and in September pushed the V8.0 system over the air, strengthening the role of the millimeter-wave radar and promoting it to primary sensor. In the V7.0 era, Tesla's automated driving relied mainly on image recognition, with the millimeter-wave radar only a secondary sensor; V8.0 reworked the whole technical scheme, making the millimeter-wave radar primary and image recognition supplementary. The radar can now monitor a range six times larger than before, significantly improving Tesla's ability to recognize obstacles ahead. In October 2016, Tesla went on to release Autopilot 2.0, announcing that all models produced from then on would carry a full self-driving hardware suite, and said that on the basis of this hardware the safety of its automated driving had received an unprecedented upgrade.
Autopilot 2.0 vs. Autopilot 1.0 hardware comparison
Tesla's full self-driving hardware suite includes:
1) eight cameras installed around the body, measuring objects within 250 meters;
2) twelve ultrasonic sensors to assist detection;
3) an upgraded, enhanced millimeter-wave radar that works in bad weather and can detect the vehicle ahead;
4) an onboard computer with 40 times the performance of the previous generation, sharply increasing computing power.

The biggest hardware change in Autopilot 2.0 is the cameras, whose number rose from one to eight. It also traces Tesla's perception route: from relying on the camera at first, to relying on the radar, and finally back to the camera. Tesla's changing choice of primary sensor shows that the technical route at the perception end is far from settled, and that Tesla itself is still exploring and advancing.


Mobileye

In fact, it was Mobileye that proposed the "break-up" with Tesla. After more than a decade of R&D and innovation, Mobileye can implement a wide variety of ADAS functions with the advanced vision algorithms on its EyeQ series of chips, and it has become the absolute leader in vision-based ADAS products.


Starting from the first-generation EyeQ product launched in 2007, Mobileye and STMicroelectronics have kept upgrading the chips and optimizing the vision algorithms; the EyeQ3 computes 48 times faster than the first generation.

The first three generations of products were each paired with only a single camera. The EyeQ4 and EyeQ5 product plans have now been published, and EyeQ4 will begin to use a multi-camera scheme.

With chip upgrades and algorithm optimization, Mobileye's chips and algorithms will fuse more sensors, moving toward a multi-camera + millimeter-wave radar + lidar solution with full support for driverless operation. In July 2016, Mobileye announced the end of its cooperation with Tesla, making EyeQ3 the last collaboration between the two. Almost at the same time, Mobileye announced a partnership with Intel and BMW. In March of this year, Intel acquired Mobileye at a premium of more than 33%.



In fact, the deeper reasons for the Mobileye-Tesla split were:
1) Different styles and strategies. Mobileye is relatively conservative while Tesla is relatively aggressive, so Mobileye leans toward cooperating with traditional automakers.
2) A dispute over data ownership. Mobileye proposed a concept called REM under which data would be shared among participating members, and Tesla, which has accumulated the most mileage and the most data, did not want to hand its data to other automakers for nothing.
Tesla, however, was only one of Mobileye's many vehicle customers, whereas the powerful alliance with Intel will let Mobileye draw on Intel's resources on the chip side, helping it build strong vision-centered, sensor-fusing algorithms and keep pushing its vision algorithms toward full autonomous driving.


The trend: multi-sensor fusion
Comparing Tesla's and Mobileye's product upgrades, we find that although the "old lovers" have parted ways, their thinking remains the same: both increase the number of sensors and let multiple sensors work together to improve automated-driving capability.
The main causes of the Tesla accident mentioned above were:
The millimeter-wave radar may have misjudged the range. The radar did detect a huge obstacle ahead, but perhaps because the trailer's reflecting area was too large and its body too high, it mistook the trailer for a road sign suspended above the roadway;

The camera blinded by strong light

The front camera (EyeQ3) may also have misjudged. The trailer involved had a tall, white body with no color markings; under strong sunlight, the image recognition system could easily mistake it for a white cloud. In this extreme case, Tesla's millimeter-wave radar and front camera both misjudged. Evidently a camera + millimeter-wave radar scheme lacks redundancy and fault tolerance and can hardly carry the mission of automated driving alone; information from multiple sensors must be fused into a comprehensive judgment.
Each sensor has its own strengths and weaknesses, and they are hard to substitute for one another; autonomous driving will require several kinds of sensors working together to form the vehicle's perception system. Different sensors rest on different principles and serve different functions, each playing to its strengths in different scenarios, and none can replace the others.


Multiple sensors of the same or different types acquire different local and categorical information. That information may be complementary, but it may also be redundant or contradictory, while the control center can issue only one correct instruction. The control center must therefore fuse the information from the many sensors and judge comprehensively.
Imagine one sensor reporting that the car must brake immediately while another shows it can safely continue, or one sensor calling for a left turn while another calls for a right turn. Without sensor fusion, the car would be "confused and at a loss," and the outcome could be an accident. Whenever multiple sensors are used, then, their information must be fused to guarantee safety. Multi-sensor fusion significantly improves a system's redundancy and fault tolerance, guaranteeing the speed and correctness of its decisions, and it is the inevitable trend for autonomous driving.
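A deliberately simple way to see why fusion helps with contradictions is a confidence-weighted vote, sketched below. The sensor names and confidence values are illustrative, and real systems use the statistical methods discussed later.

```python
def fuse_obstacle_reports(reports, brake_threshold=0.5):
    """reports: list of (obstacle_detected: bool, confidence: 0..1)."""
    # Confidence-weighted vote: a contradiction pulls the belief toward 0.5
    # instead of flipping the decision on one faulty sensor.
    score = sum(conf if hit else -conf for hit, conf in reports)
    total = sum(conf for _, conf in reports)
    belief = 0.5 + score / (2 * total)          # map the vote into [0, 1]
    return belief >= brake_threshold, belief

reports = [(True, 0.9),    # radar: large reflector ahead
           (False, 0.6),   # camera: scene looks clear (glare)
           (True, 0.7)]    # lidar: points above the road surface
print(fuse_obstacle_reports(reports))  # -> (True, ~0.73): brake anyway
```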



Requirements for multi-sensor fusion
1) At the hardware level, the sensors must be numerous enough, and of different types, to ensure the information is sufficient and redundant;
2) At the software level, the algorithms must be well optimized, data processing fast, and fault tolerance good, to guarantee the speed and correctness of the final decision.

The algorithm is the core of multi-sensor fusion
Put simply, sensor fusion takes the data obtained by multiple sensors and integrates the information to analyze the external environment more accurately and reliably, thereby improving the correctness of the system's decisions.


The basic principle of multi-sensor fusion
The basic principle of multi-sensor fusion resembles the human brain's integrated processing of environmental information. Humans perceive the external environment through the eyes, ears, nose, and limbs (the various sensors), which pass what they detect to the brain (the information fusion center); the brain integrates it with prior knowledge (the database) to assess the surroundings and the events under way quickly and accurately.
There are three multi-sensor fusion architectures: distributed, centralized, and hybrid.

1) Distributed. Each independent sensor processes its own raw data locally and sends the result to the information fusion center, which combines the results to obtain the final outcome. Distributed fusion demands little communication bandwidth and computes quickly, with good reliability and continuity, but its tracking accuracy falls far short of the centralized approach.

2) Centralized. The raw data from every sensor is sent directly to the central processor for fusion, allowing real-time fusion. Data processing precision is high and the algorithms are flexible; the drawbacks are the heavy demands on the processor, lower reliability, and the large data volume, which make it hard to implement.

3) Hybrid. In a hybrid framework, some sensors use centralized fusion while the rest use distributed fusion. The hybrid framework is highly adaptable and combines the advantages of the centralized and distributed approaches with good stability, but its structure is more complex than either, raising communication and computation costs.
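The structural difference between the two basic architectures can be shown with a toy range-estimation task, as in the sketch below (an illustration, not a standard design): centralized fusion sees every raw sample, while distributed fusion sees only each sensor's local summary.

```python
import statistics

def centralized_fusion(raw_streams):
    """All raw samples go to the center, which fuses everything at once:
    best accuracy, but high bandwidth and processor load."""
    all_samples = [x for stream in raw_streams for x in stream]
    return statistics.median(all_samples)   # robust central estimate

def distributed_fusion(raw_streams):
    """Each sensor reduces its own samples locally and ships one number:
    low bandwidth and fast, but the local reduction discards detail."""
    local_results = [statistics.mean(stream) for stream in raw_streams]
    return statistics.median(local_results)

# Toy data: three sensors measuring the same ~50 m gap, one of them faulty.
streams = [[49.8, 50.1, 50.0], [50.2, 49.9, 50.1], [80.0, 79.5, 80.2]]
print(centralized_fusion(streams))   # 50.1 (outliers outvoted sample by sample)
print(distributed_fusion(streams))   # ~50.07 (median of the three local means)
```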


Because multiple sensors increase the amount of information to process, and some of it may even be contradictory, it is critical that the system process the data quickly and filter out useless and false information so that it can still make timely and correct decisions. The main theoretical methods of multi-sensor fusion at present include Bayesian inference, Kalman filtering, Dempster-Shafer (D-S) evidence theory, fuzzy set theory, artificial neural networks, and so on. As the analysis above shows, multi-sensor fusion is not hard to achieve at the hardware level; the focus and the difficulty lie in the algorithms. Hardware and software are hard to separate in multi-sensor fusion, but the algorithms are the key difficulty and carry high technical barriers, so they will occupy the main part of the value chain.
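As a minimal example of one method named above, the sketch below runs a one-dimensional Kalman filter that fuses two noisy range measurements (say, radar and camera) into a single estimate. The noise variances are illustrative assumptions, not real sensor specifications.

```python
def kalman_update(x, P, z, R):
    """One measurement update: state x with variance P, measurement z with variance R."""
    K = P / (P + R)          # Kalman gain: trust the less-noisy source more
    x = x + K * (z - x)      # correct the estimate toward the measurement
    P = (1 - K) * P          # fused estimate is more certain than either input
    return x, P

x, P = 50.0, 100.0           # initial guess: 50 m, very uncertain
x, P = kalman_update(x, P, z=48.2, R=0.25)   # radar: accurate ranging
x, P = kalman_update(x, P, z=47.0, R=4.0)    # camera: coarser range estimate
print(f"fused range {x:.2f} m, variance {P:.3f}")   # ~48.13 m, ~0.235
```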



Conclusion
In the tide of autonomous driving, independent-brand automakers have a more robust demand for intelligence and electronics than the joint ventures, which in turn creates opportunities in this field for independent second-tier parts suppliers; over the past few years the parts industry has been waiting for the market to open. Compared with the control and execution layers, which are held by internet giants, OEMs, and Tier 1 suppliers, the suppliers at the sensor layer are more dispersed, the threshold is relatively low, and the entry cycle relatively short. The sensing layer is still the easiest entry point for domestic companies into the autonomous-driving industry.

