Computing technologies: platforms, processors, and controllers Chapter | 3 33
The training of machine learning models on large quantities of historical
data and their application to real-time data will enable solutions for
traffic estimation, travel time prediction, and the prediction of vehicle breakdowns. The adoption
of machine learning in the automotive industry can also offer route recommen-
dations based on fuel consumption and even parking availability. To introduce
safe, personalized, and predictable AD experiences, OEMs and Tier 1 suppliers
are investing heavily in machine learning and predictive analytics. Predictive
algorithms and artificial neural networks help smart vehicles perceive and
interpret road environments with reported accuracies of up to 99.8%, surpassing
human drivers.
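To make the first idea above concrete — training on historical data, then applying the model to live inputs — the following sketch fits a simple travel-time model on synthetic data. The features (hour of day, traffic density) and all coefficients are illustrative assumptions, not values from any real traffic system.

```python
import numpy as np

# Synthetic "historical" data (hypothetical): each observation is
# [hour of day, traffic density in 0..1]; target is travel time in minutes.
rng = np.random.default_rng(0)
hours = rng.uniform(0, 24, 200)
density = rng.uniform(0, 1, 200)
travel_time = 15 + 0.5 * hours + 20 * density + rng.normal(0, 1, 200)

# Train: fit a linear model on the historical data via least squares.
X = np.column_stack([np.ones_like(hours), hours, density])
coef, *_ = np.linalg.lstsq(X, travel_time, rcond=None)

# Apply to a "real-time" observation: 8 a.m., heavy traffic (density 0.9).
live = np.array([1.0, 8.0, 0.9])
predicted = live @ coef
print(f"predicted travel time: {predicted:.1f} min")
```

In a production pipeline the same split applies: the expensive training step runs offline on accumulated history, while the cheap prediction step runs on each incoming real-time observation.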
The next generation of ECUs that host deep learning models is already a reality,
and the market interest in them is steadily growing according to recent market
research (Amsrud & Garzon, 2018). The volume of data that such units are
expected to process strengthens the need for integrated deep learning models
that will adapt to any condition and will be an intrinsic part of the modern auto-
motive electronics systems.
The need for local data processing and adaptability to changing conditions is evi-
dent in autonomous vehicles, as well as in drones and robots that navigate open and
uncontrolled environments. Local data preprocessing, in combination with rein-
forcement learning algorithms that run locally, will allow such systems to adapt
to any condition, avoiding the network latency and the security risks that
cloud processing-based solutions can face. An indicative example is the extrac-
tion of traffic information directly from the onboard camera of vehicles, using
computer vision models that run near the sensor and not in the cloud. This is
expected to reduce the communication overhead and balance the computational
load between the cloud and the edge devices. The development of low-cost,
low-power hardware able to run at the edge and efficiently analyze video
sequences remains challenging, although considerable progress has already
been made. Still-high energy consumption, limited computational power and
memory, and high cost are among the factors that keep embedded platforms
far from being common practice for automotive applications.
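The near-sensor preprocessing described above can be sketched as a frame-differencing filter that decides locally whether a camera frame is worth transmitting, so only event summaries, not raw video, leave the edge device. The `detect_motion` function, its threshold, and the synthetic frames are illustrative assumptions, not taken from any cited system.

```python
import numpy as np

def detect_motion(prev_frame, frame, threshold=25):
    """Flag a frame as interesting if enough pixels changed.

    Runs near the sensor: only frames that pass this check need to
    be sent upstream, reducing communication overhead."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.count_nonzero(diff > threshold)
    return changed / diff.size > 0.01  # more than 1% of pixels changed

# Two synthetic 8-bit grayscale frames: a static scene, and one where
# a block of pixels (a passing vehicle, say) changes brightness.
prev_frame = np.full((120, 160), 100, dtype=np.uint8)
frame = prev_frame.copy()
frame[40:80, 60:120] = 200  # simulated moving object

print(detect_motion(prev_frame, prev_frame))  # static scene: no event
print(detect_motion(prev_frame, frame))       # changed scene: event
```

A real deployment would replace this heuristic with an on-device vision model, but the control flow — filter locally, transmit selectively — is the same load-balancing pattern between edge and cloud that the text describes.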
Although Google and other AI specialists have developed deep learning
models for computer vision with outstanding performance in scene
decomposition and object identification, in some cases outperforming even
the human eye (which has roughly a 5% error rate), these models still rely on
large-scale hardware architectures. Such architectures comprise multiple GPUs
or tensor processing units that can efficiently process huge computational
loads, and they are driving the adoption of deep learning models for computer vision.
On the other hand, Tesla takes advantage of NVidia technology (the
Drive PX2 processor), which is embedded in the on-board driving control unit
(Lambert, 2017) and offers advanced deep learning functionalities for processing
ultrasonic, camera, and radar data and supporting the vehicle’s ADAS system.
The surround view system developed by AdasWorks is another deep learning
implementation, which processes visual data from multiple cameras and sensors