Autonomous cars on the roads ... and drones in the skies


Before driving on our roads, each autonomous car will have to obtain a certification from Transport Canada, a step that’s only possible if cars are shielded from computer attacks. (Photo: Eschenzweig, license CC BY-SA 4.0).
By putting artificial intelligence (AI) behind the wheel, autonomous-vehicle designers are opening the door to vulnerabilities that could compromise the safety of users and of others near these cars. Professor Gabriela Nicolescu, of the Department of Computer and Software Engineering at Polytechnique Montréal, is leading industry, and many others, to rethink the way they design embedded systems so that the safety of these computers on wheels is ensured from the design stage onward.
A car unable to recognize a stop sign at the corner of the street. Another to which a pedestrian has become invisible. A truck accelerating at the sight of a red traffic light. These hypothetical scenarios could become reality if hackers attacked the code of an autonomous vehicle.
History has already demonstrated that these cars are not immune to attack, as a 2015 experiment carried out by university researchers and a reporter for Wired magazine proved.
Of course, embedded software keeps improving, but new models can always introduce new flaws. Tesla learned this valuable lesson in 2019 at the Pwn2Own competition, where computer-security researchers try to find flaws in different computer systems. Senegal's Amat Cama and American Richard Zhu identified a problem in the Tesla Model 3's on-board software at the time, much to the company's dismay.
"Security by design" in embedded systems
If each company designs its own solutions, then each company must also validate how well those systems hold up against computer attacks.
Professor Gabriela Nicolescu addresses these concerns from both an academic and a practical perspective.
Specializing in the design of secure systems for the Internet of Things, she is working with an American software and hardware giant to ensure the security of its embedded solutions for autonomous vehicles. She has even led the company to change the way it designs those solutions, adding specific security steps from the outset.
“Previously, the design of systems was driven primarily by performance and precision needs,” she explains. “Today, systems security is added in the early stages of the design flow.”
The challenge, she says, is to ensure that these additional features don't burden systems or increase their power consumption. “Autonomous vehicles operate with limited resources and computing power. Designers are therefore required to compress and optimize embedded AI software as much as possible, and that sometimes creates gaps," she notes.
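The compression the professor describes can be illustrated with weight quantization, one common way to shrink an embedded AI model. The sketch below (a generic technique, not the partner's actual method; the weight values are hypothetical) maps float32 weights to int8 and shows the rounding error that such optimization introduces:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric quantization: map float32 weights to int8 with one scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Hypothetical weight matrix from a small embedded model
w = np.array([[0.813, -0.407], [0.052, -1.27]], dtype=np.float32)
q, scale = quantize_int8(w)
w_restored = dequantize(q, scale)

# The rounding error is the kind of "gap" aggressive optimization can open:
# the restored weights are close to the originals, but not identical.
error = float(np.abs(w - w_restored).max())
```

The memory saving is 4x (1 byte per weight instead of 4), at the cost of a bounded per-weight error of at most half the scale factor.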
These gaps make various types of attack possible: malicious modification or injection of code and extraction of intellectual property are just some of the threats to the on-board AI of autonomous vehicles. Threats affecting a vehicle's computer-vision system are particularly significant, according to Professor Nicolescu.
Drones are also vulnerable to attacks that can cause, among other things, a denial of service (by saturating the system with requests) or the spoofing of communications between drones. Professor Nicolescu's team is developing a strategy to prevent the emergence of vulnerabilities at the design stage, by rethinking the design flow of these devices. (Photo: Trotaparamos, license CC BY-NC-SA 2.0.)
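A standard defence against the request-saturation form of denial of service mentioned above is rate limiting. Below is a minimal token-bucket sketch (a generic mitigation, not the team's design; the rate and capacity values are hypothetical):

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # request dropped: a flood cannot saturate downstream logic

bucket = TokenBucket(rate=10.0, capacity=5)
results = [bucket.allow() for _ in range(100)]  # a tight burst of 100 "requests"
# Only roughly the first `capacity` requests in the burst get through
```

Dropped requests cost almost nothing to reject, so a saturation attack exhausts the attacker's bandwidth rather than the drone's processing budget.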
"By attacking the algorithm of this system, hackers can cause the AI to no longer be able to recognize signs or traffic lights, for example," she says. “The consequences could be tragic and very serious."
“Competitive concerns are also at stake," adds the cybersecurity specialist from Polytechnique Montréal. “Intellectual property issues are important, because an implementation that requires very little computing power is of great value to whoever owns it,” she explains.
This work also carries over into the somewhat more distant universe of “autonomous drones.”
Nicolescu and her team are working to lay the groundwork for new software and hardware architectures for these devices, in order to make them safe and allow them one day to fly autonomously without putting the people below at risk. Here again, "security by design" is required in order to get these devices certified by government agencies.
Automating... fluidity
In another project involving an industrial partner, Professor Nicolescu's team is also working on securing decision-support systems that supervise, in real time, the movements of vehicles in the same fleet. These systems will also monitor the routes users take through stations, in order to keep their movements fluid.

"Our role here is less about cybersecurity than about safety," says the specialist. "Terminals’ circuits can be subject to failure, so we monitor the energy level of each one, their temperature, and so on, to detect problems before they affect the proper functioning of the system."

"These AI-integrated systems work with sensors that generate a lot of data,” she adds. “The work here is mainly to optimize the parameters to reduce the overall energy consumption of the system and to detect the failures of these on-board systems."
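The failure monitoring described above can be sketched as simple threshold checks over terminal telemetry. The field names and limits below are hypothetical, and a production system would calibrate them per device and likely combine them with statistical anomaly detection:

```python
# Hypothetical telemetry limits for one terminal (lower bound, upper bound).
LIMITS = {
    "battery_pct": (20.0, 100.0),  # flag terminals below 20% charge
    "temp_c": (-10.0, 60.0),       # flag overheating or freezing circuits
}

def check_terminal(reading: dict) -> list[str]:
    """Return a list of alerts for values missing or outside their allowed range."""
    alerts = []
    for field, (lo, hi) in LIMITS.items():
        value = reading.get(field)
        if value is None or not lo <= value <= hi:
            alerts.append(f"{field}={value} outside [{lo}, {hi}]")
    return alerts

ok = check_terminal({"battery_pct": 85.0, "temp_c": 31.5})   # no alerts
hot = check_terminal({"battery_pct": 85.0, "temp_c": 71.2})  # temperature alert
```

Catching an out-of-range reading this way lets operators intervene before a degraded terminal affects the rest of the fleet.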
Learn more
Professor Gabriela Nicolescu's expertise
Department of Computer and Software Engineering website