The ESCAR US conference took place June 20-21 and covered topics such as how to set up bug-bounty programs, how vehicular software is reverse-engineered and exploited, how the vehicular industry should deal with security problems, the pitfalls of using machine-learning algorithms for tasks such as road-sign detection, and algorithms for anomaly-based intrusion detection systems (IDS).

Bug-bounty programs are great tools if used correctly. One problem discussed was that the vehicular industry does not seem to fully understand their power. When setting up such programs, the industry does not seem to protect the people working for it. The legal terms often protect the industry and make the hacker or researcher responsible for all consequences: “You must not violate any law, you must not disrupt or compromise any data that is not your own, if third-party components are hacked they will be notified [and possibly take further action?], etc.” To unlock the full power of such programs and engage skilled people, we need to treat participants as we would our own employees and protect them from consequences they cannot foresee.

Reverse engineering of software is common today. The problem is not reading out the memory contents of ECUs; this is normally not very hard: EEPROMs are trivial to dump, software vulnerabilities such as stack-overflow attacks allow injection of code that dumps memory, and if really needed, hardware tools can be used. We also learned that encrypting memory is largely meaningless, since the memory contents end up in emulators that decrypt them when executing the code. More complicated ECUs are not disassembled or statically analyzed due to the code complexity; instead, techniques such as tainting memory contents can be used to follow incoming and outgoing messages and see which other variables affect their contents. This way, crypto keys and the behavior of the ECU become known.
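
As a rough illustration of the taint-tracking idea described above (my own sketch, not from any talk), the snippet below propagates taint labels through simple operations. In a real tool this happens at the machine-instruction level, but the principle is the same; all names here are hypothetical.

```
# Minimal sketch of dynamic taint tracking (hypothetical, illustrative only).
# Bytes read from an incoming message are marked as tainted; taint propagates
# through operations, so any output value carrying a taint label was derived
# from the input -- revealing which variables influence outgoing messages.

class Tainted:
    """A value paired with the set of input sources it depends on."""
    def __init__(self, value, sources=frozenset()):
        self.value = value
        self.sources = frozenset(sources)

    def __repr__(self):
        return f"Tainted({self.value!r}, sources={set(self.sources) or '{}'})"

def taint_op(op, *args):
    """Apply op to the underlying values; union the taint of all inputs."""
    values = [a.value if isinstance(a, Tainted) else a for a in args]
    sources = frozenset().union(
        *(a.sources for a in args if isinstance(a, Tainted)))
    return Tainted(op(*values), sources)

# Mark each byte of an incoming message with its offset as the taint source.
incoming = [Tainted(b, {f"msg[{i}]"}) for i, b in enumerate([0x12, 0x34])]

key_byte = 0x5A                          # untainted constant inside the ECU
out = taint_op(lambda a, b: a ^ b, incoming[0], key_byte)

print(out)   # Tainted(72, sources={'msg[0]'})
# The output depends on msg[0] but not msg[1]; observing which constants
# combine with tainted data is how keys and message handling are uncovered.
```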

There were also talks about IoT threats: how common they are, how they may affect devices, and why we should assume that cyber-criminals will soon have capabilities similar to those of governments. We also learned that deep-learning algorithms are not always reliable; a practical and now famous example was shown where road signs were slightly manipulated with stickers, causing a stop sign to be classified as a speed-limit sign even though, to the human eye, the sign was still without doubt a stop sign. Another talk covered the deployment of an anomaly-based IDS, which seems to work quite well with a false positive rate (FPR) below 0.1%; this is good for such a system, but since in-vehicle networks carry thousands of messages per second, it would still cause several false alarms per second per vehicle if deployed today. We also learned about similarities and differences between automotive security and that of aerospace, train, and maritime systems.
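
To make the false-alarm figure concrete, here is a back-of-the-envelope calculation (my own illustration, not from the talk); the traffic rate is an assumption, roughly what a busy in-vehicle network might carry:

```
# Back-of-the-envelope false-alarm rate for an anomaly-based IDS.
# The message rate is an assumed figure for illustration, not from the talk.

frames_per_second = 2_000      # assumed aggregate in-vehicle message rate
fpr = 0.001                    # false positive rate reported as < 0.1%

false_alarms_per_second = frames_per_second * fpr
false_alarms_per_hour = false_alarms_per_second * 3600

print(f"{false_alarms_per_second:.1f} false alarms/s "
      f"({false_alarms_per_hour:,.0f} per hour of driving)")
# -> 2.0 false alarms/s (7,200 per hour of driving)
```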

The conference had many interesting talks from skilled speakers. For more details and information, please contact Tomas Olovsson, Chalmers.


Written by Tomas Olovsson, Chalmers.
