ChatGPT has received considerable attention in recent months, and the release of GPT-4 demonstrated once more the potential of foundation models. There have been many discussions and proposals on how natural language processing (NLP) can be used, e.g., the chatbot functionality in Microsoft products (Microsoft 365 Copilot) and the use of such models to write code. Cybersecurity has not been left out: shortly after the release of ChatGPT, people demonstrated that they could use it to write malware, and argued that a new wave of phishing campaigns will follow because such models help generate more convincing emails.
GPT, however, is not the only foundation model out there. In this newsletter we briefly describe an intrusion detection system that uses Bidirectional Encoder Representations from Transformers (BERT) to detect anomalies in Controller Area Network (CAN) traffic. Alkhatib et al. propose CAN-BERT in their article “CAN-BERT do it? Controller Area Network Intrusion Detection System based on BERT Language Model”. The authors chose BERT because it is bidirectional, meaning that it captures relationships to tokens on both the left and the right. In comparison, unidirectional models, such as GPT and long short-term memory (LSTM) networks, only process a sequence from left to right. Previous research on anomaly detection has also shown that bidirectional models detect anomalies better.
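As a toy illustration of why two-sided context helps, consider benign traffic where the middle CAN ID in a triple depends on both of its neighbours. All IDs and the lookup functions below are made up for illustration; they only mimic the difference between a unidirectional and a bidirectional view, not any actual model:

```python
# Toy benign traffic where the middle CAN ID depends on BOTH neighbours
# (all IDs are made up for illustration).
benign = [
    ("0x1A0", "0x2B0", "0x3C0"),
    ("0x1A0", "0x2B1", "0x3C1"),
]

def predict_left_only(left):
    """Unidirectional view: every middle ID ever observed after this left context."""
    return {mid for l, mid, r in benign if l == left}

def predict_bidirectional(left, right):
    """Bidirectional view: middle IDs consistent with both neighbours."""
    return {mid for l, mid, r in benign if l == left and r == right}

print(sorted(predict_left_only("0x1A0")))               # ['0x2B0', '0x2B1'] -- ambiguous
print(sorted(predict_bidirectional("0x1A0", "0x3C1")))  # ['0x2B1'] -- right context disambiguates
```

With only the left context, both middle IDs remain plausible; adding the right-hand neighbour narrows the prediction to a single candidate, which is the intuition behind preferring a bidirectional model here.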
The authors use the Car Hacking: Attack & Defense Challenge 2020 dataset (containing flooding, fuzzing, and malfunction attacks) and apply a feature-based sliding-window technique, training the model on normal/benign data only. Once trained, the model returns a predicted candidate set of normal CAN IDs for each position in a window. If the observed CAN ID lies within this predicted set, the traffic is likely benign; otherwise, it is flagged as an anomaly.
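That detection logic can be sketched in a few lines of Python. Everything below is an illustrative assumption rather than the authors' implementation: the window size, the CAN ID values, and in particular `train_candidates`, a simple frequency-based top-k table that stands in for the trained BERT model's candidate prediction:

```python
from collections import Counter

def sliding_windows(can_ids, size):
    """Split a CAN ID stream into fixed-size windows (sliding-window technique)."""
    return [can_ids[i:i + size] for i in range(len(can_ids) - size + 1)]

def train_candidates(benign_ids, size, k):
    """Toy stand-in for the trained model: for each masked position's context,
    remember the k most frequent CAN IDs observed in benign traffic."""
    table = {}
    for w in sliding_windows(benign_ids, size):
        for pos in range(size):
            ctx = (tuple(w[:pos]), tuple(w[pos + 1:]))  # left and right context
            table.setdefault(ctx, Counter())[w[pos]] += 1
    return {ctx: {cid for cid, _ in c.most_common(k)} for ctx, c in table.items()}

def is_anomalous(window, candidates):
    """Flag the window if any CAN ID falls outside its predicted candidate set."""
    for pos, cid in enumerate(window):
        ctx = (tuple(window[:pos]), tuple(window[pos + 1:]))
        if cid not in candidates.get(ctx, set()):
            return True
    return False

benign = ["0x100", "0x200", "0x100", "0x300"] * 50   # made-up benign CAN ID stream
model = train_candidates(benign, size=4, k=2)
print(is_anomalous(["0x100", "0x200", "0x100", "0x300"], model))  # False: known pattern
print(is_anomalous(["0x100", "0x7FF", "0x100", "0x300"], model))  # True: unseen ID
```

The decision rule mirrors the paper's description: a window is benign only if every observed CAN ID is contained in the candidate set predicted for its position; a single out-of-set ID, such as the injected `"0x7FF"` above, marks the window as anomalous.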
The authors evaluate this technique against other methods, including “traditional” machine learning techniques such as Isolation Forest as well as deep learning approaches such as LSTM autoencoders. The evaluation indicates that CAN-BERT outperforms the other techniques once the sequence length is increased (≥ 32).
Using deep learning for in-vehicle intrusion detection may sound controversial, as it requires far more resources than traditional techniques such as the k-nearest neighbours algorithm. However, exploring how deep learning, and especially foundation models, can be applied in cybersecurity is worthwhile and shows their potential: adapting a foundation model to a domain-specific task may be more efficient than training a model for that one task from scratch.
N. Alkhatib, M. Mushtaq, H. Ghauch and J.-L. Danger, “CAN-BERT do it? Controller Area Network Intrusion Detection System based on BERT Language Model,” 2022 IEEE/ACS 19th International Conference on Computer Systems and Applications (AICCSA), Abu Dhabi, United Arab Emirates, 2022, pp. 1-8, doi: 10.1109/AICCSA56895.2022.10017800.
R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, … and P. Liang, “On the opportunities and risks of foundation models,” arXiv preprint arXiv:2108.07258, 2021.
- Update on the Kia/Hyundai case (our newsletter covering their update): 23 state attorneys general argue that warning stickers, longer alarm sounds, and a software update are not enough (link to news article).
- Ransomware attack against Ferrari
Written by Thomas Rosenstatter