New Orleans Truck Attack: How Terrorists Exploit Smart Technologies – by Eleonora Ristuccia & Alessandro Bolpagni

On January 1, 2025, former US Army soldier Shamsud-Din Bahar Jabbar drove a pickup truck into the crowd on Bourbon Street, in the heart of New Orleans, killing 14 people and injuring 35 more. According to Federal Bureau of Investigation (FBI) Special Agent Lyonel Myrthil, Jabbar had visited the city twice prior to the attack.

First, at the end of October, he rented a house for a few days and recorded video while cycling through New Orleans’ French Quarter. He wore a pair of Meta Ray-Ban smart glasses, which have cameras built into the frame, allowing him to film hands-free and thus go unnoticed. Beyond taking photos and videos, the $300 Meta glasses contain speakers that let users make regular calls, interact with their phone’s digital assistant (notably Siri on iPhones), and livestream events.

Figure 1: Jabbar wearing Meta Ray-Ban smart glasses – source FBI/Reuters

He visited New Orleans again on November 10, though no further details about this second trip have been disclosed. Jabbar, who pledged allegiance to the Islamic State (IS) in several videos posted on the night of the attack, was found with the same pair of Meta glasses after being shot dead by police. Smart glasses such as those worn by the New Orleans attacker are only among the latest devices bridging the physical and virtual worlds. As such products proliferate, terrorist organisations grow ever more resilient, adapting to and incorporating a new cyber-technological landscape.

Indeed, while this appears to be the first time Meta smart glasses have been used to plan an act of terror, the metaverse and Artificial Intelligence (AI) tools have already been extensively employed by terrorist groups. As Adigwe et al. note, the ease of access to AI systems is a double-edged sword, opening the door to misuse by malicious actors. Advanced AI tools also often enable attackers to operate anonymously, as in the 2017 NotPetya cyberattack, whose perpetrators long evaded formal attribution. Terrorist groups have likewise been experimenting with generative AI to enhance recruitment efforts: IS issued a guide on how to use the technology securely in 2023, and a pro-IS non-institutional media house has released several propaganda videos since early 2024. Metaverse technologies have been explored as well, including through online gaming. These platforms have proven valuable for radicalising and mobilising young people, as in the case of two German teenagers who began engaging with extreme-right contacts on Roblox in the early 2020s. A 2022 report by the Council of the European Union had already suggested that the metaverse could be used to circumvent classical communication channels when planning terrorist attacks and recruiting new members. Terrorist organisations have been adding the metaverse to their digital ecosystem in light of operational benefits such as improved interactivity and security.

The New Year’s Day terrorist attack illustrates how terrorists are increasingly employing smart technologies to advance their aims. Terrorist organisations have repeatedly shown that they can adopt ever-evolving technologies to sharpen both their propaganda and their capacity to carry out attacks. More broadly, AI tools and the metaverse are reshaping the terrorism landscape, raising new challenges and complicating counter-terrorism efforts. Given the ever-present risk of counter-terrorism devolving into a game of ‘whack-a-mole’, it is crucial to develop new methodological and operational approaches that can intercept these trends and understand where these organisations are steering their attention.