After a solid week in Vegas and another solid week of recovery, I’m back in the office (AKA sitting on my couch eating Doritos with chopsticks so I don’t get my keyboard dirty) to bring you my official Black Hat 2023 recap. This year’s event was noticeably scaled back, with fewer people swag surfing the business hall and more technical security folks in search of solutions for actual business problems. This shift led to far more engaging conversations with people in our booth, as well as at the various happy hours and parties I attended. Also noticeably scaled back were the booths of many companies, potentially demonstrating slowdown and consolidation in a space that has experienced wave after wave of recession-defying growth.
While XDR enjoyed the spotlight at Black Hat 2022, it was nowhere to be seen a short 12 months later. Instead, in the event you’ve been living under a rock (or maybe just in a SOC) for the last few months, the entire world is talking about AI. One could say it’s *puts on mom jeans* generating a lot of buzz. It comes as no surprise, then, that many vendors at Black Hat were keen to trend jack, introducing (mostly novel) features that claim to bring the powers of AI to the security stack.
The people at Black Hat, including founder Jeff Moss and keynote speaker Maria Markstedter, wanted to talk about the potential dangers of AI, such as:
Poisoning the well – The possibility that large language models can be compromised with malicious data or algorithms, rendering their output malicious
AI-powered social engineering – Bots that know exactly how to mimic your boss’s voice, writing style, and even video
Large-scale, self-perpetuating misinformation campaigns – the more misinformation is published, the more likely it is to be published and distributed
The threat of multi-modal AI – exploits no longer needs to be text, they can be hidden in the metadata of images, audio files, etc
As keynote speaker Maria Markstedter put it:
Corporate arms races are not driven by a concern for safety or security. As we all know, security slows down progress. Move fast, break shit, that’s the motto.
Unfortunately, when it comes to AI, optimizing means optimizing, and breaking shit might be fine with a single application or piece of code, but exploits like WannaCry showed us that self-perpetuating threats have the potential to break the entire internet. If a worm is bad, an artificially intelligent worm that is capable of adapting to circumvent security tools is really bad.
This is the threat of AI, but also likely one of the largest business opportunities since the dawn of the internet. After all, the only way to combat AI is with – AI (yes, this absolutely sounds like the beginning of Terminator). This means that companies who gather, filter, move, and manage data will be critical to both creating and defending an artificial future.
Some of the most interesting AI-powered security solutions I saw at Black Hat included:
Anomaly detection models that identify suspicious activity in network traffic
Malware detection models that learn the characteristics of known malware and identify new threats
Incident response tools that use AI to automate tasks such as triage and remediation
Lastly, but most importantly, threat intelligence models that aggregate and analyze threat data from a variety of sources
If you’ve spent any amount of time standing near me, you’ve probably heard me talk about web3, which has become synonymous with blockchain but is really more of an entire mindset and way of thinking about technology. The open-source, decentralized nature of the next generation of Internet technologies will require collaboration and information sharing to effectively defend. To this end, one of the other notable themes at Black Hat was a collaboration between private and public enterprises, and DARPA (inventors of the internet and many other creepy-but-awesome things) was on hand to announce a new strategic initiative aimed and funding the next generation of AI companies.
DARPA’s goal is to enable innovators to build AI-powered tools that can find and fix vulnerabilities in critical code. This was one of several public/private partnerships and efforts announced or promoted during the week, including a DefCon keynote presented by CISA Director Jen Easterly and Bsides founder Keren Elazari together. It seems that government, military, and law enforcement have accepted that if adversaries are open-sourcing code, collaborating, and sharing information, defenders may need to do the same. I personally view this as a positive development and an important step towards a more secure future, and most of the other security practitioners I spoke to on both sides of the fence agreed.
Overall, I walked away from Black Hat with more than just a cough and some new socks; I was left with the impression that, whether legacy companies are willing to accept and adapt or not, we have finally reached a post-proprietary stage in data. Consumers who once tolerated siloed data or proprietary languages are now demanding choice. Security teams are seeing the value in orchestration, not just tooling, and companies, customers, and the public sector are beginning to embrace collaboration. The market as a whole is realizing that it’s not how much data you can collect, it’s how much data you can find and use, and without the context provided by collaboration between vendors, most data is useless. This gives me hope that we will continue to be able to address the challenges of an artificially intelligent, hyper-connected future.