Offensive Threat Intelligence
CTI isn’t just for blue teams. Used properly, it sharpens red team tradecraft, aligns operations to real-world threats, and exposes blind spots defenders often miss. It’s not about knowing threats; it’s about becoming them long enough to help others beat them.

Learn from Adversaries to Inform Better Practices
Cyber Threat Intelligence (CTI) is often misunderstood. It gets bundled in as just reading threat reports or tracking high-level actors, but that often misses the point. CTI, especially the “cyber” side of it, is often picked up by folks without much technical background. That’s not a problem on its own, but it can show when the analysis lacks depth or understanding of how an attack occurred and how the techniques are used in the real world.
Used well, CTI shouldn't be passive. It involves using historical data, evolving incident patterns, and the overlap of red and blue capabilities to understand how adversaries think, act, and adapt. Done well, it strengthens an organisation’s overall security posture by informing how defences are designed, tested, and evolved.
Just like in offensive security, the end goal isn’t to show off or break things for the sake of it (breaking things is often what trips alerts, so avoiding deviation from the norm is important). The overall goal is to improve defence: every offensive security operation undertaken by a red team is offensive security for defensive purposes. Whether you’re a pentester, a red teamer simulating real adversaries, or a blue teamer chasing alerts, you’re all aiming for the same thing: helping the organisation become harder to breach, making security better all around, just with a different path to get there.
Offensive Security != Threat Intelligence?
In technical circles, there’s still a divide between offensive security and CTI. Even though threat-led testing and red teaming are gaining traction, offensive threat intelligence is rarely talked about as a discipline in its own right.
But it should be.
This goes far beyond copying Indicators of Compromise or reading finished intelligence (fintel) reports. Offensive Threat Intelligence means understanding adversary actions at a tradecraft level, recognising shifts in their tooling and tactics, and turning that knowledge into capability development that improves how we approach red teaming as attackers. It feeds into emulation planning, operational decision-making, and validation of controls.
This is where the red team benefits not just from thinking like an attacker, but from understanding how attackers evolve in the real world and how they get caught.
Bridging the Gaps
Traditional CTI roles tend to sit closer to the defence side of the house. Analysts produce reports that guide blue teams or feed into SIEM rule creation rather than understanding attacker techniques and uncovering new ones. But there is just as much value for offensive teams. The key difference is in how the intelligence is applied, and built upon, using the knowledge from the 'bad actors' that get caught: bad in the sense of acting for illegitimate purposes, and bad because, well, they got caught, didn't they?
When CTI is used to inform offensive simulation and adversary emulation (two phrases often used interchangeably that actually differ), it becomes a force multiplier that greatly improves the approach red teams take. Red teams can align their objectives with what real attackers are doing, in specific sectors or with certain motivations, to better play-test blue teams and set expectations for the real world, encouraging them to adapt and understand that what's good enough today may not stand up tomorrow.
This lifts exercises from basic exploit demonstrations to more advanced threat-informed operations that force defenders to address blind spots in their detection and response programmes. It also encourages blue teams to stay one step ahead, developing detections for emerging techniques uncovered by red teams and thus identifying when malicious behaviour occurs in their environment.
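As a loose illustration, a threat-informed emulation plan can be expressed as structured data mapping objectives to MITRE ATT&CK techniques. The sketch below is minimal and the actors, sources, and success criteria are placeholders, not real intelligence:

```python
# A minimal sketch of threat-informed emulation plan entries, assuming CTI
# reporting has been mapped onto MITRE ATT&CK technique IDs. All values are
# illustrative placeholders.
emulation_plan = [
    {
        "objective": "initial access via trusted third-party relationship",
        "technique": "T1199",  # ATT&CK: Trusted Relationship
        "informed_by": "sector-specific incident reporting (placeholder)",
        "success_criteria": "SOC raises an alert within an agreed window",
    },
    {
        "objective": "staged exfiltration over an existing web service",
        "technique": "T1567",  # ATT&CK: Exfiltration Over Web Service
        "informed_by": "post-incident report from a comparable organisation",
        "success_criteria": "egress controls flag or block the transfer",
    },
]

# Operators walk the plan in order, recording whether each behaviour was
# detected, turning the exercise into measurable detection coverage.
for step in emulation_plan:
    print(f'{step["technique"]}: {step["objective"]}')
```

The value here is less the code and more the discipline: each step traces back to something a real actor has done, and each has a defined outcome the blue team can be measured against.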
Know Your Enemy
... and their motives
The best operators are not just technical. They understand attacker logic, motivations, and timing. Knowing how to exploit something is one thing, but understanding when an adversary would use it, how it would blend into network noise, and what outcomes they are pursuing is what separates decent red teams from great ones.
Adversaries are human. They follow incentives. They cut corners when needed. They reuse infrastructure, adopt publicly released tools, and sometimes leave operational mistakes behind. Yet they also innovate and share knowledge between groups.
By closely reviewing the tactics of advanced persistent threats, ransomware operators, access brokers, and other criminal groups, we as red teams, adversarial engineers and operators gain better insight into how real attacks unfold. We learn how they chain weaknesses together, exploit poor segmentation, or target overlooked services (like that one internal web app nobody seems to care about). This context shapes how red teams operate, helping them emulate realistic behaviour rather than just chase high-value access like domain admin over and over again.
Understanding threat actor targeting also provides sector-specific and industry awareness. Knowing which industry verticals are being targeted, and by whom with what types of techniques, allows teams to tailor their operations for maximum relevance and realism.
A good example of this is the manner in which a red team targets an organisation. The traditional path of social engineering -> get a payload to execute -> traverse the internal network may not always be the best way of performing an exercise; sometimes starting the team as a legitimate employee under the guise of a specific job role offers far better value in understanding gaps in controls and detections while maintaining realism.
The threat landscape is forever expanding, and the methods threat actors and adversaries use are changing too. Just as macros were phased out, other methods of initial access are being adopted by different types of attackers, and their motives will shape how they attack your organisation.
The diagram below (something I made many years ago) shows how an increase in attacker motivation and impact, alongside a rise in capability and prevalence, maps across different threat tiers.
We can see the prevalence of each type of threat actor and the motivations that follow, along with the capability versus motivation and impact trade-off sharpening towards the tip of the triangle.
CTI as an Offensive Security Focused Professional
At its core, CTI is about making sense of threats before they strike (or understanding breaches once they have occurred). It involves collecting, analysing, and synthesising information that reveals how digital assets might be compromised. But for red teams, it is not about indicators. It is about patterns of behaviour, how those patterns evolve over time, and how to take existing techniques and modify them, as with Commit Stomping, a technique I described as an expansion of timestomping.
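To make that concrete, here is a minimal sketch of the idea behind Commit Stomping. Git honours date overrides via environment variables, so a commit's recorded timestamps can be set to an arbitrary date; the date and message below are placeholders, and this should only be run in a throwaway test repository:

```python
import os
import subprocess

# A minimal sketch of the idea behind Commit Stomping: Git honours the
# GIT_AUTHOR_DATE and GIT_COMMITTER_DATE environment variables, so a
# commit's recorded timestamps can be set to a chosen date rather than
# "now". Placeholder values; use a throwaway test repository only.
spoofed_date = "2021-03-04T09:26:00"

env = dict(os.environ)
env["GIT_AUTHOR_DATE"] = spoofed_date
env["GIT_COMMITTER_DATE"] = spoofed_date

# Creates an empty commit whose author and committer timestamps both show
# the spoofed date, letting activity blend into a chosen timeframe.
subprocess.run(
    ["git", "commit", "--allow-empty", "-m", "routine maintenance"],
    env=env,
    check=True,
)
```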
Understanding the threats, breaches, and actors out there stops red teams from falling into the trap of always going for the same targets and objectives. Instead of chasing domain admin every time, threat-informed operators shape their attacks around how real breaches happen. That might mean going after production systems instead of test ones, or stepping outside of traditional Active Directory to hit data stores that actually matter.
It could mean mimicking the slow, careful movements of espionage groups rather than the loud, fast tactics of common criminals and ransomware groups. And sometimes, using techniques that might get you caught is the point: it forces defenders to react properly and put their response plans to the test. While this post isn't focused on red teaming, it is important to understand it and how it aligns with CTI.
This kind of thinking forces defenders to look deeper. It is no longer enough to block a hash or IP. They have to understand intent, lateral movement patterns, and what behaviour is likely to be benign versus malicious under specific conditions.
Intelligence-led red teaming creates a more adversarial mindset across all areas of the engagement. From infrastructure setup to payload delivery to the operator’s decision-making process, everything is shaped by what real adversaries do.
CTI Can Uncover Insider Threats Too
It’s not just about APTs, criminals and ransomware groups. CTI also reveals internal risks such as insider threats, abuse of trusted access, poor role separation, or high-value misconfigurations. These are often harder to detect, but can be just as damaging as an external adversary.
A good red team exercise informed by targeted threat intelligence might simulate an insider selling access, or using legitimate access to reach areas like source code or the supply chain.
Knowing which tools insider threats typically use allows an organisation to build detections for them and, once again, understand where gaps might exist.
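As a rough illustration of turning that knowledge into a detection, the sketch below assumes process-creation events exported as JSON lines with a "command_line" field (the log format and field name are assumptions for illustration), flagging tooling commonly reported in insider data-theft cases:

```python
import json

# A minimal sketch, assuming process-creation events are exported as JSON
# lines with a "command_line" field (field name and log format are assumed
# for illustration). Flags tooling commonly reported in insider data-theft
# cases, such as rclone or MEGAcmd.
WATCHLIST = ("rclone", "megacmd", "mega-put", "pscp")

def flag_insider_tooling(log_path: str) -> list:
    """Return events whose command line mentions a watchlisted tool."""
    hits = []
    with open(log_path) as fh:
        for line in fh:
            event = json.loads(line)
            command = event.get("command_line", "").lower()
            if any(tool in command for tool in WATCHLIST):
                hits.append(event)
    return hits
```

A simple string match like this is noisy on its own, but it demonstrates the point: the watchlist itself is a product of CTI, and the gaps it exposes (no process logging on certain hosts, say) are as valuable as the hits.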
Intelligence Isn’t a Static Feed: It’s an Operational Enabler
CTI is not static, and good intelligence should operate fluidly. It is not just another PDF or a feed of indicators piped into a SIEM, never to be seen again until shit hits the fan. When embedded properly, it becomes an operational component that informs how offensive operations are planned and executed.
This is where offensive threat intelligence shows its true value.
Use it to build infrastructure that reflects real-world attacker setups: from domain names and CDN choice, to mirroring the target's TLS configuration, to how payloads are staged (or stageless, in many cases) and delivered. Better observability into threat actor mistakes helps you build out your operational security models to avoid them and adapt.
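As one small example of that mirroring, a sketch like the following, using Python's standard ssl module with a placeholder hostname, can pull a server's negotiated TLS parameters and certificate details so redirector infrastructure can be configured to resemble them:

```python
import socket
import ssl

# A minimal sketch, assuming you want to observe a server's live TLS setup
# so redirector infrastructure can be configured to resemble it. The
# hostname is a placeholder, not a suggested target.
host = "example.com"

context = ssl.create_default_context()
with socket.create_connection((host, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        # Protocol version and cipher suite the server negotiated.
        print("negotiated:", tls.version(), tls.cipher())
        # Certificate details worth mirroring: issuer and validity window.
        print("issuer:", dict(pair[0] for pair in cert["issuer"]))
        print("valid until:", cert["notAfter"])
```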
Understanding how adversaries pivot, and applying those lessons to test whether your client's controls do anything meaningful in response, is a critical point, and one that many red teams and offensive security professionals miss.
CTI helps transform a red team from a group running tools to a team acting like adversaries. It sets the tone for realistic engagements that provide far more than a list of vulnerabilities.
Final Thoughts
CTI sharpens the red team's ability to think like adversaries and, more importantly, to operate like them in a controlled, repeatable, and measurable way. When paired with grounded tradecraft, that mindset enables red teams to deliver real value: not just by proving something is exploitable, but by demonstrating how a genuine threat actor would approach it, why they would target it, and how long they might remain undetected.
Ultimately, it is not just about knowing the threats. It is about understanding them deeply enough to become them, temporarily, and then helping defenders learn how to beat them.