The U.S. military and other government organizations use red teams to try to punch through cybersecurity holes before the enemy can. (MC2 Joshua J. Wahl / Navy)
Keeping the U.S. military’s network defenders sharp and up to date is increasingly the work of cyber red teams — security experts who try to punch through holes before the enemy can. But there’s an art to composing a good red team — and to turning the team’s penetration exercises into a solid plan for improving network security.
“Testing your own defenses has become a way of life,” said Tony Sager of the National Security Agency. Sager is chief operating officer of the Information Assurance Directorate, the division that trains and operates red teams for the agency.
“It’s not that defenders are bad — it’s the operational complexity,” he said.
Doug Steelman, chief security officer at Dell SecureWorks, said that simulating attacks is absolutely necessary.
“I don’t know how you sleep at night if you’re in the business of defending an infrastructure unless you know how well your people, process and tools are resisting modern threat actors and their techniques, tactics and procedures,” said Steelman, who previously worked at U.S. Cyber Command. “The only way I know to do that is to exercise via a team that emulates modern threat actor techniques.”
Because attackers use a broad spectrum of tools and tactics to compromise networks, red teams try to follow suit.
“A red team is not just a bunch of hackers that try to break into your system from remote. It’s about social networking; it’s about physical aspects. There are a lot of different pieces of security,” said Jeff Moulton, a researcher with the Georgia Tech Research Institute.
“An effective red team needs to be holistic in nature,” he said. “But it also needs to test the people’s security prowess, so having a systems administrator not understand what to do when an incident occurs is just about as bad as the actual malware or the actual attack itself. If they do the wrong thing, it could be more catastrophic than the event itself.”
Experts pointed to numerous successful tactics beyond software exploits, particularly social engineering, that attackers are using to gather information and get onto networks.
In one case, a red team emulated an approach used by attackers by pretending to be a sailor’s wife. Using an instant messaging application, they managed to obtain a complete crew manifest and a complete itinerary for a vessel, including maintenance locations and dates. This is a variant of the successful spear phishing tactic, in which an attacker attempts to solicit information from a target by sending emails under the guise of being a friend or colleague.
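The IM pretext described above can be sketched in code, purely as an illustration of how a red team might template personalized lures for an authorized exercise. Everything here is hypothetical: the names, the pretext fields, and the message wording are invented for illustration, not drawn from any actual red-team toolkit.

```python
# Illustrative sketch only: templating a personalized social-engineering
# test message for an authorized red-team exercise. All names and
# fields below are hypothetical.
from string import Template

LURE_TEMPLATE = Template(
    "Hi $first_name,\n\n"
    "This is $pretext_name from $pretext_org. Could you confirm the "
    "crew schedule for $topic? I seem to have lost my copy.\n\n"
    "Thanks!"
)

def render_test_message(target: dict, pretext: dict) -> str:
    """Fill the lure template with target and pretext details."""
    return LURE_TEMPLATE.substitute(
        first_name=target["first_name"],
        pretext_name=pretext["name"],
        pretext_org=pretext["org"],
        topic=pretext["topic"],
    )

msg = render_test_message(
    {"first_name": "Alex"},
    {"name": "Jamie", "org": "Fleet Support", "topic": "next week"},
)
```

The point of templating is repeatability: the same pretext can be sent to many targets in a sanctioned exercise, and the team can later measure who responded.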
In another case, a red team was able to tap into a network by accessing cables in a reception area not protected by more in-depth security.
Many organizations have banned portable storage devices such as thumb drives because attackers were using infected devices to slip malware onto networks.
Even trash cans are routinely inspected, as users have a tendency to print out information that might provide access to a network.
“Everyone always thinks of just the hackers, but it’s much more than that,” Moulton said. “You’re really testing a social ecosystem. You’re really testing an entire security posture of an organization.”
Moulton, who previously led red teams for the U.S. Defense Department, said that he tried to put together teams with varying skills to make the simulation realistic.
“I’ll usually have a network engineer, I’ll usually have a hacker, I’ll usually have some social media person or a clinical psychologist,” he said.
A Different Mindset
What separates a cyber red team from most collections of cybersecurity experts is not so much their skill sets as their attitude.
Ed Skoudis, an instructor with the SANS Institute, has spent more than a decade training red team members for a number of three-letter intelligence agencies, as well as doing red team work in the commercial sector himself. He said that a certain competitive mentality tends to be common among red teamers.
“It creates a perverse mindset where you get all excited about finding the holes and flaws,” Skoudis said. “You have to want to get in. You have to have a thirst for that — the excitement of compromising defenses, finding flaws — while, at the same time, you have to realize you’re a good guy. Your job is to stamp out the stuff that gets you so excited.”
Time spent in a war room during such an exercise inevitably includes excited shouting and fist pumps as red teamers celebrate a successful exploit. Taunting is common, and mocking the defenders’ capabilities is a near certainty. Rank often takes a back seat as casual conversation about exploits percolates.
But while red teams find vulnerabilities, it’s the translation of those findings into action that can be a problem. Because of the mindset that accompanies successful testing, the reports red teams write to describe vulnerabilities and potential solutions can at times lose sight of the end goal: improved security.
“I can feel it in my guts when I’m trying to write up a report,” Skoudis said. “I think, ‘We did some awesome stuff here; wait until I write it up so that someone just like me will appreciate how awesome it was.’ That’s wrong. The primary audience of this is not people who are on the red team. The audience for their work is defensive personnel. They have to write to help those people do their job better.”
This point was echoed by Richard Bejtlich, chief security officer at Mandiant, an information security firm based in Alexandria, Va.
“It’s really sexy to break in and show how smart you are and show how helpless a customer is,” he said. “But to then turn around and actually work with that customer to get them to a better point, that’s a lot of grinding work.”
Bejtlich, who used to run a red team for General Electric, said that past reports from red team exercises would go largely ignored.
“The idea behind a vulnerability assessment is you do some activity and say, ‘Look how vulnerable we are.’ And then the customer comes back and says, ‘Yeah, that’s all theoretical, I don’t really believe you. By the way, you sent me a list of 1,000 things to work on; I don’t even know where to begin.’”
Skoudis said that to combat the problem of less-than-stellar final reports, he encourages his students to think of their findings from another perspective.
“The final results should be written from a perspective of helping improve the operations of the environment, and be written to help the blue team [the defensive team] work in that environment,” he said.
Red teamers who refuse to help figure out defensive strategies, Skoudis added, are not providing as much value as they should.
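One way to keep a report pointed at the blue team rather than at fellow red teamers is to force every finding into a structure where remediation is a required field. The sketch below is a minimal, hypothetical format, not any firm’s actual reporting template; the field names and severity scheme are assumptions for illustration.

```python
# A minimal, hypothetical sketch of a defender-oriented finding record:
# every entry must pair the vulnerability with a concrete remediation,
# and output is ordered by severity so the blue team knows where to start.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str        # short name of the vulnerability
    severity: str     # "high", "medium", or "low"
    affected: str     # system or process that was compromised
    how_found: str    # what the red team did, described in defender terms
    remediation: str  # concrete next step for the blue team

def report(findings: list[Finding]) -> str:
    """Render findings sorted by severity, remediation included for each."""
    order = {"high": 0, "medium": 1, "low": 2}
    lines = []
    for f in sorted(findings, key=lambda f: order.get(f.severity, 3)):
        lines.append(f"[{f.severity.upper()}] {f.title} ({f.affected})")
        lines.append(f"  Found via: {f.how_found}")
        lines.append(f"  Fix: {f.remediation}")
    return "\n".join(lines)
```

Sorting by severity addresses Bejtlich’s “list of 1,000 things” complaint directly: the defender sees where to begin, and a finding with no remediation simply cannot be recorded.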
Realism Through Intelligence
Besides red teams’ ability to translate findings into workable suggestions, an equally critical component of simulations, experts said, is the use of intelligence to generate attacks that mimic the adversary’s.
“We ought to pay as much attention as possible in building our red teams to not just the notional idea of what somebody might do, but actually study, use our intelligence and use intelligence to understand our adversaries’ cyber threat,” said Sam Visner, chief cyber strategist for CSC. “A good exercise doesn’t just treat the opponent as a black box. A good exercise tries to understand that opponent. You’re trying to interact with them; you’re not interacting with an inanimate object.”
Especially in the commercial world, where an investment in security has to be justified with a potential return on investment, this emphasis on emulating existing threats is critical. Pointing to a vulnerability that has yet to be exploited by the adversary rarely elicits action, experts said.
Instead, red teams try to create a specific scenario based upon real-world attacks that have already happened.
“Based on our knowledge of the adversary, we’re going to figure out if the detection mechanisms we’ve put in place and the defensive mechanisms actually work,” Bejtlich said. “We don’t want to find out when we see bad guys exercising our vulnerabilities. We want to find out a little more proactively.”
Because the threats change so quickly, just keeping up with what is being used can be a full-time job.
“It’s a way of life,” Skoudis said. “When we do our work, it’s continuous. We are constantly looking at new tactics, new techniques, we’re constantly evaluating and then updating our very next test using those techniques. If you’re not doing this at least every quarter, you’re probably falling behind.”
That need for intelligence can pose a bit of a problem for those working on commercial testing, as a large swath of information on attacks and attackers is classified and beyond easy reach. And as companies look increasingly to penetration testing as a means for beefing up security, the need to share information is gaining traction in Congress. Several cybersecurity bills are circulating, all of which include threat-sharing provisions.
The primary critique of penetration testing is that even with an accurately rendered attack simulation, security professionals don’t learn anything.
Skoudis quoted Marcus Ranum, a well-known network security researcher.
“Ranum said, ‘If you get a penetration test and they get in, it simply means that you suck. If they don’t get in, it simply means that they suck, and you can’t learn anything more from a penetration test — other than you suck or they suck.’”
Skoudis said he disagreed with that notion.
“I would say that’s a bad penetration test,” he said. “A good red team exercise or penetration test would not only tell you that you suck. It tells you specific areas where you suck, how bad you suck, and how to suck less. If it’s just making you feel bad, it’s not providing the value that you need.”