Interview: What’s it Like to be a High-End Red Team Member?

David "Moose" Wolpoff, co-founder and CTO of Randori, a nation-state caliber attack platform, chats with Chris Sienko from the InfoSec Institute about a day in the life of a high-end Red Team operations professional.

This interview covers these questions and more:

  • How does a Red Team actually work?
  • What qualifications or experience do you need to become a Red Team member?
  • Are there any common Red Team methodologies?
  • What types of companies employ Red Teams?
  • What's the future of Red Teaming?

Chris Sienko:

Welcome to another episode of CyberSpeak with InfoSec Institute. Today's guest is David "Moose" Wolpoff, co-founder and CTO of Randori, a nation-state-caliber attack platform. We're going to be talking about red team operations and also about the Randori platform. Moose is a recognized hacker and expert in digital forensics, vulnerability research, embedded electronic design, and, most interestingly, red team operations. Prior to founding Randori, Moose held executive positions at Kyrus Tech and leading defense contractor ManTech, where he oversaw teams conducting vulnerability research, forensics, and offensive security efforts on behalf of government and commercial clients. Moose holds Bachelor of Science and Master of Science degrees in electrical engineering from the University of Colorado. Moose, thank you for being here today. Let's start out: obviously you've had a very interesting career so far. How did you get started in computers and security? Was security always an interest, or did you move down that avenue later in life?

David "moose" Wolpoff:

No, it was completely accidental. I think people often forget how young the career field is, so it's not like you could get a degree in cybersecurity or study it in school. When I went to school, I started out doing electronics design, then got into IT systems design and IT systems reverse engineering, becoming a forensics expert doing hardware forensics and firmware reverse engineering. That led to mobile device exploitation for forensic purposes, then mobile device exploitation for offensive computing, and then kind of hacking everything else. The rest is really history: the last decade has been running teams doing offensive security and high-end red teaming.

Chris Sienko:

What was the bite with these more high-intensity pursuits, like white-hat hacking and high-end red teaming? Was there something about the thrill of the hunt that brought you in that direction?

David "moose" Wolpoff:

Well, I mean, people who are good at hacking and the red team stuff tend to have a bit of an addictive personality for the types of puzzles and problems you hit. So definitely there's an itch that you've got to scratch, and I get antsy if I haven't done a little bit of breaking into something or solving a puzzle. But it was really opportunistic: at Kyrus we had a service called Hacker on Retainer, which is a high-end red team, and we just recognized that the incentive alignment for a lot of pen tests was really upside down. So we changed how we did that, which motivated us to become high-end attackers, and it worked really well.

Chris Sienko:

So, speaking of that, I wanted to speak with you specifically today about high-end red teaming, which is a big, exciting topic right now. We're hearing a lot more about it in the news, but not really with any real depth, so I wanted to talk to someone who's been involved to get their perspective. For those just coming to the topic: what is a red team? What is its primary purpose, and how do we differentiate it from, say, white-hat hackers, penetration testers, vulnerability researchers, and so forth?

David "moose" Wolpoff:

For sure. Yeah, I wish I could say there was a consistent definition that was equally applied. I'm aware of a number of internal red teams within, you know, corporate institutions. So if you're a large tech firm, you could probably have a red team of some sort, and it's really meant to be an aggressor or an adversary working a little tiger-team style against internal defenses or defenders. In the context I was working, the red teams I've always been with are high-end external actors. So we play that guy and pretend to be the APT, and we really bring novel and dedicated attacks against the targets we're working against. So as opposed to a pen test or a vulnerability scanner or a vulnerability researcher, where you might be looking at the security of a particular application, or trying to prove that there's a weakness in an application, or scanning for known vulnerabilities or known issues in a scoped or bounded way, we've always worked with the gloves off: a goal-oriented, motivated, determined adversary attack, trying to be the real bad guy.

David "moose" Wolpoff:

Yes. But then of course, working with the blue teams after the fact to help them learn from the experience. Right.

Chris Sienko:

So it also seems like it's more of an overall attack rather than a penetration test, where, as you said, you're focusing on one specific breach area. Here you're sort of amassing an army and hitting the company from all sides simultaneously. Is that right?

David "moose" Wolpoff:

Yeah. So all the engagements I've been involved in were black box. We're starting with very limited information, a very limited perspective on what makes up the organization, and we're doing the full kill chain, right? From discovery, enumeration, and reconnaissance through exploitation, pivoting, and everything that's involved inside of a network, all the way out to data exfiltration or whatever our objective is inside that organization. And, as you said, working without bounds. So not limited in scope to a particular asset or a particular subset of assets, but really going after the whole organization, trying to achieve some particular objectives.

Chris Sienko:

So, because red teaming as a process is by its nature pretty secretive, let's start at the beginning. What makes a good member of a red team? What backgrounds do high-end red team members generally have?

David "moose" Wolpoff:

Yeah, so on my team over the last several years, it's mostly been good programmers, reverse engineers, systems people, folks with a deep-level understanding. Years ago I was tasked with explaining to somebody in corporate management what the difference was between, like, a hacker and a high-end developer. Really, it's that you're pulling threads from all kinds of levels of a tech stack, trying to achieve some series of events or effects that seems like a miracle to a domain expert. So really it's people who are really good at problem-solving, at decomposing how pieces of systems work and how systems of systems work, and then at learning new information. You know, when we've exploited perimeter systems, typically we're working with technologies that we've never seen before, that we don't know how to debug, that we don't know how to reverse engineer, and we have to quickly dissect those things, figure out what the ramifications of the actions we're going to take might be, bound the risk, and then be able to move forward. So it's a pretty high bar to do it really effectively. And depending on the context you're in, maybe you have more time to reverse engineer and figure stuff out, but the core skills are a deep understanding of how all of the technology works and a love of learning how the stuff you don't know works.

Chris Sienko:

You said time is of the essence. Do these tend to be timed attacks? Obviously it sounds like you're best served by being able to problem-solve on short notice, but is there a stopwatch on you?

David "moose" Wolpoff:

Well, there are kind of two stopwatches that happen for a red teamer. The most obvious is that time is money, right? At the end of the day, even for a very large engagement (I would typically do six-month engagements or longer), you're still timeboxed to some extent, and if you're waiting for an opportunistic event, or for a defender to mess up and give you an opportunity, you're eating through your time window. So there's that piece, and that's kind of the one minor artifice that a real hacker might not have; they might not have that same degree of time pressure. The other time piece is that I don't want to get caught, right? If I'm breaking into an organization and I have some objectives I want to achieve, I'm trying to get the mission done.

David "moose" Wolpoff:

I'm not here to hack this perimeter system for the sake of hacking a perimeter system. I'm trying to, say, steal your source code, or whatever it is that I'm going after. And as soon as I do something that might alert a defender to my presence, my risk as a bad guy goes up. So once you start taking actions against an organization, in some sense the clock is ticking; a defender might be onto us. So I'm always trying to move as quickly as I can, so that I'm limiting my risk.

Chris Sienko:

If one of the listeners of this show wanted to get into this line of work, what experiences, qualifications, or accomplishments should you be able to point to that would make you desirable to other members of a red team?

David "moose" Wolpoff:

Yeah, I think the biggest thing for me is that you have to understand how all the tools work and how to build tools. So it's foundational knowledge beyond knowing how to use utilities. If you're good with an exploit kit or a set of payloads or post-exploitation tooling, those are useful skills, but if you don't know how those things work under the hood, it's going to be really hard to take that next step.

Chris Sienko:

Okay. So we know that high-end red teams are differentiated from penetration testers and white-hat hackers by the way they approach vulnerabilities. But in a day-to-day sense, how does a red team actually work? When you arrive on an assignment, where do you start, I guess, is the question.

David "moose" Wolpoff:

Yeah. So the engagements we've always done are, as I said, very long, six months plus, so it would be typical for us to spend the first 14 to 30 days or so just doing reconnaissance and surveillance: getting an idea of what the pattern of life looks like for the organization we're working against. So things like discovery of all the assets, first-stage reconnaissance, discovery of all the people, and observation. Are there things coming up and down? What's the rate of change? Can I measure things like how long the patch cycle is? So if Patch Tuesday comes along and you can observe a change to a perimeter system, like an IIS server or something, you can try to measure those things. So early stage, we start very, very hands-off.
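
The patch-cycle measurement Moose describes can be approximated passively: record a dated fingerprint of a perimeter asset (a banner, a TLS certificate hash, whatever is observable) on a schedule, then look at when it changes. Here is a minimal, hypothetical sketch of the analysis half in Python; the data, function names, and banners are invented for illustration and are not Randori tooling:

```python
from datetime import date

def change_points(observations):
    """Given [(date, banner), ...] sorted by date, return the dates
    on which the observed banner differed from the previous one."""
    changes = []
    prev = None
    for day, banner in observations:
        if prev is not None and banner != prev:
            changes.append(day)
        prev = banner
    return changes

def mean_days_between(dates):
    """Average gap in days between successive change dates, or None
    if there are too few changes to measure a cycle."""
    if len(dates) < 2:
        return None
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    return sum(gaps) / len(gaps)

# Invented example observations of one perimeter host's banner:
obs = [
    (date(2019, 1, 1), "Microsoft-IIS/10.0 build 17763.1"),
    (date(2019, 1, 9), "Microsoft-IIS/10.0 build 17763.2"),
    (date(2019, 2, 1), "Microsoft-IIS/10.0 build 17763.2"),
    (date(2019, 2, 13), "Microsoft-IIS/10.0 build 17763.3"),
]
print(change_points(obs))                      # dates the banner changed
print(mean_days_between(change_points(obs)))   # rough patch cadence in days
```

Run against months of observations, the change dates for a well-run shop cluster shortly after Patch Tuesday; a very long mean gap, or no changes at all, is itself a finding.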

David "moose" Wolpoff:

And then once we've done that, we basically stack-rank all of the assets we can find on a client's perimeter, and that would be both technical infrastructure and people. And you ask: what is an attack on this organization that I think is likely to be successful? What do I want to go after first? Where do I start doing research, or where do I start poking? Obviously, if I already have an exploit for a vulnerability that's on the perimeter, I can just go after that, and I might just try it and see what works. A lot of times on the red team side, we look at individual vulnerabilities or weaknesses as a nugget that might provide us useful information. So we might do a spear-phishing campaign very early in an engagement solely for the purpose of collecting information about the target, but without any real malicious payloads, just to get whatever information we can. And then we kind of go low and slow, right, just take our time until we see something that looks good. But as soon as we get any sort of foothold inside the organization, the whole piece shifts, and we go from low and slow to moving as quickly as we can to get our job done.
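
That stack-ranking step amounts to sorting discovered assets, people included, by how tempting they are as a first target. A toy illustration in Python; the scoring fields and weights here are invented for the sketch and are not Randori's actual model:

```python
def rank_targets(assets):
    """Order perimeter assets (technical and human) by rough attack
    appeal; higher score means a more tempting first target."""
    def score(a):
        s = 0
        s += 5 if a.get("known_exploit") else 0    # we can act immediately
        s += 3 if a.get("internet_facing") else 0  # reachable without a foothold
        s += a.get("info_value", 0)                # what a foothold there yields
        return s
    return sorted(assets, key=score, reverse=True)

# Invented example perimeter, mixing infrastructure and people:
perimeter = [
    {"name": "vpn.example.com", "internet_facing": True,
     "known_exploit": True, "info_value": 2},
    {"name": "marketing intern", "internet_facing": True,
     "known_exploit": False, "info_value": 1},
    {"name": "legacy FTP box", "internet_facing": True,
     "known_exploit": False, "info_value": 3},
]
for a in rank_targets(perimeter):
    print(a["name"])
```

The point of the sketch is the shape of the decision, not the numbers: an asset with a ready exploit jumps the queue, and everything else is ordered by reachability and the information a foothold would yield.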

Chris Sienko:

I'm assuming every case is different, but do you have a universal methodology or toolkit that you break out each time, or do you really build your attack differently with each new assignment?

David "moose" Wolpoff:

Sure, sure. Yeah, we have a number of things in the quiver that we always go to. Over the years we've built lots of tools around automating reconnaissance and monitoring assets and people; those are tech that we've principally pulled into this new company we've got going. And in addition to that, we have a lot of post-exploitation tools, pivoting tools, and other utilities that are kind of our custom stuff. So of course we're going to leverage a Metasploit or a PowerShell Empire or a Cobalt Strike if it works, but we also have custom rootkits that we've written that are purpose-built for the types of missions we go after. And depending on what's going on in the engagement, we might deploy something commodity or we might use something very custom. But really, it's all about this: anytime I type the same command twice, I'm going to end up typing it a thousand times. So we spend a lot of time on automation, making sure that we don't have to repeat things.
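
The "type the same command twice, automate it" habit can start as simply as fanning one command template out over every discovered target. A small sketch in Python; the `nmap` invocation is just an illustrative example of a command you might repeat per host:

```python
import subprocess

def fan_out(cmd_template, hosts, run=False):
    """Expand one command template over many targets. Each template
    part may contain {host}; with run=True the commands are executed."""
    cmds = [[part.format(host=h) for part in cmd_template] for h in hosts]
    if run:
        for cmd in cmds:
            subprocess.run(cmd, check=False)
    return cmds

# The same service scan against every discovered host (not executed here):
jobs = fan_out(["nmap", "-sV", "-p", "443", "{host}"],
               ["198.51.100.10", "198.51.100.11"])
print(jobs)
```

In practice a wrapper like this grows logging, rate limiting, and result parsing, which is exactly how a pile of one-off commands turns into the kind of recon tooling described above.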

Chris Sienko:

I think one of the things that has caught people's imaginations about red teaming, versus what seem like cooler things such as penetration testing, is the physical, brute-force aspect of it: you're looking at the physical facility itself, trying to get your way in physically, or watching the patterns of people coming and going from the building, things like that. So speak a little bit about the physical aspect of it, in addition to the automation and the technology.

David "moose" Wolpoff:

Sure, sure. Yeah. So the teams that I've been part of haven't been heavy into physical penetration testing or physical red team break-ins, just because our objectives have always been achievable through some other mechanism, and breaking into a building is higher risk for me than doing something remote. So obviously it's a method of last resort. We've tended towards things that are more hybrid, right? So, shipping somebody a piece of hardware that I can get them to plug in, or planting the hardware ourselves: building a thing that helps us pivot into an environment. But ultimately we do what works, right? If I need to jump over the ceiling tiles to get past the glass doors and motion sensors so that we can get in and go plug into a building, we'll totally do that. And we've definitely walked into buildings, pretended to be an employee, and been given a desk or two to just sit and do our job because somebody thought we were working there. It's really just whatever works, the same thing real hackers do. Do what you gotta do.

Chris Sienko:

Yeah. So I guess: what kinds of companies employ red teams to try to attack their defenses? Obviously we're talking corporations that probably feel like they have a pretty strong defense mechanism in place, whereas one that might do a pen test just wants to find out if one thing is okay. What level of security should your company already have in place before deciding to bring out the big guns of red teaming?

David "moose" Wolpoff:

Yeah, I would certainly never advocate it for an organization that doesn't have dedicated security personnel, and I think many folks would be surprised how big companies get before they actually have dedicated full-time security folks. One of the big values I always felt I brought to the table as a red teamer was the opportunity for the defenders to learn from their attacker, right? It's not often that you get hacked and then get to ask the hacker what happened. And that's not super useful just to IT practitioners; if you're really interested in finding things that need to be patched today, a red team's not going to be comprehensive in that kind of manner. But if you're interested in stressing your response and seeing, you know, do my defenders pick up what's going on, can I see a real actor, and in the event of a breach do I know how to respond in a reasonable way, those things are more suited, I think, to the red team. Typically we see very large organizations that have some hybrid of internal threat actors, which would be like an inside red team, and then some outside red team doing the kind of goal-oriented attacks.

Chris Sienko:

It seems like with red teaming, there's an understanding that you're going to get in. It's just a matter of how you get in, and that's what they want to know.

David "moose" Wolpoff:

Yeah, there's some of that, for sure. I think at the high end it's less important how you're going to get in, and more important what happens after you do. I really strongly believe that success in cyber is all about detecting quickly, responding reasonably, and keeping the lights on, right? Keep the business running while you're doing a workup. You don't keep hackers out of your network; you kick them out quickly and you move on with your life. So if we're dropping 0-day in order to breach a company, that's really good; you've done it right. If I have to drop 0-day to get into your network, then once I'm in, you need to know that I'm there and kick me out quick. So we try to do a lot of coaching around those types of things.

Chris Sienko:

So what are some no-nos in red teaming? I think this is probably one of those things that goes into the realm of tabloids, but you hear stories about high-end red teams kidnapping the CEO, or crazy things like that. How far is too far to get in? Or is there even such a thing as too far?

David "moose" Wolpoff:

You know, one: I would never go beyond the bounds of authority, right? So make sure there's willing consent; I need to know that what I'm doing is within bounds. Typically when we engage, we'll have some party in the company we're working against that acts as a white cell or a referee, and we'll be in constant communication with them about all of the actions we're taking, so there are no surprises at that level, and they always have an opportunity to mitigate risk. You don't want to break the business by trying to help them fix the business, right? So it's a little contextual, but in broad strokes: I don't do anything that's irreparable harm, I try not to be destructive, and we try not to go after anything that's outside the bounds of what our objectives are.

David "moose" Wolpoff:

So if I'm going after, you know, proving that I can get access to PII, I will take enough screenshots to prove that I had access to sensitive stuff, but I don't need to exfiltrate it, because I don't want to be responsible for safeguarding that data. So in general, I'm happy to exfiltrate stuff that is useful for pressing the attack within the context of what I'm trying to achieve, and I just stick to that. Obviously we've had a lot of clients over the years where we've happened across stuff that looks out of place, and a lot of times I'll just pick up the phone and call the CISO or our contact and say, hey, we saw this weird thing in this weird spot; we're hands off until you give us a thumbs up or thumbs down and tell us what to do.

Chris Sienko:

So once you've broken the defenses, whether physically or technologically or through some combination, how do you report your findings to the company? Do you write a report? Do you offer prescriptive solutions that would prevent you from getting in a second time?

David "moose" Wolpoff:

Yeah. A lot of times we will offer particular medicine for particular problems, but more often it's coaching around systemic, institutional issues that need to be addressed. We always give a report to our clients, because we always want them to have that documented record of what we did, where we did it, and what was going on. But most of the time, the two things I find most valuable are, first, close contact with the white cell or with an agent inside the organization: weekly or ad hoc calls with whoever's the stakeholder really owning the engagement, just to make sure everybody knows what's going on. And the other piece is always doing a debrief or some sort of coaching session with the people who did the workup. So after I've turned up the volume high enough that the defenders know we're there and they start doing incident response, they have an opportunity to actually interrogate us after the fact. We always try to give folks that learning opportunity: take the folks who thought it was a real hack, let them do their full workup, and then tell them, hey, this was friendly. Set up that debrief and have what is usually a fairly warm discussion around, hey, how did this whole thing play out? What did you miss? What did we miss? And make sure we have the opportunity for folks to learn from the engagement.

Chris Sienko:

So why do you think high-end red teaming is receiving such a boost in interest at this moment? Is this reflecting a growing unease about the prevalence of major hacks in the news, or is it something else?

David "moose" Wolpoff:

Well, I think there's certainly the hacking zeitgeist, right? Every breach is in the news every week. I think there's also a pretty broad sense that the general approaches taken to security testing aren't serving the real purposes, right? If you're doing pen testing because you have a compliance requirement, you probably box it in that compliance bucket. There are a lot of folks who look at the program overall and say, I know how all these individual pieces fit into my security program, but I don't know how to test the whole program. So we get a lot of queries from folks who are very interested in that kind of holistic assessment, as opposed to piecemeal or more targeted testing.

Chris Sienko:

So, as we start to wrap up here a little bit, tell me about the Randori platform you mentioned at the beginning of the show. You described it as a nation-state-caliber attack platform, which is a great term. What does that mean? How does a platform of this kind allow its users to approach attacks and vulnerabilities at a larger scale?

David "moose" Wolpoff:

Sure. Yeah, I mentioned briefly earlier that we've built a lot of tooling over the years to help us do high-end red team engagements and automate those pieces. We've taken that mindshare and built a platform around it. So we are building a nation-state-caliber attack platform. That means we're building the attack platform that we all expect real attackers have and are using to breach us, but we're turning it around and letting our customers see it. So, starting from zero knowledge, totally black box: plug in an email address, receive a dossier and continuous monitoring of that dossier of all the assets that make up a corporation or an institution, and then the opportunity to attack assets based on how interesting those things are to a hacker. So we have this concept of target temptation, where we score all of the things that make up the perimeter of a company and then let a CISO, or the operator inside the business, press button, receive attack, see the ramifications of the attack, and then repeat that if they need to, or press the attack further into their environment.

David "moose" Wolpoff:

So we're trying to make it very easy for a mid-to-large-size org to get that red team experience, and that learnable moment from a red team engagement, without having to have a costly high-end red team actually show up and do a services engagement.

Chris Sienko:

Okay. So this is something that you're marketing to the organization, that they would use themselves, rather than something you're using on them.

David "moose" Wolpoff:

Yeah. I think we anticipate there will be some very natural segues for red teamers to leverage the platform, either internally or externally for a client. But the folks we're working with today are the actual, you know, would-be victims of the hacking themselves, who are using it to beat up on themselves and see how they do. So it's been fun.

Chris Sienko:

Interesting. So, along with Randori, what do you think the future of red teaming is going to be? What will red teams, and the companies that hire them, have to do to keep steps ahead of hackers and other interlopers? Where's it going from here?

David "moose" Wolpoff:

Yeah, I think it's going to be more driven towards the goal-oriented attack, more driven towards business-based risk management. As I've mentioned before, I'm a strong believer that winning in cybersecurity is just detecting early, responding reasonably, and keeping the business going. The only way you can really stress your defenders and then learn from that experience is to have somebody come in and play that guy and bring you that experience, whether that's an automated platform doing it or a group of dedicated hackers coming in and beating up on your defenders. I think that's a really valuable exercise for organizations to go through.

Chris Sienko:

That's great, Moose. Thank you for joining us today. I think we all learned a lot, especially considering how murky this topic has been for people looking in from the outside, so I appreciate you breaking it down for us.