Scaling Assessment Automation (NahamCon)

In this presentation, delivered on June 13, 2020 at the first-ever NahamCon, we introduce the capability to push data out to Elasticsearch for analysis. We gloss over the configuration a bit in the talk, so if you want the full details on configuring an AWS Elasticsearch instance, check the slides below.
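If you’re standing up your own AWS Elasticsearch domain to receive this data, a quick sanity check before wiring anything up is to confirm the endpoint responds over HTTPS. The domain endpoint below is just a placeholder for your own:

curl -s https://YOUR_ES_DOMAIN_ENDPOINT/
curl -s https://YOUR_ES_DOMAIN_ENDPOINT/_cat/indices?v

The first request should return the cluster banner (name and version), and the second lists any indices that have already been created.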

Getting Started on DigitalOcean

We’ve had a few requests in the Slack about getting started on DigitalOcean. It’s as simple as starting a Droplet and running the Docker one-liner, but I’ll document the exact steps below.

First, log in to DigitalOcean. The interface is pretty straightforward, but before anything else we need to create a project:

No need to move existing resources in; since we’re starting a new project, we’ll click “Skip for now”.

Now, we have a project, so let’s create a new Droplet inside it.

And let’s make sure to give it at least 16GB of RAM:

Everything else can be standard, so let’s scroll down. Obviously, make sure you set up or use an SSH key you have access to so you can log in. In this case, I’m using a pre-generated SSH key on my local system:

And then click “Create Droplet”

Now, give DO a few seconds, and we can browse to Droplets to get our IP address.

Then, we’ll use our SSH key to log in:
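From a terminal, that looks roughly like the following, assuming Ubuntu’s default root login for new Droplets and whatever key path you generated earlier:

ssh -i ~/.ssh/id_rsa root@YOUR_DROPLET_IP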

And first things first, let’s update the system and install Docker with the following command:

apt-get -y update && apt-get -y upgrade && apt-get -y install docker.io  
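If you’d like to confirm Docker installed cleanly before moving on, these standard checks will do it:

docker --version
systemctl is-active docker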

Then, simply run the docker one-liner command, found here.

This will generate a dynamic (and random) password, so make sure to make note of that:

Now, back on your laptop or PC, simply connect to the DigitalOcean host at https://YOUR_DROPLET_IP:7777, use the password provided in the startup output, and you’re live! No need to configure a firewall or any other specifics for this droplet.

If you have any problems, feel free to file an issue on the GitHub repository.

Nahamsec interview with @th3g3nt3lman

Here’s a clip of an interview with @nahamsec and @th3g3nt3lman talking about how Intrigue Core can help bug bounty hunters and internal security teams. If you haven’t yet seen Nahamsec’s Twitch.tv channel, it’s a good source of techniques and fun to watch.

The clip talking about Intrigue Core starts at the 25:00 min mark. Check it out!

@nahamsec talking bug bounty recon with @th3g3nt3lman

Ident Docker One-Liner

On a pentest, or just in a hurry, and want to try out ident to quickly fingerprint an application? Use this one-liner, which pulls the latest build from Docker Hub and kicks off a run:

docker pull intrigueio/intrigue-ident && docker run -t intrigueio/intrigue-ident --url YOUR_URL_HERE

Also handy: add a -v to check for top vulnerabilities in a given technology (as long as we can determine a version, we’ll match it to known CVEs).

If you’re interested in the details of how it works, add a -d to see debug output!
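For example, to fingerprint a single site and also check it for known CVEs, you can simply append the -v flag to the one-liner above (swap in your own target; the flag ordering here is an assumption):

docker pull intrigueio/intrigue-ident && docker run -t intrigueio/intrigue-ident --url https://example.com -v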

See the checks in all their glorious detail on GitHub. We’re well over 500 checks and adding more on a regular basis. If you don’t see a technology you’d like fingerprinted, create an issue or send us a pull request!

Gitrob Integration

Gitrob is a handy open source utility by Michael Henriksen for finding secrets in public GitHub repositories. Gitrob works by downloading all repositories under a given GitHub account, then scanning for strings that might be an accidental leak. Even if a given line or file has since been removed, it may still exist in the commit history, so Gitrob checks all commits for these potential leaks. Learn more about Gitrob.
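If you want to kick the tires on Gitrob by itself before trying the Core integration, basic usage is just pointing it at a GitHub user or organization (you’ll need a GitHub access token configured as described in the Gitrob README; the target name here is a placeholder):

gitrob acme-corp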

This new Core integration makes it simple to spin up Gitrob every time we find a GitHub repository, and by combining it with the search_github task, we can now scale our search for leaked secrets very quickly!

This integration and task are now on the develop branch. To use them immediately, build a local Docker image.
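If you want to try it before the next release image is published, a local build looks roughly like this; the repository URL and image tag are assumptions, so check the project README for the exact steps:

git clone https://github.com/intrigueio/intrigue-core.git
cd intrigue-core && git checkout develop
docker build -t intrigue-core:develop .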

BlueKeep (CVE-2019-0708) – Fortune 500 External Exposure

Recently, Rob Graham shared a post detailing how he used Masscan + RDPscan to check for vulnerable hosts, finding that over 1 million hosts were vulnerable to BlueKeep (CVE-2019-0708). I was curious how many of these systems were corporate or enterprise systems, given that awareness is often higher in organizations with dedicated patch and vulnerability management teams.

To explore this, I used scan data gathered on the Fortune 500 from Intrigue.io and pulled all systems with port 3389 open, finding a total of 1,140 systems.

Using the same tooling (rdpscan) as Rob, I then checked to see if these hosts were still exposed to BlueKeep. When attempting to connect to these systems to verify that they were in fact RDP, we found that only 286 responded with the RDP protocol. The difference can probably be attributed to firewalls and other network security devices that respond automatically (and erroneously) when scanned.
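For reference, the Masscan + rdpscan workflow Rob described looks roughly like the following; the target range, rate, and file names are placeholders, and the exact flags he used aren’t reproduced here:

masscan -p3389 --rate 10000 -oL rdp-hosts.txt 198.51.100.0/24
awk '/open/ {print $4}' rdp-hosts.txt > ips.txt
rdpscan --file ips.txt > results.txt

rdpscan then reports each host as vulnerable, mitigated (CredSSP/NLA required), or patched, which is the same breakdown used in the statuses below.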

So, using the set of 286 systems verified to be RDP and returning results from RDPscan, we found that they were spread across 49 unique F500 organizations and broke down into the following statuses:

71 Vulnerable
85 Mitigated (CredSSP/NLA required)
130 Patched (Target appears patched)

This is pretty good, in my opinion. Given that the vulnerability was announced on 05/14/2019 and this check was run on 05/31/2019, two weeks to patch or mitigate 75% of the exposed systems is incredible. I’d attribute this to the fact that there are often dedicated teams inside these large organizations that pay special attention to externally accessible systems and will apply a patch “out of cycle” in cases like this.

Those 71 vulnerable systems were spread across 17 organizations in the following sectors:

The organization with the most publicly exposed RDP services was an Oil and Gas company, and it was interesting to see that systems attributed to the same organization were only partially patched or mitigated, with many still vulnerable. Even in a case like this, where the update is available and could theoretically be applied automatically, patching is still a time-consuming, change-controlled process in larger organizations. Their systems were about two-thirds patched or mitigated, with 34 systems still externally exposed and vulnerable:

45 Patched – Target appears patched
34 Vulnerable
18 Mitigated – CredSSP/NLA required

The other 32 organizations with exposed RDP had clearly been working on the vulnerability, with almost two-thirds of systems patched:

130 Target appears patched
85 CredSSP/NLA required

Wrapping up, this was a quick look at this vulnerability from a different perspective, in an attempt to see how many of those million systems were “managed” systems attributable to an organization. As suspected, there were few externally accessible F500 systems still vulnerable to BlueKeep two weeks out from the announcement of the vulnerability. This speaks to the processes inside these organizations for managing and remediating important vulnerabilities such as BlueKeep.

This data was gathered per-organization using Intrigue Core based on a set of “seeds” attributed to each organization, and thus may not be 100% complete. It does not attempt to account for internal hosts, where an RDP worm would likely wreak havoc in most organizations. I strongly suggest following Microsoft’s guidance and applying the patch, even if this requires an out-of-band update. Given that the real attack surface is the internal corporate network, it’s likely we’ll see this vulnerability weaponized as part of a multi-tier attack, similar to how EternalBlue has been used.

Using uri_spider to parse file metadata

The uri_spider task, when given a Uri entity such as http://www.acme.com, will spider a site to a specified depth (max_depth), up to a specified maximum number of pages (limit), and, if configured, within a specified URL pattern (spider_whitelist). When configured (and by default) it will extract DnsRecord, PhoneNumber, and EmailAddress entities from the content of the pages. All spidered Uris can be created as entities using the extract_uris option.

Further, the spider will identify any files of the types listed below and parse their content and metadata for the same entity types. Because this file parsing uses the excellent Apache Tika under the hood, the number and type of supported file formats is huge: over 300 file formats are supported, including common formats like doc, docx, and pdf, as well as more exotic types like application/ogg and many video formats. To enable this, simply enable the parse_file_metadata option.
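If you’re curious what Tika actually extracts from a given file, you can run it directly against a local document with the tika-app jar (the jar version below is just an example):

java -jar tika-app-1.24.jar --metadata file.pdf

You’ll see fields like the content type, author, and creation date, the same kind of metadata and content the spider then scans for entities.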

Below, see a screenshot of the task’s configuration:

uri_spider task configuration

Note that you can also take advantage of Intrigue Core’s file parsing capabilities on a Uri-by-Uri basis by pointing the uri_extract_metadata task at a specific Uri with a file you’d like parsed, such as https://acme.com/file.pdf.