Ident Docker One-Liner

On a pentest, or just in a hurry, and want to use ident to fingerprint an application quickly? Use this one-liner, which pulls the latest build from Docker Hub and kicks off a run:

docker pull intrigueio/intrigue-ident && docker run -t intrigueio/intrigue-ident --url YOUR_URL_HERE

Also handy: add a -v to check for top vulnerabilities in a given technology (as long as we can determine a version, we’ll match it to known CVEs).

If you’re interested in the details of how it works, add a -d to see debug output!
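Putting it together, a full run with both flags might look like the following (https://example.com is a placeholder, substitute your own target):

```shell
# One-liner with both optional flags:
# -v matches detected versions against known CVEs,
# -d prints debug output showing how the checks matched.
docker pull intrigueio/intrigue-ident && \
  docker run -t intrigueio/intrigue-ident -v -d --url https://example.com
```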

See the checks in all their glorious detail on Github. We’re well over 500 and adding more on a regular basis. If you don’t see a technology you’d like fingerprinted, create an issue or send us a pull request!

Gitrob Integration

Gitrob is a handy open source utility by Michael Henriksen to find secrets in public Github repositories. Gitrob works by downloading all repositories under a given Github account and then scanning for strings that might be an accidental leak. Even if a given line or file has been removed, it may still be in the commit history, so Gitrob checks all commits for these potential leaks. Learn more about Gitrob.

This new Core integration makes it simple to spin up Gitrob every time we find a Github repository, and by combining it with the search_github task, we can now scale our search for leaked secrets very quickly!

This integration and task are now on the develop branch. To use it immediately, build a local Docker image.
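A rough sketch of that local build, assuming the standard intrigueio/intrigue-core repository with a Dockerfile at its root (the intrigue-core:develop tag is just an arbitrary local name):

```shell
# Clone Core, switch to the develop branch, and build a local image.
git clone https://github.com/intrigueio/intrigue-core.git
cd intrigue-core
git checkout develop
docker build -t intrigue-core:develop .
```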

BlueKeep (CVE-2019-0708) – Fortune 500 External Exposure

Recently, Rob Graham shared a post detailing how he used Masscan + rdpscan to check for vulnerable hosts, finding that over 1 million hosts were vulnerable to BlueKeep (CVE-2019-0708). I was curious how many of these systems were corporate or enterprise systems, given that awareness is often higher in organizations with dedicated patch and vulnerability management teams.

To explore this, I used scan data gathered on the Fortune 500 by Intrigue.io and pulled all systems with port 3389 open, finding a total of 1,140 systems.

Using the same tooling (rdpscan) as Rob, I then checked to see if these hosts were still exposed to BlueKeep. When attempting to connect to these systems to verify that they were in fact RDP, we found that only 286 responded with the RDP protocol. The difference can probably be attributed to firewalls and other network security devices that respond automatically (and erroneously) when scanned.

So, using the set of 286 systems verified to be RDP and returning results from rdpscan, spread across 49 unique F500 organizations exposing at least one system, the systems broke down into the following statuses:

71 Vulnerable
85 Mitigated (CredSSP/NLA required)
130 Patched (Target appears patched)
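For reference, a breakdown like this can be tallied straight from rdpscan’s output, assuming its usual one-result-per-line format of “<ip> - <STATUS> - <reason>” (the IPs and lines below are fabricated sample data, not the F500 results):

```shell
# Sample rdpscan-style results (fabricated IPs):
cat > results.txt <<'EOF'
192.0.2.10 - VULNERABLE - got appid
192.0.2.11 - SAFE - Target appears patched
192.0.2.12 - SAFE - CredSSP/NLA required
192.0.2.13 - SAFE - Target appears patched
EOF

# Tally results by status/reason pair.
awk -F' - ' '{ counts[$2" - "$3]++ } END { for (k in counts) print counts[k], k }' results.txt
```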

This is pretty good, in my opinion. Given that the vulnerability was announced on 05/14/2019 and this check was run on 05/31/2019, two weeks to patch or mitigate 75% of the exposed systems is incredible. I’d attribute this to the fact that there are often dedicated teams inside these large organizations that pay special attention to externally accessible systems, and will often apply a patch “out of cycle” in cases like this.
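That 75% figure falls straight out of the counts above; a quick check of the arithmetic:

```shell
# 85 mitigated + 130 patched, out of 286 exposed RDP systems:
awk 'BEGIN { printf "%.1f%%\n", (85 + 130) / 286 * 100 }'
# prints 75.2%
```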

The 71 vulnerable systems were spread across 17 organizations in the following sectors:

The organization with the most publicly-exposed RDP services was an Oil and Gas company, and it was interesting to see that systems attributed to the same organization were only partially patched or mitigated, with many still vulnerable. Even in a case like this, where the update is available and could theoretically be applied automatically, patching remains a time-consuming, change-controlled process in larger organizations. Their systems were about two-thirds patched or mitigated, with 34 systems still externally exposed and vulnerable:

45 Patched (Target appears patched)
34 Vulnerable
18 Mitigated (CredSSP/NLA required)

The other 32 organizations with exposed RDP had clearly been working on the vulnerability, with almost two-thirds patched:

130 Patched (Target appears patched)
85 Mitigated (CredSSP/NLA required)

Wrapping up, this was a quick look at this vulnerability from a different perspective, in an attempt to see how many of those million systems were “managed” systems attributable to an organization. As suspected, few externally accessible F500 systems were still vulnerable to BlueKeep two weeks out from the announcement of the vulnerability. This speaks to the processes inside these organizations for managing and remediating important vulnerabilities such as BlueKeep.

This data was gathered per-organization using Intrigue Core, based on a set of “seeds” attributed to each organization, and thus may not be 100% complete. It does not attempt to account for internal hosts, where an RDP worm would likely wreak havoc in most organizations. I strongly suggest following Microsoft’s guidance and applying the patch, even if this requires an out-of-band update. Given that the real attack surface is the internal corporate network, it’s likely we’ll see this vulnerability weaponized as part of a multi-tier attack, similar to how EternalBlue has been used.

Using uri_spider to parse file metadata

The uri_spider task, when given a Uri entity such as http://www.acme.com, will spider a site to a specified depth (max_depth), a specified maximum number of pages (limit), and, if configured, a specified url pattern (spider_whitelist). When configured – and by default – it will extract DnsRecord, PhoneNumber, and EmailAddress type entities from the content of each page. All spidered Uris can be created as entities using the extract_uris option.

Further, the spider will identify any files of the types listed below and parse their content and metadata for the same entity types. Because this file parsing uses the excellent Apache Tika under the hood, the number and type of supported file formats is huge – over 300 file formats are supported, including common formats like doc, docx, and pdf, as well as more exotic types like application/ogg and many video formats. To enable this, simply set the parse_file_metadata option.
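Conceptually, the extraction step works like the sketch below: take page content and pull out strings matching an entity pattern. This is just an illustration with a toy email regex, not Core’s implementation:

```shell
# Rough illustration of the kind of extraction the spider performs:
# scan page content for email-address-shaped strings.
cat > page.html <<'EOF'
<html><body>Contact: sales@acme.com or <a href="mailto:support@acme.com">support</a></body></html>
EOF

# Extract and de-duplicate candidate EmailAddress entities.
grep -Eo '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' page.html | sort -u
```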

Below, see a screenshot of the task’s configuration:

uri_spider task configuration

Note that you can also take advantage of Intrigue Core’s file parsing capabilities on a Uri-by-Uri basis by pointing the uri_extract_metadata task at a specific Uri with a file you’d like parsed, such as https://acme.com/file.pdf