Gitrob Integration

Gitrob is a handy open source utility by Michael Henriksen that finds secrets in public Github repositories. Gitrob works by downloading all repositories under a given Github account and scanning them for strings that might be an accidental leak. Even if a leaked string has been removed from the current version of a file, it may still live in the commit history, so Gitrob checks all commits for these potential leaks. Learn more about Gitrob.
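To make the idea concrete, here is a minimal sketch of the kind of pattern matching a secrets scanner like Gitrob performs. The regexes below are illustrative examples only, not Gitrob's actual signature set:

```ruby
# Illustrative secret patterns, similar in spirit to (but not copied from)
# Gitrob's signatures.
SECRET_PATTERNS = {
  "AWS access key"  => /AKIA[0-9A-Z]{16}/,
  "Private key"     => /-----BEGIN (?:RSA|EC|DSA|OPENSSH) PRIVATE KEY-----/,
  "Password in URL" => %r{[a-z][a-z0-9+.-]*://[^/\s:]+:[^/\s@]+@}i
}

# Scan a blob of file content and return the names of matched patterns.
def find_potential_secrets(content)
  SECRET_PATTERNS.select { |_name, rx| content =~ rx }.keys
end
```

A scanner like Gitrob runs patterns of this sort over every blob in every commit, not just the current tree, which is why deleted secrets still turn up.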

This new Core integration makes it simple to spin up Gitrob every time we find a Github repository, and by combining it with the search_github task, we can now scale our search for leaked secrets very quickly!

This integration and task are now on the develop branch. To use it immediately, build a local Docker image.

BlueKeep (CVE-2019-0708) – Fortune 500 External Exposure

Recently, Rob Graham shared a post detailing how he used masscan + rdpscan to check for vulnerable hosts, finding that over 1 million hosts were vulnerable to BlueKeep (CVE-2019-0708). I was curious how many of these systems were corporate or enterprise systems, given that awareness is often higher in organizations with dedicated patch and vulnerability management teams.

To explore this, I used scan data previously gathered on the Fortune 500 and pulled all systems with port 3389 open, finding a total of 1,140 systems.

Using the same tooling (rdpscan) as Rob, I then checked whether these hosts were still exposed to BlueKeep. When connecting to these systems to verify that they were in fact running RDP, we found that only 286 responded with the RDP protocol. The difference can probably be attributed to firewalls and other network security devices that respond automatically (and erroneously) when scanned.

So, using the set of 286 systems verified to be running RDP and returning results from rdpscan, spread across 49 unique F500 organizations, we found they broke down into the following statuses:

71 Vulnerable
85 Mitigated (CredSSP/NLA required)
130 Patched (Target appears patched)

This is pretty good, in my opinion. Given that the vulnerability was announced on 05/14/2019 and this check was run on 05/31/2019, patching or mitigating 75% of the exposed systems in roughly two weeks is incredible. I’d attribute this to the dedicated teams inside these large organizations that pay special attention to externally accessible systems, and will often apply a patch “out of cycle” in cases like this.
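For reference, the 75% figure follows directly from the counts above; a quick sketch of the arithmetic:

```ruby
# Reproduce the percentage quoted above from the raw counts.
def pct_addressed(vulnerable, mitigated, patched)
  total = vulnerable + mitigated + patched
  (100.0 * (mitigated + patched) / total).round(1)
end

# 286 verified RDP systems: 71 vulnerable, 85 mitigated, 130 patched
puts pct_addressed(71, 85, 130) # => 75.2
```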

The 71 vulnerable systems were spread across 17 organizations in the following sectors:

The organization with the most publicly exposed RDP services was an Oil and Gas company, and it was interesting to see systems attributed to the same organization in varying states: some patched or mitigated, and many still vulnerable. Even in a case like this, where the update is available and could theoretically be applied automatically, patching is still a time-consuming, change-controlled process in larger organizations. This company’s systems were about two-thirds patched or mitigated, with 34 systems still externally exposed and vulnerable:

45 Patched – Target appears patched
34 Vulnerable
18 Mitigated – CredSSP/NLA required

The other 32 organizations with exposed RDP had clearly been working on the vulnerability, with almost two-thirds of their systems patched.

130 Target appears patched
85 CredSSP/NLA required

Wrapping up, this was a quick look at this vulnerability from a different perspective, in an attempt to see how many of those million systems were “managed” systems attributable to an organization. As suspected, few externally accessible F500 systems were still vulnerable to BlueKeep two weeks out from the announcement of the vulnerability. This speaks to the processes these organizations have in place to manage and remediate important vulnerabilities such as BlueKeep.

This data was gathered per-organization using Intrigue Core based on a set of “seeds” attributed to each organization, and thus may not be 100% complete. It does not attempt to account for internal hosts, where an RDP worm would likely wreak havoc in most organizations. I strongly suggest following Microsoft’s guidance and applying the patch, even if this requires an out-of-band update. Given that the real attack surface is the internal corporate network, it’s likely we’ll see this vulnerability weaponized as part of a multi-tier attack, similar to how EternalBlue has been used.

Using uri_spider to parse file metadata

The uri_spider task, when given a Uri entity, will spider a site to a specified depth (max_depth), a maximum number of pages (limit), and, if configured, only URLs matching a specified pattern (spider_whitelist). By default, it will extract DnsRecord, PhoneNumber, and EmailAddress entities from the content of each page. All spidered Uris can also be created as entities using the extract_uris option.
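As a rough illustration of how those options interact, here is a minimal sketch of the gating logic a spider might apply to each link. The option names come from the task itself, but the internals of uri_spider may well differ:

```ruby
# Decide whether a spider should fetch a given link, based on
# uri_spider-style options: max_depth, limit, and spider_whitelist.
def follow_link?(url, depth, pages_seen, max_depth:, limit:, spider_whitelist: nil)
  return false if depth > max_depth     # past the configured depth
  return false if pages_seen >= limit   # page budget exhausted
  return true if spider_whitelist.nil?  # no pattern restriction configured
  !!(url =~ Regexp.new(spider_whitelist))
end
```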

Further, the spider will identify any files of the types listed below and parse their content and metadata for the same entity types. Because this file parsing uses the excellent Apache Tika under the hood, the number of supported file formats is huge: over 300, including common formats like doc, docx, and pdf, as well as more exotic types like application/ogg and many video formats. To enable this, simply set the parse_file_metadata option.
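The entity extraction itself boils down to running patterns over the extracted text. A hedged sketch follows; the real task’s patterns are more thorough than these illustrative regexes:

```ruby
# Pull EmailAddress and DnsRecord candidates out of page or file content.
# Illustrative patterns only (the real extraction is more complete).
EMAIL_RX = /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/
HOST_RX  = /\b(?:[a-z0-9-]+\.)+(?:com|net|org|io)\b/i

def extract_entities(text)
  {
    "EmailAddress" => text.scan(EMAIL_RX).uniq,
    "DnsRecord"    => text.scan(HOST_RX).uniq
  }
end
```

The same extraction runs on the text Tika pulls out of documents, so an email address buried in a pdf surfaces just like one on an HTML page.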

Below, see a screenshot of the task’s configuration:

uri_spider task configuration

Note that you can also take advantage of Intrigue Core’s file parsing capabilities on a Uri-by-Uri basis by pointing the uri_extract_metadata task at a specific Uri with a file you’d like parsed.


Docker One-Liner

Assuming you have Docker installed, this will pull the latest all-in-one image from DockerHub, and start it listening on :7777.

##### The following command will pull and run the latest stable image. 
### Add a -v option (see the FAQ below) if you need to preserve your projects between runs
docker run -e LANG=C.UTF-8 --memory=8g -p 7777:7777 -it "intrigueio/intrigue-core:latest"

Once downloaded, you’ll see the startup happen automatically, and a password will be generated for you. You’ll then be able to access the interface at :7777 on localhost, since the command forwards the port to your local host.

Now that you have an instance running, check out the Up and Running with Intrigue Core guide.

Quick Docker image FAQ:

THE SERVICE IS NOT LOADING ON :7777: Did you forward the port when starting the container (as the example above does)? Make sure to access it using HTTPS; a certificate is generated at start time! If you’re still having trouble, try opening a shell on the docker container using docker exec

docker exec -it <CONTAINER_ID> /bin/sh

and make sure the service is actually running on :7777 (using something like netstat -lnt). If it is running in the container, then everything is working as expected, and the problem is very likely with your host system’s networking settings.

I’M ON A SMALLER SYSTEM AND AM EXPERIENCING INSTABILITY: Running Core is memory- and CPU-intensive, and you’ll want to make sure your Docker daemon has access to enough RAM. While you can start the container with the --memory= flag as above, if you have not given the Docker daemon itself enough RAM, it won’t work. See this link for more assistance on how to accomplish this.

I’D LIKE TO PRESERVE DATA BETWEEN RUNS, CAN YOU HELP?: To preserve your instance data between container runs, you can use Docker’s volume support. Simply add a -v option to the docker run command above, using the syntax LOCAL_FOLDER:/data. An example is below.

-v ~/intrigue-core-data:/data

I’M HAVING ODD PERMISSION ISSUES WHEN TRYING TO LAUNCH THE CONTAINER ON RHEL: If you’re running Docker on a RHEL system as a user other than root, you may need to add the --privileged flag.

If you’re still having trouble, jump into our Slack channel and ask questions in #core-help!

Using Intrigue Ident for Application Fingerprinting

Chatting with folks at RSA and BsidesSF, I realized it’d be helpful to share more information about the new application fingerprinter behind Intrigue Core: Ident.

Ident is a new standalone project with a clear focus: to be the most complete, flexible, and extensible software for fingerprinting application-layer technologies and vulnerabilities. Given that it’s launching with over 300 checks covering current and widespread technology, and that it’s simple to craft a new check (see details below), it’s well on the way toward fulfilling this mission.

You might wonder… why not just integrate with nmap, recog, wappalyzer, or others? A couple of key reasons: 1) a focus on freedom (the code is BSD licensed), and 2) a razor-sharp focus on app-layer technology. To give you a more detailed view, here’s what I highlighted at BsidesSF this year – these are the key qualities I was looking for:

Compare this with Recog, another excellent fingerprinting library, which uses a more static check format (not a good fit for Ident, though also a strength in its own right) and focuses on infrastructure. That focus on infrastructure actually makes Recog a great complement to Ident, and thus it’s been integrated into Intrigue Core as well.

And while there are many tools and libraries out there, each had a licensing or technology limitation against these criteria, making them incompatible with the focus of the project.

In addition, spinning up a standalone fingerprinter has a number of benefits:

  • It makes the project easier to use and to contribute to: checks are pretty simple to create and test.
  • It opens up new use cases … If you have a set of known applications, but want to know if they’re running a given version, or if they’re configured properly, Ident’s CLI can be an excellent fit. You can just run it against a list of urls (see below).
  • Automation of Ident can be a lot easier than automating against the whole Intrigue Core platform. Feel free to drop the library into your project, and reach out if we can help you do so!
  • By building from the ground up, we can integrate CPE support, ensuring vulnerability inference against the CVE database “just works” and we don’t need to do anything special to determine vulnerabilities for a given version.
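To illustrate the CPE point, here is a minimal sketch of turning a fingerprint match into a CPE-style identifier. This is abbreviated (a full CPE 2.3 string carries additional fields), and the vendor/product values are illustrative:

```ruby
# Build an abbreviated CPE 2.3 identifier from a fingerprint result,
# the kind of key used to look up known CVEs for a given version.
def to_cpe(vendor:, product:, version:)
  "cpe:2.3:a:#{vendor.downcase}:#{product.downcase}:#{version}"
end

to_cpe(vendor: "Apache", product: "Tomcat", version: "7.0.61")
# => "cpe:2.3:a:apache:tomcat:7.0.61"
```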

To give you a quick run through and some examples of what it can do, here’s an example of the CLI running against a single URL:

And here’s one against a file of URLs (one per line), which automatically saves results into a CSV file:

A cool thing about the CLI tool is how it handles “content checks” – a special type of check that always runs and prints output – versus “fingerprint checks”, which also always run but only show up in the output if they match. The CLI generates an output.csv file that makes each content check a column, and it’s smart enough to notice when a new check is added! Simply drop a new check into the “checks/content” folder if you want its output in the CSV.
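The column-per-content-check behavior can be sketched roughly like this; the check names and result shapes here are illustrative, not the CLI’s exact internals:

```ruby
require "csv"

# Emit one row per url, with a column for every known content check,
# so newly added checks appear in the CSV automatically.
def results_to_csv(results, content_check_names)
  CSV.generate do |csv|
    csv << ["url"] + content_check_names
    results.each do |r|
      csv << [r[:url]] + content_check_names.map { |name| r[:content][name] }
    end
  end
end
```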

Here’s an example content check; this one checks for directory indexing in the content of the tested page:
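For a rough idea of the shape, here is a hedged reconstruction of a directory-indexing content check; the exact field names in the Ident codebase may differ:

```ruby
# Illustrative content check: flag directory indexing in the response body.
DIRECTORY_INDEXING_CHECK = {
  type: "content",
  name: "Directory Indexing",
  match_type: :content_body,
  match_content: /<title>Index of/i,
  paths: ['#{url}'] # literal template, expanded at request time
}

# Content checks always run; their result is reported either way.
def check_matches?(check, body)
  !!(body =~ check[:match_content])
end
```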

Fingerprint checks are also pretty simple to write. This one matches Axis webcams, and as you can see, it checks the body contents for a unique string. You can regex against the contents of the body, headers, cookies, title, and generator.
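Along the same lines, here is a hedged sketch of a fingerprint check; the match string and field names are illustrative rather than copied from the real Axis check:

```ruby
# Illustrative fingerprint check: match a vendor-unique string in the body.
AXIS_FINGERPRINT_CHECK = {
  type: "fingerprint",
  vendor: "Axis",
  product: "Network Camera",
  match_type: :content_body,
  match_content: /AXIS.*Network Camera/i
}

# Fingerprint checks only surface in the output when they match.
def run_fingerprint_checks(checks, body)
  checks.select { |c| c[:match_type] == :content_body && body =~ c[:match_content] }
end
```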

Ident is also tightly integrated with Google’s Chrome Headless, so if you add the ‘-b’ flag, you’ll notice that some additional checks are run (and it may run a little more slowly… roughly 10s per url on a recent machine) because it’s parsing and fingerprinting against the full DOM. Very handy!

To keep the library speedy and minimize the number of requests made while fingerprinting a given application, each Ident check takes a “paths” parameter that is used as a template for the requested pages. This is pre-processed at runtime to ensure only ONE request is made for each unique path. This keeps the fingerprinting FAST, so we will endeavor to minimize the number of unique paths going forward! Fortunately, the standard “#{url}” path is often still VERY verbose about the running software.
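The pre-processing step can be sketched in a few lines: expand each check’s paths template against the base url, then deduplicate so every unique path is fetched exactly once. This is illustrative; the real implementation may differ:

```ruby
# Expand literal '#{url}' path templates and deduplicate, so each unique
# path results in a single HTTP request during fingerprinting.
def unique_request_paths(checks, url)
  checks.flat_map { |c| c[:paths] }
        .map { |template| template.gsub('#{url}', url) }
        .uniq
end
```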

If you’d like to get started using it right away, you can pull the latest Dockerhub image and start testing using the following command:

docker pull intrigueio/intrigue-ident && docker run -t intrigueio/intrigue-ident --url [YOUR URL HERE]

That’s it for now. Reach out if you have questions! You can check out the full set of checks here. If you’re interested in helping out, or have ideas on how to improve the project, certainly pop into the Slack channel and say hello, or reach out on twitter: @intrigueio.

Intrigue Core v0.6 Released!

Today marks the release of Intrigue Core v0.6, bringing a bunch of new functionality including: automatic inference of CVEs and vulnerabilities on discovered applications, new entities such as “Finding” and “Domain” types, and usability features such as the ability to import a list and new analysis views to easily see expired certs or out-of-compliance cipher suites.

See below for details on how to get it, and enjoy!

Major Features 

  • Added a vulnerability inference capability based on ident’s fingerprinting
  • Added support for a “Domain” entity, representing a top-level domain (vs a standard DnsRecord)
  • Added initial support for “Finding” entity, enabling tasks to easily surface actionable findings
  • Added the ability to import and run a set of tasks on a list (thanks @hollywoodmarks!)
  • Added new analysis views (ciphers, javascript, cves)
  • Added support for task “Notifiers” & an initial (Slack) notifier
  • Adjusted application fingerprinting to a new standalone library, “intrigue-ident”
  • Adjusted Enrichment tasks to run in-line, eliminating a variety of race conditions when running machines

Minor Features

  • Added support for go-based utilities in the image via util/ setup script
  • Added support for RDAP, enabling new RIR Whois lookups (Afrinic, Lacnic, Apnic)
  • Adjusted handling of network services – all types are now subtypes of “NetworkService”
  • Adjusted handling of SaaS services – all types are now subtypes of “WebAccount”
  • Adjusted base image to Ubuntu 16.04 (util/ setup)
  • Upgraded to Bundler 2
  • Upgraded to latest GeoLite2-City

Major Bugs

  • Fixed bug causing the system to throw a runtime error when an API key is missing
  • Fixed bug in util/ script that would cause a hang due to grub-pc (thanks @bpmcdevitt!)
  • Fixed bug that would cause memory leaks in Chrome headless browser teardown
  • Mounted an out of control rollercoaster of regex bugs and arrived victorious
  • … literally hundreds of other minor bugs

New Tasks:

A huge thanks to the following folks who submitted PRs and/or contributed to this release:

You can download and run Intrigue Core v0.6 immediately using one of the following guides:

If you’re interested in contributing to the effort to make Intrigue Core the best OSINT and security intelligence gathering framework around, please jump in our chat and say hello!