Announcing two new features called “entity enrichment” and “entity aliasing”.

Entity enrichment allows us to get a more complete picture of an entity. It is a process that happens automatically for certain entities upon their creation. Today, IpAddress, DnsRecord, and Uri are supported. For each of these entity types, one or more enrichment tasks will be run as soon as the entity is created, allowing us to discover additional facts and alternate names (“aliases”) for the entity.

How it works: upon creation, an enrichment task (lib/tasks/enrich/*) will be scheduled and run. In the case of an IpAddress, the name is resolved via A, CNAME, or PTR records, DnsRecord entities are created from the results, and each is finally “aliased” to the IpAddress.
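For intuition, the lookups behind this are roughly what you’d do by hand with dig (a sketch only; the real logic lives in the lib/tasks/enrich/* tasks, and 192.0.2.1 is just a placeholder address):

$ dig +short intrigue.io A    # forward: a DnsRecord yields IpAddress entities
$ dig +short -x 192.0.2.1     # reverse: an IpAddress yields a PTR name to alias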

Let’s run through a quick demo.

First, create a new project and select the “Create Entity” task with a DnsRecord entity. In this case, we’ll use the DnsRecord “intrigue.io” with no recursive depth:

[Image]

Hit “Run Task” and you’ll see that it kicks off the task, creating the entity:

[Image]

Now browse to the entities page. Notice that there are now three total entities: one DnsRecord and two IpAddress entities. Note also that the IpAddress entities are both aliased to the “intrigue.io” DnsRecord. In this way, we can quickly find load balancers and other interesting DNS configurations.

[Image: entities table]

Now, let’s try with a larger iteration strategy (3):

[Image]

Give it a few moments; then, on the Entities view, filtering for IpAddress entities only, we can see the correlation of IpAddress to DnsRecord:

[Image]

This is also a good way to find DNS entries that are no longer active or resolving to an IP, though that is left as an exercise for the reader.
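As a starting point for that exercise, a minimal shell loop (assuming a names.txt with one hostname per line; not a built-in feature) might look like:

$ while read name; do [ -z "$(dig +short "$name" A)" ] && echo "stale: $name"; done < names.txt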


So you’ve gotten an instance of intrigue-core up and running using the AMI or Docker guide, but what now!? Give scans a try. Here’s how.

Create a new project; let’s run this one against Mastercard (they run a public bounty program on Bugcrowd):

[Image: create_project]

Now, run a “Create Entity” task to create a DnsRecord with the name “mastercard.com”.

This time, however, let’s set our recursive depth to 3. This will tell the system to run all viable tasks when a new entity is created, recursing until we reach our maximum depth:

[Image: iteration.jpg]
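Conceptually, the recursion behaves like this hypothetical shell sketch (not intrigue-core’s actual implementation): each result can trigger further lookups, one level deeper, until the depth budget is spent.

enrich() {
  local name=$1 depth=$2
  [ "$depth" -le 0 ] && return            # stop once the maximum depth is reached
  for ip in $(dig +short "$name" A); do
    echo "depth $depth: $name -> $ip"     # a new IpAddress entity appears here
    local ptr=$(dig +short -x "$ip")      # its PTR record may name a new DnsRecord...
    [ -n "$ptr" ] && enrich "${ptr%.}" $((depth - 1))   # ...which recurses one level deeper
  done
}

enrich mastercard.com 3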

Hit “Run Task” and you’ll see that our entity was successfully created:

[Image: create_entity.jpg]

Now, let’s browse to the “Results” tab and get an overview of the “Autoscheduled Tasks” that have been kicked off automatically:

[Image: results-autoscheduled]

Wow, 83 tasks in just a few seconds! Core is FAST, thanks to Sidekiq and Sequel. Now we can browse over to the “Graph” tab and get an overview of the entities (nodes) and the tasks (edges) that created them.

[Image: mastercard]

Note that the graph is generated each time you load the page, so you may need to refresh a couple of times for it to show. You can zoom in and out to get details on the nodes:

[Image: zoom-graph.jpg]

Browsing over to the “Dossier”, you can see that some fingerprinting is happening on the web servers, based on the page contents. Note that there’s nothing invasive happening here; it’s simply grabbing pages and analyzing the results:

[Image: dossier-2]

One neat feature is that core actually parses web content, including PDFs and other file formats, to pull out metadata. More to come on this!
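To see the kind of metadata such parsing can surface, you can inspect a document by hand with a tool like exiftool (an illustration of the idea, not necessarily what core uses internally; whitepaper.pdf is a hypothetical file):

$ exiftool whitepaper.pdf | grep -iE 'author|creator|producer'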

All this in just a few minutes:

[Image: attack_surface]

To get started with intrigue-core using Docker, you’ll need to install Docker on your machine.

Next, pull down the intrigue-core repository to your local machine with a git clone and jump into the directory:

$ git clone https://github.com/intrigueio/intrigue-core
$ cd intrigue-core

Then use Docker to build an image:

$ docker build .

Finally (this is pretty easy, huh?), run the image with Docker!

$ docker run -i -t -p 0.0.0.0:7777:7777 [image id]
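If you’d rather not copy the image ID out of the build output, you can tag the image at build time and run it by name:

$ docker build -t intrigue-core .
$ docker run -i -t -p 0.0.0.0:7777:7777 intrigue-core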

This will start the docker image with the intrigue-core services, giving you output that looks like the following (shortened for brevity):

Starting PostgreSQL 9.6 database server    [ OK ]
Starting redis-server: redis-server.
Starting intrigue-core processes
[+] Setup initiated!
[+] Generating system password: hwphqlymmpfrqurv
[+] Copying puma config....
[ ] File already exists, skipping: /core/config/puma.rb

* Listening on tcp://0.0.0.0:7777
Use Ctrl-C to stop

As it starts up, you can see that it generates a unique password. You can now log in with the username intrigue and the password above at http://localhost:7777 on your host machine!
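If the password scrolls past, you can also recover it from the container’s logs (docker ps -l shows the most recently created container):

$ docker logs $(docker ps -lq) 2>&1 | grep "system password"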

Now that you’re up and running, see: Up and running with Intrigue-core

UPDATE: The latest test image can be found by searching for ‘intrigue-core-latest’ in Community AMIs. It is currently only available in the Northern Virginia (us-east-1) region on EC2.

I’ve made an EC2 image available for testing if you’d like a simple way to try it out. Here’s a quick demo of how to get started.

The current AMI name is: intrigue-core-latest-20190218

Once it’s up and running, update by logging in and running:

$ cd core && git pull && bundle install && ./util/control restart

Congrats, you’re up and running. Access the interface at http://[hostname]:7777.

Intelligence Gathering, Reconnaissance, Targeting, or Pre-Collection… No matter what you call it, it’s an important component of any security assessment project.

Intelligence Gathering: The collection of intelligence, both overt and covert, to aid in deciding on a course of action.

Intelligence Gathering (IG) is often viewed and approached as the first step of an assessment project. A penetration tester will diligently scan the target’s website, gather DNS information, and check Google for email addresses; they might even check SHODAN for exploitable hosts.

Unfortunately, this is often where Intelligence Gathering stops. The assessor now has enough information to move on to the “Active Scanning” or “Exploitation” phases, ignoring the fact that they will need to continuously perform IG on new information throughout the assessment.

… So what is Intelligence Gathering at its core? There are a number of recognized disciplines within the scope of Intelligence Gathering. The most recognizable of these is Open Source Intelligence (OSINT), or intelligence gathering performed on publicly available sources. In the Intelligence Community (IC), the term “open” refers to overt, publicly available sources (as opposed to covert or clandestine sources).

We often focus on OSINT, but there are other disciplines, such as SIGINT and HUMINT, that are often left untouched when assessing the security of an organization, since they may not be relevant, in scope, or within the control of the entity that commissioned the assessment.

The process can be difficult to scope: until you’ve gained enough information to reach your goal, you’ll continue to gather intelligence and analyze it, filtering it into a model of the target. “Enough” IG largely depends on the goals of the application for which it’s used. If you haven’t been successful at reaching your target, then you have more to do.

Performing Intelligence Gathering at scale can also be challenging. A small business or organization can consist of thousands of entities which may or may not be relevant during an assessment. An enterprise, made up of thousands, if not millions, of entities and the relationships between them, is simply mind-boggling and impossible to process with traditional techniques. This is truly a “big data” problem.

Our mission is to make Intelligence Gathering and Analysis simple, and to support the assessment efforts of security professionals.