Monday, March 26, 2018

Even More DFIR Brain Droppings...

Analysis
Something I've wanted to address for years is the question of "how do I analyze X?"  I regularly see or receive questions about analyzing a particular artifact on Windows systems, such as "...how do I analyze this Windows Event Log?", and after all this time, I think this is a good opportunity to move beyond the blogosphere to a venue or medium that's more widely accessible.

However, addressing the issue here, the simple fact is that artifacts viewed in isolation are without context.  A single artifact, by itself, can have many potential meanings.  For example, Jonathon Poling correctly pointed out that the TerminalServices-RemoteConnectionManager/1149 event does not specifically indicate a successful login via Terminal Services; rather, it indicates a network connection.  By itself, in isolation, any "definitive" statement made about the event, beyond the fact that a network connection occurred, amounts to speculation.  However, if we know what occurred "near" that event, with respect to time, we can gather enough information to provide some much-needed context.  Sure, we can add Security Event Log records, but what if (and this happens a lot) the Security Event Log only goes back a day or two, and the event you're interested in occurred a couple of months ago?  File system metadata might provide some insight, as would UserAssist data from user accounts.  As you can see, as we start adding specific data sources...not willy-nilly, because some data sources are more valuable than others under the circumstances...we begin to develop context around the event of interest.
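To make that idea of "context around the event" concrete, here's a minimal sketch of what it can look like in code.  It assumes the third-party python-evtx library and hypothetical paths to logs exported from an image; the idea is simply to anchor on 1149 records and collect anything logged within a few minutes of them, from whatever event logs happen to be available:

from datetime import timedelta

from Evtx.Evtx import Evtx   # third-party: python-evtx (assumed to be installed)

WINDOW = timedelta(minutes=5)   # how far around each 1149 event to look

def events(path):
    """Yield (timestamp, xml) for every record in an .evtx file."""
    with Evtx(path) as log:
        for rec in log.records():
            yield rec.timestamp(), rec.xml()

def context_around_1149(rdp_log, other_logs):
    """Build a small, sorted timeline of everything logged near a 1149 event."""
    anchors = [ts for ts, xml in events(rdp_log) if ">1149</EventID>" in xml]
    timeline = []
    for src in [rdp_log] + other_logs:
        for ts, xml in events(src):
            if any(abs(ts - a) <= WINDOW for a in anchors):
                timeline.append((ts, src))
    return sorted(timeline)

# Hypothetical file names for logs copied out of an acquired image
for ts, src in context_around_1149(
        "TerminalServices-RemoteConnectionManager%4Operational.evtx",
        ["Security.evtx", "System.evtx"]):
    print(ts, src)

The same windowing approach extends naturally to file system metadata, UserAssist entries, or any other time-stamped data source you choose to fold into the timeline.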

The same can be said for other events, such as a Registry key LastWrite time...this could indicate that a key was modified by having a value added, or deleted, or even that the key had been created on that date/time.  In isolation, we don't know...we need more context.  I generally tend to look to the RegBack folder, and then any available VSCs for that additional context.  Using this approach, I've been able to determine when a Registry key was most likely modified, versus the key being created and first appearing in the Registry hive.
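Here's a minimal sketch of that comparison, assuming the third-party python-registry library, a hypothetical key of interest, and hive paths copied out of the image; the same function can just as easily be pointed at hive copies pulled from any available VSCs:

from Registry import Registry   # third-party: python-registry (assumed to be installed)

KEY_PATH = "Microsoft\\Windows\\CurrentVersion\\Run"   # example key of interest

def last_write(hive_file, key_path):
    """Return the LastWrite time for key_path, or None if the key is absent."""
    try:
        return Registry.Registry(hive_file).open(key_path).timestamp()
    except Registry.RegistryKeyNotFoundException:
        return None

# Hypothetical paths to the live hive and the RegBack copy from the image
live    = last_write(r"C:\case\config\SOFTWARE", KEY_PATH)
regback = last_write(r"C:\case\config\RegBack\SOFTWARE", KEY_PATH)

if regback is None:
    print("Key absent from the RegBack copy; likely created after that backup.")
elif live != regback:
    print(f"Key changed between the backup and the live hive: {regback} -> {live}")
else:
    print(f"No change since the RegBack copy; LastWrite {live}")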

As such, going back to the original questions, I strongly recommend against looking at any single artifact in isolation.  In fact, for any artifact with a time stamp, I strongly recommend developing a timeline in order to see that event in context with other events. 

LNK Shell Items...what's old is new again
It's great to see LNK shell items being discussed on the Port139 blog, as a lot of great stuff is being shared there.  It's good to see topics that have been discussed previously being raised again; over time, artifacts that we don't see a lot of get forgotten, and it's worth revisiting them.  In this case, being able to parse LNK files is a good thing, as adversaries are using LNK files for more than just simple persistence on systems.  For example, they've been observed sending LNK files to their intended victims, and as was described by JPCERT/CC about 18 months ago, those files can provide clues to the adversary's development environment.  LNK files have been sent as attachments, as well as embedded in OLE objects; both can be parsed to provide insight into not just the adversary's development environment, but also potentially to track a single actor/platform used across multiple campaigns.
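As an illustration of the kind of information sitting in these files, here's a minimal sketch (standard library only, offsets per the MS-SHLLINK specification, with a hypothetical file name) that pulls the target file's timestamps and size out of the fixed-size header; fields like these, along with the volume, path, and machine information found elsewhere in the file, are the sort of development-environment clues JPCERT/CC described:

import struct
from datetime import datetime, timedelta

def filetime(ft):
    """Convert a Windows FILETIME (100ns ticks since 1601) to a datetime."""
    return datetime(1601, 1, 1) + timedelta(microseconds=ft // 10) if ft else None

def lnk_header(path):
    with open(path, "rb") as f:
        hdr = f.read(76)                          # fixed-size ShellLinkHeader
    size, = struct.unpack_from("<I", hdr, 0)
    if size != 0x4C:
        raise ValueError("not a shell link file")
    flags, attrs = struct.unpack_from("<II", hdr, 20)   # LinkFlags, FileAttributes
    ctime, atime, wtime = struct.unpack_from("<QQQ", hdr, 28)
    fsize, = struct.unpack_from("<I", hdr, 52)
    return {
        "link_flags": hex(flags),
        "target_created":  filetime(ctime),
        "target_accessed": filetime(atime),
        "target_modified": filetime(wtime),
        "target_size": fsize,
    }

print(lnk_header("sample.lnk"))   # hypothetical sample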

Another reason to parse LNK files is that adversaries can also use them to maintain the ability to return to a compromised environment, by modifying the icon filename field to point to a remote system they control.  When the LNK file is rendered, the victim system attempts to authenticate to that remote system; the adversary records those authentication attempts and cracks the captured credentials, recovering passwords even if they've been changed since the initial compromise.
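Checking for that technique is straightforward; the following sketch (again standard library only, structure per MS-SHLLINK, a hypothetical file name, and assuming the default code page for non-Unicode strings) walks a LNK file to its icon location string and flags values that point at a remote (UNC) path:

import struct

def icon_location(path):
    with open(path, "rb") as f:
        data = f.read()
    flags, = struct.unpack_from("<I", data, 20)
    unicode_strings = bool(flags & 0x80)        # IsUnicode
    pos = 76                                    # end of the fixed header
    if flags & 0x01:                            # HasLinkTargetIDList
        pos += 2 + struct.unpack_from("<H", data, pos)[0]
    if flags & 0x02:                            # HasLinkInfo (size includes itself)
        pos += struct.unpack_from("<I", data, pos)[0]
    # StringData sections appear in a fixed order; read/skip until ICON_LOCATION
    for bit in (0x04, 0x08, 0x10, 0x20, 0x40):  # Name, RelPath, WorkingDir, Args, Icon
        if not flags & bit:
            continue
        count, = struct.unpack_from("<H", data, pos)
        pos += 2
        raw = data[pos:pos + count * (2 if unicode_strings else 1)]
        pos += len(raw)
        if bit == 0x40:
            return raw.decode("utf-16-le" if unicode_strings else "cp1252")
    return None

icon = icon_location("sample.lnk")              # hypothetical sample
if icon and icon.startswith("\\\\"):
    print(f"Icon location points to a remote host: {icon}")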

Something else that hasn't been discussed in some time is the fact that shell items can be used to point to devices attached to a system.  Sure, we know about USB devices, but what about digital cameras, and being able to determine the difference between possession of images and production of them?

EDR Solutions
Something I've encountered fairly regularly throughout my DFIR experience is Locard's Exchange Principle; I've blogged and presented on the topic as well.  Applied to DFIR, it means that when an adversary connects to or engages with a system on a compromised infrastructure, digital material is exchanged between the two systems.  Now, this "digital material" may be extremely transient and persist for only a few microseconds, but the fact is that it's there.  As most commercial operating systems are not created with digital forensics and incident response in mind, most (if not all) of these artifacts are not recorded in any way (meaningful or otherwise).  This is where EDR solutions come in.

For the sake of transparency: I used to work for a company that created endpoint technology that was incorporated into its MSSP offering, and my current employer includes a powerful EDR product among the other offerings within its product suite.

For something to happen on a system, something has to be executed.  Nothing happens, malicious or otherwise, without instructions being executed by the CPU.  Let's say that an adversary is able to get a remote access Trojan (RAT) installed on a system, and then accesses that system.  For this to occur, something needed to have happened, something that may have been extremely transient and fleeting.  From that point, commands that the adversary runs to, say, perform host and network reconnaissance,  will also be extremely transient.

For example, one command I've seen adversaries execute is "whoami".  This is a native Windows command, and not often run by normal users.  While the use of the tool is not exclusive to adversaries, it's not a bad idea to consider it a good indicator.  When the command is executed, the vast majority of the time involved isn't spent executing the command, but rather in the part of the code that sends the results to the console.  Even so, once the command completes, the process exits and its memory is freed for use by other processes, meaning that even a few minutes later, without any sort of logging, there's no indication that the command was ever executed; any indication that the adversary ran it is gone.

Now, extend this to things like copy commands (i.e., the bad guy collects files from the local system or remote shares), archival commands (compressing the collected files into a single archive, for staging), exfiltration, and deletion of the archive.  All of these commands are fleeting, and more importantly, not recorded.  Once the clean-up is done, there are few, if any, artifacts to indicate what occurred, and this is something that, as many DFIR practitioners are aware, is significantly impacted by the passage of time.

This is where an EDR solution comes in.
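To be clear about what "recording" means here: the sketch below is not an EDR agent (a real one hooks process creation at the kernel level, among many other things), but assuming the third-party wmi package on a Windows system and a hypothetical log file name, it shows the kind of durable record that turns a fleeting "whoami" into something an analyst can actually find later:

import wmi   # third-party package wrapping WMI event queries (assumed to be installed)

def watch_process_creation(logfile="proc_creation.log"):
    c = wmi.WMI()
    watcher = c.Win32_Process.watch_for("creation")
    with open(logfile, "a") as out:
        while True:
            proc = watcher()   # blocks until a new process is created
            out.write(f"{proc.CreationDate}  pid={proc.ProcessId}  "
                      f"{proc.Name}  {proc.CommandLine}\n")
            out.flush()        # a transient "whoami" now exists on disk

if __name__ == "__main__":
    watch_process_creation()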

This lack of instrumentation and visibility is what leads to speculation (most often incorrect and exaggerated) about the "sophistication" of an adversary.  When we don't see the whole picture, because we simply do not accept the fact that we do not have the necessary instrumentation and visibility, we tend to fill the gaps in with assumption and speculation.  We've all been in meetings where someone says, "...if I were the attacker, this is what I would do...", simply because there's no available data to illustrate otherwise.  It's also incredibly easy under such circumstances to say that the attacker was "sophisticated", when all they really did was modify the hosts file, and then create, run, and delete an FTP script file.

Why does any of this matter?

Well, for one, current and upcoming legislation (e.g., GDPR) levies 'cratering' fines for breaches; that is, fines that can have a hugely significant impact on the financial status of a company.  If we continue the way we're going now...receiving external notification of an intrusion weeks or months after the attack actually occurred...we're going to see significant losses, beyond what we're seeing now.  Beyond paying for a consulting firm (or multiple firms) to investigate the breach, along with loss of productivity, reporting/notification, lawsuits, impact to brand, drop in stock price, etc...now there are these huge fines.

Oh, and the definition of a breach includes ransomware, so yeah...there's that.

And all of these costs, both direct and indirect, are included in the annual budget for companies...right?  We sit down at a table each year and look at our budget, and take a swag...we're gonna have five small breaches and one epic "Equifax-level" breach next year, so let's set aside this amount of money in anticipation...that actually happens, right?

Why not employ an EDR solution?  It's something you can plan for and include in your budget...the costs are known ahead of time.  The end result is that you detect breaches early in the attack cycle, obviating the need to report.  In addition, you now have the actual data to definitively demonstrate that 'sensitive data' was NOT accessed...and if client data was not accessed, and you can demonstrate that, why would you need to notify?

Recently, following a ransomware attack, an official with a municipality in the US stated that there was "no evidence" that sensitive data had been accessed.  What should have been said was that there was simply no evidence...the version of ransomware that impacted that municipality was not email-borne; delivery of the ransomware required that someone access the infrastructure remotely, locate the servers, and deploy the ransomware.  If all of this occurred and no one noticed until the files were no longer accessible and the ransom note was displayed, how can you then state definitively that sensitive data was not accessed?

You can't.  All you can say is that there is "no evidence".

It's almost mid-year 2018...if you don't already have an EDR purchase planned, rest assured that you'll be part of the victim pool in the coming year.
