Microsoft has recently announced Azure Sentinel, its cloud-native SIEM tool with built-in AI capabilities, and a public preview is now available. It certainly feels fresh out of the oven: documentation around the product is still lightweight (but being added to at a rapid rate!) and there isn’t a lot of collateral just yet around the mechanics of how one should leverage this tool.
We’ve been busy investigating the Azure Sentinel capabilities since the release, in particular trying to build a view of how one could start adopting its use within the business and really start making use of its AI capabilities. Within this blog post we’ll describe some of our findings and provide some insights on how you can start using it for yourself.
Getting Started
The documentation around getting started with Azure Sentinel is fairly good, so we won’t regurgitate it here – but here are our notes across each of the three main steps:
- Collecting Data
- Creating Alerts
- Automate & Orchestrate
Collecting Data – Which Log Analytics Workspace to use?
If you’re new to Azure and its logging, think of the Log Analytics Workspace as the ‘repository’ for all your logs and telemetry data. Think of Azure Sentinel as the service that sits on top of that workspace to glean insights and intelligence out of that data. Some of that logging/telemetry data might be raw data (e.g. event logs from a VM) and some of that logging data might be actual events and alerts (e.g. security events triggered in Microsoft Cloud App Security). The purpose of Azure Sentinel is to centrally collect all that data together and start correlating and gleaning insightful information from it all.
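If you want a quick feel for what data is already landing in a given workspace before you decide where Sentinel should live, a generic Kusto sketch like the one below summarises recent records per table – the table names you’ll see depend entirely on which connectors and diagnostics you’ve already wired up:

```
// Count records per table over the last 24 hours to see which data
// sources are already feeding this workspace. The union/withsource
// pattern is generic Kusto; actual table names depend on your connectors.
union withsource=TableName *
| where TimeGenerated > ago(24h)
| summarize RecordCount = count() by TableName
| order by RecordCount desc
```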
In the scenario where you may already have one or more log analytics workspaces (e.g. you’ve decided to deploy a workspace in each of your Azure subscriptions), you need to decide on which workspace to associate Sentinel to, or whether to deploy a new one. Since Azure Sentinel needs to see a centralised view of data, it might make sense for you to deploy a new workspace dedicated for this purpose, and start thinking about how to connect or consolidate your disparate workspaces together. Microsoft provides some good guidance on this thinking in this article, as this is a common problem for Service Providers.
In general (and I’m talking broad strokes here), we expect that not many organisations have started collating their logs from Azure AD, Cloud App Security, AIP etc. into a single workspace yet, but some may have already started with a single workspace for all VM diagnostics. If that’s the case, I suggest just building on top of that workspace and running Sentinel on it (so you avoid rework and you already have a ton of data to work with); otherwise, simply start afresh, focus on centralising your logs into this new workspace, and know that you’ll be running your SIEM and correlation out of it.
Creating Alerts – Ok I have collected all the data, now what?
One of the first things you’ll notice is that in the current iteration, out of the box, no Azure Sentinel alerts are actually created for you. While there are great dashboards included in the built-in connector flows, which provide slick visualisations across your data, the associated alerts are not included – even for data sources that already have predefined alerts (which are captured as log events and sent to Azure Sentinel). Good examples of this are Microsoft Cloud App Security and Azure Information Protection, which can detect and log events like “Multiple failed user logon attempts to a service” or “Activity from infrequent country” and will already be raising alerts within their own portals/services. To surface these alerts in Azure Sentinel, right now you actually need to create a specific query against the workspace to detect that event and raise it as an Azure Sentinel alert.
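To see which of these provider-generated alerts are already landing in your workspace (and are therefore good candidates to wrap in a Sentinel alert rule), a quick Kusto sketch along these lines can help – the provider and alert names returned depend entirely on the services you’ve connected:

```
// List alerts that connected services (MCAS, Azure AD Identity
// Protection, etc.) have already written to the SecurityAlert table,
// grouped by provider and alert name.
SecurityAlert
| where TimeGenerated > ago(7d)
| summarize Count = count(), LastSeen = max(TimeGenerated) by ProviderName, AlertName
| order by ProviderName asc, Count desc
```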
For example, here’s how I’d create an Azure Sentinel alert for the MCAS event – Multiple failed user logon attempts to a service
Display Name: A descriptive name. I’ve tagged the source that this alert is for, since similar events can come from multiple places.
Severity: I used the same rating that MCAS marked for this event
Alert Query: MCAS alert events are stored in the SecurityAlert table under Security Insights. To get the specific MCAS alert, we write a Kusto query that filters on the ProviderName and AlertName fields. You can find the AlertName values by looking at existing logs, or, in the case of MCAS, by looking up the relevant policy names, since the two are aligned.
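As a rough sketch, the alert query ends up looking something like the query below. The ProviderName value shown is an assumption – confirm both it and the exact AlertName wording against the SecurityAlert records actually present in your own workspace:

```
// Match the MCAS-generated alert for multiple failed logon attempts.
// The ProviderName value is an assumption - verify it (and the exact
// AlertName wording) against existing SecurityAlert records.
SecurityAlert
| where ProviderName == "MCAS"
| where AlertName == "Multiple failed user logon attempts to a service"
```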
Entity Mapping: Entities are useful when you do investigations, as they give you a point of reference for correlation (e.g. find all activities associated with an Account or IP address). This part of the config helps modify your query so that it maps fields together in scenarios where there is a mismatch in field names between systems. The entities are stored under a new column called {entityName}CustomEntity. The drop-down boxes let you select fields directly from the result of your query, which is handy if there is a 1:1 field mapping; however, I’ve found that for MCAS and Azure Identity Protection alerts this is less straightforward. In Identity Protection events, the account name is stored in ExtendedProperties; in MCAS, oddly, there is no dedicated field at all – instead it’s placed inside a description! So for those you’ll need to manually edit the queries and do the mapping yourself.
An example of how I’ve done the entity mapping for the two scenarios is shown below:
Here the value, as mentioned, is stored as text within the description. It’s a bit of a hack, but what we’re doing is extracting the UPN (which is in email address format) by using a regular expression.
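As a minimal sketch of the two mappings (the regular expression, the Identity Protection ProviderName value and the ExtendedProperties key are all assumptions – inspect an existing alert in your workspace to confirm the exact names):

```
// MCAS: no dedicated account field, so extract the UPN (an email-style
// string) out of the alert description with a regular expression.
SecurityAlert
| where ProviderName == "MCAS"
| where AlertName == "Multiple failed user logon attempts to a service"
| extend AccountCustomEntity = extract(@"([A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,})", 1, Description)

// Identity Protection: the account name sits inside the ExtendedProperties
// JSON. Both the ProviderName value and the "User Account" key below are
// assumptions - check an existing alert for the exact names.
SecurityAlert
| where ProviderName == "IPC"
| extend Props = parse_json(ExtendedProperties)
| extend AccountCustomEntity = tostring(Props["User Account"])
```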
Alert Trigger: Because this is a known event, you can set the trigger to fire as soon as the number of results exceeds 0. The fact that ‘multiple failed attempts’ were needed to trigger this event is abstracted away, because MCAS was responsible for doing that analysis.
Alert Trigger
Since we’re looking for a single event log occurring (rather than a trend), we can have Frequency and Period be the same value (5 mins in this use case).
Seeing the results
Once the alert is created, you’ll start seeing hits and matches in the portal and dashboard. Note that the time filter on the right will alter how many matches you see.
Alert Rules and Matches
The alerts themselves will then trigger Azure Sentinel to create cases for each one. Again, be aware that the time filter is used to scope which cases are shown – so if an alert triggered 48 hours ago created a case that was never closed, and your time filter is set to ‘Last 24 hours’, you won’t actually see that still-open case.
Now, if you’ve done the entity mapping correctly, you’ll also start seeing the entities being picked up at the bottom. In the example below, I triggered the alert against two different accounts.
Open Cases
You can get more information about the alert that was created through View Full Details, and from there, under the Entities tab, you can also see how many alerts those entities are associated with.
Case Details
The next natural stage is to then Investigate those cases to help diagnose what is happening. Of course, this blog post is probably a bit too cutting edge, since that feature hasn’t been turned on for my tenant yet. Once I get access, I’ll do some more deep diving into that as well. But the visual graph nature of it looks exciting!
Lastly, the most interesting part is this element of Azure Sentinel:
The enablement of Fusion to start running Microsoft’s ML models across your alerts and cases to do automated correlation. Again, we’re still in the ‘fresh from the oven’ stage of this product, but some information around what Fusion does is described in this blog post. To paraphrase from the article, the use of ML models for automated correlation is to help admins reduce alert fatigue (which we’ve all encountered) and start surfacing cases that represent a single incident drawn from many different alerts and sources (and hence the real value of a centralised logging and SIEM service). There are examples provided in the blog post of what Fusion is capable of today, such as picking up an “Anomalous login leading to O365 mailbox exfiltration”, which is pretty neat. I haven’t quite managed to simulate the same behaviour myself just yet, but I suspect this is down to a combination of needing to get the alerts created in a certain way and the public preview nature of the whole service.
Closing Thoughts
So this is just the first stage of our investigations into Azure Sentinel. As you can see, there are still many elements that are a work in progress, and I expect to see this rapidly change over the next few weeks to months. Some of the things I expect (or hope) to come along soon to make Azure Sentinel much more practical to use are:
- More alert query examples in their GitHub repo. There are some there now, but I expect that to grow over time as the community gets involved.
- “Pre-built” alert queries being added, likely as part of the data connection onboarding process. Very similar to the old ‘SCOM Management Pack’ idea I hope (though perhaps a bit more refined)
- An API to allow for more automated creation of alert queries. Right now the only interface is the GUI, so even using the GitHub examples requires a lot of clicking, copying and pasting. Great for the preview stage, but obviously not great for people like us who might need to help many customers onboard this service.
There are also other parts of Sentinel we haven’t dived into yet, like Hunting, User Profiles and Notebooks (for custom ML models). As we investigate these we’ll provide more information in future blog posts, so stay tuned!