Auditing who checked your email metadata in M365

  1. Background
  2. Accessing email message trace
    1. Overview
    2. Access email message trace via Defender XDR (Explorer)
    3. Access email message trace via Defender XDR (Advanced hunting)
    4. Access email message trace via Exchange Admin Center
    5. Analyzing the logs
  3. Conclusion

Background

One of the things organizations typically gain when moving to the cloud is visibility. Especially when you're using a single vendor (such as Microsoft), you can get very wide visibility into the organization's cloud infrastructure, assuming you have the required privileges (and you know how to navigate 10 different portals). For security professionals this is typically positive, because you can see what's happening and how things have been configured. But it can have a negative impact as well, especially when it comes to privacy.

In this blog, I'll take a brief look at one specific use case related to this visibility and how it impacts privacy.

The use case is this:

Certain people (e.g., SOC analysts) need to be able to see certain information about all emails being sent and received. This information includes:

  • Sender email address
  • Recipient email address
  • Subject of the email
  • Time (when it was sent)
  • Verdict(s) from the security solutions being used (e.g., did the email contain malware, suspicious attachments, or URLs)

The problem is that this information can be very sensitive. Even when you're not able to view the contents of an email, being able to see the sender, recipient and subject means that the data can fall under very strict data privacy regulations. The question is:

  • Do you know who has access to this information?
  • Are you able to track who has accessed this information?

In an on-premises environment, this would normally not be a problem. Typically, only a very limited number of people have access to email infrastructure, such as Exchange servers or 3rd party mail gateways. In the cloud, however, this may not be the case. If you're using Exchange Online (and Defender for Office 365), you have different ways of accessing this information. You also have a fairly complex permissions structure (Entra ID roles, Exchange Online role groups, Defender XDR unified RBAC, etc.) that determines who can access the information. Then you have various audit and activity logs, but do you know which actions are actually being logged?

Accessing email message trace

Overview

My goal here is to do a simple test, as follows:

  1. Create an account with the Global Reader role in Entra ID. The reason for using this role is that in my experience it is quite widely used in organizations, but it's not necessarily identified as a very sensitive role, because it cannot make any changes to the environment.
    • Note: I verified the tests also with an account with Security Reader role, which is a role with much more limited permissions. While Security Reader cannot access some of the views in the Defender XDR portal, the overall result is the same, as noted in the conclusion section.
  2. Verify whether the account can access the information described in the use case (sender, recipient, subject, etc.) via the different portals.
  3. Verify (with an admin account) what kind of audit trail is left behind in the following logs:
    • Activity log of Defender for Cloud Apps (in Defender XDR portal)
    • CloudAppEvents table in Defender XDR advanced hunting
    • Microsoft 365 audit log (in Microsoft Purview portal or Defender XDR portal)
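For reference, the CloudAppEvents checks in step 3 can be done with an advanced hunting query along these lines – a sketch only, with a placeholder account name:

CloudAppEvents
// Activities of the test account during the test period
| where Timestamp > ago(1d)
| where AccountDisplayName =~ "test-globalreader"
| project Timestamp, ActionType, Application, AccountDisplayName
| sort by Timestamp desc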

Access email message trace via Defender XDR (Explorer)

Let’s start from the Defender XDR portal, under Email & collaboration. Global Reader can access more or less everything here (regardless of whether Defender XDR Unified RBAC has been implemented or not).

Now, let’s open Threat Explorer (Explorer in the navigation). Among other things, we can see the following information from all of the email processed by Exchange Online and Defender for Office 365:

  • Sender address (and domain)
  • Recipient address
  • Subject
  • Time
  • Delivery location
  • Threat type

By default, the view shows emails from the last two days (since the beginning of the day before). However, you can change the filter to search emails from further back.

Is there an audit trail for opening this view? Yes and no – the logs show a bunch of activities, most of which are related to RBAC. One of the log entries shows that the user opened the ThreatInstanceList, which relates to opening Threat Explorer. However, it doesn't give any information about what the user actually saw, i.e. what the filter was. So you cannot really determine whether the user searched for emails from the last two days or the last two weeks. Not very useful.
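If you want to dig these entries out of CloudAppEvents yourself, a rough sketch (the account name is a placeholder, and I'm simply searching the raw event data for the ThreatInstanceList string):

CloudAppEvents
| where Timestamp > ago(1d)
| where AccountDisplayName =~ "test-globalreader"
// The Threat Explorer activity is visible in the raw event data
| where tostring(RawEventData) contains "ThreatInstanceList"
| project Timestamp, ActionType, AccountDisplayName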

Now, let’s click one of the emails, which opens a more detailed view of an email.

Is there an audit trail for opening this view? Yes – in this case, the logs show that the user opened the specific email (the message ID is logged).

Access email message trace via Defender XDR (Advanced hunting)

Next, let’s look at Advanced Hunting. In the EmailEvents table, we can see all the information that we’re interested in right now (sender, recipient, subject, etc.).

Is there an audit trail for opening this view? No – I could not find any log entries of the query being made.

Access email message trace via Exchange Admin Center

Finally, let’s have a look at Exchange admin center (https://admin.exchange.microsoft.com) and go to Mail flow > Message trace. This allows us to query message trace logs, which contain the information that we are now interested in.

Let’s start a new message trace, and the results show again the same list of emails that we’ve seen before. Again, this is already sensitive information, because we can see the sender, recipient and subject of the emails.

Is there an audit trail for opening this view? No – using the message trace is not logged. This is actually stated quite clearly in the documentation (the message trace can also be accessed via the Get-MessageTrace cmdlet):

If we open one of the emails in the message trace, it will show us more information about the email delivery of that specific email.

Is there an audit trail for opening this view? No – I cannot see any log entries from opening the email, which is expected, as creating the message trace itself is not logged.
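As a side note, pulling the message trace with the Get-MessageTrace cmdlet looks roughly like this – a sketch that requires the ExchangeOnlineManagement module, with an example date range:

Connect-ExchangeOnline
# Sender, recipient and subject for messages from the last two days
Get-MessageTrace -StartDate (Get-Date).AddDays(-2) -EndDate (Get-Date) |
    Select-Object Received, SenderAddress, RecipientAddress, Subject, Status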

Analyzing the logs

For each of the tests above, I’ve noted whether I was able to find anything from the audit logs or not. A few words about the logs themselves:

Defender for Cloud Apps (MDA) has a very useful Activity log feature (accessible via the Defender XDR portal), which consolidates logs from all connected sources. While the UI can be a bit slow sometimes, it is intuitive to use, and works well when there is not too much data.

In this case, the activity log contains a lot of entries for the test period. However, not much detail is visible at a glance, and you need to dig into the raw data of individual entries to actually see the details.

Therefore, a better way is to use advanced hunting and the CloudAppEvents table, which I used for most of the analysis:

You can also use the Microsoft 365 audit log, which can be accessed either through the Defender XDR portal or the Microsoft Purview portal. The audit log search is a bit cumbersome, and when cross-checking I did not find any information in the audit log that wouldn't also be available in the CloudAppEvents table. This makes sense of course, because Defender for Cloud Apps pulls its audit logs from Microsoft 365.

Finally, you can send the Microsoft 365 audit logs and/or the CloudAppEvents table into Microsoft Sentinel and query the same information there. Because Sentinel runs on top of Log Analytics, you can actually audit all the queries being made there. However, that's not what I wanted to test here (Sentinel also uses the Azure RBAC permission model, not Entra ID roles).
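As an illustration of that auditing: if query auditing has been enabled on the workspace (it's an opt-in diagnostic setting that populates the LAQueryLogs table), you could look for users querying the email data with something like this:

LAQueryLogs
| where TimeGenerated > ago(7d)
// Queries that touched the email tables
| where QueryText contains "EmailEvents"
| project TimeGenerated, AADEmail, RequestClientApp, QueryText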

Conclusion

Overall, the audit logs are very limited when it comes to querying email message traces:

  • Queries made to the Exchange Online message trace through Exchange Admin Center are not being logged, even though the unified audit log ingestion has been enabled in Exchange Online.
  • Advanced hunting queries are not being logged, and you can see the message trace in the EmailEvents table (of course, you can see a lot of other potentially sensitive information as well, such as device and network events from Defender for Endpoint).
  • Accessing the overview page of the Defender XDR Threat Explorer shows the email trace. While opening the page is logged, it's not clear what the user saw there. However, opening an individual email is logged with the message ID.

Some final recommendations:

  • Global Reader has very wide read access (as its name suggests), but even Security Reader can use advanced hunting queries and access the Exchange message trace by default. Be mindful of this when assigning these roles.
  • Implement Defender XDR Unified RBAC, which allows more granular control of the permissions in Defender XDR. However, note that even after activation, users with the Entra ID roles will still have access to the data (you have to replace the Entra ID roles with the Defender XDR Unified RBAC roles).

Azure DevOps with Workload Identity Federation

  1. Introduction
    1. Why use workload identity federation with Azure DevOps?
  2. Converting an existing service connection
    1. Service principals that have been automatically created
    2. Service principals that have been manually created
  3. Creating a new service connection
    1. Creating new service connection (automatic)
    2. Creating new service connection (manual)
  4. Testing and conclusion

Introduction

Workload identity federation is a new feature in Entra ID that allows you to configure a workload identity in Entra ID to trust tokens from an external identity provider. In this blog post, I’m looking into how (and why) to use this feature with Azure DevOps service connections, which is a feature that was just announced to be generally available. If you need more information about workload identity federation in general, check out the Microsoft documentation. And if you need more information about workload identities in general, I highly recommend reading Thomas Naunheim’s series of blog posts on the topic.

Why use workload identity federation with Azure DevOps?

When you’re deploying resources into Azure from an Azure DevOps pipeline, you need to have a service connection to the target environment. Typically, the service connection is associated with a workload identity (service principal or managed identity) in Entra ID, which in turn needs to have the necessary Azure RBAC roles assignments to be able to make the changes to the target environment. The workload identities associated with the service connections are typically highly sensitive, because they need to have high privileges to the Azure resources. Earlier, this has posed some significant challenges:

  • If you’re using a service principal, Azure DevOps needs to be able to authenticate as the service principal, using a client secret or a certificate. If you let Azure DevOps to decide, it will default to client secret, although certificate would be more secure. You also need to take care of managing the lifecycle of these secrets and/or certificates, which can be challenging if you have an Azure DevOps organization with dozens or hundreds of projects, each having their own service connections and associated service principals.
  • The only way to use managed identities was to use self-hosted agents. Azure DevOps can leverage the managed identity of the resource the agent is running on. For example, if you have an Azure virtual machine (or an on-premises server onboarded into Azure Arc) that is running your self-hosted agent, you can associate the service connection with the managed identity of the virtual machine (or Azure Arc server). However, you cannot use this approach with Microsoft-hosted agents. Also, if different development projects share the same agent pools, they also share the same managed identity to deploy the resources. In most cases you want different development teams to have dedicated workload identities associated with their own service connections, so that you can prevent the teams from having privileges on each other's resources. Whether you should share agent pools at all is another topic, but there are ways to do that securely, e.g. by using run-once container instances. That doesn't resolve the issue with the shared managed identities though.

The promise of workload identity federation is that you can tackle both of these challenges:

  • You no longer need to maintain secrets or certificates, because Entra ID will trust the tokens issued by Azure DevOps. The tokens will be issued with properties that tie them to the specific service connection in a specific Azure DevOps project in a specific Azure DevOps organization. You simply need to link the service principal with the service connection by adding federated credentials to the app registration associated with the service principal, and you’re good to go.
  • You can also use federated credentials with user-assigned managed identities. While you’re not leveraging the typical benefit of managed identities in this case (= managing the credentials), the difference is that the federated credentials are managed through the Azure resource of the user-assigned managed identity, which means that you can delegate permissions to manage these credentials via Azure RBAC role assignments. Be careful though. Anyone with Contributor role to the managed identity can add federated credentials to it, which might be used by an adversary to move laterally or establish persistence.

Converting an existing service connection

If you have an existing service connection that is currently using certificates or secrets to authenticate, you can convert it to use workload identity federation instead. This is fairly straightforward, and because the workload identity used by the service connection remains the same, you don't need to change any of the permissions assigned to it. You just change the way it authenticates.

Service principals that have been automatically created

If you’ve originally created the service connection using the Service principal (automatic) option, then this is very straight forward. When you open the service connection, there is now an option to convert the service connection to use federated authentication, as shown in the picture below.

Clicking the Convert button will create the federated credentials for the workload identity, and after 7 days the original credentials will be removed from Azure DevOps. Note that you still need to manually remove the secrets from the Entra ID app registration. But before doing that, make sure the federated credentials are working.

After the conversion, you can see that federated credentials have been added to the app registration associated with the service principal:

Once you’ve tested that the federated credentials work, remember to remove the secrets from the Entra ID app registration.

Service principals that have been manually created

If you’ve originally created the service connection using the Service principal (manual) option, then you have to do a bit more manual work here as well. But it’s still fairly straight forward.

You can see the difference, when you open the service connection in Azure DevOps. In this case, it shows you that you have to manually add the federated credential to the app registration.

To add the federated credential, you need the following information:

  • Organization name: this is the name of your ADO organization. You can see it in the URL, when you’ve logged into the organization:
  • Organization id: this needs to be in GUID format. One way to find it is to browse the marketplace for extensions. You can see the organization id in the URL:
  • Project: name of your ADO project:
  • Service connection name: name of the service connection you’re about to update

Once you’ve collected the information, you need to go to the app registration associated with the service connection. The easiest way is to use the link in the service connection configuration page. Note, that the link says Manage Service Principal, but actually takes you to the app registration, not the associated service principal.

Under Certificates & secrets, click Add credential.

Add the required information collected earlier:

  • Federated identity scenario: choose Other issuer
  • Issuer: https://vstoken.dev.azure.com/<organization id>
  • Subject identifier: sc://<organization name>/<project name>/<service connection name>
  • Name: name for the credentials
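If you prefer scripting this, the same federated credential can also be added with Azure CLI – a sketch with placeholder values (the audience is the fixed api://AzureADTokenExchange value used by Entra ID):

az ad app federated-credential create --id <app object id> --parameters '{
    "name": "ado-federation",
    "issuer": "https://vstoken.dev.azure.com/<organization id>",
    "subject": "sc://<organization name>/<project name>/<service connection name>",
    "audiences": ["api://AzureADTokenExchange"]
}'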

Once you’ve added the credentials, you can go back to the service connection configuration page, and click Convert to finalize the configuration.

Again, after you've verified that the new federated authentication works, you should go back and remove any earlier credentials (secrets/certificates) from the app registration.

Creating a new service connection

You can create new service connections with workload identity federation either automatically or manually (just as with service principals).

Creating new service connection (automatic)

As you would expect, using the automatic approach is very straightforward. You just create a new service connection for Azure Resource Manager, and select Workload Identity federation (automatic) as the authentication method.

You need to select the scope and name for the service connection. Note! This will add an Owner RBAC assignment for the workload identity at the chosen scope. If you use this method, I highly recommend removing that role assignment manually and adding only the role assignments the service principal actually requires. Otherwise, you easily end up with service connections that have far too many privileges.

Another annoying thing with the automatic approach is that the display name of the service principal (and app registration) is based on the organization name, project name and scope (e.g. management group or subscription). As a result, you may end up with multiple service principals and app registrations with the same name, as seen in the picture below.

Luckily, you can change the name by opening the service principal (= Enterprise Application), and going to the Properties blade.

That’s really all there is to it!

Creating new service connection (manual)

Instead of using the automatic way, you can also manually create the workload identity first, and associate it with a service connection afterwards. You can either do this by creating an app registration (which also creates a service principal), or you can use managed identities. In this example I’m using managed identities, because as mentioned in the introduction, it has not been previously possible to use managed identities with service connections, except when using self-hosted agents. And if you skipped the introduction: be careful with delegating permissions to these managed identities.

Let’s start by creating the (user-assigned) managed identity:

Once you’ve created it, you need to assign the required RBAC permissions to it. Again, grant only the permissions required by this specific project and this specific service connection. However, you do need to grant at least Reader role to the subscription or management group that you specify later on when creating the service connection.

The next step is to create a new service connection with Workload Identity federation (manual) and give it a name.

After you’ve given a name for your service connection, you’ll see a the following screen displaying the issuer and subject identifier information that you need to use when adding the federated credentials.

Add new federated credentials to the managed identity:

Again, use the Other option on the Federated credential scenario drop-down. Then just copy-paste the information from the wizard to the Issuer URL and Subject identifier.
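This step can also be scripted with Azure CLI – a sketch with placeholder names (copy the issuer and subject values from the service connection wizard):

az identity federated-credential create \
    --name ado-federation \
    --identity-name my-ado-identity \
    --resource-group my-rg \
    --issuer "<issuer URL from the wizard>" \
    --subject "<subject identifier from the wizard>" \
    --audiences "api://AzureADTokenExchange"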

Take the client ID of the managed identity (and your tenant ID).

And add them to the wizard (again, it’s confusing that it says Service Principal Id):

Click Verify and save. And you’re done!

Testing and conclusion

Let’s do some testing. I’ll start by creating a very simple pipeline with the following definition, just to ensure we can connect to Azure Resource Manager using the newly created service connection (in this case I’m using the one I created using a managed identity).

The definition looks like this:

trigger: none

pool:
  vmImage: ubuntu-latest

parameters:
  # Service connection that is used for connecting to ARM
  - name: serviceConnection
    displayName: 'Service connection name'
    type: string
    default: 'ado-connection-test4'

steps:
  - task: AzureCLI@2
    displayName: 'Run script'
    condition: succeeded()
    inputs:
      azureSubscription: ${{ parameters.serviceConnection }}
      scriptType: pscore
      scriptLocation: inlineScript
      workingDirectory: $(Build.SourcesDirectory)
      inlineScript: |
        Write-Host "Let's get an access token!"
        az account get-access-token --resource-type arm

When I run the pipeline, I can see that the Azure CLI is indeed using federated credentials:

The sign-ins are also logged into the AADManagedIdentitySignInLogs table in Entra ID. However, there's no information about the credential type here.

The access token shows client certificate as the authentication (appidacr: 2), so we don’t really see any indication of workload federated authentication here either.

Overall, workload identity federation is a great new feature, as you no longer need to maintain secrets or certificates for your Azure DevOps service connections. For obvious reasons, you still need to be careful about delegating access to the service principals (or managed identities) used by the service connections. But this is no different from what it was before.

Detecting and remediating emails with Defender XDR correlation

One of my customers has seen an interesting campaign, and they wanted help detecting and remediating it. Here's a short summary of what they had observed:

  1. An email is sent to a shared mailbox from a consumer email address, such as Gmail. The purpose of these shared mailboxes is to allow consumers to contact the organization via email, and therefore the users reading the mailboxes are used to getting lots of legitimate email from consumer email addresses. In this case, the same email is actually being sent to multiple shared mailboxes, but of course the users reading the email do not know that. The first email itself does not have any links or attachments, but simply asks for something specific. From the user's perspective, there's nothing suspicious about this email at this point.
  2. Once the user replies, they get another email continuing the story, this time there is a link asking them to fill in a form (or something like that). The link is to some legitimate cloud service (DropBox, Google Drive, what have you).
  3. When the user opens the link, it downloads a malicious piece of software. However, the software is benign enough not to be detected by the anti-malware engine (the EDR may detect it afterwards).

As these emails are coming from consumer email addresses, they will pass all the basic email authentication checks (SPF, DKIM, DMARC). The customer is using Safe Links from Defender for Office 365 (MDO), but that hasn't been helping either (probably because the links point to legitimate 3rd party cloud services). We cannot block these cloud services completely, because they have legitimate use in the organization.

A few options come to mind, which are not mutually exclusive:

  1. Try to protect the endpoint, and prevent the user from downloading the malicious file (or at least detect/prevent the file during execution).
  2. Try to identify the link in the second mail being malicious. Move the mail to Junk folder.
  3. Detect when email is being sent from the same consumer email address to multiple shared mailboxes of this type, and move the email to the Junk folder. If we're fast enough, the user never sees the first email and won't reply to it. Or, if they do reply, maybe we are able to move the second email (with the link) to the Junk folder.

We could try to use Defender for Endpoint (MDE) to protect the endpoint (option 1), and in any case having an EDR is important for many reasons. However, unfortunately not all of the users reading these emails have MDE installed (these are typically shared workstations). And for this particular case, remediating this threat via MDE is challenging. The users may also be getting legitimate links to the same cloud service, so we cannot really block it (e.g., using MDE Network Protection). And if the downloaded file is not detected by the anti-malware engine, how do we separate valid links from malicious ones? Again, I still highly recommend having MDE in place, but it's probably not our best solution for this particular threat.

We might be able to resolve option 2 with Exchange Online Protection (EOP) and Defender for Office 365 (MDO), by using transport rules and/or anti-spam policies. However, the challenge with this one is that these rules are analyzed on a per-mail basis. Again, how do we differentiate between malicious and legitimate emails, if both might be sent from the same consumer email provider, and have a link to the same cloud service?

Hence, we decided to try option 3 instead.

Option 3: Detecting and remediating malicious emails via Defender XDR correlation

Our use case is pretty simple: if multiple shared mailboxes receive email from the same sender (using a consumer email address) during a short time period, move the email to Junk folder.

We could use either Microsoft Defender XDR for the detection, or we could use Microsoft Sentinel (if we’re sending the MDO events into Sentinel). If we use Sentinel, we need to automate the remediation with playbooks (at least until Sentinel becomes integrated with the Defender XDR portal). There doesn’t seem to be an easy way to do this with the Exchange Online or Defender XDR APIs, so I decided to create the detection logic directly in Defender XDR instead.

First, let’s send the same email from an outlook.com address to three different recipients in our test tenant. In this case, I’m adding all recipients into the same email, but in the real scenario they would be separate emails (this is tested later in the blog):

Once the email is sent, I’ll use the following KQL query to correlate the emails:

// Threshold: How many emails are tolerated from the sender
let Threshold = 2;
// TimeSpan: How far back we are looking
let TimeSpan = 1h;
// List of sender domains that we are interested in (mainly consumer email)
let SenderDomains = dynamic([
    "outlook.com",
    "gmail.com"
]);
// Recipients that will be protected
let RecipientList = dynamic([
    "recipient1@yourdomain.com",
    "recipient2@yourdomain.com",
    "recipient3@yourdomain.com"
]);
EmailEvents
| where Timestamp > ago(TimeSpan)
// Take only emails that are from specific domains (use envelope sender address)
| where SenderMailFromDomain in~ (SenderDomains)
// Take only emails that are sent to the list of recipients we are interested in
| where RecipientEmailAddress in~ (RecipientList)
// Filter based on number of emails sent from a single address
| summarize TotalEmailCount = count(), MessageIdList = make_set(NetworkMessageId), SubjectList = make_set(Subject) by SenderMailFromAddress
| where TotalEmailCount > Threshold
// Join back with EmailEvents table to get more information about each email, filter out emails that have already been remediated
| mv-expand MessageIdList
| extend NetworkMessageId = tostring(MessageIdList)
| join (
EmailEvents
| where LatestDeliveryLocation == "Inbox/folder"
) on NetworkMessageId
// Project relevant columns (note, that we need ReportId for the custom detection rule)
| project Timestamp, SenderMailFromAddress, RecipientEmailAddress, Subject, TotalEmailCount, LatestDeliveryLocation, LatestDeliveryAction, NetworkMessageId, ReportId
| sort by Timestamp desc

By running the query, you can see the results. Each line represents a recipient, and the TotalEmailCount shows the total number of recipients that got the email from this sender. The query filters out emails that have already been remediated by using the LatestDeliveryLocation column (we'll see how this works later in the blog).

Now, let’s create a detection rule based on the query:

Enter basic information for the alert, such as:

  • Frequency: Every hour (minimum)
  • Severity: Info (we will automatically remediate the issue)

Select RecipientEmailAddress under Mailbox as the entity. This way we can target remediation actions for that mailbox.

In the Actions section, you can determine what to do with the emails. In my case, I want to move them to Junk folder.

Finalize the wizard:

One you’ve submitted the rule, it will run immediately. When looking at the detection rule, you can see that it has submitted actions on the three emails we sent (or in this case one email to three recipients):

It takes a few minutes for the action to be finalized. Once it’s done, we can rerun the advanced hunting query used for the custom detection rule, and see that there are no results:

If we comment out the LatestDeliveryLocation filter, we can see that all three emails have been moved to the Junk folder, just as we wanted.

Now, let’s try again. This time I’ll send three separate emails to individual recipients:

As we can see, the email gets delivered to inbox, because the detection rule is only executed once an hour.

We can also see the entries in the EmailEvents table, and that the latest delivery location is inbox. Note also that the TotalEmailCount is now 6, because it also counts the emails that were already moved to junk (they were received within the 1-hour time window).

After an hour, the detection rule is triggered again. In the Incidents page, there are now four incidents. The first email that we sent to multiple recipients triggered one incident an hour earlier (with 3 mailboxes), while the individual emails will each trigger their own incident.

From one of the corresponding alerts, we can see that the Move to mailbox folder action has been triggered (there are two actions, because it shows also the action done for the first email sent to all three recipients):

When querying the EmailEvents table again, we can see that also these emails were delivered to the Junk folder.

And lo and behold, both emails are indeed in the recipient’s Junk Email folder:

Overall, custom detection rules are very handy when you want to perform automatic remediation for scenarios that might be complex to handle with Microsoft Sentinel automation/playbooks. We could also leverage other information available in Defender XDR, e.g., use the EmailAttachmentInfo and/or EmailUrlInfo tables to further correlate information. But in our case, we wanted to catch the first mail, which doesn't have any links or attachments. It's still not perfect, as there is a 1-hour window in which the user may respond to the email, receive the reply, and then click the link. But it's a good first step, and we can easily use the query for threat hunting first, and tweak it accordingly.
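As a sketch of what that further correlation could look like, the URLs of each email could be brought in by joining with EmailUrlInfo on NetworkMessageId (column names are from the standard schemas; the domains are just examples):

EmailEvents
| where Timestamp > ago(1h)
| where SenderMailFromDomain in~ ("outlook.com", "gmail.com")
// Bring in the URLs contained in each email
| join kind=inner (
    EmailUrlInfo
    | project NetworkMessageId, Url
) on NetworkMessageId
| project Timestamp, SenderFromAddress, RecipientEmailAddress, Subject, Url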

As always, feedback is more than welcome! And if you’ve used some other way of remediating these kinds of campaigns, feel free to share :).

Blocking desktop apps with M365 E5

Background

I recently came across a request from a customer to block specific applications on their Windows clients. More specifically, the requirements were as follows:

  1. We want to be able to block Java being installed on Windows clients
  2. We also want to block Java being used (if it’s already installed)
  3. We need to be able to make exceptions to the rule (some users still need it)
  4. We need to be able to test it first in audit mode (minimize disruption)
  5. We need to be able to monitor when Java is being blocked
  6. The solution should be easy to maintain
  7. The solution does not need to be perfect, i.e., it is targeted at casual, mainly non-technical users.

The customer had the following setup:

  • Microsoft 365 E5 licenses for all users
  • Microsoft Defender for Endpoint (MDE) deployed to all Windows clients
  • Clients have Windows 10 (or Windows 11) and are hybrid joined (Entra ID + Active Directory)
  • Most clients are enrolled into Intune, but the rest are managed via Group Policies (and SCCM)
  • Scale: thousands of users (and Windows clients)

So the question is: what is the best solution?

A few options come to mind, and those are explored in the sections below.

Option 1: Use MDE file indicators

The first idea that came to mind was to use the file indicators in Microsoft Defender for Endpoint (MDE). In MDE, you can define file hashes or signing certificates for files that you want to block in your environment.

File hashes are out of the question, because they would require constantly updating the list with different versions of Java installers, executables, etc. (requirement 6).

Signing certificates would be more promising, because presumably they don't change as often. Strangely, however, you cannot add certificate indicators in audit mode (you can do that for file hashes).

There is actually another reason why MDE indicators are out of the question: indicators are targeted at machine groups, and a client can only belong to one machine group. So you cannot really make any exceptions, especially if you want to add another application (with different exclusions) to the list of blocked applications.

Therefore, option 1 is out.

Option 2: Use AppLocker

AppLocker has been around forever. I remember using it 10 years ago to whitelist applications in a very restricted server environment, and I remember it was a pain to manage. While AppLocker is still part of Windows 10 and 11, this statement from Microsoft is quite telling:

Generally, it’s recommended that customers, who are able to implement application control using Windows Defender Application Control rather than AppLocker, do so. WDAC is undergoing continual improvements, and is getting added support from Microsoft management platforms. Although AppLocker continues to receive security fixes, it isn’t getting new feature improvements. [source]

Because the client base consisted of only Windows 10/11 clients, WDAC started to look like a much better approach.

So option 2 is out as well.

Option 3: Use Windows Defender Application Control (WDAC)

There’s an abundance of documentation about WDAC, so I’m not going to explain what it is. Instead, I will focus on the practical setup and conclude by reflecting on how well it satisfies the original requirements.

WDAC policies and rules

WDAC operates through policies, and each policy consists of rules (what is allowed and what is not). What we want to achieve is a WDAC policy with the following rules:

  1. Block Java
  2. Allow everything else

Each rule has a level, which can be one of the following: None, Hash, FileName, FilePath, SignedVersion, PFN, Publisher, FilePublisher, LeafCertificate, PcaCertificate, RootCertificate, WHQL, WHQLPublisher, WHQLFilePublisher [source].

Now we have several options to choose from (as opposed to just file hashes or signing certificates). The question is: which of these levels should we use?

I started by inspecting different versions of the Java installers and compared them to some random Oracle software I downloaded. We want to block the Java installers without blocking anything else, and with as few rules as possible, because fewer rules are easier to maintain (requirement 6 again).

When comparing the JRE and JDK installers, we can already see that they are using a different certificate for the signature:

I also compared some older versions of the JRE, and they used a different certificate as well (which is natural, as certificates have limited validity). The issuer for all these signing certificates is DigiCert, so there is no Oracle sub-CA in between that we could use either. So I decided to try other file properties.

When looking at the file properties, I noticed that the File description seems promising. There are basically just two variations in these Java files, as you can see from the picture: Java(TM) Platform SE binary and Java Platform SE binary. The file description is clearly different for the Oracle Client for Microsoft Tools setup, which is expected, as the description for the Java files clearly points to the Java platform. And as this property is written when the binary is compiled, you cannot trivially change it. You can change it with developer tools, but that doesn’t really worry us (requirement 7). The Product name property could be useful as well; however, it has the version number in it, so it would only be useful if we could use wildcards with it.
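If you want to check these properties on a binary yourself, PowerShell shows them via the file's version resource (the file path below is just an example):

```powershell
# Inspect the version resource of a binary (example path)
$info = (Get-Item ".\JavaSetup8u381.exe").VersionInfo
$info.FileDescription   # e.g. "Java(TM) Platform SE binary"
$info.ProductName       # contains the version number, as seen in the screenshot
```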

The next question is: can we use the File description property in WDAC rules? Yes, we can, by using the -SpecificFileNameLevel parameter.
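For example, a single deny rule that matches on the file description of a given binary can be created like this (the installer path is an example):

```powershell
# Create a deny rule based on the FileDescription property of the binary
New-CIPolicyRule -Level FileName -SpecificFileNameLevel FileDescription -DriverFilePath ".\JavaSetup8u381.exe" -Deny
```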

Next, let’s see how we can create the policy.

By the way: I also tried the Product name, but it doesn’t work with wildcards (wildcards work only with file paths).

Creating the WDAC policy

The easiest way to create a WDAC policy is by using PowerShell. Windows 10 (and later) includes the ConfigCI module, which contains all the cmdlets needed for policy creation. There is also a UI tool that can be used, but I found it a bit confusing. And you really don’t want to make mistakes in policy creation, because otherwise you may end up blocking all sorts of applications you didn’t want to block.

I used the following script to create the policy. It’s based on the following Microsoft documentation:

# Name of the file where the policy will be stored
$PolicyFile = ".\DenyJavaPolicyFileDescription.xml"

$DenyRules = @()

# Add a deny rule based on the file description of the JRE installer
$DenyRules += New-CIPolicyRule -Level FileName -SpecificFileNameLevel FileDescription -DriverFilePath ".\JavaSetup8u381.exe" -Deny

# Add a deny rule based on the file description of the JDK installer
$DenyRules += New-CIPolicyRule -Level FileName -SpecificFileNameLevel FileDescription -DriverFilePath ".\jdk-21_windows-x64_bin.exe" -Deny

# Allow-all policy. Without this you may end up blocking all applications
$AllowAllPolicy = $Env:windir + "\schemas\CodeIntegrity\ExamplePolicies\AllowAll.xml"

# Merge the allow-all policy with our deny rules
Merge-CIPolicy -PolicyPaths $AllowAllPolicy -OutputFilePath $PolicyFile -Rules $DenyRules

# Name the policy and reset the policy ID
Set-CiPolicyIdInfo -FilePath $PolicyFile -PolicyName "Deny Java based on File Description" -ResetPolicyID

## To use this policy in audit mode, uncomment the following line
# Set-RuleOption -FilePath $PolicyFile -Option 3

## Convert the policy to binary format (needed for Intune/GPO)
$WDACPolicyXMLFile = $PolicyFile
[xml]$WDACPolicy = Get-Content -Path $WDACPolicyXMLFile
if ($null -ne $WDACPolicy.SiPolicy.PolicyID) ## Multiple policy format (Windows builds 1903+ only, including Server 2022)
{
    $PolicyID = $WDACPolicy.SiPolicy.PolicyID
    $PolicyBinary = $PolicyID + ".cip"
}
else ## Single policy format (Windows Server 2016 and 2019, and Windows 10 1809 LTSC)
{
    $PolicyBinary = "SiPolicy.p7b"
}

## Export the binary policy file
ConvertFrom-CIPolicy -XmlFilePath $WDACPolicyXMLFile -BinaryFilePath ".\$PolicyBinary"


Deploying the WDAC policy

Once you’ve created the policy, you can deploy it to the clients. When you’re using Intune or Group Policies, you need to use the binary version of the policy (in this example I’m using Intune). Note that you need to rename the binary file extension (.cip or .p7b) to .bin. There is good documentation available from Microsoft about the deployment, but I’ll summarize the process here.

You deploy the policy via Intune using a custom configuration profile.

In the Configuration settings section, add the following information:

  • Name and description
  • OMA-URI, which is ./Vendor/MSFT/ApplicationControl/Policies/<Policy GUID>/Policy (in my case ./Vendor/MSFT/ApplicationControl/Policies/F5575FD7-6684-43E7-9D2D-D809886F97BA/Policy). The GUID can be found in the file name of the binary policy file created by the script.
  • Data type: Base64 (file)

Upload the file (remember to change the extension to .bin).

Use the Assignments section to target the policy. I recommend using device groups, so you can easily deploy the policy in phases.

Once you’ve completed the wizard, you just need to wait for the policy to be applied (or you can trigger an Intune sync manually from the clients).

Verifying that the policy works

Once the policy has been deployed, it’s time to verify that it works. We’ll try three different Java installers, and one Oracle installer that has nothing to do with Java.

Latest JRE installer is blocked:

Latest JDK installer is blocked:

A very old version of JDK is blocked:

Random Oracle application is allowed, as it should be:

So it does indeed work! The policy also blocks the executables of an already installed version of Java (such as java.exe and javaw.exe).

Monitoring the policy, and using the audit mode

We now have a policy that works in block mode. But how can we use it in audit mode (requirement 4)? And how can we monitor when Java is being blocked (requirement 5)?

To use the policy in audit mode, you need to run the following command when creating the policy (it is also included, commented out, in the script above).

Set-RuleOption -FilePath $PolicyFile -Option 3

The audit mode events are logged in the Windows event log under Applications and Services Logs > Microsoft > Windows > CodeIntegrity. Obviously, we want to monitor the policy impact centrally, so on its own this is not very useful, unless we have a means to gather the event logs centrally.
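For local troubleshooting on a single client, the same events can be read from that log with PowerShell (a sketch; event IDs 3076 and 3077 are the CodeIntegrity audit and block events):

```powershell
# List recent WDAC CodeIntegrity events (3076 = audit, 3077 = block)
Get-WinEvent -LogName "Microsoft-Windows-CodeIntegrity/Operational" -MaxEvents 50 |
    Where-Object { $_.Id -in 3076, 3077 } |
    Format-Table TimeCreated, Id, Message -AutoSize
```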

But luckily, we do have Microsoft Defender for Endpoint, and it actually collects the WDAC policy events. MDE logs both the audit and block events in the DeviceEvents table.

Here’s a KQL query that can be used in Advanced Hunting to show all audit and block events from WDAC during the last 24 hours:

DeviceEvents
| where Timestamp > ago(1d)
| where ActionType == "AppControlCodeIntegrityPolicyBlocked" or ActionType == "AppControlCodeIntegrityPolicyAudited"
| extend WDACPolicyDetails = todynamic(AdditionalFields)
| extend WDACPolicyName = tostring(WDACPolicyDetails.PolicyName)
| where WDACPolicyName startswith "Deny Java"
| project Timestamp, DeviceName, FileName, ActionType, WDACPolicyName, FolderPath, InitiatingProcessFolderPath, InitiatingProcessAccountUpn, InitiatingProcessVersionInfoFileDescription, SHA256
| sort by Timestamp desc

In my case, I had the audit policy and block policy deployed at the same time, but targeting different clients. The results look like this:

If you’re using Microsoft Sentinel (and you’re sending device events there), you could even build a nice workbook to visualize this data. But that wasn’t really in the scope here. The key thing is that we are able to centrally monitor whenever an application gets blocked by our policy (or would be blocked, if we’re using the audit mode).

Conclusion

To conclude, let’s look at how well this solution fulfills the original requirements:

1. We want to be able to block Java being installed on Windows clients

Yes, we can block Java installers (old and new).

2. We also want to block Java being used (if it’s already installed)

Yes, our solution also blocks installed Java executables (and DLLs).

3. We need to be able to make exceptions to the rule (some users still need it)

Yes, we can use include and exclude rules in the Intune assignments for the configuration profile.

4. We need to be able to test it first in audit mode (minimize disruption)

Yes, we can use the policy in audit mode, and use MDE to monitor the possible impact.

5. We need to be able to monitor when Java is being blocked

Yes, we can see the block events in MDE.

6. The solution should be easy to maintain

Yes and no; I have mixed feelings about this one. By using the file description property, we managed to create a policy that probably does not need to be changed very often (only if some future Java version uses a different file description). That is definitely positive compared to using file hashes or signing certificates. However, the policy creation itself is a bit hacky, to be honest. You need to be really careful when creating the policy (remember to include the allow-all rules), otherwise you may end up blocking something you didn’t want to block, which could be very disruptive. I also don’t like the fact that you need to use the binary version of the policy in Intune, because you basically have no way of seeing the policy contents from the configuration profile. It would be really nice if there were a ready-made template for these custom rules, or if you could at least use the XML file instead.

7. The solution does not need to be perfect, i.e., it is targeted at casual, mainly non-technical users.

Yes; the file description property is not trivial to change, so this solution meets the requirement.

That’s it! Any comments and feedback are more than welcome. And if you have other solutions that I didn’t consider, please let me know :).