ADFS behind Azure Traffic Manager

When you have ADFS hosted on Azure (as per my previous post), you might want to look at using Traffic Manager, and in particular at its probes and endpoints.

This post helps you configure ADFS behind the Azure Traffic Manager and ensure proper failover when the service becomes unavailable.

First, the configuration. The ADFS setup should look roughly like the following picture:

adfs-a

This means we have front-end WAP servers (ADFS proxies) with back-end ADFS servers, deployed in two regions.

The first thing to configure is external IP addresses for the ADFS proxies (WAP). These are easily added in Azure by attaching a Public IP address to the NIC of each server. Second, it is much easier if you also create public DNS endpoints for these IP addresses. For example:

adfs-b

I added forestsso as the DNS name label; .northeurope.cloudapp.azure.com is appended automatically. This means we can now target our WAP servers by FQDN. The WAP server in each region should have its own FQDN.

Copy these FQDNs, as we will need them when we configure the Traffic Manager.

The Azure Traffic Manager uses health probes to determine which region is fully active. While the Azure Load Balancer can use TCP probes, the Traffic Manager uses a higher-level probe based on URLs, which are relative to the FQDN. Luckily, ADFS (since Windows Server 2012 R2) exposes an HTTP endpoint that can be used to validate whether the ADFS service is healthy. When you open the ADFS URL http://<myADFSURL>/adfs/probe you will not see any content, but your browser will be able to connect and will receive a 200 OK status if ADFS is fully operational. And this is what we will use.
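As a quick sanity check, you can verify the probe endpoint yourself from any machine that can reach the WAP server. A minimal Python sketch (the FQDN in the comment is just an example from this post):

```python
import urllib.request
import urllib.error

def probe_url(fqdn: str) -> str:
    """Build the ADFS health-probe URL (HTTP only, as the probe is not on HTTPS)."""
    return f"http://{fqdn}/adfs/probe"

def adfs_healthy(fqdn: str, timeout: float = 5.0) -> bool:
    """Return True if the ADFS probe answers with 200 OK, False otherwise."""
    try:
        with urllib.request.urlopen(probe_url(fqdn), timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Example (hypothetical FQDN):
# adfs_healthy("forestsso.northeurope.cloudapp.azure.com")
```

Run this against each regional FQDN; Traffic Manager will effectively be doing the same check.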

So, in order to use the /adfs/probe URL, we need to publish it through the WAP proxy service. To do this, perform the following:

Open the WAP server and start the Remote Access Management console. Click Publish at the top right of the console and create the following rule:

adfs-c

Enter a name, and set the external URL to match the cloudapp.azure.com name (one of the FQDNs you noted earlier). The backend URL should match your ADFS URL, and your WAP server should resolve it to a single ADFS server (not a load balancer). You can even browse to the backend URL from the WAP server to validate it.

Make sure to append /adfs/probe/ to both URLs, and ignore the warnings about using HTTP only and about the backend URL not matching the external URL.

As you can see, the probe URL is only available over HTTP, not HTTPS. This means we need to open the HTTP protocol in the WAP firewall (closed by default), so go to the Windows Firewall and add a rule that allows port 80 inbound.
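To confirm the firewall rule actually lets traffic through, a plain TCP connect to port 80 on the WAP's public FQDN is enough. A small sketch (nothing ADFS-specific here, just a reachability check):

```python
import socket

def port_open(host: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical FQDN):
# port_open("forestsso.northeurope.cloudapp.azure.com", 80)
```

If this returns False after you added the firewall rule, check any NSG rules on the Azure side as well.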

This needs to be repeated for EVERY WAP / ADFS server you have, based on the FQDNs noted earlier.

Note: It is also possible to make this work over HTTPS, although the backend URL will still work over HTTP only. If we create CNAMEs in DNS (for example region1.forestroot.net -> forestrootsso.northeurope.cloudapp.azure.com) and use these new names in the Traffic Manager profile, we can use an HTTPS external URL (https://region1.forestroot.net/adfs/probe) with a certificate on it. Do not select Enable HTTP to HTTPS, however: it will redirect clients to fully utilize HTTPS, the Traffic Manager probe will get confused, and the endpoint state will go to Degraded.

So next is the Traffic Manager profile. When you create the Traffic Manager, you receive an endpoint FQDN of the form name.trafficmanager.net. In your DNS, point the actual ADFS URL to this name.trafficmanager.net using a CNAME.
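In zone-file terms, the records could look something like the following sketch. The sts label and the forestroot.net zone are illustrative only; use your own ADFS URL and Traffic Manager profile name:

```
; hypothetical fragment of the forestroot.net zone
sts      IN  CNAME  name.trafficmanager.net.                          ; the ADFS URL clients use
region1  IN  CNAME  forestrootsso.northeurope.cloudapp.azure.com.     ; per-region WAP (only needed for the HTTPS probe variant)
```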

Second, in the configuration, choose a routing method and a TTL (how much downtime can you afford?), and set the Endpoint Monitor Settings to:

adfs-d
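Based on the probe we published earlier, the monitor settings amount to the following (HTTP on port 80, since the probe endpoint is not published over HTTPS in this setup):

```
Protocol: HTTP
Port:     80
Path:     /adfs/probe
```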

Go to Endpoints and add the endpoints based on the FQDNs you noted earlier. After a few minutes they should come online and show Enabled / Online.

adfs-e

And that is it: your ADFS farm is spread across regions, the health probe is based on the back-end ADFS server status, and you have full high availability.