
Reviewing the GDAP Wizard in Lighthouse

Hey folks! In today’s article, we will be taking a closer look at Granular Delegated Admin Permissions (GDAP). You can think of this feature as providing similar functionality to Privileged Identity Management (PIM), including “Just-in-Time” (JIT) access, but specifically with regard to your partner tenant as you “reach across” into customer tenants in order to manage their subscriptions and services. If you are a Managed Services Provider, and you are constantly switching between customer tenants throughout the day, then this article is for you!

In The Days of Old

In the past, Partner Center would have been your “gateway” to managing customer tenants. We had the ability to establish relationships with our customers via Delegated Admin Permissions (DAP). To say the least, this was a rather clunky experience that left a lot to be desired. For example, accessing customer tenants from your own partner tenant was fraught with known issues and limitations that were not well documented anywhere (therefore most of us still just signed into the customer tenant directly using different browser profiles or “private windows”). Plus, with the DAP relationship in place your native partner account would effectively have global administrator privileges to all tenants all the time.

The situation was less than ideal. Eventually Microsoft introduced Granular Delegated Admin Permissions (GDAP), which goes a long way toward fixing some of these problems. For example, GDAP can be leveraged to provide Just-in-Time (JIT) access to customer tenants, so that you can elevate permissions only when you need to make changes, and that access is automatically time-bound (for example, you can set it up to expire after 2 hours).

In order to take advantage of the JIT access capabilities, the partner tenant will need to have Azure AD Premium P2 licensing. In case you weren’t aware, Microsoft announced back in October 2021 that they would be giving away two years’ worth of AAD P2 subscriptions for Partners (and no, I am not sure if there are plans to extend this or not).

GDAP Wizard in Lighthouse

GDAP used to be pretty difficult to set up, but in recent months Microsoft 365 Lighthouse has made it much easier to establish GDAP relationships, or to convert existing DAP relationships. It is also worth mentioning that Microsoft 365 Lighthouse recently celebrated a birthday. If you haven’t looked at this free multi-tenant management tool yet, or if it has been a while since you’ve poked around in there, I would encourage you to check it out or revisit it. It has come a long way recently, and it is certainly the best place to set up your GDAP relationships.

Note: The official documentation states (incorrectly) that you need to be a Cloud Solutions Provider (CSP) to use Lighthouse. This is not 100% accurate; many MSPs are not technically CSPs, but they likely have access to Partner Center. Some of them sell licenses through a distributor such as Ingram Micro or Pax8; others just serve customers who buy direct from Microsoft. No matter how you are set up in your own practice, you can get into Lighthouse with nothing but Partner Center access and the existing DAP customer relationships you have there.

Before you start, make sure you have a Global administrator account in your partner tenant that is also assigned to the Admin agent role in Partner Center, as these permissions are prerequisites for configuring GDAP via Lighthouse. Using this account, navigate to https://lighthouse.microsoft.com and find the GDAP wizard right on the home page.

Begin GDAP wizard

One of the great things about this tool is that it comes with pre-defined “tiers” that you can use straight out of the box, or you can customize them to your liking.

Customize GDAP tiers

Tiers are collections of Azure AD roles that are selected for specific job functions like Account manager, Service desk, or Escalation engineer. There is also a “JIT-only” tier which would include your high-impact roles such as Global administrator. More on the JIT role later. Note that you can also rename the tiers if you want to use different terminology in your own practice, such as Level 1 tech, Level 2 tech, etc.

GDAP templates

The wizard will help you build “templates” that you can then assign to one or more customers. Each template can contain one or several of the tiers. For example, you may have “fully managed” customers who require, at different times, a variety of these tiers all the way up to Escalation engineers and the JIT-only role. But you might also have customers who are not fully managed, or only require Account management, or Service desk roles. In this case you could have two templates and apply only the roles you need to each respective group of customers.

GDAP security groups

For each role, you need to create a security group in your partner tenant. Note: I have experienced at times that adding users is difficult from Lighthouse; for example, sometimes it cannot see all of my users, so I will need to go edit these groups later in my Azure AD or Microsoft 365 admin centers. I am guessing there are still some bugs being worked out here.

JIT Approver security group

For the JIT-only role you actually need to specify two security groups: the “JIT Eligible” group itself (i.e., those users who can be elevated to global admin in your customers’ tenants) and the “JIT Approver” group (those responsible for approving any requests to elevate permissions). Be sure to create your JIT Approver group in advance, before you run the wizard (the other groups can be created within the wizard). Also, make sure that this group is configured to be Azure AD role assignable.

Role assignable security group

Once you assign customers to a template, they are converted from DAP to GDAP. To elevate into the JIT-only role, eligible users must navigate to https://myaccess.microsoft.com and request role elevation for the desired access package.

Request JIT access package

The approvers will then be able to answer these requests from the same “myaccess” portal under Approvals. Once approved, the privileged roles will be activated for the duration specified in the wizard. This process does not use PIM; rather, it leverages access packages from Azure AD Identity Governance (see Entitlement Management).

Azure AD Identity Governance

Potential downsides

Now one of the criticisms I have of this tool is that every customer who is attached to a template is equally impacted by the JIT escalation requests. There is only one access package created, so when you elevate an account to the JIT-only role, that account will gain superuser privileges in every tenant attached to the template for the time period specified. This implies that if you wanted to isolate your JIT access requests (which I think would be the most ideal scenario), you would need to create a unique JIT-only template for each customer.

That might be okay if you are only managing a handful of customers, but it makes this wizard a heck of a lot harder to use if you manage dozens or hundreds of customers. I would really like to see this improved in a future update, but for now I just want you to understand what the limitation is. Most of us will probably not want to run this wizard 100+ times and step through every customer every single time; it ends up (at least partially) defeating the purpose of the simplified wizard to begin with.

The other criticism I have is that we don’t (yet) have a great way to delete these relationships from Microsoft 365 Lighthouse. You can delete the templates, but this just removes the template object from Lighthouse itself, leaving all of the associated security groups, access packages, etc. in place. Therefore, to accomplish a “real” deletion, you would need to:

  • Delete the template in Lighthouse
  • Delete the corresponding access packages in Azure AD Identity Governance
  • Delete the security groups in Azure AD
  • Delete the GDAP relationship from Partner Center
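
If you want to script a quick sanity check that nothing was left behind after the last step, Microsoft Graph exposes GDAP relationships at the delegatedAdminRelationships endpoint. Here is a minimal sketch using only the standard library (token acquisition is assumed to be handled elsewhere):

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def list_gdap_relationships(token):
    """Fetch all delegated admin (GDAP) relationships for the partner tenant."""
    req = urllib.request.Request(
        f"{GRAPH}/tenantRelationships/delegatedAdminRelationships",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("value", [])

def active_relationships(relationships):
    """Keep only the relationships that have not been terminated."""
    return [r for r in relationships if r.get("status") == "active"]
```

After completing the cleanup steps, an empty result from `active_relationships` for that customer is a good sign the offboarding actually took.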

But so far, these do seem to be the only real downsides to the solution Microsoft has come up with. At least we can say it is a good improvement on the previous architecture, where we had standing global admin access all the time. The questions that remain in my mind are:

  • Will we be able to keep our Azure AD P2 licenses at no cost to the partner after the initial two years?
  • Will we ever get the ability to request JIT elevation only for specific customers, or will it always remain “per template” (and if so, can we at least make it easier to clone templates or something, for applying per-customer JIT requests?)
  • Let’s say we have to remove customers or even entire templates, which could mean we also want the corresponding access packages (and in some cases even the relationships) terminated as well; shouldn’t this “offboarding” process be a bit easier?

Time will tell! But otherwise, I am pleased with the progress that has been made to date. Good job, Lighthouse team!

This article was written by the new Bing chatbot.*

*…April Fools! This article, like all content on ITProMentor.com, was written by a Real Human™

The post Reviewing the GDAP Wizard in Lighthouse appeared first on ITProMentor.

But have you turned multifactor authentication ALL the way on?

Do you remember just a short time ago, Microsoft would claim that switching on Multi-factor Authentication (MFA) prevents 99.9% of identity-based attacks? Well, the times they are a-changin’. I do not know what percentage of attacks they would report as thwarted by MFA alone today, but I can tell you it wouldn’t be 99.9%. I think most of you reading this blog have by now either experienced or heard of an attack where MFA was enabled, but the bad guy got in anyway.

The current state of affairs was inevitable, of course: when we move our defenses up, the evildoers don’t just throw in the towel and go away; they simply adapt their methods. Thus, we have seen a steady rise during the pandemic years of more sophisticated phishing techniques, where users are tricked into giving up, approving, or passing time-bound access codes on to a third party. We also see a rise in Man-in-the-Middle (MITM) attacks, where a user interacts with fake (but very convincing) login pages that include MFA prompts and everything.

So what are we to do?

First, do not be discouraged: this process of “one-upmanship” is only natural. The good news is that having more than one proof of identity in place is still the foundation from which you must build. Moving away from passwords and towards MFA or even passwordless authentication is still the right path, but you have to be willing to stay nimble and introduce additional iterative changes as we move forward in time.

The tools to do the job are already available and waiting for you. In the world of Azure AD and Microsoft 365, this means revisiting our Conditional Access (CA) policies and reconsidering our authentication methods.

I assume most of my readers already know about the Security Defaults, or these four equivalent CA policies:

Once you have these basic scenarios covered, we have a number of other holes to plug. In the following paragraphs, I will recommend some additional settings and policies to further cement your foundation and prevent some of the latest attack methods we have been seeing in the wild, with a bit of commentary explaining each.

1. Update your authentication methods (number matching, etc.)

Microsoft recommends updating your policies for the Microsoft Authenticator app so that users are required to do number matching when logging into cloud resources. In other words, instead of just tapping “Approve” when the app notification comes up (which many people will do quickly by automatic reflex, or eventually after a flood of continuous prompts), they will be forced to identify the correct number which is being displayed to them.

Number matching

This helps prevent what Microsoft calls “MFA fatigue,” or illegitimate authorizations due to automatic muscle memory.

This setting will become the default experience on February 27, 2023, but you can turn it on sooner if you like from Protect & secure > Authentication methods > Microsoft Authenticator in the Entra portal. On the Configure tab, you can move from Microsoft managed to Enabled.

Enable number matching

You can also choose to turn on Show the application name…, as well as the option to Show geographic location… These toggles will give the user more context about the sign-in attempt whenever an authenticator prompt occurs. Be sure to save any changes you make on this screen.

Back on the Enable and Target tab, you can optionally move to passwordless Authentication mode, where the authenticator app is the primary authentication method instead of the password. This experience will use the number matching challenge by default, and it will also reduce password prompts in general.

Move to passwordless

2. Enable Temporary Access Pass

I also recommend turning on Temporary Access Pass, which is also found under your Authentication methods. This allows administrators to grant time-bound access codes for sign-in purposes, particularly when the end user is unable to use their multi-factor device, or if they need to update their authentication methods at https://aka.ms/mysecurityinfo.

Temporary Access Pass (TAP)

For example, imagine that one of your users had to get a new phone and no longer has access to the authenticator app on their old phone.

Once this policy is configured, administrators can go issue TAPs to any user right from the Azure AD or Microsoft Entra portal. The same process can also be used during the initial onboarding, when users go to set up their authentication methods for the first time.
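
For what it’s worth, TAPs can also be issued programmatically via Microsoft Graph (the temporaryAccessPassMethods endpoint), which can be handy for scripted onboarding. A minimal sketch, assuming you already have a token with the appropriate UserAuthenticationMethod permissions (the user ID is a placeholder you would supply):

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def tap_request_body(lifetime_minutes=60, one_time=True):
    """Build the request body for a Temporary Access Pass."""
    return {"lifetimeInMinutes": lifetime_minutes, "isUsableOnce": one_time}

def issue_tap(token, user_id, lifetime_minutes=60, one_time=True):
    """Issue a TAP for the given user and return the one-time pass string."""
    req = urllib.request.Request(
        f"{GRAPH}/users/{user_id}/authentication/temporaryAccessPassMethods",
        data=json.dumps(tap_request_body(lifetime_minutes, one_time)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["temporaryAccessPass"]
```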

Issuing a TAP

3. Protect registration of security information

If you enabled TAP as I suggested above, then you should also enable a Conditional Access policy called Securing security info registration, which means that in order to access the security info registration page, a user will need a valid TAP issued by their administrator. I suggest you also have a process in place for requesting and distributing these TAPs securely, in order to prevent illegitimate requests from going through; for example, confirmation of identity via a phone call or video chat with the helpdesk.

This policy is also available from the CA templates (under Identities):

Securing security info registration

Note that the templated policy also excludes any trusted locations that you specified (so that users could set up their authentication methods from the corporate offices, but not from home or some other public wi-fi, for example).

4. Require MFA to register or join devices

Certain scenarios are not covered by the CA policies outlined earlier. One such scenario is the registration or joining of devices to Azure AD. There is a special policy just for that purpose that you must deploy.

This can also be found as a setting under Devices > Device settings in the Azure AD admin center. But these days Microsoft recommends using the equivalent CA policy in its place (therefore the option on this page should be set to No rather than Yes).

Device setting to Require MFA (deprecated)

For some reason, the required settings for this CA policy are not detailed on Microsoft Learn, even though Microsoft recommends moving to it. Here are the settings you will need:

  • Users: All users, exclude emergency access accounts
  • Cloud apps or actions: User action > Register or join devices
  • Conditions: None
  • Access Controls: Grant > Multi-factor authentication
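
If you prefer deploying policies as code, these same settings can be expressed as a Microsoft Graph conditionalAccessPolicy payload. A sketch (the excluded group ID is a placeholder for your emergency access accounts, and I start the policy in report-only mode):

```python
# Sketch of the policy above as a Graph conditionalAccessPolicy object.
# Deploy with: POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
policy = {
    "displayName": "Require MFA to register or join devices",
    "state": "enabledForReportingButNotEnforced",  # flip to "enabled" when ready
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            # Placeholder: object ID of your emergency access accounts group
            "excludeGroups": ["<emergency-access-group-id>"],
        },
        "applications": {
            # The "Register or join devices" user action
            "includeUserActions": ["urn:user:registerdevice"],
        },
    },
    "grantControls": {
        "operator": "OR",
        "builtInControls": ["mfa"],
    },
}
```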

5. Require MFA for Intune enrollment

MFA for Intune enrollment is a separate requirement and not something that is completely covered by any of the above policies. For example, a device which has already been authenticated for another application may be able to enroll without being prompted again unless this policy is in place.

I spoke with someone on Microsoft’s DART team recently, and he explained how this loophole had been used in the wild: in many organizations where CA has been previously implemented, a managed device tends to have greater access than unmanaged devices, with fewer prompts for MFA. But if an unmanaged device has even a little bit of access already, it is possible in some cases to elevate the device by enrolling it without encountering another MFA challenge. At this point the sphere of access has been expanded. Keeping this next policy in place will prevent this unauthorized ‘escalation’ scenario.

  • Users: All users, exclude emergency access accounts
  • Cloud apps or actions: Cloud apps > Microsoft Intune Enrollment
  • Conditions: None
  • Access Controls:
    • Grant > Multi-factor authentication
    • Session > Sign-in frequency set to Every time

6. Add device-based CA policies

This is something I have long advocated for. I recommend turning on a device-based access policy for at least Office 365. This way, access to corporate resources such as email can become contingent on registering devices with Azure AD or even enrolling your devices with Intune. The two primary benefits here are:

  1. You get pretty decent assurances that the inventory of devices you see in the portal is reflective of the actual physical devices out in the world (having an accurate and up-to-date inventory is necessary for good security), and,
  2. many of the current Man-in-the-Middle attacks are instantly thwarted, because the “middle” devices that are being used by attackers are not part of your inventory of pre-registered or enrolled devices.

Therefore, even if an attacker successfully phishes someone in your organization and tricks your end users into round-tripping an MFA code or approval notification, the unauthorized access request would be denied by the device authentication requirement.

There are a couple of different approaches to accomplish a device-based authentication policy, but most organizations will aim for “Require compliant devices,” which looks like this:

  • Users:
    • All users (or a targeted group of your choice)
    • Exclude emergency access accounts
  • Cloud apps or actions:
    • Cloud apps > Office 365
  • Conditions:
    • Device platform: Select the platforms you intend to protect
  • Access controls:
    • Grant > Require device to be marked as compliant

With this policy in place, it is also necessary to prepare Compliance policies within Intune for each device platform you intend to support. End users must then download and sign in to the Company Portal app in order to complete device enrollment. The details of setting up Intune and enrolling devices are beyond the scope of this article, but I can recommend my courses or written guides on these topics for more information.

However, we must recognize that some organizations are not yet ready to implement Intune, or even if they are, they will not be ready to require device compliance across the board right away, and that is okay. In this case, I can recommend another policy which will prevent unauthorized device access based on device filters. We call this policy “Block unregistered devices.”

Block unregistered devices using filters

  • Users:
    • All users (or a targeted group of your choice)
    • Exclude Emergency access accounts and all Guest & External users
  • Cloud apps or actions:
    • Cloud apps > Office 365
  • Conditions:
    • Device filters:
      • Exclude devices where trustType Equals Azure AD Joined, Azure AD Registered, or Hybrid Azure AD Joined
  • Access controls:
    • Block access

In this case you do not need to have devices enrolled with Intune, however, the devices must be registered or joined to Azure AD before they can gain access to data in Microsoft 365 services such as Exchange or SharePoint Online.
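
The “Block unregistered devices” policy can likewise be sketched as a Graph payload, which also illustrates the device filter rule syntax (to my knowledge, the trustType values “AzureAD”, “ServerAD”, and “Workplace” correspond to Azure AD joined, Hybrid Azure AD joined, and Azure AD registered; the group ID is a placeholder):

```python
# Sketch: block anything that is not already joined or registered to Azure AD.
policy = {
    "displayName": "Block unregistered devices",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
    "conditions": {
        "users": {
            "includeUsers": ["All"],
            "excludeGroups": ["<emergency-access-group-id>"],  # placeholder
        },
        "applications": {"includeApplications": ["Office365"]},
        "devices": {
            "deviceFilter": {
                # Devices matched by the rule are EXCLUDED from the block below
                "mode": "exclude",
                "rule": (
                    'device.trustType -eq "AzureAD" '
                    '-or device.trustType -eq "ServerAD" '
                    '-or device.trustType -eq "Workplace"'
                ),
            },
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["block"]},
}
```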

I also recommend blocking device platforms that you do not intend to support, which I have outlined here (Microsoft has also since added this to their “common” CA policies on Learn); this policy does not require enrollment or compliance checks, either. These policies are sometimes an easier place to start out.

7. MFA for guests

Generally speaking, I like to keep my “guest-specific” policies separate from my internal user policies. Therefore, any policies targeting internal users will normally exclude guest & external users. If I want to deploy policies specifically against guests, those will be their own policies that I can turn on or off without impacting my “standard user” CA policies.

MFA for guests

You will notice that there is one such policy available via the templates provided by Microsoft: Require multifactor authentication for guest access.

However, before enabling this policy, I tell all my customers to enable the cross-tenant MFA settings. In case you didn’t know about these, navigate in the Microsoft Entra portal to External Identities > Cross-tenant access settings. Click Edit inbound defaults then go to the Trust settings tab.

Cross-tenant MFA settings

By checking these boxes, you are telling your tenant to respect MFA claims that have already been validated in other Azure AD tenants. In other words, if you deploy a Conditional Access policy in your own tenant that requires MFA for guests, those guests will not be double-prompted if they have already satisfied MFA claims in their own (home) tenant. Completing this step also happens to be a prerequisite for our last recommendation (though I have no idea why this is so).

8. Require stronger authentication

If your organization is ready to adopt passwordless methods of authentication using the Microsoft Authenticator app, and/or FIDO2 keys such as YubiKey, then you have another option to consider. This past fall, just prior to Ignite, we gained the ability to distinguish between authentication methods based on authentication strength.

Previously, any type of MFA was treated equally by Conditional Access requirements: an SMS code was considered just as good as the authenticator app or even a FIDO2 key. But in reality, not all authentication methods are created equal. With a FIDO2 key, for example, the key material is non-exportable. In other words, an attacker would have to physically steal your key in order to use it to gain access as you. It is therefore considered “phish resistant.”

I suggest taking a crawl-walk-run approach; if you are considering switching to stronger authentication you may want to identify specific use cases or groups to pilot the experience before pushing it out org-wide. For example, if you have to distribute physical keys, how does that process work? What happens if someone loses a key? Etc. These questions will be easier to sort out on a smaller scale, which will help you develop a system for more widespread adoption.

Here is an example of upgrading a policy where you require stronger authentication for specific admin roles:

  • Users: Select users and groups > Directory roles (select any groups or roles you require)
  • Cloud apps or actions: All cloud apps
  • Conditions: None
  • Access controls: Require authentication strength (select your desired strength)

Upgrade your authentication strength

Note that you may have additional steps to configure the passwordless or FIDO2 experiences before enabling these CA policies.

9. Fancier subscription, fancier options

If you are the lucky owner of the more expensive E5 subscription, then you also have access to “risk-based” Conditional Access policies, as well as a bunch of other upgrades that are well beyond the scope of this article. Once again, the Conditional Access templates are the easiest way to get moving on some of these features.

E5 risk-based policies

Note: If you buy licenses to support these features for just your administrator accounts (as some organizations do), just be sure that when you deploy the policies, they are scoped to only those users who are licensed for the features. This way, you stay in compliance with Microsoft’s licensing guidelines.

Conclusion

The principles of Zero Trust remain unchanged. In the past, we would have simply enabled MFA, or the equivalent of Security Defaults, and felt that we had fulfilled the spirit of the “Verify Explicitly” pillar, but as we have seen, that may not be enough anymore on its own.

Zero Trust Principles

As the game has changed, so have our tools. In order to have more confidence that our “Verify Explicitly” principle is being met, we just want to put in place a few additional measures, for example:

  • get users to slow down by adding a number-matching requirement on the Authenticator app
  • better protect the MFA registration process itself with Temporary Access Pass
  • require a strong authentication challenge anytime a device is registered, joined or enrolled for management
  • evaluate the device as part of your authentication challenge
  • even require a stronger level of authentication such as phish-resistant, hardware-based FIDO2 keys

And of course, do not forget to address the other two pillars of Zero Trust, either! I will soon release updates to my famous Best Practices Checklists and other written guides to reflect more of what we learned over the last year. If you already own a copy, then congrats! Your free updates will be arriving sometime in the next month or so. If you want to join thousands of other happy readers, I encourage you to subscribe, check out the store, or even consider joining our SquareOne community.

We are living in a different world now than the one we had 10 or even just 5 years ago. I wonder what it will look like 5 or 10 years from now? It’s part of what makes our jobs stay evergreen, I suppose. Staying up to date in the day-to-day and month-to-month, of course, is going to be the key challenge for most of us. I suppose this article, too, could go out of date pretty quickly after its printing. But do not be discouraged: it just means that we must always be aware, ready, and willing to make iterative changes over time.

If you see any omissions in the policies or settings I discussed in this article, be sure to comment below! We would love to learn from you out there in the audience, as well!

The post But have you turned multifactor authentication ALL the way on? appeared first on ITProMentor.

Alternatives to OneDrive and SharePoint (and when to consider them)

One of the things I often get asked about is how to deal with various limitations in OneDrive and SharePoint Online. For those who don’t know, SharePoint Online is the file storage & sharing solution underpinning the Microsoft 365 universe of applications, including the popular Teams application, while OneDrive for Business provides for personal file storage (i.e., modern replacement for “My Documents”) as well as a client application for keeping all your cloud-based documents synchronized to your local device.

Our SquareOne peer group recently had an informal, ad-hoc meeting about this problem: Where do you turn when OneDrive and SharePoint are (seemingly) unable to meet the needs of the business?

This can happen for a few different reasons. So, before we talk about solutions, let’s examine the most common limitations that organizations can run into when using SharePoint and OneDrive.

Not enough (shared) file storage

Every single user in Microsoft 365 gets a minimum of 1 TB of personal data storage (OneDrive space). This is not usually a bottleneck for most organizations. However, SharePoint Online (where you would put any of your “shared” company data) is limited to 1 TB + 10 GB per licensed user.

For an Enterprise organization with thousands of users, those seats add up quickly, and you will easily have several terabytes of storage available. For example, 10,000 employees x 10 GB each = ~100 TB. Small business subscriptions unfortunately share the same limitation as Enterprise, so that means a 30-person organization only gets a measly ~1.3 TB of storage total for all shared documents in SharePoint Online.

This is a problem, particularly if there are very many files, or very large files such as architectural or engineering drawings, high-density images, or anything like that. That meager storage will be consumed very quickly indeed. Yes, it is possible to buy additional SharePoint storage, but at USD $0.20/GB/month, it is some of the most expensive storage space in the cloud.
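
To put the numbers in perspective, the quota formula and the add-on pricing are easy to model (figures as cited above; Microsoft’s limits and pricing can of course change):

```python
# SharePoint Online pooled storage: 1 TB base + 10 GB per licensed user.

def sharepoint_quota_gb(licensed_users, base_tb=1, per_user_gb=10):
    """Pooled tenant quota in GB (1 TB counted as 1024 GB)."""
    return base_tb * 1024 + licensed_users * per_user_gb

def monthly_overage_cost(extra_gb, price_per_gb=0.20):
    """Monthly cost in USD of add-on SharePoint storage at $0.20/GB/month."""
    return extra_gb * price_per_gb

small_shop = sharepoint_quota_gb(30)       # 1324 GB, i.e. roughly 1.3 TB
enterprise = sharepoint_quota_gb(10_000)   # 101024 GB, i.e. roughly 100 TB
extra_tb = monthly_overage_cost(1024)      # 204.8 USD/month for one extra TB
```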

My personal wish here is that Microsoft would just change the storage limitations for “Business” plans so that instead of 10 GB/user we get something better, like 100 GB/user (at least). Or, better yet, just give us a “Business Ultimate” plan that includes unlimited email and file storage and charge a premium price like USD $35 or $40/user/month.

Too many files to sync, or other limitations

OneDrive includes a client app that will automatically synchronize your personal files to the local desktop (we have a similar app to make them available on personal mobile devices, as well).  You can optionally choose to sync shared locations in SharePoint Online in addition to your OneDrive files. However, when you attempt to sync too many files, you can cause problems for the sync application, and then your employees fall into the Pit of End User Despair™.

How many files is too many? Well, that’s a complicated question. Microsoft recommends syncing no more than 300,000 files and folders total to your computer. But that is somewhat misleading, because I have seen the client sustain more files than that (especially since the release of the 64-bit client), but I have also seen the client bomb out under even less stress (more like 90-100K files). If memory serves right, this limitation actually comes from a .DAT file stored somewhere in your local app data folder.

As well, larger files such as architectural and engineering drawings will sync (and support for large files has improved in the last year since the release of the 64-bit client), but it still is not the same experience as working with general Office files. For example, co-authoring is not a thing here, and syncing large files is more demanding; upload times can be very slow, especially over budget links such as DSL.

Therefore, certain SMB organizations that regularly use larger file types (e.g., construction, engineering, architecture, attorneys who deal with patents which include engineering drawings, etc.) may still find the sync experience is less than ideal for their requirements, especially if they are used to having SMB shares available on their LAN.

There are a few other limitations on file structure (such as depth of folders/length of file path) and the number of files per folder or view (5,000), but these are not encountered quite as frequently as the problems I just touched on. Plus, they are generally more “correctable” than running up against storage quotas or sync issues, which are less in your power to control. Nevertheless, several other limitations do exist, and you should be aware of them.

What do we do about these limitations?

Historically the way we dealt with these problems was to tell the customer, “Well of course it isn’t working, because you aren’t doing it right!”

We would scold them for needing access to so many objects on every client device, all the time. “Don’t you know that it is impossible to work with that many files in any reasonable timeframe? Imagine trying to contribute to more than 300K files in a month, or even in a year! Nobody actually does that, so why sync all the data to begin with?!”

Or, “Look, you can’t expect every third-party file type to be supported equally: if you work with some larger file types, do not expect co-authoring on them; instead, plan to download/upload your changes like you would have for all types of files 10 or 15 years ago.”

While these statements may be true, and difficult to argue with, the simple fact is that back in the olden days when customers just had a primitive NTFS file server with SMB file shares, users could keep whatever they wanted, for however long they wanted, and have access to it any day of the week. They didn’t have to obey the seemingly arbitrary laws of the Microsoft Cloud.

In an ideal world, we could just easily migrate all files and folders, as they exist today, from point A (usually an on-premises file server) to point B (the cloud), and have the experience be pretty much the same for end users. The problem is that file servers and SharePoint sites are apples and oranges. So, it’s not realistic yet to put those expectations out there (those who have, have paid dearly for it).

Yes, it is true that SharePoint does a bunch of cool stuff that your local file server cannot (e.g., metadata, search and indexing, retention labels, sensitivity for sites, etc.), but the reverse is also true: your old file server did some pretty basic things really well, some of which are still impossible for SharePoint and OneDrive.

Alternative #1: Use another popular cloud storage provider

I can’t speak for the Enterprise, but at least in the SMB market, the most popular alternatives to Microsoft’s “built-in” ecosystem for file sharing remain Dropbox, Box, and Citrix ShareFile (roughly in that order). Maybe Google Drive ranks in there as well; however, I know a lot of folks on Google’s platform also supplement with one of these other providers for file sharing. My personal favorite of these options is Box, but that’s just based on my own familiarity with it (others may feel strongly about a different one, and that’s fine).

If you are going to supplement your Microsoft 365 subscription with one of these other solutions, I would recommend ensuring you get a real business plan, not a personal or “basic” plan. Generally speaking, this means you will be spending something like $25/user/month or more for a complete feature set, usually including unlimited storage space and Enterprise-grade security options. At the time of this writing, in the Dropbox world, this means aiming for at least the “Advanced” offering (for Teams). If you choose a Box subscription, this could mean the Business Plus or even Enterprise tier, and for ShareFile, you should be evaluating their Advanced or Premium options.

Why we do not have an “unlimited storage” plan in the Microsoft cloud is beyond me. If it were up to yours truly, Microsoft would have an unlimited offering to compete with these other big hitters. Limitless capacity is probably the number one driver pushing people into a third-party cloud for file storage. Note: you should not expect a switch to one of these other ecosystems to be a panacea that eliminates every downside or prevents all sync issues. However, when it comes to overall storage capacity, every other provider out there has Microsoft beat.

Anyway, if you decide third-party is the way to go, always set up Single Sign-On with Azure AD so that you can apply the same identity-based protections, such as Conditional Access, that you already enjoy with Microsoft 365. Also, note that Box and Dropbox have integrations available with Microsoft Defender for Cloud Apps, so you can monitor activity and create alerts and rules around these applications, just as you do for Microsoft 365, using the Activity log.

Alternative #2: Check out Azure File Sync

If you would rather not leave the Microsoft cloud, and especially if you want to maintain an experience as close as possible to your current Windows-based file server, then Azure Files is another solution worth taking a closer look at. This is basically SMB file shares in the cloud. However, the best implementation of it to replace existing file servers, in my opinion, would be Azure File Sync. This premium solution allows you to seamlessly extend your existing on-premises file server into the cloud, and the users generally cannot even tell the difference.

Basically, your existing file server gets an agent installed on it, which then synchronizes your shares into Azure Files. Client computers continue to connect to the local file servers, but the data can be migrated on the back end into Azure. Eventually the server just serves up cached copies of the most frequently accessed datasets. Better yet, you can choose to take the most infrequently accessed data (think: archives, etc.) and move it to “cooler” storage tiers in the cloud, which are slower to access but much less expensive to maintain. Active files can remain on “hot” storage so that access stays quick and reliable. This feature is known as “cloud tiering,” and it is one of the things that makes the solution extra attractive. For backup, you simply deploy Azure Backup and configure a backup of your Azure file shares on a schedule that works for your organization.
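To make the moving parts concrete, here is a rough provisioning sketch using the Az.StorageSync PowerShell module. This is illustrative only: it assumes you are already signed in (Connect-AzAccount), and the resource group, storage account, and file share names below are all placeholders you would substitute for your own.

```powershell
$rg  = "rg-filesync"
$svc = "MyStorageSyncService"

# 1. Create the Storage Sync Service and a sync group
New-AzStorageSyncService -ResourceGroupName $rg -Name $svc -Location "eastus"
New-AzStorageSyncGroup -ResourceGroupName $rg -StorageSyncServiceName $svc -Name "SharesSyncGroup"

# 2. Point the sync group at an Azure file share (the "cloud endpoint")
$sa = Get-AzStorageAccount -ResourceGroupName $rg -Name "mystorageacct"
New-AzStorageSyncCloudEndpoint -ResourceGroupName $rg -StorageSyncServiceName $svc `
    -SyncGroupName "SharesSyncGroup" -StorageAccountResourceId $sa.Id `
    -AzureFileShareName "companyshares"

# 3. On the file server itself (after installing the agent): register the
#    server, then add a "server endpoint" with cloud tiering enabled --
#    keep 20% of the volume free locally, and tier anything untouched for 60 days
Register-AzStorageSyncServer -ResourceGroupName $rg -StorageSyncServiceName $svc
$server = Get-AzStorageSyncServer -ResourceGroupName $rg -StorageSyncServiceName $svc
New-AzStorageSyncServerEndpoint -ResourceGroupName $rg -StorageSyncServiceName $svc `
    -SyncGroupName "SharesSyncGroup" -Name "FS01-DShares" `
    -ServerResourceId $server.ResourceId -ServerLocalPath "D:\Shares" `
    -CloudTiering -VolumeFreeSpacePercent 20 -TierFilesOlderThanDays 60
```

The cloud-tiering thresholds (free space percentage and file age) are the knobs that determine how much data actually stays cached on the local disks.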

Now, let’s say that you need to replace your existing physical server, either because it is just time for a refresh, or because you had a sudden crash or hardware failure in your datacenter. No problem. In the short term, your end users can connect to Azure via VPN and get access to the cloud-based shares quickly. In the long term, you would replace your physical server with something inexpensive: just install the agent to present the shares locally out to the network, and away you go.

Thus, Azure File Sync turns your local server into something resembling BranchCache (if you are familiar with that Windows Server feature, it is a very similar idea). It is not unreasonable to assume your current server capacity could be scaled back to perhaps 20% of today’s storage requirements (most data lives in the cloud, with only the most frequently accessed items kept on the local disks).

The big benefits to this service are that legacy applications generally still work with it (since it is still just SMB shares), and it tends to be more affordable per-user or “per-gigabyte” especially with cloud tiering enabled. Note also that both domain and workgroup environments are supported with this solution.

Alternative #3: Split the difference

The last option is to forge ahead (mostly) with Microsoft solutions: usually in a “hybrid” configuration where the on-premises server is going to be around for at least one more refresh cycle while your organization figures out the rest of the puzzle on its own.  Note: you can still start to relocate certain Office documents into OneDrive, Teams, and SharePoint as well, but you don’t have to go “all in” either. Take the time to learn how your organization can work around the current limitations in various ways. This makes for an easier end-user transition while still taking advantage of the elasticity and flexibility of the cloud where it makes sense.

For example, we direct people to use the SharePoint web interface and/or the Teams client for most shared repositories, and only sync very few data locations that contain smaller numbers of files (like a specific project folder), or other areas where the users work daily. We generally also recommend enabling the groups expiration policy and retention policies to keep content fresh and current (removing old, dead data regularly).
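If you do go the policy route, the Groups expiration policy can be switched on with a couple of lines of Microsoft Graph PowerShell. This is a sketch, assuming the Microsoft.Graph module is installed and you have consented to the appropriate scope; the notification address is a placeholder.

```powershell
Connect-MgGraph -Scopes "Directory.ReadWrite.All"

# Groups untouched for 365 days trigger renewal notices to their owners;
# groups that are never renewed are deleted (and remain recoverable for
# 30 days thereafter). "All" applies the policy to every Microsoft 365 Group.
New-MgGroupLifecyclePolicy -GroupLifetimeInDays 365 `
    -ManagedGroupTypes "All" `
    -AlternateNotificationEmails "admin@contoso.com"
```

Groups that see regular activity are renewed automatically, so in practice only the dead wood generates renewal prompts.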

In a big migration project, we may even recommend migrating only those datasets which are considered “active” working data, versus all the “archival” stuff which may not need to exist in the cloud at all. This helps cut down on clutter and overall storage demand. Some of these legacy items might end up on a separate network segment somewhere, on a legacy file server, SAN, or NAS device (where they go to die a slow death). Or perhaps they land in a separate cloud storage account, placed under the care of specific individuals who have access to those particular locations.

I should perhaps mention, there is also an alternative OneDrive/SharePoint sync client out there called Zee Drive, which some people have reportedly found success with (I cannot say much about it other than what others have told me—in other words, this is not an endorsement by any means).

Conclusion

Keep in mind that many organizations fit nicely within the existing limitations and have no problem moving 100% into the Microsoft 365 cloud ecosystem. Especially “Microsoft Office-centric” professional services that work primarily in the Office apps, perhaps with a splash of Adobe on the side, etc.

At the same time, there are many, many companies who run into these barriers due to legacy apps, large file types, larger file sets, etc., and therefore, these folks often wander down a different path. Sometimes, this means going to a third-party cloud, or it means remaining in a hybrid situation, or patching together some other alternative. This is not a new problem, either. Honestly it is a bit surprising that even now, in the year 2022, we are still left wanting in certain areas, and there isn’t always just one satisfying “right” answer. But, that’s where your consulting comes in, isn’t it?

What else have you been deploying for your customers when Microsoft doesn’t quite fit the bill? Let us know in the comments, below!

The post Alternatives to OneDrive and SharePoint (and when to consider them) appeared first on ITProMentor.

Updated Migration Advice: Remove the last Exchange Server?

The last time I published articles on the topic of email migration was in the long, long ago: in the before time. Yes, before pandemics and novel coronaviruses, but also before we had the option to remove the last Exchange server. Some have asked me if I would change any of my instructions or advice for migrating from Exchange on-premises to Exchange Online in light of these recent developments.

My short answer is: it depends, and even then, only if you want to.

For the longer version, read on.

Do you really need hybrid?

The first thing to note is that the new process for removing the last Exchange server is only going to be applicable to the minority of SMB tenants that require long-term hybrid identities with directory synchronization. Why? Because the vast majority of SMBs should be focused on removing traditional AD anyway, and migrating toward cloud-only identities in Azure AD (as many have already done).

When someone says they absolutely cannot get rid of the local AD, that usually means either some legacy thinking or a legacy Line of Business application is standing in the way. This blog has often dedicated articles to dismantling the barriers related to the former problem, but when it comes to the latter (LOB apps), how should we address them?

First, determine whether there are actually dependencies here or not. For example, there may be web- and mobile-friendly alternative apps of which the stakeholders are unaware. If not, and you have to stick with the existing app, you must next ask: does the application rely on Active Directory or Exchange mail attributes in any way? If so, you may have a legitimate reason to keep these systems around; if not, proceed accordingly. Most of the time, the perception is different from the reality: most apps do not actually have a hard requirement for AD.

In some circumstances (where supported), legacy apps can be hosted in a virtual desktop environment by a service provider, or in Azure, leveraging Azure AD DS or a standalone pair of small VMs promoted to DCs, along with Azure Virtual Desktop or similar. And of course, there is always the old standby: refresh your server on-premises, if none of these other options appeal to you.

Assuming you have exhausted your other choices (e.g. drop-and-shop) and you’re still stuck with a legacy AD (either on-prem or in the cloud), then your next step is to decide how important it is for you to keep the same credentials for this legacy app as you have for say, your email and cloud-based applications. Very important? Then consider keeping a hybrid connection. Not so important? Perhaps it is time to isolate this app from the rest of your (more modern) environment.

And what about legacy file shares?

Another place people get stuck is on larger-sized file migrations, particularly where there are lots of really large files like CAD drawings, etc. In this case you have similar choices to make. Just think of this requirement no differently than other legacy LOB apps.

How much of this storage is “current” and how much is just archival and can be pushed to an alternative cloud platform such as Azure Files or even a third-party cloud storage solution? Or, if you are going to elect to keep a local file system, will it be Windows-based and connected to the same identity/credentials as your email and other cloud apps? Or should you isolate this on a separate, purpose-driven system or alternative solution?

These choices will be up to you. Again, I want to point out that this is a niche case, and the demand for this kind of solution is going to be the exception, not the rule. Most SMBs with typical information workers can simply move files to OneDrive/SPO/Teams, and/or Dropbox, Box, Citrix ShareFile, or similar. In other words, cloud-based apps that can be connected to Azure AD for SSO and better security.

The hybrid path (only if you need to or want to)

Most of the time, small organizations are coming from older systems, such as Windows Small Business Server 2011, or Windows Server Standard 2008R2, 2012, 2012R2, or 2016, with Exchange Server 2010, 2013 or 2016 installed on top of one of those systems. If you are coming from anything older than that, then I would recommend a third-party tool to assist in the migration process. Otherwise, hybrid or “remote move” migrations will be the best option for you (or you can still use third-party tools if you prefer).

Once you are done with the migration, you can either keep an Exchange 2016 or 2019 server around for hybrid purposes (like we have always done), or, now, you can choose to get rid of it. For this option you will need Exchange Server 2019: so if you came from, say, 2016, add a 2019 server and apply the latest cumulative update before executing the process to remove the 2016 server as well as the last 2019 server. Remember that even after you “remove” the last Exchange server (really you’re just shutting it off forever), you are still dependent on the local AD for your identities, and specifically for all the mail-related attributes: the source of authority remains on-premises, and the Azure AD Connect synchronization must stay in place just as before (so not that much has changed, really).

Review this Microsoft docs article for more details on how this “Exchange Server Free” hybrid environment looks in practice. Two main differences:

  1. You will no longer have to maintain a server with Exchange installed on it for hybrid management purposes
  2. You will no longer have the Exchange management web UI, and instead you will only have some PowerShell cmdlets with which to manage the attributes
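To give a flavor of what that PowerShell-only management looks like, here are a few typical recipient tasks using the Exchange 2019 CU12+ management tools, run from a domain-joined admin workstation. The user names and addresses below are placeholders.

```powershell
# Mail-enable a new AD user so the mailbox is provisioned in Exchange Online
Enable-RemoteMailbox "jdoe" -RemoteRoutingAddress "jdoe@contoso.mail.onmicrosoft.com"

# Add a secondary SMTP address. This writes proxyAddresses in the local AD,
# which Azure AD Connect then synchronizes to the cloud.
Set-RemoteMailbox "jdoe" -EmailAddresses @{add = "smtp:john.doe@contoso.com"}

# Review the mail attributes for which the local AD remains authoritative
Get-RemoteMailbox "jdoe" | Format-List PrimarySmtpAddress, EmailAddresses
```

The key point: you still edit mail attributes on-premises and let directory sync carry them up, exactly as you did when the Exchange server was running.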

Actually, Steve Goodman over at Practical 365 has provided a graphical web-based tool for managing the Exchange attributes after removing the last 2019 server. See this article for more details.

Okay, so the proper migration steps would be:

  1. Make sure your source environment has the latest available cumulative updates
  2. Add your domain name(s) to Microsoft 365, verify (add TXT) but do not cut over MX yet
  3. Configure Azure AD Connect to sync identities & on-premises passwords to the cloud
  4. Run the Hybrid Configuration Wizard (HCW) / alt: third-party tool setup
  5. Create your remote move migration batches / alt: third-party migration batch setup
  6. Migrate public folder data (if applicable, usually try to replace w/ Groups, Teams, etc. instead)
  7. Finalize your migration batches
  8. Cut over MX records, SMTP relays, etc.
  9. New post-migration and clean-up tasks:
    1. If needed, add or upgrade to Exchange server 2019 (latest CU)
      • Remove older Exchange servers as applicable
    2. Follow the process to shut down the last Exchange 2019 server permanently (optional)
      • Moving forward, update processes to manipulate mail attributes using the new cmdlets
    3. Consider configuring Hybrid Azure AD Join for your PCs, along with device-based Conditional Access, to improve security

Similar to how we have always done things, with a few extra items tagged onto the end.

Cloud-only identity (the preferred path)

As always, I encourage you to strongly consider severing your ties to that old beast known as Active Directory. Most folks can get by just fine (better actually) without it. In this case your migration looks very similar to the above, except that your post-migration tasks will include a different subset of items:

  1. Remove Azure AD Connect + Exchange hybrid
  2. Move shared files & other apps to the cloud or cloud-based alternatives (and setup SSO to Azure AD wherever possible)
  3. Join PCs to Azure AD and configure device-based Conditional Access
  4. Move DHCP from AD to firewall or router
  5. Shut down the AD domain forever

In this configuration you are in the best possible position to implement additional security & compliance capabilities, and take full advantage of the other goodness the cloud has to offer, without carrying any of the baggage of days past.

I really do think that the hybrid path appeals to a dwindling subset of SMB organizations: the vast, vast majority of you should be looking to cut ties and go toward cloud-only identities, even if it means managing a separate credential set/security boundary for a legacy app in some cases (especially if it is going to be temporary). But, if you are one of those “special case” customers, we now have the option to maintain a hybrid environment, without necessarily keeping that old Exchange server around. Some may still prefer to keep the Exchange server, and that’s totally fine as well.

If I were forced to keep a legacy AD around, I personally would choose to kill the directory synchronization and just maintain a separate security boundary for that application specifically. But that is due to my own preferences and risk tolerances. You may come to a different conclusion. I won’t shame you for it, as everyone has a right to be wrong. At least now you know how to do the wrong thing the right way.

:)

The post Updated Migration Advice: Remove the last Exchange Server? appeared first on ITProMentor.

What are the limitations with Microsoft Defender for Business Standalone?

Most of my readers will already be familiar with Microsoft Defender for Business (MDB), which is included with Microsoft 365 Business Premium. And a majority of those will be deploying MDB as one part of a broader security solution which includes other services within the Business Premium bundle. But a subset of folks have asked about the “Standalone” version of Microsoft Defender for Business.

Yes, it is true, there is indeed a standalone version (USD $3/user/month), which was announced last month. The use case? Consider a scenario where the customer is using a different productivity platform such as Google Workspace, or they haven’t yet made the transition to other Microsoft 365 services. Using the standalone SKU, you could theoretically onboard devices and start providing protection, ahead of deploying other services, and with far less upfront licensing commitment.

Some of the MDB-related services will function in much the same way as you are used to with the full product; however, you should be aware that certain services are only available with an Intune license (Microsoft Endpoint Manager). For example, the “Automatic onboarding” option during the first-run wizard experience requires devices to already be enrolled with Endpoint Manager. As well, certain functionality in the Microsoft 365 Lighthouse product may rely on the presence of Intune licenses in order to work. At the same time, some functionality within Endpoint Manager will still be available, even without the “complete” license set. In fact, just enough of the MEM product is activated to make basic policy deployment possible in the “standalone” scenario. Clear as mud, right?

Show me

Let’s take a look at an example where I have onboarded a new “standalone” device into a tenant where I also happen to have some “fully licensed” Microsoft 365 Business Premium users.

In the first place, I need to actually purchase and assign the standalone license product to the correct users. For this purpose, I created a new user named “Mark Twain” in my tenant, and assigned the MDB standalone product.

Assign the MDB standalone license

Next, we want to check on a couple of settings related to this scenario. Begin by navigating to Settings > Endpoints from the Microsoft 365 Defender Security Center, and click on Enforcement scope.

Enable the features in Defender security center

You will want to turn On the setting called Use MDE to enforce security configuration settings from MEM and select the OS choices below (and yes: Windows Server support is coming soon to the Business product).

Then, check Microsoft Endpoint Manager by navigating to Endpoint Security > Microsoft Defender for Endpoint.

Enable the features in MEM

Be sure that the option Allow Microsoft Defender for Endpoint to enforce Endpoint Security Configurations is switched to On, and Save settings if necessary.

With those settings in place, let’s onboard a device named “Workstation10” using the local script method (you could also use GPO or other methods, but just note that you cannot use MEM to onboard the device in this scenario since the requisite license is not available and the device is not enrolled into the service).

Run the local onboarding script

Okay, now that the script has been run, we expect the device to show up in our inventory. Let’s take a look. We should be able to see it from the Defender Security Center:

See the device from the Defender Security Center

Yep. And as well, from Endpoint Manager:

See the device in MEM

You will notice in both cases that there is a column called Managed by which will indicate whether the device is being managed by Intune or MDE (which is the Enterprise term for MDB). Those devices which are managed by MDE are the so-called “standalone” devices. You will also notice that not all the data are available for standalone devices, because they are not enrolled with Intune (therefore things like Compliance cannot be evaluated).

Finally, you will notice that we can still take all the same actions against standalone devices, such as Isolate device, Restrict app execution, Run antivirus scan, Collect investigation package, Initiate Live Response Session, etc.

Same device actions are available for standalone devices

I will also add that in addition to the device inventory and device actions, the Vulnerability management functionality that we have via the Microsoft 365 Defender Security Center is still available and visible for standalone devices.

TVM data is available for standalone devices

Assigning policies

Let’s say you want to assign policies to your standalone devices. We can either use the Microsoft 365 Defender Security Center (you will find it under Configuration management > Device configuration), or we can use MEM. Since the purpose of this blog is to highlight the boundaries and limitations of MEM with regard to these standalone devices, let’s examine the option to assign policies from Endpoint Manager.

Start by creating a Dynamic device-based security group. Go to Groups, and create a new group. Name it something descriptive like “MDB Standalone Devices” or similar. Then, use the following expression to capture the devices managed by MDE:

  • (device.systemLabels -contains "MDEJoined") or (device.systemLabels -contains "MDEManaged")

Create a dynamic device group

(Note: I have also observed that using the “All devices” option works as well when making assignments, but it can be useful to have a group that can identify for you which devices are managed by MDE/MDB, and not yet onboarded to MEM.)
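If you prefer scripting over the portal, the same dynamic group can be created with Microsoft Graph PowerShell. This is a sketch, assuming the Microsoft.Graph module; the display name and mail nickname are placeholders.

```powershell
Connect-MgGraph -Scopes "Group.ReadWrite.All"

# Security group whose membership is evaluated dynamically from the
# system labels that MDE stamps on onboarded (but not Intune-enrolled) devices
New-MgGroup -DisplayName "MDB Standalone Devices" `
    -MailEnabled:$false -MailNickname "mdbstandalone" -SecurityEnabled:$true `
    -GroupTypes @("DynamicMembership") `
    -MembershipRule '(device.systemLabels -contains "MDEJoined") or (device.systemLabels -contains "MDEManaged")' `
    -MembershipRuleProcessingState "On"
```

Note that the membership rule string uses straight quotes; pasting curly quotes from a web page is a common cause of rule validation errors.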

Next we can create a policy and assign it to our new security group. The following policy types are supported currently:

  • Antivirus
  • Firewall
  • Firewall rules
  • Endpoint Detection & Response

I suspect we will see additional policy types supported in the future (e.g., Attack Surface Reduction), but at the time of this writing, the above is all that is included.

I created a simple Antivirus policy. Again this could also be achieved from the Microsoft 365 Defender Security Center, but I have elected to manage my policies in MEM instead for the purposes of demonstration.

Antivirus policy is applied successfully

Now, if I try to create and assign a policy that isn’t yet supported, such as Attack Surface Reduction rules, what happens?

ASR policy stuck in pending state

As of now, we see that it just remains in a perpetual “Pending” state. I hope to see support for more policies soon, though. Fingers crossed.

Takeaways

So can the standalone product do everything that the MDB product can when bundled with a more complete subscription set such as Business Premium? No.

Certain policies and functionality require the “full” license bundle, including Azure AD Premium and Intune/MEM: for example, the Conditional Access integration, device Compliance evaluation, and the ability to view and manage additional device attributes. But it appears that Microsoft is attempting to open “just enough” functionality here to support a sort of “lite” management scenario for the MDE/MDB product via MEM, even if you don’t have an Intune license. (It is always best, of course, if you can move into the full experience with the complete license bundle.)

In my opinion, we should at least get support for Attack Surface Reduction rules added both to the MEM for standalone scenario, as well as receive a new way to deploy these policies from the Defender portal (like we have with Antivirus and Firewall policies today). I do not know if/when this will happen, but my hope is that we will see it yet this year.

And that is basically the whole story in a nutshell, as of right now. Hopefully that cleared up some of the more confusing points. If we get additional functionality in the future, I will be sure to report back.

The post What are the limitations with Microsoft Defender for Business Standalone? appeared first on ITProMentor.
