Yesterday, I gave a presentation at Microsoft Fabcon in Atlanta titled “Building the Well-Architected Framework for Fabric”. I’ve been a public speaker for nearly 20 years, and I don’t usually get nervous before a talk. Not that I was shaking yesterday, but given this was a new topic for me, and a big crowd (I had over 350 people sign up for my session), I wanted to make sure I nailed the talk. Based on all the feedback and questions I got yesterday, the talk was very well received by the audience.
Let’s take a step back and talk about why I built this session. Like it or not, Microsoft’s intention with Fabric (and Power BI before it) is to make it easier for less-technical business users to build and consume data-driven reports. While I understand this mission, and it has been wildly successful in spreading love for Power BI, Fabric, despite its software-as-a-service branding, is actually a fully fledged data engine that needs to be well-managed to ensure data governance, security, and adherence to general best practices. In building my demos, I created a sample workspace with a couple of objects.

Several attendees asked how I had gotten access to their tenants. As you can see, this has a horrendous workspace name that has no meaning, and I have a notebook that somehow spans four departments and has its own version control. This can happen for a number of reasons, but the big two are that Fabric is mostly built for those “citizen developers” who build things in a browser with a mouse, start with the default names, and run with it from there. The other problem, which I highlighted throughout my talk, is the lack of a policy engine in Fabric. The corollaries to this in Microsoft-land are group policy objects (GPOs) in Active Directory and Azure Policy, both of which allow administrators to manage how and what gets built in those environments. Sorry to harsh the mellow of anyone on the Fabric team, but publicly traded companies have audit requirements (at least while the SEC and FTC in the US are still operational).
While building the talk, I created a GitHub repo (https://github.com/jdanton/FabricWAF) to start putting my thoughts on a naming standard for Fabric. Additionally, I created some Terraform code on the Azure side (Fabric capacities are deployed through Azure), which limits who can deploy a capacity, which regions they can be in, and which users can be specified for Fabric Capacity Admins.
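To make the Terraform side concrete, here is a minimal sketch of what capacity deployment through code can look like. This is not the code from the FabricWAF repo; it assumes the `azurerm` provider's Fabric capacity resource, and the region list, names, and admin group are illustrative. The key idea is that the allowed regions and capacity admins are validated in code before anything is deployed.

```hcl
# Illustrative only: region list, names, and admin group are assumptions.
variable "location" {
  type    = string
  default = "eastus2"

  # Reject deployments to unapproved regions at plan time.
  validation {
    condition     = contains(["eastus2", "westus3"], var.location)
    error_message = "Fabric capacities may only be deployed to approved regions."
  }
}

resource "azurerm_fabric_capacity" "prod" {
  name                = "fabcap-prod-${var.location}" # follows the naming standard
  resource_group_name = azurerm_resource_group.fabric.name
  location            = var.location

  # Only a vetted admin group, never individual users, administers the capacity.
  administration_members = [azuread_group.fabric_admins.object_id]

  sku {
    name = "F64"
    tier = "Fabric"
  }
}
```

Because the build server is the only principal with rights to deploy, anything that doesn't pass the `validation` block simply never reaches Azure.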
I’ve been working with Azure since it started, and long enough to remember when there were like two roles: Global Admin and Co-Admin (I still can’t, for the life of me, remember how Co-Admin was different from Global Admin). However, as larger corporations began adopting the cloud, Azure had to mature quickly and build a robust security model, along with a policy engine. The other aspect of that maturity was that larger and more security-minded organizations blocked all manual deployments of Azure resources. If you wanted to deploy cloud resources, you had to write Terraform, commit it to a repository, and then the build server would deploy the code, which had either an identity or credentials that allowed it to deploy those resources on your behalf. This ensured your configuration was in source control and also allowed for some code checking against the standard, either using policy or during the build phase of your project.
While prepping for this presentation, I was deploying some Terraform at my current client and noticed their extensive GitHub Actions check process. When I submit code, another GitHub Actions workflow is triggered that checks my code for security vulnerabilities, leaked secrets, deviations from best practices, and so on. I considered whether I could work with Fabric deployments and use GitHub Actions as my policy engine, since Fabric doesn’t currently have one.
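A workflow along these lines is what I have in mind. This is a hedged sketch, not the client's actual pipeline: the paths, script name, and trigger are all placeholders, but the shape — every pull request touching Fabric definitions gets policy-checked before merge — is the point.

```yaml
# Illustrative workflow: paths and script names are examples, not a
# real repository layout. The check script fails the build (non-zero
# exit) when it finds non-compliant items.
name: fabric-policy-check
on:
  pull_request:
    paths:
      - "fabric/**"
jobs:
  policy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Check naming standards
        run: python scripts/check_naming.py fabric/
```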
This solution is imperfect, but I also think it’s better than anything else I’ve seen for Fabric. Mainly because, for a more perfect solution, Fabric needs more controls, and the big one is a very granular security model. Fabric is somewhat similar to Azure in 2010, with just four roles, which, frankly, in 2026, is inexcusable. There should be a fully built-out role-based access control (RBAC) model in any sort of enterprise software, especially one that is so API driven.
So what can we do to work around those platform limitations? Here are my thoughts:
- No user is designated as an owner or contributor in any Fabric capacity for production.
- We can somewhat enforce that by applying a naming standard for capacities using Azure Policy.
- We grant the build server’s managed identity (in this case, a GitHub runner) Owner access to all of those capacities.
- We use a rules engine and check on the GitHub Actions side to enforce things like naming standards (which I’ve written), potentially item types, etc. Non-compliant builds will fail.
- We periodically audit to look for non-compliant resources. In my audit code, I’m comparing against our naming standards and looking for items/workspaces owned by individual users.
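The naming-standard check from the list above can be a very small piece of code. The pattern below is a hypothetical standard (`<type>-<department>-<environment>-<purpose>`) that I'm using purely for illustration; the real standard lives in the FabricWAF repo. The same function serves both the CI gate and the periodic audit.

```python
import re

# Hypothetical standard: "<type>-<department>-<environment>-<purpose>",
# e.g. "ws-finance-prod-sales". The real standard is in the FabricWAF
# repo; this pattern only illustrates the shape of the check.
NAME_PATTERN = re.compile(r"^(ws|lh|nb)-[a-z0-9]+-(dev|test|prod)-[a-z0-9]+$")

def is_compliant(item_name: str) -> bool:
    """Return True when the item name matches the naming standard."""
    return NAME_PATTERN.fullmatch(item_name) is not None

def check_items(names: list[str]) -> list[str]:
    """Return the names that violate the standard; CI fails if non-empty."""
    return [n for n in names if not is_compliant(n)]
```

In the workflow, the script exits non-zero when `check_items` returns anything, which fails the build.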
The one big warning I’ll give you about this audit code is that I was rate-limited after running this several times. I waited 30 minutes, and on the third attempt, things seemed to be better.
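If you hit the same rate limits, a retry wrapper with exponential backoff saves you from babysitting the audit. This is a generic sketch, not the code from my repo; `RateLimited` stands in for whatever exception your HTTP client raises on a 429 response.

```python
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 from the Fabric REST API."""

def call_with_backoff(fn, max_attempts=5, base_delay=30.0):
    """Call fn, retrying on RateLimited with an exponentially growing wait.

    base_delay doubles each attempt (30s, 60s, 120s, ...); the final
    failure is re-raised so the audit still reports the problem.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimited:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

If the API returns a `Retry-After` header, honoring it directly is better than guessing at delays.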
It’s been amazing to be at Fabcon this week—the energy users have around Fabric is evident everywhere. However, IT’s job is to ensure security, availability, and auditability, which sometimes means saying “no,” or, better yet, “we’ll work with you to achieve what you want in a secure manner.”
In starting to build this framework, I’m hoping to draw on my knowledge of how the cloud has evolved and help organizations apply it to make their Fabric deployments more mature. I’ve also been an architect long enough to know that standards without a programmatic enforcement method aren’t worth the bits and pixels they consume. (Hi, former boss, you were wrong.) This is an attempt to remediate that in Fabric. If anyone at Microsoft is reading, it would also help if you could give us more controls and RBAC. Thanks, Joey.
