
February 1, 2026
Thinking about migrating from Power BI to Sigma? Here are 5 things to consider.
I’ve spent years building analytics products in Power BI. I’m a former Power BI super user and have implemented Power BI Service instances at Fortune 100 companies. I’ve taught Power BI courses at the University of Cincinnati. I’ve written more DAX than I care to admit, optimized overly complicated data models, dealt with composite models, DirectQuery quirks, Fabric roadmaps, and every “creative workaround” (aka hack) that comes with using Power BI.
And yet, a huge portion of my work today lives in Sigma.
This is not because Power BI is a bad tool. It’s actually a pretty good tool in my opinion. The reason most of my work is shifting to Sigma is not the tools themselves – it’s the environment shifting around them. The way data is stored, queried, consumed, embedded, and leveraged for AI has changed rapidly in the ten years I’ve been working in data – and Sigma is built for this new world.
This post is a technical deep dive for organizations currently using Power BI that are evaluating a migration to Sigma.
1. Hidden Costs
Power BI Licensing
“Power BI is free.” If I had a nickel for every time I’ve heard clients say this, I wouldn’t have to be running a consulting business.
On paper, Power BI can seem cost effective, especially if your organization already has a Microsoft E5 license, which includes Power BI Pro licenses for “free”. Even this notion that Power BI Pro licenses are free with E5 is a little misleading.
Yes, Power BI Pro licenses are technically free with an E5 license, but what is the main differentiating feature between an E3 and an E5 license? “Free” Power BI Pro licenses. An E3 license costs $20.75/user/month vs. $35.75/user/month for E5 – a difference of $15/user/month. So the main feature you get for that $15 increase in licensing cost is a “free” license that generally runs $10/user/month on its own. Free is relative, I guess.
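To make the “free” math concrete, here is a quick back-of-the-envelope sketch comparing the E3-to-E5 upgrade path against simply buying standalone Pro seats. The per-seat prices are the list prices quoted above and may change, so verify them against Microsoft’s current pricing before using this in a real TCO model.

```python
# Back-of-the-envelope check of the licensing math above.
# Prices are the per-user/month list prices quoted in this post.
E3_PER_USER = 20.75
E5_PER_USER = 35.75
PBI_PRO_PER_USER = 10.00  # standalone Power BI Pro list price


def annual_upgrade_cost(users: int) -> float:
    """Yearly cost of moving `users` seats from E3 to E5."""
    return (E5_PER_USER - E3_PER_USER) * users * 12


def annual_standalone_pro_cost(users: int) -> float:
    """Yearly cost of buying Power BI Pro separately on top of E3."""
    return PBI_PRO_PER_USER * users * 12


users = 500
print(annual_upgrade_cost(users))         # 90000.0
print(annual_standalone_pro_cost(users))  # 60000.0
```

For 500 seats, the E5 upgrade that delivers the “free” Pro licenses actually costs $30k/year more than buying Pro outright – which is the author’s point.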

Gateway VM Costs
An additional hidden cost that organizations often forget to factor into TCO conversations is the cost of the Azure VMs required to run gateways. You might be saying “well, we are on the Cloud so we don’t need gateways.” You’re right that you don’t need gateways only for on-prem connections, but there are many use cases where organizations still leverage gateways even in the Cloud. Data sources behind private endpoints or IP whitelists that restrict access are classic examples of gateways still being leveraged for Cloud-to-Cloud connections.
This cost is not to be underestimated, especially if you’re pushing a large volume of queries through these VMs. One of our clients estimated that 25% of their monthly spend on the Power BI platform was tied to VM compute for their gateways.
Operational Overhead
Administering a large Power BI deployment is not for the faint of heart. I know because I’ve done it.
Even in cloud-native setups, administering Power BI at scale requires a surprisingly large cast of characters. Ownership is scattered across Azure tenant administrators, Power Platform administrators, capacity owners, semantic model developers, workspace admins, and downstream product or app teams building on top of Power BI. As a result, even small changes require coordination across multiple teams, and what should be a simple report refresh issue can quickly escalate into a multi-person investigation spanning Azure networking, gateways, capacity throttling, dataset permissions, and Power BI service limits.
As adoption grows, the operational burden grows with it. More users means more workspaces, more datasets, more refresh schedules, more security rules, and more embedded use cases across the Power Platform, all of which need to be designed, monitored, and supported. Without constant oversight, organizations accumulate duplicated semantic models, fragile refresh pipelines held together with duct tape, and an ever growing backlog of “urgent” fixes. Over time, analytics teams spend less time building new insights and more time administering the platform itself, turning Power BI into an operational dependency that demands ongoing headcount just to stay stable.
2. Live Query Performance
What if I told you that Power BI isn’t SQL native? Hint: it’s not.
Once you realize that Power BI isn’t SQL native, you understand why users lose certain DAX functionality when using DirectQuery instead of Import mode, as well as why performance suffers. Here’s the breakdown.
Power BI’s semantic layer is powered by Analysis Services and the Tabular Vertipaq engine, which relies on DAX and an MDX-based query engine rather than SQL. When Power BI connects to cloud data warehouses (CDWs) like Snowflake or Databricks, those MDX/DAX queries must be translated into SQL before they can be executed by the warehouse. The issue here is that MDX does not map cleanly to SQL – hence the loss of some DAX functionality when using DirectQuery. This is also why Power BI’s performance in DirectQuery mode is not optimal. Maybe bad is too strong a term, but its performance is most certainly “lost in translation” – ba dum tss.
For the DirectQuery architecture showing how DAX/MDX queries translate to SQL, see the diagram in the SQLBI article on DirectQuery limits.

Not to mention, Power BI’s DirectQuery results are still subject to the 1M-row return limit.
This isn’t an edge case; it’s a well-documented limitation. Power BI’s live query engine was not built to act as a thin, high-performance query layer over massive cloud data warehouses. Sigma was built from the ground up to query large datasets directly in the warehouse. Instead of importing or reshaping data to fit the BI tool, Sigma pushes computation down to Snowflake or Databricks, letting those platforms do what they do best. The result is consistent, predictable performance, even on very large tables.
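The difference between importing rows into the BI tool and pushing computation down to the warehouse can be shown in a few lines. This is a conceptual sketch only, with SQLite standing in for a CDW and a made-up `sales` table: the pushdown query asks the engine to aggregate, so only a tiny result set crosses the wire, while the import-style approach hauls every raw row to the client first.

```python
# Pushdown vs. import, with SQLite standing in for a cloud warehouse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("east", 50.0), ("west", 75.0)] * 10_000,
)

# Import-style: fetch every raw row, then aggregate client-side.
rows = conn.execute("SELECT region, amount FROM sales").fetchall()
totals: dict[str, float] = {}
for region, amount in rows:
    totals[region] = totals.get(region, 0.0) + amount
print(len(rows))  # 30000 raw rows moved to the client

# Pushdown-style: the engine aggregates; only 2 rows come back.
result = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(result)  # [('east', 1500000.0), ('west', 750000.0)]
```

At warehouse scale the gap is the difference between shipping billions of rows over the network (or hitting a row cap) and shipping a handful of aggregates.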
3. Data & AI Apps
The world of static reports – aka static consumption – is so 2010. Today’s data consumers depend on the ability to not only receive data, but to contribute data – aka dynamic consumption. People do this in their daily lives via apps on their phones, and they expect to be able to do it with their work as well.
Sigma makes writing data back (“write-back”) to your CDW extremely easy. Because users can write data back, you can build full data applications without ever leaving the Sigma platform during development.
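Sigma exposes write-back through its own UI, but at the warehouse level it boils down to inserting user-supplied rows into a governed table that sits alongside the data it annotates. Here is a minimal sketch of that pattern, with SQLite standing in for the CDW; the `forecast_overrides` table and its columns are hypothetical, purely for illustration.

```python
# Write-back at the warehouse level: user input lands in a governed
# table next to the analytics data. SQLite stands in for the CDW;
# table and column names are hypothetical.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE forecast_overrides (
        product_id TEXT,
        override_amount REAL,
        submitted_by TEXT,
        submitted_at TEXT
    )
    """
)


def write_back(product_id: str, amount: float, user: str) -> None:
    """Persist a user-supplied override next to the analytics data."""
    conn.execute(
        "INSERT INTO forecast_overrides VALUES (?, ?, ?, ?)",
        (product_id, amount, user, datetime.now(timezone.utc).isoformat()),
    )


write_back("SKU-123", 42000.0, "analyst@example.com")
row = conn.execute(
    "SELECT product_id, override_amount FROM forecast_overrides"
).fetchone()
print(row)  # ('SKU-123', 42000.0)
```

Because the override lives in the warehouse, the same security policies and downstream models that govern the rest of your data apply to user input too.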
Can you do this in Power Apps? Yes. Is it efficient and easy to do? No.
In order to achieve the same functionality in Power Apps, a developer must utilize at least THREE different apps within the Power Platform suite – Power Apps, Power Automate, and Power BI. Each of these tools takes time to learn, and none of them are all that simple to begin with.

In May 2025, Microsoft released Translytical Task Flows (TTFs) into public preview. This feature also allows users to accomplish something similar to a Sigma write-back, but again it requires a great deal of additional knowledge and workflow building; Microsoft’s documentation shows what those workflows look like.

Sigma makes it incredibly easy to build productionalized data and AI apps directly on top of your warehouse. Write-back, user inputs, and AI-assisted workflows live in the same platform as your analytics products. There’s no need to stitch together multiple tools just to move your organization from static consumption to dynamic consumption of data.
4. Embedded Strategy
Embedded analytics is no longer nice-to-have functionality for your data platform – it’s a must-have.
As organizations increasingly share data with customers, partners, and internal product teams, analytics are moving out of standalone BI tools and directly into applications, portals, and workflows. In many clients that we work with at Maverick Data, the analytics are the product.
This shift fundamentally raises the bar for embedded analytics products: they must be performant, secure, scalable, and easy to maintain over time. What once worked for internal dashboards with a few hundred users starts to break down quickly when those same assets are embedded into customer-facing products with thousands of concurrent users across different customer bases.
This is where the differences between embedding Power BI and Sigma become very apparent. Power BI embedding can be powerful, but it comes with real architectural and operational complexity. Embedded Power BI reports typically require dedicated capacities, careful capacity planning, Azure AD app registrations, token generation, row-level security synchronization, and close coordination between application developers and Power BI admins. As usage grows, teams often find themselves tuning capacity SKUs, managing throttling, troubleshooting refresh contention, and debugging performance issues that only appear under embedded load.

Sigma approaches embedding from a fundamentally different angle. Because Sigma queries data directly in the warehouse and does not rely on an intermediate semantic engine or capacity layer, embedded experiences scale naturally with the underlying cloud data platform. There is no separate capacity to manage, no import vs. DirectQuery decision, and no fragile synchronization between embedded apps and BI infrastructure. Security policies live in the warehouse, logic lives in SQL, and the same assets can power both internal analysis and external-facing products. This simplicity matters. Analytics teams can focus on delivering value rather than running a BI platform inside a product company.

5. Locking into Fabric
If you’re currently leveraging Power BI and other Microsoft tools, you already know that the Microsoft roadmap is entirely focused on Fabric and Copilot. For organizations that are using Snowflake, Databricks, or another CDW with Power BI, this introduces some potential conflict. While Power BI can still connect externally, the platform is increasingly optimized for data that lives inside the Microsoft ecosystem.
Vendor lock-in becomes a real risk when your analytics stack is tightly coupled to a single vendor’s roadmap for a few main reasons.
- Innovation Risk – when all of your eggs are in one vendor’s basket, your organization’s capability to modernize is directly tied to one company. In an era of extremely fast innovation from tech companies, lock-in could cause you to miss out on leveraging new features in the marketplace.
- Pricing Risk – when a vendor has a monopoly on an organization’s data stack, you are at the mercy of their pricing changes because of decreased competition. Often these price increases come with no new functionality, and vendors know you are more likely to pay them due to lock-in.
- Integration Risk – when a vendor controls the full end-to-end analytics stack, integration outside of that ecosystem becomes increasingly difficult. Over time, connectors that were once “supported” receive fewer optimizations, lag behind native features, or require additional configuration and infrastructure to function reliably.
For Power BI organizations evaluating Sigma, it is important to consider that your analytics future is being shaped more by Microsoft’s priorities than by how their data platform actually operates and performs. Their roadmap starts driving your architecture decisions, and while integrations with CDWs other than Fabric technically still work, they no longer feel optimal. Decoupling analytics from a single vendor’s stack restores flexibility, allowing you to choose tools that align with your performance, embedding, and AI goals.
Wrap-Up
This blog post is not meant to be an anti–Power BI rant. Power BI remains a strong platform, especially for Microsoft-centric organizations and internal reporting use cases.
But analytics has shifted. Cloud data warehouses are now the center of gravity. Users expect AI integration and interactive, embedded, writable analytics – not just dashboards.
Sigma is built for that reality.
And as someone who knows Power BI deeply, that’s why I – and many teams like mine – are increasingly choosing Sigma.
Contact Us
If you would like to talk to someone at Maverick Data about maximizing your usage of the Sigma platform, please email us at spencer@maverickdata.io for more information!



