High Capacity Units
As part of implementing a Fabric environment, and in particular for our data load and storage, we had an additional P1 capacity enabled.
Initially, the ELT and model refresh kept utilisation in the 20-25% range. That is all that runs on this capacity: no other models, no other Fabric objects using it.
Our ELT runs every 15 mins, currently looping over 15 tables. The data volumes are not very large: my biggest fact table is 90 million rows, but a 15-minute delta for this table is rarely over 20k rows (it is a very wide table, though, full of text columns). The rest of the tables have significantly fewer rows/columns.
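For context, each run follows a standard watermark-based delta pattern. This is a simplified sketch, not our actual implementation; the table names, the modified_at watermark column, and the DSNs are placeholders:

```python
import pyodbc

# Placeholder connections -- the source is reached via the OPDG.
SRC = pyodbc.connect("DSN=on_prem_source")
DST = pyodbc.connect("DSN=fabric_warehouse")

TABLES = ["fact_orders", "dim_customer"]  # 15 tables in the real run

for tbl in TABLES:
    with DST.cursor() as cur:
        # High-water mark from the previous successful run
        cur.execute(f"SELECT MAX(modified_at) FROM {tbl}")
        watermark = cur.fetchone()[0]

    with SRC.cursor() as cur:
        # Pull only rows changed since the watermark (~20k at most)
        cur.execute(f"SELECT * FROM {tbl} WHERE modified_at > ?", watermark)
        rows = cur.fetchall()

    # Append the delta into the Fabric-side table
    if rows:
        placeholders = ", ".join("?" for _ in rows[0])
        with DST.cursor() as cur:
            cur.executemany(f"INSERT INTO {tbl} VALUES ({placeholders})", rows)
            DST.commit()
```

Nothing about this pattern has changed since the move; only where the gateway cluster lives.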
Our IT department have decided to move all IT resources to AWS, including our 3-node on-premises data gateway (OPDG) cluster.
Since this move, I have noticed a significant increase in capacity utilisation, such that I am now at around 90% all of the time. This includes the weekend runs, when the data volume is significantly smaller.
Is it just a coincidence that my CU consumption has jumped since the move to AWS, or is there something else I need to look at or look out for? In the space of less than two weeks I have gone from a very comfortable position to squeaky bum time on % utilisation.
Any help/pointers/suggestions gratefully received.