DP600 Exam Prep (Meetup) is happening in 18 hours
Enriching Data from a Database with User Generated Content
Wondering what strategies people like to use for merging user-created data with data pulled from a database. For example, if there is a list of transactions that your team provides comments or other info about, and you want to merge the user comments with the transaction data, what is the best approach?
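One common pattern, shown as a minimal sketch below (the transactions and user_comments table names and the transaction_id and comment_text columns are assumptions): keep the database extract untouched, store user comments in their own table keyed by transaction ID, and left-join them on so every transaction is kept even when it has no comments.

```python
from pyspark.sql import functions as F

# Assumed names: "transactions" is the database load, "user_comments" is where
# user-entered comments are stored, keyed by transaction_id. "spark" is the
# session a Fabric notebook provides.
transactions = spark.read.table("transactions")
comments = spark.read.table("user_comments")

# Collapse multiple comments per transaction into one field before joining,
# so the result stays one row per transaction.
comments_agg = (
    comments
    .groupBy("transaction_id")
    .agg(F.concat_ws(" | ", F.collect_list("comment_text")).alias("comments"))
)

# Left join preserves every transaction; comments are NULL where none exist.
enriched = transactions.join(comments_agg, on="transaction_id", how="left")

enriched.write.mode("overwrite").saveAsTable("transactions_enriched")
```

Keeping the user input in its own table also means a reload of the source data never overwrites or blocks the comments.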
1
7
New comment 2h ago
Data Pipeline Truncation
I'm setting up a pipeline in Data Factory using an On-Premises Data Gateway connection, and I'm getting an error that the data would be truncated even though it's smaller than the field length: Copy Command operation failed with error 'String or binary data would be truncated while reading column of type 'VARCHAR(50)'.' Here is the line it's erroring out on: column 'LastName'. Truncated value: 'Meunier (ミュニエ・ã�'. How do I get around this problem? Since it's on-prem it forces me to use staging, so I set up an Azure blob for that. Obviously cleaning up the data fixes it, but there will always be dirty data in the future. Any suggestions?
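One likely cause is byte length versus character length: VARCHAR(50) holds 50 bytes, and Japanese characters take 3 bytes each in UTF-8, so a value can overflow even when it is well under 50 characters. A minimal sketch for flagging offending rows from a notebook before the copy (the staging_customers table name is an assumption); widening the destination column or switching it to NVARCHAR is usually a more durable fix than cleaning the data each time.

```python
from pyspark.sql import functions as F

# Assumed name: the staged table holding the raw copy before it hits the sink.
staged = spark.read.table("staging_customers")

# octet_length counts bytes, length counts characters. A byte-based VARCHAR(50)
# limit can be exceeded by multi-byte characters even when the character count
# is under 50.
flagged = staged.where(F.expr("octet_length(LastName) > 50"))

flagged.select(
    "LastName",
    F.length("LastName").alias("chars"),
    F.expr("octet_length(LastName)").alias("bytes"),
).show(truncate=False)
```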
0
1
New comment 2h ago
How do you handle Type 2 Dimensional Tables
Hi, what is the best way to create a relationship between a Type 2 slowly changing dimension table and a fact table? Should this be addressed at the data ingestion stage into the lakehouse, at the source level (is it a matter of filtering on IsCurrent = 1?), or can it be addressed at the semantic layer? A many-to-many relationship is permissible, as expected, but not the one-to-many relationship that is required. The goal is to not lose historical data either.
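A common approach, sketched below under assumed names (dim_customer with a customer_sk surrogate key plus valid_from/valid_to columns, and fact_sales carrying the customer_id business key and an order_date): resolve the surrogate key at ingestion time so each fact row points at exactly one dimension version, which gives the one-to-many relationship in the semantic model while keeping every historical row.

```python
from pyspark.sql import functions as F

# Assumed names: dim_customer is the Type 2 dimension with one row per version;
# fact_sales carries the business key and a transaction date.
dim = spark.read.table("dim_customer")
fact = spark.read.table("fact_sales")

# Pick the dimension version whose validity window covers the transaction date.
# The dimension then relates one-to-many to the fact on customer_sk, and history
# is preserved because old versions keep their own surrogate keys.
fact_with_sk = (
    fact.alias("f")
    .join(
        dim.alias("d"),
        (F.col("f.customer_id") == F.col("d.customer_id"))
        & (F.col("f.order_date") >= F.col("d.valid_from"))
        & (F.col("f.order_date") < F.coalesce(F.col("d.valid_to"),
                                              F.to_date(F.lit("9999-12-31")))),
        "left",
    )
    .select("f.*", F.col("d.customer_sk"))
)

fact_with_sk.write.mode("overwrite").saveAsTable("fact_sales_conformed")
```

Filtering on IsCurrent = 1 in the semantic layer only works if current-state reporting is acceptable; the surrogate-key join is what keeps historical slices intact.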
1
0
Access keyvault from notebooks using Workspace Identity
Hi everyone! Is it possible to access a Key Vault secret using the workspace identity of the workspace from which the notebook is executed? I mean, the workspace identity has access to the AKV, which I know is possible, but will a notebook that runs from the same workspace inherit its access? Should I do that, develop using a service account that has individual access, or use a managed identity? I'm a bit lost on this one. What would be a good practice here? Thanks!
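For the mechanics, a minimal sketch of reading a secret from a Fabric notebook is below; the vault URL and secret name are placeholders. The call runs under whatever security context the notebook executes with, so the principal that needs Get permission on the vault may turn out to be the executing user rather than the workspace identity, which is worth testing explicitly before settling on an approach.

```python
# notebookutils is available inside Fabric notebook sessions without an import.
# Both arguments below are placeholders, not real resource names.
secret_value = notebookutils.credentials.getSecret(
    "https://<your-vault-name>.vault.azure.net/",  # Key Vault URI (placeholder)
    "<your-secret-name>",                          # secret name (placeholder)
)
```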
1
1
New comment 9h ago
Help Needed: Pipeline->Dataflows->Lakehouse->PowerBI
In the pre-Fabric days I was fairly good with Power BI and would use the Desktop for all the steps of importing data, transforming it, and then creating reports. The client I am working with has Fabric and we want to do it "properly", but I find I am getting lost at a few stages. I have a workspace with the premium feature enabled (the diamond icon). Can someone explain if this is possible? I may have the steps or technical terms mixed up, but this is my general understanding of what I'm trying to achieve:
1. Import an on-premises SQL database into Fabric (data pipeline?)
2. Create a Lakehouse for this data
3. Transform and clean the data (Dataflow)
4. Have a custom (or default) semantic model attached
5. Import the Lakehouse as a data source into Power BI Desktop so that it inherits the semantic model AND data
6. Create reports/dashboards in Desktop
7. Publish: once reports/dashboards are published, they are refreshed based on the Lakehouse (frequency set by the Dataflow?)
8. Be able to modify the entire workflow as the needs evolve
At the moment this last step (modifying the workflow) seems to be the hardest part... If this is too vague, I can provide some specific examples of the steps where I feel like I am close to achieving this but am blocked. Thanks!
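To make steps 2 and 3 concrete, here is a minimal sketch of the transform step done in a Fabric notebook instead of (or alongside) a Dataflow, assuming the pipeline has already landed a hypothetical raw_orders table in the Lakehouse; the table and column names are made up. The curated table it writes is what the semantic model in step 4 would expose and what Power BI Desktop connects to in step 5.

```python
from pyspark.sql import functions as F

# Assumed name: "raw_orders" was landed in the Lakehouse by the copy pipeline (steps 1-2).
raw = spark.read.table("raw_orders")

# Step 3: basic cleaning, e.g. drop exact duplicates, trim text, standardize dates.
clean = (
    raw.dropDuplicates()
       .withColumn("customer_name", F.trim(F.col("customer_name")))
       .withColumn("order_date", F.to_date(F.col("order_date")))
)

# Write a curated Lakehouse table for the semantic model and reports to use.
clean.write.mode("overwrite").saveAsTable("orders_clean")
```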
2
4
New comment 24h ago
Learn Microsoft Fabric
skool.com/microsoft-fabric
Helping passionate analysts, data engineers, data scientists (& more) to advance their careers on the Microsoft Fabric platform.