Activity

Memberships

Learn Microsoft Fabric

Public • 5.7k • Free

Learn Power Apps

Private • 2k • $3/m

6 contributions to Learn Microsoft Fabric
Help Needed: Pipeline->Dataflows->Lakehouse->PowerBI
In the pre-Fabric days I was fairly good with Power BI and would use the Desktop for all the steps: importing data, transforming it, and then creating reports. The client I am working with has Fabric and we want to do it "properly", but I find I am getting lost at a few stages. I have a workspace with the premium feature enabled (the diamond icon). Can someone explain if this is possible? I may have the steps or technical terms mixed up, but this is my general understanding of what I'm trying to achieve:
1. Import an on-premises SQL database into Fabric (data pipeline?)
2. Create a Lakehouse for this data
3. Transform and clean the data (Dataflow)
4. Have a custom (or default) semantic model attached
5. Import the Lakehouse as a data source into Power BI Desktop so that it inherits the semantic model AND data
6. Create reports/dashboards in Desktop
7. Publish: once published, reports/dashboards are refreshed based on the Lakehouse (frequency set by the Dataflow?)
8. Be able to modify the entire workflow as the needs evolve
At the moment this last step (modifying the workflow) seems to be the hardest part... If this is too vague then I can provide some specific examples of the steps where I feel like I am close to achieving this but am blocked. Thanks!
2
4
New comment 4d ago
2 likes • 10d
I think the main issue here is confidence. You have a pretty good understanding of what you need to do to deliver a good report for your client, and the steps you have outlined are quite alright.
For importing on-premises data via a data pipeline, the Copy data activity should come in handy: you should be able to create a new lakehouse (say Lakehouse_1) and new tables. You will be able to monitor the output and also set a schedule for when, and how frequently, to run the pipeline.
Before diving into Lakehouse_1 to create a Dataflow Gen2, I would suggest you first create Lakehouse_2 in your workspace. Then head back to Lakehouse_1, create a Dataflow Gen2 and get data from Lakehouse_1. Do all your transformations using Power Query and set the destination for the transformed data to Lakehouse_2. You can create a refresh schedule here too.
In Lakehouse_2, manually add your table to the default semantic model. In the SQL analytics endpoint, head to the lakehouse settings and turn on sync for the default Power BI semantic model. Then head to your Power BI Desktop, get data via Power BI semantic models, select Lakehouse_2, and create your report.
I don't quite understand No. 8; maybe you can elaborate more so we can help figure out how to go about it.
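If you ever prefer to script the Lakehouse_1-to-Lakehouse_2 hop instead of using Dataflow Gen2, a Fabric notebook with PySpark can do the same move. This is only a minimal sketch, assuming both lakehouses are attached to the notebook; the table and column names (sales_raw, sales_clean, amount, order_date) are made up for illustration.

```python
# Minimal PySpark sketch of the Lakehouse_1 -> Lakehouse_2 step.
# Assumes a Fabric notebook session where `spark` is already provided and
# both lakehouses are attached; all table/column names here are hypothetical.
from pyspark.sql import functions as F

# Read the raw table that the pipeline's Copy data activity landed in Lakehouse_1
raw = spark.read.table("Lakehouse_1.sales_raw")

# Example transformations you would otherwise do in Power Query
clean = (
    raw
    .dropDuplicates()
    .filter(F.col("amount").isNotNull())
    .withColumn("order_date", F.to_date("order_date"))
)

# Write the cleaned table to Lakehouse_2, where the default semantic model can pick it up
clean.write.mode("overwrite").format("delta").saveAsTable("Lakehouse_2.sales_clean")
```

Either way the result is the same: Lakehouse_2 holds the cleaned Delta tables, and whichever schedule you set (pipeline, dataflow, or notebook) decides the refresh frequency asked about in step 7.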
1 like • 10d
Yeah. Identify the number of data sources you have, map out the point at which you want to ingest them and when you plan to merge the data, and you will be fine. Then later, you can sit down with the team and identify how to optimize the process.
Data/Analytics engineering collaboration
Hey everyone, I am looking forward to collaborating with anyone on a project or two in data/analytics engineering, where we can do meetups, discuss the deliverables and timelines, and then hit the road. I think it would be a good experience.
2
1
New comment 14d ago
Deploy Retail data solutions in Microsoft Fabric
Hello everyone, I hope you are having a great time with Fabric! I'm trying to deploy Retail data solutions in Microsoft Fabric, but it's not working, or I'm missing something. Has anyone used it before? Thanks!
2
3
New comment 16d ago
Deploy Retail data solutions in Microsoft Fabric
1 like • 17d
What is the main challenge? Not opening at all or stuck after opening?
0 likes • 16d
Tried it a while back but I got stuck.
Little bit of inspiration
Good morning, I started my first job in IT this summer as a Power BI developer. Looking around during August-September for what to do next, I started learning Python and began a Data Science course (7 months long, now 2 months in already), and have been looking a little into the Fabric environment. Eventually, all the knowledge from the past months started to click and the synergy effect took place. I am quite thrilled about this! Now I have started with SQL basics and I am really determined to take DP-600 within the next 3 months, hopefully! Keep up the great work, stay determined, and keep going!
10
4
New comment 10d ago
1 like • 16d
Keep it up @Richard Pařík
Need help on lakehouse incremental update
Hi everyone, I'm trying to figure out the best way to update some data in a Fabric lakehouse. Unfortunately I'm not able to paste a sample grid table here, so I pasted a screenshot explaining the problem I'm having. Basically I would like to run an incremental update on a Fabric lakehouse table based on data I'm receiving from the source system, but the two methods I can think of (a merge statement, and a delete/insert wrapped in the same transaction) do not work. EDIT: For the delete/insert option I did think of using the Delta table history to handle failure and rollback, but that could make the solution quite complex, so that would be my last-resort option. Is there any other solution that can help achieve this update? Thanks in advance
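One thing worth checking: if the merge statement was attempted through the SQL analytics endpoint, that endpoint is read-only for lakehouse tables, so DML has to run on the Spark side. Below is a minimal sketch of an upsert from a Fabric notebook using Delta Lake's MERGE, assuming the incoming increment has already been landed as a staging table; the table and column names (target_table, staging_increment, id) are hypothetical.

```python
# Minimal sketch of an upsert into a Fabric lakehouse table with Delta Lake MERGE,
# run from a Spark notebook where `spark` is already provided.
# Table and column names (target_table, staging_increment, id) are hypothetical.
from delta.tables import DeltaTable

target = DeltaTable.forName(spark, "target_table")
increment = spark.read.table("staging_increment")

(
    target.alias("t")
    .merge(increment.alias("s"), "t.id = s.id")   # match on the business key
    .whenMatchedUpdateAll()                       # update rows that already exist
    .whenNotMatchedInsertAll()                    # insert rows that are new
    .execute()
)
```

A Delta MERGE commits as a single atomic transaction, which also avoids the manual rollback handling mentioned in the EDIT for the delete/insert approach.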
0
2
New comment Jul 12
Need help on lakehouse incremental update
0 likes • Jul 12
Are you getting the data for your lakehouse from a dataflow that imports from CSV?
Wilfred Kihara
2
9 points to level up
@wilfred-kihara-4301
All about Data

Active 26m ago
Joined Mar 9, 2024