Connecting with SSMS is something I have discussed in this video: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-Mn1GB8rSeoQ.html In short: copy the connection URL, choose Database Engine as the server type, paste the URL as the server name, and log in with your user ID and password using MFA.
Hi Amit, thank you for this introduction. For the approaches shown, I need a web browser and a manual upload each time I have new data. If I want to automate an update schedule for a local file with Fabric, do I need the Power BI Gateway (as with Power BI) or an Azure Integration Runtime (as with Synapse)?
Thanks. For uploading, we have the option in the Lakehouse itself. For downloading, we can use OneLake Explorer; once installed, you have access to the files, similar to OneDrive. Another method is Python code: PySpark in Notebooks can help, or local Python code, which I discuss in later videos of the series, can download the files. I also checked that you can create a pipeline to move data from the Lakehouse to a local folder; you need an on-premises data gateway for that, and the files will be downloaded on the gateway machine.
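For the Python route mentioned above, a useful detail is that OneLake exposes Lakehouse files through the same ADLS Gen2-style endpoint (`onelake.dfs.fabric.microsoft.com`), so a notebook or local script mostly needs the right path. A minimal sketch of building that path (the workspace and lakehouse names here are placeholders, not from the video):

```python
def onelake_file_path(workspace: str, lakehouse: str, relative: str) -> str:
    """Build the ABFS path OneLake exposes for a file in a Lakehouse.

    Because OneLake speaks the ADLS Gen2 API, the same path can be fed to
    Spark in a Fabric notebook (e.g. spark.read.csv(path)) or used with the
    azure-storage-file-datalake SDK from local Python after authenticating.
    """
    return (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}.Lakehouse/Files/{relative}"
    )

# Hypothetical names for illustration only:
path = onelake_file_path("MyWorkspace", "MyLakehouse", "raw/Emp.csv")
```

The actual download from local Python additionally needs an Azure credential (for example via `azure-identity`), which is why the on-premises gateway approach can be simpler for scheduled copies.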
@amit Great tutorial! How can we transform multiple CSVs in a single dataflow and then sink them all to a lakehouse or warehouse using that same dataflow? Is that even possible? I tried it, but when I select the destination it only takes one CSV to sink, and I am stuck there. Does that mean I have to create multiple dataflows to handle multiple CSV transformations? I know that in pipelines we can handle this with a ForEach activity, but I cannot sort this out for transforming multiple CSVs with dataflows. Scenario: the client provides Emp.csv and Dept.csv, and I want to transform them (changing the ID column from string to whole number) and then sink them to the lakehouse. The problem is that I can ingest the files automatically using a pipeline ForEach, but cannot do the same for the transformations.
You can add multiple files as separate queries in the same dataflow and set a destination for each. When you start a dataflow from the Lakehouse Explorer, it will automatically set the destination for all tables. In your case, you should be able to add all the files to one dataflow, then transform and load them to the lakehouse or warehouse.