This is super helpful. Thank you!!! Below is a solution for the duplicate headers. In the Do Until loop -> Append to string variable step, try this formula: if(equals(variables('LoopsVar'), 2), body('Create_CSV_Table'), join(skip(split(body('Create_CSV_Table'), decodeUriComponent('%0A')), 1), decodeUriComponent('%0A')))
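In plain terms, that expression keeps the header only on the first append and strips the first line of every later chunk. A minimal Python sketch of the same logic (assuming, as the expression does, that the loop counter reads 2 on the first append; the column names and data are invented):

```python
# Mimics the Power Automate expression above, not the author's actual flow.
NEWLINE = "\n"  # decodeUriComponent('%0A')

def append_chunk(accumulated, csv_chunk, loop_counter):
    """Append a CSV chunk, keeping the header row only on the first pass."""
    if loop_counter == 2:                      # equals(variables('LoopsVar'), 2)
        piece = csv_chunk                      # keep header + rows
    else:
        # join(skip(split(...), 1), ...) -> drop the first (header) line
        piece = NEWLINE.join(csv_chunk.split(NEWLINE)[1:])
    return accumulated + piece

csv1 = "Name,Amount\nAlice,10\nBob,20"
csv2 = "Name,Amount\nCarol,30"
out = append_chunk("", csv1, 2)
out = append_chunk(out + "\n", csv2, 3)
# out now contains a single header row followed by all data rows
```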
For those wondering how to stop the headers from being printed every time the loop runs, you can do this: 1. Initialize the AppendCSV variable with your headers in CSV format followed by a newline character (instead of initializing it blank). 2. When creating the CSV, use 'advanced' mode and set the values one by one to be item()?['ColumnHeader'] (leave the header blank).
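The two steps above can be sketched like this - the header text, column names and row values are invented for illustration:

```python
# Sketch of the approach above: seed the accumulator with the header line
# once, then append header-less rows on every loop iteration.
header = "FirstName,Amount\n"      # AppendCSV initialized with headers + newline
append_csv = header

def rows_to_csv(rows):
    """Build header-less CSV lines, like 'Create CSV table' with a blank header."""
    return "".join(f"{r['FirstName']},{r['Amount']}\n" for r in rows)

# two loop iterations' worth of data
append_csv += rows_to_csv([{"FirstName": "Alice", "Amount": 10}])
append_csv += rows_to_csv([{"FirstName": "Bob", "Amount": 20}])
# append_csv holds one header row followed by all data rows
```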
Can you show me an input example for the AppendCsv variable? I'm trying to do this, but it's not recognizing it as a row. I think I'm writing the notation incorrectly.
For nulls and keeping column order: use a Select action to map the column names to the data values - this takes care of the nulls and the column order... I know - you have to map all the columns to the incoming Power BI values. This worked for me with over 100 columns. @Curbal thank you for this tutorial!!!!
@@chirschmann3 The Select action in Power Automate can be used to map column names to data values. In the "From" input use the "firstTableRows" output from your "Run Power BI Query" action. Then in the "Map" parameter, which is displayed as a table, type the column names you want in your CSV on the left-hand side, e.g. FirstName. On the right-hand side, mapped to FirstName, goes the result from the query, something like item()?['TableName[ColumnName]'] - you have to type this in as an expression. Then do the same for your other column names and the values you want them mapped to from the Power BI Run Query action. Power Automate doesn't automatically let you select an item from the firstTableRows output, so just make sure you use the expression.
This is an excellent solution for smaller data sets and really helped me get on the right path to making this workable, but I found that adding the Select action increases the flow's run time dramatically as the dataset grows. To get around this, instead of adding a Select action, I added placeholders for any null values in the Power BI table I am pulling from, so that no null values are pulled by the flow.

To do this I created a simple DAX table that pulls all the values from the table I want to export via Power Automate. In the DAX table calc, I then added code for each field to return the string "~EMPTY~" whenever a value is blank. Example for pulling the [ADDRESS] field from the main table into my DAX table: "ADDRESS", IF(ISBLANK([ADDRESS]), "~EMPTY~", [ADDRESS]). If the field type is not text - say a date or a number - you'll need to convert it to a text format like General Date so you don't end up with a type mismatch between the values that exist and the "~EMPTY~" text values you're getting for blanks: "DATE_ACTIVATE", IF(ISBLANK([DATE_ACTIVATE]), "~EMPTY~", FORMAT([DATE_ACTIVATE], "General Date")).

After doing this for all the fields, there will be no null values in your DAX table, so you can pull from that table into your flow and the column order will always be the same for each loop. Then when you pull the data from your CSV files later (in my case into Power BI), you can use the query editor to convert all "~EMPTY~" values back to blanks before assigning column types like Date, Whole Number, etc.
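The import-side cleanup described in the last step can be sketched in Python with the standard csv module (the "~EMPTY~" placeholder and column names come from the comment; the sample data is made up):

```python
# Sketch of the placeholder round-trip: read the exported CSV and convert
# "~EMPTY~" values back to blanks, as the comment does in the query editor.
import csv
import io

PLACEHOLDER = "~EMPTY~"

raw = "ADDRESS,DATE_ACTIVATE\n~EMPTY~,1/1/2024\n10 Main St,~EMPTY~\n"

def restore_blanks(csv_text):
    """Parse CSV text and replace placeholder cells with empty strings."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return [{k: ("" if v == PLACEHOLDER else v) for k, v in row.items()}
            for row in rows]

cleaned = restore_blanks(raw)
```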
@@CurbalEN I just have one observation. I noticed that some of the fields are missing - around 10% of the original data. Is there something I can do to overcome this obstacle?
This works really well! The only thing I've noticed is that it includes the headers in every loop it does. However, I'm sure a little step to remove that should be easy.
@@CurbalEN Hello Curbal, I also got the same issue where the column names repeat multiple times. I couldn't find a good solution to that. Would you mind helping find a solution, if possible?
Excellent video - quick question: what do you do if you don't have, or cannot create, an index column? I've recreated the report from a PBI dataset (the data does not belong to me) and I don't have a uniquely identifiable column, as the data changes daily - let's say my data has multiple claims (sometimes duplicate claim numbers) and duplicate dates. Any insight is appreciated.
@@andylewis2329 What I did was create an array of months and break the file up into 12 files, making sure each file did not exceed 100,000 rows or 1 million values. Then I applied the loop for each of these months. Hope this helps.
This was amazingly simple to follow and, as you suggested, it requires precision. I had one error that took me days to find, until I just copied all of your code. Thank you! Subscribed.
@@CurbalEN I see that you enjoy it when your subscribers ask follow-up questions. My end users are in different countries and are only sanctioned to view their own work. I was able to apply a filter so that I can generate a CSV for each territory. Is there any way to generate multiple files (based on a Territory filter) from a single flow?
Was able to do 186K rows with 17 columns - just had to increase the do until limit. The repeated column headers aren't ideal, but not a real issue, as I am only using this to create a repository of historical data.
This is extremely useful and helped me resolve my issue! But am I the only one getting the notification below, with the workflow stuck at running, when exporting more than 600K rows? "Your flow is consuming an excessive amount of data and reached 80% of the content throughput limit. If the usage grows, further actions may be throttled or slowed down."
Hi. This is a great video that you've posted. I learned quite a bit by following it step by step. It does exactly what I wanted to achieve. Works like a charm. Thank you very much for sharing your knowledge. This is super helpful. Kudos to you!!!
Great video as usual Curbal, thank you! BTW, the reason I may use this is not because I ran into a row limit when running a query against a dataset. Instead, it appears to be an "amount of data" limit. If I put fewer columns in the query, I get more rows. I'm not saying there isn't a 100K limit on rows, but with a decent number of columns, I couldn't get more than a few thousand back. I zapped all but two columns, and I got all 70K. Short version: at least with the Power BI run a query step, I wonder if it is more of an amount-of-data issue than a row issue. But your video is extremely helpful, regardless!
Hi Curbal, thank you for this video, which helped me automate my download process. I would like to know if it's possible to export more than 100K rows without an index column, as I am using a live dataset.
This is exactly the scenario I have in the company I work for: I need to export millions of lines per day to CSV, but I need to insert start-date and end-date parameters. How could I do that in this case - by defining (filtering) the time period in the Power BI matrix or table and then starting the Power Automate task, or by defining the time period the same way described above and handling it in variables? I would really appreciate your help! Thanks for sharing your knowledge with us!
Hi! I see you are using your dataset as your report. In my case I have a dataset that feeds different reports, and I want to do this from a specific one, how can I do it that way? Because the dataset has a lot of tables inside it. Thanks! great video!
Great video - very helpful. Can you please make the same video with slicers/drop-down filters for the data? Or provide some tips on how to achieve the same with drop-down filters.
Thanks Curbal for this great series! One question: did you try to get those DAX queries automatically from a report? (Sometimes the query is not predefined and must be captured with dynamic content and filters.)
@@CurbalEN I got some clues on that, but most are complex, so I'm curious if someone has done it before. (Obviously Microsoft has done it in PBI Desktop, but I'm not sure how.)
Thanks a lot for the video. It was really helpful, and I could replicate it on my end. Just one question - can we call the same Power Automate flow from Power BI, i.e. on the click of a button? I tried it but could only proceed with a few steps; later it was not showing the expression control. Is there another way to call the flow?
Fab video and concisely done. As said, you need to be exact with the scripting, but once it's up and running it's exactly what's needed. Shame it wasn't for Excel rather than CSV - can this same process be done for Excel, and is there much deviation from this flow? Very helpful, and I can't praise enough those who take the time to create these videos and share their skills and knowledge. Massively appreciated. Happy days.
@@CurbalEN hello, I'm trying to export data from my Power BI dataset to a CSV file on SharePoint. My dataset contains over 1 million rows, and I've tried adding an 'index_' column using RANKX based on an ID column. However, when I export the data to a CSV file, I only get 2000 rows in the output. I need help with this issue. Can you please provide some guidance on how to export the full dataset to a CSV file on SharePoint? Thank you.
Hi, your videos are so useful - I've learnt so much :) One thing I cannot seem to solve is the repeating column headers. I can't see a solution in the comments or the Power Automate community. Could you help with this, Curbal?
Good job, thanks for the video. In this flow the data is appended with headers each time; could you please help me figure out how to remove the headers from the appended data? Thanks.
Great video, this will help me a lot. At the end where you combine into 1 csv, it looks like there are 4 extra rows added. The index is 102158 and the row number is 102163. Could it be the 4 joins are repeating a row?
Hello! Thanks for the videos! Is there any way to create a button on a dashboard that extracts all the values that apply to the visuals being shown? Thank you very much!
Thanks! From what I've seen, it starts from the other video (the 100K-row extraction) - am I wrong? I mean, some code was already there and not explained, so I'm guessing it is.
Hi Ruth - I've found this extremely helpful. Can you please tell me whether it's possible to have a button that exports to the current user's (report viewer's) My Documents folder OR their own OneDrive? Power Automate seems to only allow exporting to the report creator's.
When I do the loop, it appends the data in a different order - e.g. one loop appends columns ABC and another loop uses the order BCD... Is there any solution for this?
This is awesome. Thank you. I tried this and it works; however, I noticed that the 3rd loop (rows 50,000-75,000) messed up my data. For example, my [account] column has switched values with my [type] column. It only happens on the 3rd loop; by the 4th loop (rows 75,000-100,000) the data is fixed. Not sure what happened. I have 123K rows in total.
Hi Curbal, your video is really helpful. I need to implement it for my client, but I'm having an issue with a measure. Could you please share the pbix file? I want to see how you implemented all the DAX measures used in this video.
Hi ma'am, first of all I would like to thank you for all your knowledge sharing and videos. Following this video I am able to process 150K+ records, but I am facing an issue with 850K+ records in the production environment. The requirement is that end users want all the data in a single CSV file. Issue: following the steps above, before creating the file we append to a string variable, and there I hit the string size limit with the error below: The variable 'appendCSV' has size of more than '104937539' bytes. This exceeded the maximum size '104857600' allowed. Can you please suggest what I should do to fix this? Thanks in advance.
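One common way around a hard size cap like this is to flush the accumulator to a new file whenever it would exceed the limit, instead of building one giant string (the single-file requirement would then need a final merge outside the flow). A rough sketch of the idea - not from the video; the limit value is taken from the error message, everything else is illustrative:

```python
# Group CSV chunks into files so no single accumulator exceeds the size cap.
# String length is used as a stand-in for byte size, good enough for a sketch.
MAX_BYTES = 104_857_600  # limit from the error message above

def plan_files(chunks, max_bytes=MAX_BYTES):
    """Return a list of file contents, each under max_bytes."""
    files, current = [], ""
    for chunk in chunks:
        if current and len(current) + len(chunk) > max_bytes:
            files.append(current)   # flush before overflowing
            current = ""
        current += chunk
    if current:
        files.append(current)
    return files

# three 60-char chunks with a 100-char cap -> three separate files
parts = plan_files(["a" * 60, "b" * 60, "c" * 60], max_bytes=100)
```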
HELP!!!! First of all, this has resolved a significant chunk of my needs, so thanks a lot. I've completed all the steps and I get the combined data into a single CSV file. However, I have a dataset where many columns may have blanks, and since Power Automate is not too kind with blanks, I noticed that when a loop begins with a row where any column is blank, those columns get skipped and hence pushed to the right as the last columns in the output CSV.

Does anyone know how to avoid this? My initial thought is to replace blank values as one of the steps in the flow. I tried to read up on how Power Automate can replace blank values, but all the tutorials were about replacing the value for a single column; in my case almost every column may have blanks, so I need the replace expression to apply to every column. As a newbie in Power Automate, I can't seem to get that expression right.

Another option I considered was replacing the blank values in the dataset itself, but my data is connected via DirectQuery, so transformation is limited. If anyone knows the fix, I'd appreciate it if you could share - and please bear with me, I'm still learning. :)
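The fix being asked for - forcing a fixed column order and filling blanks for missing columns before each CSV chunk is built - can be sketched like this (the column names are hypothetical):

```python
# Normalize rows so every column is present, in a fixed order, with blanks
# filled in - the equivalent of what a Select action with explicit mappings
# achieves in Power Automate.
COLUMNS = ["Name", "Address", "Type"]

def normalize(rows, columns=COLUMNS, blank=""):
    """Return rows with every column present, in a fixed order."""
    return [{c: row.get(c, blank) for c in columns} for row in rows]

fixed = normalize([
    {"Name": "Alice", "Type": "A"},                 # Address missing
    {"Name": "Bob", "Address": "x", "Type": "B"},
])
# every row now has all three keys, in the same order
```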
Amazing. What if I don't know how many loops I need to do, in case we have a real-time dataset? Can you do a video where the trigger is a Power BI button and we need to get data from Power BI into an Excel report (not a table - e.g. an invoice template)?
You could use empty(PowerBI data), which returns true or false, then use that as your do-until condition so it keeps running until there's no data to return. But be careful: if you get the code wrong, the do until will run forever! If you click the 3 dots by the do until and open its settings, you can set the maximum number of times the do until runs. It defaults to 60.
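That do-until pattern - fetch until a page comes back empty, with a hard cap as a safety net - looks roughly like this in Python (fetch_page stands in for the Power BI query action; the default cap mirrors the 60-run default mentioned above):

```python
# Fetch pages until one comes back empty, with a maximum-run safety cap so
# a buggy condition can't loop forever.
def export_all(fetch_page, max_runs=60):
    """Collect rows from successive pages until a page is empty or the cap hits."""
    rows, run = [], 0
    while run < max_runs:
        page = fetch_page(run)
        if not page:            # empty(...) is true -> stop looping
            break
        rows.extend(page)
        run += 1
    return rows

# simulated data source: two non-empty pages, then an empty one
pages = [[1, 2], [3], []]
result = export_all(lambda i: pages[i] if i < len(pages) else [])
```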
Hello Curbal! Thank you once again for this great content. I have done this process but then I have come with one small question: every filter that is applied to the matrix/table on the report is ignored when generating the .csv file. It means that the export process ignores the filters applied to the report. Is there any way to solve it? Thanks in advance!
The column headers repeat for each loop, and due to nulls the column arrangement changes and some columns get missed - but that shouldn't happen if the data contains no null records.
Yes, that's not a major issue; I skipped those columns for now. At the moment I'm trying to get the repeated header rows deleted. Maybe filtering out the non-numeric values from the index column is easy to achieve.
I have an index, but my data is filtered and I can't use the full dataset for some DAX expressions I created. My final table has an index, but it's not complete because of the filters - it's like 2, 3, 5, 7, 8, 12, 16. Can I run this code without problems using variables like you did?
Hi, what can I do if I don't have an index column? I have orders and I want to export the last 80 days periodically. I was thinking of looping through dates, but it's possible that a single day exceeds the max rows allowed per batch :/
This is a great video. However, when I tried it, for some reason the column structure is not the same in each batch (loop), which results in data that cannot be used.
If you use custom columns for the Create CSV Table action this can be worked around. Not ideal if the dataset structure is likely to change but good for some applications.
Hi! Is it possible to have a dynamic folder path when creating a file in SharePoint/OneDrive? Let's say I'm going to create a new folder every month (naming it after the latest month). Can I make my flow use a dynamic folder path that picks the folder to save in based on the current month?
Hi, I'm curious as to where some of the rows have gone. The index for the last row is 102158, whereas the last row number is 102163. Even if you take away 1 for the header row, you get 102162, which still doesn't match the index.
Hi @CurbalEN I got the flow working but I can't seem to replicate it for the Excel version download (XLSX)? The export automatically turns my dates to DateTime format, and it doesn't allow re-formatting in the CSV when I try to change it to a Short Date. Is there a way to replicate this exact flow but as an Excel file? Am I over complicating it? Thank you again for everything Curbal!
It sounds great. However, I don't know why I can't replicate your result. What I want to do is export it by month (from Jan to Dec) and combine it into one CSV. Any hints?
I am able to get the data, but it's not respecting the RLS rules - it's overriding them and exporting all rows in the table. Can you please help me with what additional steps are needed to achieve this? Thank you very much.
How do I determine if there is a syntax error in the query code? Also, is it possible to accept user inputs to pass into the filter criteria in the query? For example, to prompt for the start and end of a date range...
@@CurbalEN I found it. I do have a question though. I'm trying to use a min() expression to pull out the min of a field... min([SHIPPINGDATEREQUESTED])... but the editor refuses to accept that as valid input. If I click on "Update" it does nothing, yet if I replace the field name with a constant it accepts it... but WHY??? I do not understand how the syntax is supposed to work. Thank you.
It works fine, BUT unfortunately I can't trigger it from a Power BI button. Do you have suggestions? If I use Export CSV, the API creates a CSV with only 3,000 rows out of 8,700 in total.
I've seen several comments from readers that they don't know how to apply this logic if you don't have an 'INDEX' column available. In your response I keep reading "You don't need the id to export the rows, I used it only for checking". But with code 4, a filter is applied to this 'INDEX' column at the first VAR. How should this VAR be adjusted if you don't have an 'INDEX' column?
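For readers without an INDEX column, the windowing that the DAX filter in "code 4" performs can be illustrated with a synthetic running index - though in DAX you would still need a stable sort order for the numbering to be reliable. A purely illustrative sketch:

```python
# Simulate the INDEX-window filter from the DAX query: number the rows on
# the fly and keep only those whose index falls inside the current batch.
def window(rows, start, size):
    """Return rows whose synthetic 1-based index falls in [start, start + size)."""
    return [r for i, r in enumerate(rows, start=1)
            if start <= i < start + size]

data = list(range(100, 110))       # 10 rows, no index column of their own
batch1 = window(data, 1, 4)        # first loop:  rows 1-4
batch2 = window(data, 5, 4)        # second loop: rows 5-8
```

The caveat is that without a deterministic ordering, the same row could land in two batches (or in none) across loops, which is exactly why the video adds an explicit index.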
@@CurbalEN I am using a calculated table, and it has no INDEX column. I was already looking on the internet to see if I can add an INDEX column, but I didn't find a solution. But maybe you have a 'trick' to add an INDEX column to a calculated table (and not afterwards, by adding a calculated column to the calculated table).
Hi Bas, did you have any joy with this at all? I'm in a similar predicament. All that happens when I remove the filter is that the same data is populated into each file.
Hi @@ericholdforth4981 I solved it quickly for now by creating the query table in the Power Query editor, applying the filtering there and adding an index. But now duplicate data is unnecessarily being read into my dataset, so this alternative isn't desirable. I'm still looking for a way to add an index column in DAX when creating a new calculated table via DAX.
Curbal, how are you? I'm coming back to ask for help. I was able to perform all the steps perfectly - it worked 100%, and you can even see the row results in the loop. However, when I attach the CSV, the file contains only 3,720 lines out of a total of 171,000. I have approximately 57 columns in the table.
@@CurbalEN Hello Curbal, how are you? I found an alternative to fetch all the lines, but now what stops me is the error: "The request is larger than 94371840 bytes." I think that unfortunately there is no way around it, right?
Great video. I was wondering if I can make a table with merged cells. Imagine we want a report for every sale we got this month, with the name of the customer, the product and a unique ID. If a customer buys 10 things, I would like to see the customer name in one merged cell instead of seeing it 10 times - that would make it easier to read.
Ciao, is it possible that you need to add 1 because you missed a range in the steps? The step between 50001 and 75000 is missing, because you went directly from 50001 to 100000.
The problem is that Power BI Run Query will exclude a column if the entire column is null or blank; appending the array above will then skip that column, and in some scenarios this creates misaligned columns. I have tested this and get the above error in my data. Do you have any suggestions?
@@CurbalEN I remember Patrick confirmed once that this is possible, but I never tried it out. However, it would also be worth exploring the Power Automate action for paginated reports, though I do believe it will have some limitations too :)
@@CurbalEN In theory it should have a time limitation mentioned in the documentation, yet it still ran for me. I tried it out with a 15-column, 290K-record export and it worked. You can also loop through a parameter if you have one, and you can even simulate RLS. Seems pretty powerful... Microsoft just needs to remember that pagination now exists in Pro too and update the Power Automate action accordingly :)
How do I export data from different tables in Power BI into one Excel/CSV file using Power Automate at the same time? E.g. I have 4 tables whose data needs to be exported to one Excel file with different sheets using Power Automate.
Hello Curbal, I followed all the steps in the video, but when I run my flow I get an error saying: Power BI query error "the query referenced calculated column XXX which does not hold any data because evaluation of one of the rows caused an error". I am exporting over 700K rows. Any help will be appreciated. Thanks.
It doesn't create the CSV file in the right format; it just takes the string variable and puts it into the file as-is, with all the headings repeated hundreds of times. Is there another way to append to CSV?