Hey there, I'm Greg, a certified Azure Solutions Architect and Data and Security Engineer. Welcome to my video blog on cloud architecture, services, apps, and databases! Here I try to pick out subjects that I think are important, as well as techniques you may never have thought about. The cloud has given everyone greater opportunities than ever before, and it is an exciting time to be working in all areas of Information Technology.
I hope the information in these videos helps you. Sincerely, Greg
Good video... until you broke the bad news that I can't use indexed views on Standard edition, since we have no control over the T-SQL code and so can't add the (NOEXPAND) hint. 😕 Some say it doesn't work well in Enterprise edition without (NOEXPAND) either. Back to the drawing board for me. Maybe filtered indexes are my next best bet for performance.
I know I'm 5 years late to the party, but thank you so much for this. I went through exactly the same thought process trying to delete rows from a round-robin distributed table in Azure Synapse. I was beginning to think I had forgotten how to write delete commands because I had spent so long mainly just querying databases in Synapse. In the end, I worked out this exact solution - after a good couple of frustrating hours - but I am glad to know that I am not stupid and it is a real issue. Deleting duplicate rows can be done using a data flow in Data Factory, ironically, using a window function.
Thanks, yes, I remember this problem was so messy that I felt I needed to make a video (just so I would not have to go through that pain again). Really glad it helped you!!
@@CoopmanGreg It certainly did. I'm like you in that I, too, like concise, elegant solutions and this doesn't "feel" right even though it works. Also, it would be nice if Synapse enforced uniqueness in indices. I get why it doesn't - the data should be in perfect shape by the time it gets to that point, but it isn't always for a variety of reasons.
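For anyone landing on this thread with the same problem: a common workaround in Synapse dedicated SQL pools (where you can't delete via a CTE with a window function) is to rebuild the table with CTAS, keeping one row per key via ROW_NUMBER(), then swap the tables. This is only a sketch of that pattern, not the video's exact code; the table and column names below are made up.

```python
def build_dedup_ctas(table: str, key_cols: list, order_col: str) -> str:
    """Build a CTAS statement that keeps one row per key.

    Pattern: CTAS into a helper table keeping only row_num = 1,
    then swap the tables with RENAME OBJECT.
    """
    partition = ", ".join(key_cols)
    return (
        f"CREATE TABLE {table}_dedup\n"
        f"WITH (DISTRIBUTION = ROUND_ROBIN)\n"
        f"AS\n"
        f"SELECT * FROM (\n"
        f"    SELECT *, ROW_NUMBER() OVER (\n"
        f"        PARTITION BY {partition} ORDER BY {order_col} DESC) AS row_num\n"
        f"    FROM {table}\n"
        f") t WHERE row_num = 1;"
    )

# Hypothetical table and keys, just to show the generated T-SQL:
sql = build_dedup_ctas("dbo.Sales", ["OrderId", "LineId"], "LoadDate")
print(sql)
```

After the CTAS you would rename the tables and drop the helper row_num column (or list the columns explicitly in the inner SELECT instead of `*` so row_num never lands in the new table).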
Hello Greg, I have one question: can we make the connection string for the container dynamic? I'm working on a requirement where the storage account has multiple containers, and these container names have to be passed from the ADF pipeline to Power BI in order to read multiple files from the container subfolders. Is it possible to collect container names dynamically from ADF into Power BI?
When I am using the above API call, it is giving me partial results. Suppose 20 jobs ran; it gives me results for only 10 of them and no details about the other jobs. For example: job A, job B, and job C ran, but it returns results for job A and job C and skips job B.
You will need to investigate the page_token, expand_tasks, start_time_from, and start_time_to parameters in the API documentation. Those need to be configured in the payload and then used in a loop to get a page at a time. Hope this helps.
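As a rough illustration of that loop (the endpoint and field names follow the Jobs API 2.1 docs as I understand them; the workspace URL and token below are placeholders you would supply):

```python
import json
import urllib.parse
import urllib.request

def list_all_runs(fetch_page):
    """Collect every run by following next_page_token until the API
    reports no more pages. fetch_page(token) must return one response
    dict shaped like the Jobs API runs/list reply."""
    runs, token = [], None
    while True:
        page = fetch_page(token)
        runs.extend(page.get("runs", []))
        token = page.get("next_page_token")
        if not page.get("has_more") or not token:
            break
    return runs

def databricks_fetch(host, pat, token=None):
    """One real call to /api/2.1/jobs/runs/list (check the docs for the
    current max page size; 25 is used here as an assumption)."""
    params = {"limit": 25, "expand_tasks": "true"}
    if token:
        params["page_token"] = token
    url = f"{host}/api/2.1/jobs/runs/list?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {pat}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example wiring (placeholder host/PAT, commented out so it won't run):
# runs = list_all_runs(
#     lambda tok: databricks_fetch("https://adb-123.azuredatabricks.net", "<pat>", tok))
```

Separating the pagination loop from the HTTP call also makes the loop easy to test without hitting a real workspace.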
It's funny, I use Snagit almost every day and forgot about this particular technique. Thanks for your comment; it made me come back to watch this video, and hopefully the technique I figured out 4 years ago when I made the video "sticks" in my mind, because it can be very useful at times.
Let's say your company has 4000 jobs and you are assigned to find all the jobs that do not have tags set up for some category, but almost all of the jobs already have the tags populated. And you have to do this monthly. Well, if you had to open every one of the 4000 jobs and try to eyeball it, it would take you days. By using (automating with) the API, you can program the API output to be put into a database, and you can then easily query in seconds to find all the jobs that are missing the category value. There are hundreds of Databricks APIs from which you can get information. But, of course, if you have a small number of objects, you would be wasting your time using APIs for automation (because it takes a lot of time and programming skill). I hope this gives you an idea of how APIs may be used. Thanks for watching.
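A tiny sketch of the filtering step described above, once the job list has been pulled down. The job-object shape follows the /api/2.1/jobs/list response as I understand it, and the "Category" tag key is just an example:

```python
def jobs_missing_tag(jobs, category_key):
    """Return (job_id, name) for every job whose settings.tags lacks
    category_key. `jobs` is the list of job objects from jobs/list."""
    missing = []
    for job in jobs:
        settings = job.get("settings", {})
        tags = settings.get("tags") or {}   # tags may be absent entirely
        if category_key not in tags:
            missing.append((job.get("job_id"), settings.get("name")))
    return missing

# Hypothetical sample of what jobs/list might return:
sample = [
    {"job_id": 1, "settings": {"name": "etl_a", "tags": {"Category": "finance"}}},
    {"job_id": 2, "settings": {"name": "etl_b", "tags": {}}},
    {"job_id": 3, "settings": {"name": "etl_c"}},
]
print(jobs_missing_tag(sample, "Category"))
```

In practice you would page through jobs/list the same way as runs/list, then either query the result in memory like this or load it into a database table as the comment describes.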
Glad it's helping others; God knows I get a ton of learning from other YouTube videos too. I would hardly know how to solve any of these problems without help from YouTube.
I think I understand your question, and I do not think you can manually change the data in the columns. What you could do is add a new column that refers to the column you want changed, and use an additional DAX expression in the new column to modify it. You could also hold the Shift key down and select all the columns, then Ctrl+C, choose the menu selection "Enter Data", and paste into a new table. Before saving the new table, go to the cells you want to edit and voila.
My concern is that it worked perfectly within the Azure Storage Explorer emulator. When I gave the SAS to an actual person and they used it, I got an authentication failed error as follows: "Server failed to authenticate the request. Make sure the value of the Authorization header is formed correctly, including the signature. Signature did not match. String to sign used was cl 2023-03-10T03:20:48Z 2023-03-17T02:20:48Z..." I removed sensitive info. Any idea how to address this?
I know that error and have experienced it myself many times in the past. It was not a hard fix, but right now I cannot remember what I did to get around it, and I am not able to test it right now either. Maybe the IP address she is using is not whitelisted on the storage account. I will try to get back to you if you don't get the answer first, but it might be a while. Sorry.
@@CoopmanGreg Thank you for your response, Greg. This is making me learn IAM and RBAC. I just assigned myself role access in AD 2 minutes ago. I am trying to access it as we speak.
The first time I heard about Indexed Views I was like, why have I never heard of this functionality? Your video just left me in awe over the possibilities. Thank you for taking the time to explain this so thoroughly.
@@CoopmanGreg I have these text files that come in every day, and I need to import them into a specific table that is already created in the database (I got that part figured out). Now I need to know how to automate the process so that I don't have to do it manually every day. (Right now I have to change the file location in the SQL query every day to get to the new file. I also have to find and replace the " quotation marks because they don't want those in the table.) So, I need to figure out how to ignore the quotation marks when importing to my table, and how to do that daily by scheduling some kind of job. I am using Microsoft SQL Server Management Studio. Thanks a lot for all your help.
Hi Solomon, I think I understand your problem, and it is the problem that is addressed in the video. Talend is a good tool to automate what you are trying to do; you will not be able to automate it in SQL Server Management Studio without a different approach that would probably be harder. Anyway, all I can say is to meticulously make sure that you follow the video perfectly (even create the same example I did, just to get it working). If you still cannot get it to work, start a fresh new package and try again. @Vid Podobnik above had some other suggestions that you can also try. Something might not be quite right with Talend in this regard, but after a lot of effort and retrying we did get it to work. It was painful, though, and that was why I made the video to show others how we finally got it to work. Good luck!
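If Talend turns out not to be an option, one possible sketch of the daily cleanup step in plain Python (file names and folder paths below are hypothetical; the csv module removes the quoting on read, so nothing has to be found-and-replaced by hand):

```python
import csv
import datetime
import pathlib

def clean_file(src: pathlib.Path, dst: pathlib.Path) -> int:
    """Read a quoted delimited file and rewrite it without the quotation
    marks, ready for a plain import; returns the row count."""
    rows = 0
    with src.open(newline="") as fin, dst.open("w", newline="") as fout:
        reader = csv.reader(fin)  # parses and strips the "..." quoting
        writer = csv.writer(fout, quoting=csv.QUOTE_NONE,
                            escapechar="\\", lineterminator="\n")
        for row in reader:
            writer.writerow(row)
            rows += 1
    return rows

# Hypothetical daily layout: C:\incoming\extract_YYYYMMDD.txt
today = datetime.date.today().strftime("%Y%m%d")
# src = pathlib.Path(r"C:\incoming") / f"extract_{today}.txt"
# clean_file(src, src.with_name(f"clean_{today}.txt"))
```

You could then schedule the script with Windows Task Scheduler or a SQL Server Agent job and have the load step import the cleaned file, so nothing in the SQL query has to change day to day.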
Thanks for this! I use CTE all the time in Postgres but found this issue in Redshift. I saw WITH NO SCHEMA BINDING option while skimming through Redshift developer guide, but I didn't connect the two. D'oh!
Hands down best tutorial on this topic. I searched all afternoon and finally bumped into this video. I'm a beginner with barely any C# skills and it worked. Thanks!
Thank you for showing me Window > Show View. Of all the other videos on this topic, no one explained how the Contexts(myJobName) tab showed up. I like your naming standards best practice.
Jordan, you might have to write a Python script to pull your files from the storage account down to your local drive. I have not actually done that, but it might work. Sorry, I don't know of a utility that does that, but it seems there should be something out there.
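A minimal sketch of that idea using only the Python standard library, assuming you already have a SAS token with Read permission on the blob (the account, container, and file names here are placeholders):

```python
import pathlib
import urllib.parse
import urllib.request

def blob_url(account: str, container: str, blob_name: str, sas: str) -> str:
    """Compose the blob endpoint URL; `sas` is the token string from the
    portal, without the leading '?'."""
    path = "/".join(urllib.parse.quote(p) for p in (container, blob_name))
    return f"https://{account}.blob.core.windows.net/{path}?{sas}"

def download_blob(url: str, dest: pathlib.Path) -> None:
    """Download one blob to a local file. Listing a whole container
    would need the List permission (or the azure-storage-blob SDK)."""
    with urllib.request.urlopen(url) as resp, dest.open("wb") as f:
        f.write(resp.read())

# Placeholder values, commented out so nothing is fetched here:
# url = blob_url("myacct", "files", "reports/jan.csv", "sv=...&sig=...")
# download_blob(url, pathlib.Path("jan.csv"))
```

For bulk copies, Microsoft's azcopy command-line tool can also pull a whole container down in one command, which may be simpler than scripting it.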
Nice video, very clear. A question I have is how is this done for multiple users to access the same VNet? Do we need a different Cert name for each end user when creating the certs in PowerShell? I see we can add additional certs in Point-to-Site, but I am not sure if the certs can share the same name for each user.
This was a great video! Everything was super clear and it allowed me to connect super easily. My remaining problem is that I can't connect with my FQDN, only my private IP. The software I'm deploying needs to connect via the FQDN, or the actual name of the server instead of the IP. At minute 25:35, you highlight the issue. How can I get it to work with the name of the server?
In Power BI there is a Parquet connector under File in the Get Data option. It gives you a URL to point to. That is as far as I can point you on that matter, though. Thanks.