Businesses today must be able to move quickly and confidently from data to decisions to actions in order to stay ahead of the competition.
Today, many organizations still struggle to answer even the basic question of 'what happened' with confidence. We build capabilities that enable businesses to mature from understanding 'why something happened' to anticipating 'what may happen' and, eventually, to recommending 'what should be done'.
Ensuring you have effective and relevant analytical solutions is an investment in your success. We can help you attain this goal.
We offer a full suite of analytics services to ensure you have the capabilities you need, when you need them, so that you can focus on your clients and your business.
Thanks for the amazing video. Any idea how to extract the differences highlighted in red and green into an Excel file in SharePoint? I think it's possible with YAML code, but I'm looking for other approaches.
This is a great tutorial! I did find myself struggling for a bit, because my X value was a measure, while my Y value existed in a separate data table. I had to create a calculated table (my data set is relatively small in this regression analysis) with a DAX formula like:

LR Table =
SUMMARIZE(
    'vw_PropertyDesignations',
    'vw_PropertyDesignations'[PropertyKey],
    "Xvalue", [ServiceRequests_AVG_Days_To_Complete_This_Month],
    "YValue", AVERAGE('vw_PropertyDesignations'[UnitCount])
)

In my scenario, [PropertyKey] does not require an explicitly declared aggregation function because SUMMARIZE() already groups by it, and for [UnitCount], which is static, I was able to aggregate with AVERAGE. I also had to create the measures and calculated columns in this table so they could be referenced within the same scope. Hope that helps if you find yourself in a similar situation! 😊
This is one of the best tutorials I've seen in a long time :) I did the formula in Excel and I was skeptical, to be honest, but everything worked exactly as you said :) Thanks!
Great video :) How would you include dummy variables in a similarly easy-to-use manner? Using the housing example, for instance, a tick-box for "Roof Terrace" that, if ticked, adds a simple summand to the prediction...
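(Not from the video — just a minimal sketch of the idea, with made-up numbers.) A tick-box enters the regression as a 0/1 dummy column, and its fitted coefficient is exactly the "simple summand" added whenever the box is ticked:

```python
import numpy as np

# Toy housing data (invented): size in sqm, roof terrace as a 0/1 dummy
size = np.array([50, 60, 70, 80, 90, 100], dtype=float)
terrace = np.array([0, 1, 0, 1, 0, 1], dtype=float)
# Prices generated from: 1000*size + 20000*terrace + 5000
price = 1000 * size + 20000 * terrace + 5000

# Design matrix: intercept column, size, and the dummy
X = np.column_stack([np.ones_like(size), size, terrace])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
intercept, slope_size, terrace_bonus = coef

# terrace_bonus is the summand added when the box is ticked
print(round(terrace_bonus))  # 20000
```

The same trick works in Excel's LINEST or the Data Analysis add-in: just add the 0/1 column as one more X range.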
Best source of information I have seen to describe the modern data architecture landscape and terminology. Thank you very much for putting this together.
No disrespect and with all sincerity 🙏, but it took me just 30 minutes to understand what a data mesh is trying to achieve. Zhamak's own presentation and her interviews on a couple of other channels clarified it further. I don't think she says it is possible with today's technology; in fact, she clearly says that some pieces do not exist today. But in principle her thinking is in the right direction: the domains/functions know their data better than anybody else, and too much centralization is not helping. The rest is all "how", which will push us to new innovation, imo. Thank you for this great video describing the evolution!
It is in fact possible with today's technology. Mesh is a concept, and it takes enterprise discipline to actually execute. It's platformization, but without organizational silos.
It's amazing. Something that takes us literally 30 seconds in Excel will now take us over 4 hours in this "new software", including time spent looking up YouTube tutorials.
Why in the world isn't this just a built-in feature? I mean... Excel has it, and I'd bet most statistical software has it too. Thanks for showing us a way to do this!
Great class, as always. I'm looking for a solution at this level of magic. I have a report that ranks customers. The challenge: dynamic customer segmentation based on criteria. We usually segment with just one or two measures to target customers, but I ran into a different need. Basically, this is a virtual relationship for each Customer Group (which may or may not aggregate one or more Customer_Id) based on certain criteria.

The first parameter is the Average Monthly Sales Value, which classifies customers as A, B, or C. Easy: a DAX measure compared against a "single" table with Lower Bound and Upper Bound rules for the created measure. The problem is classifying the customers by more criteria. Each criterion is calculated with a DAX measure to generate a weighted score, defined in additional tables with Low Limit and High Limit. The weights of the scores total 100. Example:
C1 - Mix of purchased products - Weight 10;
C2 - Mix of purchased SKUs - Weight 5;
C...;
C9 - % Gross Margin - Weight 15;
C20 - % Acquisition of products considered Launches in the period (1 year from the launch date) - Weight 5.
The SCORE measure is the SUM of the results obtained for each criterion.

Customer segmentation takes into account:
1 - On the Y axis, the Average Net Value of Orders in the selected period, which segments customers into A, B, and C;
2 - On the X axis, the score obtained from the sum of the criteria results, which splits them into >= 50 points (Clients 1) and < 50 points (Clients 2);
3 - The result identifies customers as A1, A2, B1, B2, C1, C2.

I created a physical table with the parameters to be compared, with LimInf and LimSup for "Monthly Sales Value" and "Score", but this calculation is very slow. Any tips to improve the performance of this segmentation? The analysis must take into account the selected period (Month/Year to Month/Year).
After that, I'd still like to be able to compare results across two different periods, but that will be easier once I manage to improve the performance of the current-period calculation. Thanks for any help, and thank you for your videos!
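For anyone following along, the segmentation described above boils down to two classifications combined into one label. A minimal Python sketch of the logic (the A/B/C thresholds and sample scores here are invented; in the real report they come from the limit tables):

```python
def value_class(avg_monthly_sales):
    # Y axis: A/B/C by Average Monthly Sales Value (invented bounds)
    if avg_monthly_sales >= 100_000:
        return "A"
    if avg_monthly_sales >= 20_000:
        return "B"
    return "C"

def score_class(total_score):
    # X axis: >= 50 points -> group 1, < 50 points -> group 2
    return "1" if total_score >= 50 else "2"

def segment(avg_monthly_sales, criterion_scores):
    # SCORE is the sum of the weighted criterion results (weights total 100)
    total = sum(criterion_scores)
    return value_class(avg_monthly_sales) + score_class(total)

print(segment(150_000, [10, 5, 15, 25]))  # A1 (score 55)
print(segment(10_000, [10, 5]))           # C2 (score 15)
```

The logic itself is cheap; in DAX the cost usually comes from evaluating many measures per customer per limit-table row, so materializing the per-customer scores once (as the commenter did with a physical table) is a reasonable direction.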
When I run the R script I get this error: "Error in setwd("~/Documents/R") : cannot change working directory". Can you please help me with this problem? Cheers, Jeff.
Ok, to be fair on the "qualified workforce" point... Microsoft changes the platform and its capabilities so often that the workforce cannot keep up... and depending on the industry, many capabilities may be released on a delay...
You skipped what could have been the best part of the video: reformatting the raw Forms data into a useful data model. That's actually the part I was looking forward to.
This may seem like a silly question, but I noticed you have histograms and other useful info at the top of each variable column. As an R user trying to learn Power BI (and wanting to pull my hair out at every step), I'm wondering how you enable this functionality — is it a plugin? Also, thanks for your helpful video!
What I don't get is why Microsoft makes it so hard to get the regression results. In Excel there are many tools, like the Data Analysis add-in, where you get the intercept, slope, and p-values with a few clicks. Why doesn't Microsoft just integrate that into Power BI?
Thanks for this video. Just curious: instead of going from MS Forms to SharePoint and then to Power BI, can't we just use the web version of Power BI with the streaming-data option? Is there any advantage to the SharePoint route? (Or maybe the Power BI streaming-data option wasn't available when this video was made?)
Why do you put fixed values instead of variables for the vector of regression coefficients? That would let the coefficients update automatically as the data updates. Good video!
Agree with you, Maximiliano!! One can extract the parameters that the regression produces and use those in the formula. I guess that is more of a Power Query exercise, and it would extend the video too much.
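The idea in this exchange — compute the coefficients from the current data instead of typing fixed numbers, so a refresh re-fits the line — is tool-agnostic. A minimal Python sketch with made-up data (not the video's Power Query approach):

```python
import numpy as np

# Made-up data; imagine this is the table that gets refreshed
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Re-fit on every refresh so the formula stays in sync with the data
slope, intercept = np.polyfit(x, y, 1)

def predict(new_x):
    # Uses the live coefficients, not hardcoded constants
    return slope * new_x + intercept

print(round(predict(6.0), 1))  # 12.0
```

If the data changes, `slope` and `intercept` change with it; hardcoding the numbers instead silently freezes the model at one snapshot of the data.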