Statistics Made Easy®. This channel is devoted to helping you use design of experiments (DOE) to succeed in making significant improvements to your products and processes. Software products referenced are Stat-Ease 360 and Design-Expert software, published by Stat-Ease, Inc. For more information, go to www.statease.com.
Hi. "The **Predicted R²** of 0.3685 is not as close to the **Adjusted R²** of 0.7063 as one might normally expect; i.e., the difference is more than 0.2. This may indicate a large block effect or a possible problem with your model and/or data. Things to consider are model reduction, response transformation, outliers, etc. All empirical models should be tested by doing confirmation runs." The quote above is from the software. The software states that if the difference is greater than 0.2, there is a problem with the data, etc. However, as you said, greater than 0.2 is better. I also found one journal article stating that greater is better. Can you explain this matter? Thank you so much.
Hi! It looks like there's some confusion here. What the software is checking is the *difference* between the Predicted and Adjusted R² values: you want them to agree with each other, so that the difference between them is less than 0.2. In other words, the Predicted R² in this example should be closer to the Adjusted R² value of 0.7063. This is not the same as looking at a single R² value, where you do want the value to simply be as high as possible.
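For concreteness, here is a rough sketch (hypothetical data, NumPy only, not the software's internal code) of how Adjusted R² and the PRESS-based Predicted R² are computed, and the "difference < 0.2" rule of thumb:

```python
import numpy as np

# Hypothetical data: simple one-factor linear fit
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, size=x.size)

X = np.column_stack([np.ones_like(x), x])   # design matrix: intercept + slope
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

n, p = X.shape
ss_tot = np.sum((y - y.mean()) ** 2)
ss_res = np.sum(resid ** 2)

r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (ss_res / (n - p)) / (ss_tot / (n - 1))

# PRESS: leave-one-out prediction errors, computed via the hat matrix
H = X @ np.linalg.inv(X.T @ X) @ X.T
press = np.sum((resid / (1 - np.diag(H))) ** 2)
pred_r2 = 1 - press / ss_tot

# Rule of thumb: adj_r2 - pred_r2 should be less than 0.2
print(round(adj_r2, 3), round(pred_r2, 3), round(adj_r2 - pred_r2, 3))
```

Because PRESS inflates each residual by 1/(1 - leverage), Predicted R² is always at most the ordinary R², which is why a big gap flags a model that fits its own data much better than it predicts new points.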
Thank you for the illustration. Really, outliers are a big problem! But I think problems can also come from multicollinearity, and those need to be addressed too.
Hi, if the means of my experimental values all fall within the prediction interval (PI) already, should I also do a two-sample t-test to see whether there is a significant difference between the experimental and predicted values? Thank you.
Yes, by seeing if the surface bisects the points and fits them within normal variation (does not exhibit a significant lack of fit). To see what I mean, open the tutorial data Chemical Conversion (Analyzed), go to the Model Graphs, 3D Surface and click through the Jump to run points. This is an example of RSM done right. : ) You can access the Stat-Ease tutorials at www.statease.com/docs/latest/tutorials
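On the t-test part of the question above: one common check is a one-sample t-test of the confirmation runs against the model's point prediction. A minimal sketch (the numbers are made up, and SciPy is assumed available):

```python
from scipy import stats

# Hypothetical confirmation runs vs. the model's predicted value
confirm_runs = [81.2, 79.8, 80.5, 82.1, 80.9]  # made-up observations
predicted = 80.6                                # made-up model prediction

t_stat, p_value = stats.ttest_1samp(confirm_runs, popmean=predicted)
# A large p-value means no evidence the runs differ from the prediction
print(round(t_stat, 3), round(p_value, 3))
```

Note that if the run means already sit inside the PI, this test will usually agree; the PI check is the one Stat-Ease recommends for confirmation.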
Hi, I am working on an experiment using Response Surface Methodology (design of experiments), and I use Stat-Ease software for it. During analysis the software shows that the CUBIC model for my data is aliased. However, the cubic model is significant (p < 0.05) and has an insignificant lack of fit (p > 0.05). Moreover, the R² value is also very good: 0.985. Can I use this model to predict optimized conditions, even though it is aliased, given that it is statistically significant? Please give me a quick response. Thanks!
Going any deeper into this is beyond the scope of an introductory webinar, but you're welcome to explore the other videos here on RU-vid or check out our eLearning on the Stat-Ease Academy: www.statease.com/training/academy
What does it mean when a coefficient of a component in optimal design model equation is negative while the response only takes positive value? My understanding is that the coefficient is the response value of the component when it is pure (100%), so it can't be negative.
We used some previously saved data for this tutorial. When you design your own experiments, you'll need to run the experiment and enter your observed responses.
This is an excellent lecture on a very important concept in DOE. I have 2 questions. 1. Adding additional model points can increase the desired FDS. What if there are practical constraints, like time or budget, that prevent adding additional model points? What trade-offs do we need to make to ensure we get the same precision? 2. What if I don't have any historical data on the standard deviation? Let's say the process is a new one. Which std dev should I use? Should a separate ANOVA study be conducted first, and then that std dev used?
If you do not want to add more runs, then finding ways to decrease the noise in the system, or increasing the acceptable "d" will both change the signal to noise ratio. A larger S/N ratio will increase the FDS calculation.
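To make the trade-off concrete, here is a simplified sketch of an FDS-style calculation for a hypothetical 2-factor face-centered design. The exact Stat-Ease cutoff involves a t-quantile; this sketch uses a simplified threshold of (d/s)/2 purely to show the direction of the effect, so treat it as an illustration, not the software's formula:

```python
import numpy as np

# Sketch: fraction of design space (FDS) with acceptable prediction error,
# for a 2-factor face-centered design and a main-effects + interaction model.
def model_matrix(pts):
    a, b = pts[:, 0], pts[:, 1]
    return np.column_stack([np.ones(len(pts)), a, b, a * b])

design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                   [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
X = model_matrix(design)
XtX_inv = np.linalg.inv(X.T @ X)

rng = np.random.default_rng(1)
space = rng.uniform(-1, 1, size=(5000, 2))       # random points in the region
# standard-error multiplier sqrt(x' (X'X)^-1 x) at each point
se_mult = np.sqrt(np.einsum('ij,jk,ik->i', space_M := model_matrix(space),
                            XtX_inv, space_M))

def fds(d, s):
    # simplified acceptability cutoff (d/s)/2; the real one uses a t-value
    return np.mean(se_mult <= (d / s) / 2)

# Bigger signal d (or smaller noise s) -> larger acceptable fraction
print(fds(d=1.0, s=1.0), fds(d=2.0, s=1.0))
```

The point of the sketch: with the design fixed, the only levers left are d (the difference you need to detect) and s (the noise), exactly as the reply says.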
The 3D graph for categorical factors is a set of bar graphs, with the height of each bar representing the predicted response value for that combination.
I'm going to have mixture components A, B, and C. A + B + C = 100%, and the ratio A/B must be maintained at 0.5. How do I set this constraint, since I can't enter the same value for both the lower and upper limits?
This imposes an extra equality constraint. The level of A is dependent on the level of B. Now there are only 2 components that vary, but all 3 are involved in the total 100%. Short answer - there is not an easy way to get this done.
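Outside the software, the arithmetic of that equality constraint is straightforward: with A/B = 0.5 and A + B + C = 100, choosing C determines both A and B. A small sketch (just arithmetic, not a Design-Expert feature):

```python
# With A/B fixed at 0.5 and A + B + C = 100, only C is free to vary:
# A = 0.5*B, so 1.5*B + C = 100  ->  B = (100 - C) / 1.5
def blend(c):
    b = (100.0 - c) / 1.5
    a = 0.5 * b
    return a, b, c

for c in [0.0, 10.0, 25.0]:
    a, b, c_ = blend(c)
    assert abs(a + b + c_ - 100.0) < 1e-9 and abs(a / b - 0.5) < 1e-9
    print(round(a, 2), round(b, 2), round(c_, 2))
```

This is why the problem effectively collapses to one degree of freedom, which is what makes it awkward to express as a standard mixture design with independent component ranges.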
Hi there! Can I use old data from somebody else, add responses to it, and have Design-Expert design another experiment for me, so that I run fewer experiments?
In theory yes, you can import the old data in the software and use Design Augmentation to add more runs to fit a higher order model. In practice, there are many questions starting with - is the process still running the same as it did previously? Are you measuring the same way? Was the previous data from a designed experiment or was it historical data from the system? This is really a practical experimentation question. You can try and it may be successful, or it may not work well and you would get better information from a new experiment.
Great question - this is a power/sample size calculation, based on the change in the response that you want to detect and the current standard deviation of the response. If you have a current license of Design-Expert or Stat-Ease 360, you are welcome to email more details to stathelp@statease.com. If you don't want to do the calculations, then often using a sample size of 4-5 per run, and then entering the average of those runs as the response, will give sufficient information.
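For a rough feel of that power calculation, here is a sketch using the standard normal-approximation formula for a two-sample comparison (SciPy assumed available; Design-Expert's power tool does a more exact calculation):

```python
from scipy import stats

# Rough two-sample sample-size calculation (normal approximation):
# n per group to detect a difference d with std dev s, at the given
# significance level (alpha) and power.
def n_per_group(d, s, alpha=0.05, power=0.80):
    z_a = stats.norm.ppf(1 - alpha / 2)   # two-sided critical value
    z_b = stats.norm.ppf(power)           # power quantile
    return 2 * ((z_a + z_b) * s / d) ** 2

# Detecting a 2-sigma shift needs only about 4 runs per group,
# consistent with the "4-5 per run" suggestion above
print(round(n_per_group(d=2.0, s=1.0), 1))
```

Shrinking the detectable difference d to 1 sigma pushes the requirement up to roughly 16 per group, which is why the signal-to-noise ratio dominates the sample-size answer.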
28:30 I would suggest using 'Strength' as the metric, not 'Strength Reduction', because the end customer is interested in strength. 34:10 It seems that component B is not adding value to the mixture from the perspective of the Strength response. 37:04 It seems that component B is also not adding value with respect to the Zeta Potential response. 57:00 Did you decide to leave the AB term in the model (despite a p-value > 0.10) because you felt uncomfortable telling the organization that component B can be removed (and that the company has been using it for no good reason)?
For strength reduction the BD interaction was highly significant. As mentioned, there were a number of other responses. B may have been involved in other interactions in those cases. Subject matter knowledge would always drive these decisions.
@@tafadzwankhoma1533 That is the calculated standard error after the experimenter inputs their "d" and "s". Continuing forward to 52:21, Martin is demonstrating changing these values, thus changing the signal/noise ratio. The blue horizontal line is the calculated value (the standard error). The values should be your desired precision, relative to the normal process variation. Refer to this video for more discussion: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-uY7OqM9awxE.html
Thank you for the recording. What is the relationship between Design-Expert and Stat-Ease 360? Can one use Stat-Ease 360 to completely design and analyze experimental data without Design-Expert, or is Stat-Ease 360 built on top of Design-Expert?