This FACETS video is clearly demonstrated. A very good example to follow step by step. Anyone who wants to learn Rasch should follow this series of videos.
Thank you so much Prof Vahid. If there are more detailed videos about interaction/bias analysis using Facets, I would really be interested in them! Thanks so much! - Iman
Dear Professor Vahid, I am really sorry to make this request, but I hope one day you can help us with a video on how to do stacking and racking using Rasch software. Thanks, Prof.
Dear Vahid, this is Anton, and thank you for your useful videos on Winsteps & Facets. If you don't mind, I have a couple of clarifications regarding my Facets specification. I am currently doing my PhD in the UK. My study focuses on test-takers' interactional competence (IC) in a paired oral test performance. My first question: can I use four facets instead of three (test-takers + raters + topic + performance)? Next question: each candidate has been assessed by two raters, and the average is the final IC score; in that case, can I include Examiners 1 & 2 in the data file? In terms of topics, 10 topics were used, but each pair gets only one; I hope that is okay. Finally, the 1-9 rating scale has four criteria, but I am taking only one (the IC score); can I then specify R9? Your expert advice will be a great help. I hope I am not taking up too much of your precious time. Thanks
Dear Professor Vahid, thank you so much for all of the wonderful videos. I am reading your 2017 paper and was wondering whether you have any videos explaining the Rasch measurement-based method for SEM. I'm attempting to learn more about the statistical analysis in your research. Thank you so much.
Just an update: Our comprehensive review of Rasch measurement in language assessment has been published: journals.sagepub.com/eprint/YXCQB3NYCICYVFQ4TSCG/full
Joan, dummy facets are simply facets that do not affect the outcome measures; they are anchored at zero and are typically used for interaction/bias analysis. Use a 'D' after the name of the facet in Facets' specification file. For example, if Facet 2, Gender, is a dummy facet, write: 2, Gender, D ... For anchoring variables, please watch my video on 'stacking'.
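For readers following along, here is a minimal sketch of how the 'D' flag described above fits into a Facets specification file; the facet names, element ranges, and data file name are hypothetical:

```
Title = Dummy facet example
Facets = 3              ; examinees, raters, gender
Model = ?,?,?,R9        ; ratings from 1 to 9
Labels =
1, Examinees
1-50
*
2, Raters
1-4
*
3, Gender, D            ; 'D' marks Gender as a dummy facet (anchored at 0)
1 = Male
2 = Female
*
Data = data.txt
```

Because Gender is flagged with D, it does not influence the examinee or rater measures, but it remains available for bias/interaction tables in the output.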
Dr. Vahid, I need to confirm the difficulty order of a set of tasks that are graded using an analytic scale. Is it valid to add the scores from each individual criterion together for a cumulative score (making it a holistic score) and to run an MFRM analysis this way to assign a task-difficulty ranking? Do you know of any research that has used MFRM to rank task difficulty when the tasks use analytic scales? I guess the other way would be to compare each task based on the rubric's individual criteria, but it seems this would give me a less definitive result. Also, I worry about conflating the results. Thank you for your help. Dan
Dear Professor Vahid, I am trying to run a many-facet analysis to determine inter-rater agreement. How do you see which rater is stricter and which is more lenient, and how do you adjust for either one of the raters? Should I be using a partial credit model for items that are scored with rubrics?
@Ignatius Lien please see the rater tables in the output. These are explained in the video. Yes, you can use a partial credit model for items that have a polytomous scale. The last three videos in the following playlist discuss Facets in more detail: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-JVIa8jszCDQ.html
Dear Dr. Vahid, thanks a lot for your tutorial videos. I have my data in an Excel file. How do we import our data into Facets? I did not understand that part, sorry. Many thanks.
Thank you so much for your video. I am trying to run Facets with essay scoring results that consist of a single item (a single prompt), but the program doesn't seem to work. Should I have at least two items to run the Facets analysis? I would appreciate it if you could give me some tips on this.
@@VahidAryadoust thanks! Then, if item difficulty is not my interest, I can still run the analysis with a single prompt to get two facet estimates (examinees and raters), right? I just did the analysis and it seemed to work well.
@@daeseonghan7877 In this case, you have no item facet. Some readers might not favor that sort of configuration, since the ability and severity facets are estimated based on only one set of scores.
@@VahidAryadoust Thank you for the explanation! I want to see the effect of the two marking schemes on ability and severity by creating two different models. The first model, for holistic marking, will have two facets (ability and severity), and the second, for analytic marking, will have three facets (ability, severity, and marking criteria). In this case, do you think this could be problematic for the estimation results? After the comparisons, I am also thinking of creating another model that combines the two sets of scores from each marking scheme to find out which marking scheme is more difficult for examinees. The combined model will have three facets (ability, rater, and marking scheme). Sorry for giving you so much information; I hope to hear your ideas on this. Thank you.
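For anyone sketching out the combined model described in this thread, a minimal Facets specification for the three-facet version (examinees, raters, marking scheme) might look like the fragment below. This is only an illustration under the assumption of a 1-9 score range; the title, element ranges, and data file name are hypothetical:

```
Title = Combined marking-scheme comparison
Facets = 3              ; examinees, raters, marking scheme
Model = ?,?,?,R9        ; adjust R to the maximum observable score
Labels =
1, Examinees
1-100
*
2, Raters
1-2
*
3, Marking scheme
1 = Holistic
2 = Analytic
*
Data = combined.txt
```

With this layout, the difficulty estimates for the Marking scheme facet indicate which scheme yields lower scores overall, which is one way to address the "which scheme is more difficult" question raised above.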