
SPSS Tutorial: Inter and Intra rater reliability (Cohen's Kappa, ICC) 

PsychED
19K subscribers
90K views

Published: 11 Sep 2024

Comments: 39
@pheladimokoena3006 · 1 year ago
Thank you for this information. Much appreciated!
@aznurmmu9300 · 5 years ago
I have a few questions. For my study I have 3 raters. 1. For 3 raters with continuous variables, which one should be used - Fleiss' kappa (more than 2 raters) or ICC (more than 2 raters)? This website: www.statisticshowto.datasciencecentral.com/fleiss-kappa/ says the assumption with Cohen's kappa is that your raters are deliberately chosen and fixed. If the raters are fixed, but I have 3 raters with continuous data, should I use ICC or Cohen's kappa? 2. Is it a must to use ICC for 3 raters, or can Cohen's kappa be used in some situations?
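For continuous ratings from a fixed set of three raters, an ICC is the usual choice rather than a kappa variant. Outside SPSS, a minimal sketch in Python with the pingouin package, on hypothetical data, just to show the long-format layout it expects:

```python
import pandas as pd
import pingouin as pg

# Hypothetical example: 6 subjects, each scored by the same 3 raters on a continuous scale.
df = pd.DataFrame({
    "subject": [1, 2, 3, 4, 5, 6] * 3,
    "rater":   ["A"] * 6 + ["B"] * 6 + ["C"] * 6,
    "score":   [7.1, 6.4, 8.0, 5.5, 9.2, 6.8,   # rater A
                6.9, 6.0, 8.3, 5.9, 9.0, 7.1,   # rater B
                7.4, 6.2, 7.8, 5.3, 9.5, 6.5],  # rater C
})

# pingouin reports ICC1..ICC3 plus the average-measure versions (ICC1k..ICC3k).
icc = pg.intraclass_corr(data=df, targets="subject", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])
```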
@phzar · 8 years ago
Hi, if there are 4 observers amounting to 6 crosstabs (kappa scores), do I take the average of the 6 kappa scores? Or do I use the reliability procedure to get the average measure? Sorry if my question doesn't make sense, I'm of no statistics background.
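Averaging the pairwise kappas is one documented option (sometimes called Light's kappa). A sketch of what that looks like in Python with sklearn, on made-up categorical ratings from 4 observers:

```python
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical categorical ratings: 4 observers x 10 subjects (category codes 0/1/2).
ratings = {
    "obs1": [0, 1, 2, 1, 0, 2, 1, 0, 2, 1],
    "obs2": [0, 1, 2, 1, 0, 2, 0, 0, 2, 1],
    "obs3": [0, 2, 2, 1, 0, 2, 1, 0, 1, 1],
    "obs4": [0, 1, 2, 1, 1, 2, 1, 0, 2, 2],
}

# One Cohen's kappa per pair of observers (6 pairs for 4 observers)...
pairwise = {(a, b): cohen_kappa_score(ratings[a], ratings[b])
            for a, b in combinations(ratings, 2)}
for pair, k in pairwise.items():
    print(pair, round(k, 3))

# ...and their mean, often reported as Light's kappa.
print("mean pairwise kappa:", round(np.mean(list(pairwise.values())), 3))
```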
@mmds27jo · 7 years ago
Hi, I have sequence data in which categories with certain values are rated by two raters. As a simplified example, assume that I have a sequence of colors; each color appears on the screen for a specific duration. The raters' task is to note the appearing color and its duration in a list. Here, I have 3 things to compare at the same time: the categories, the sequence and the duration. Is there any way to compare the results of the two raters?
@hamidD2222 · 7 years ago
Thank you for the video. According to Gisev, N., et al. (2013). "Interrater agreement and interrater reliability: key concepts, approaches, and applications." Res Social Adm Pharm 9(3): 330-338, ICC can be used for categorical, ordinal or continuous data, which is different from what you suggest here. Can you elaborate please? Thanks
@PsychEDD · 7 years ago
I did a bit of reading on it, and it seems that some people say you can and some people say you can't. Perhaps you can try using both techniques and post the result? I would suspect that there would be less variance when using categorical variables in the ICC compared to interval/ratio type variables, whereas Cohen and Fleiss designed their formulae specifically for categorical variables.
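If you do want to try both techniques on the same categorical ratings, as suggested in the reply above, a throwaway comparison could look like this in Python (made-up two-rater data; whether an ICC is defensible for nominal codes is exactly the point under debate):

```python
import pandas as pd
import pingouin as pg
from sklearn.metrics import cohen_kappa_score

# Hypothetical: two raters assigning category codes to 8 subjects.
r1 = [0, 1, 1, 2, 0, 2, 1, 0]
r2 = [0, 1, 2, 2, 0, 2, 1, 1]

print("Cohen's kappa:", cohen_kappa_score(r1, r2))

# Same data in long format for the ICC.
long = pd.DataFrame({
    "subject": list(range(8)) * 2,
    "rater":   ["r1"] * 8 + ["r2"] * 8,
    "rating":  r1 + r2,
})
icc = pg.intraclass_corr(data=long, targets="subject", raters="rater", ratings="rating")
print(icc[["Type", "ICC"]])
```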
@prachimehta1945 · 4 years ago
Can ICC be used if the data are not normally distributed? If not, what test should be used in such cases?
@Hello-dg6hu · 7 years ago
How do you calculate the intraclass correlation coefficient for intra-rater reliability when the data are not normally distributed?
@stevenalcala2844 · 6 years ago
Is "inter-rater reliability" a synonym for "inter-coder" or "inter-observer" reliability? Definitely a point of confusion for my Research Methods class and me!
@ishaandolli792 · 2 years ago
I am assessing how participants' self-report of change in substance use (on a 5-point scale from low - no change - high) corresponds to their self-report of the same change measured at a different time point. Could I use Cohen's kappa? I guess it is still intra-rater reliability, but I am a little confused as to whether Cohen's kappa is appropriate when assessing agreement between two ratings by the same rater.
@PsychEDD · 2 years ago
I'd say some nonparametric correlation would be more appropriate than rater reliability here
@ishaandolli792 · 2 years ago
@@PsychEDD thank you, why is that? Would a Spearman's rho rank-order correlation be appropriate then? Can't really find any examples or resources where this is explained.
@PsychEDD · 2 years ago
@@ishaandolli792 Perhaps I misunderstood initially, but I thought you wanted to measure the relationship between the two time points -- if it weren't ordinal I would suggest a repeated measures test, but it is ordinal, so that wouldn't be appropriate. But yes, I would say a Spearman's rho would be most appropriate if you're taking a correlational approach.
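For reference, a minimal Spearman's rho sketch in Python with scipy, using hypothetical 5-point change ratings at the two time points:

```python
from scipy.stats import spearmanr

# Hypothetical self-reported change scores (1-5) for 10 participants at two time points.
time1 = [3, 4, 2, 5, 1, 3, 4, 2, 5, 3]
time2 = [3, 5, 2, 4, 1, 3, 4, 1, 5, 2]

rho, p = spearmanr(time1, time2)
print(f"Spearman's rho = {rho:.3f}, p = {p:.3f}")
```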
@lilymks · 7 years ago
I'm doing content analysis but I've already documented the items in Microsoft Word. I only have one rater and 500 samples with 7 categories of items. Can you suggest any way to calculate the reliability?
@PsychEDD · 6 years ago
You would need to calculate the intra-rater reliability.
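One common way to get an intra-rater figure with a single coder is to re-code a subset of the material after a delay and compare the two passes. A hedged sketch in Python (hypothetical category codes, not the commenter's actual data):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical: the same coder categorises 12 sampled items twice, some weeks apart.
# Categories are coded 1-7 (the question mentions 7 categories of items).
first_pass  = [1, 3, 2, 7, 4, 4, 5, 6, 2, 1, 3, 7]
second_pass = [1, 3, 2, 7, 4, 5, 5, 6, 2, 1, 2, 7]

# Cohen's kappa between the two coding passes serves as an intra-rater reliability estimate.
print("intra-rater kappa:", cohen_kappa_score(first_pass, second_pass))
```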
@robr8947 · 5 years ago
Here's the link from around 10:40 in the video: www.uvm.edu/~dhowell/methods9/Supplements/icc/More%20on%20ICCs.pdf
@pedrofranco3434 · 2 years ago
Can't get access to the link. Can anyone post an updated link please?
@tomaszamora333 · 8 years ago
Can you give me the exact reference for P&W 2009 interpretation of ICC?
@PsychEDD · 8 years ago
It was 2000, not 2009 - apologies. Portney LG & Watkins MP (2000). Foundations of Clinical Research: Applications to Practice. Prentice Hall, New Jersey. ISBN 0-8385-2695-0, pp. 560-567.
@chrisgamble3132 · 7 years ago
Hello and thank you for the video. Do you know how to calculate a single inter-rater reliability ICC value when the 2 raters have both measured 2 or more trials? So rater1trial1 + rater2trial1, rater1trial2 + rater2trial2. Do you average the ICC values of those trials, or is there another way?
@PsychEDD · 7 years ago
I'm not sure about averaging the ICC values individually, but it seems that 2 separate interrater values would be more appropriate for your design, or even 4 if you want to determine the intrarater reliability between two trials. Additionally, you may want to try ICC(3, k) as it gives the average reliability for a set number of raters.
@chrisgamble3132 · 7 years ago
Thank you for the response. I think I need to clarify a bit. The 2 trials I am referring to occurred on the same day. So both my raters have tested the same patient twice, with some time (15 minutes) between. Articles with a similar setup always refer to a single inter-rater reliability value, even if both raters (as in my case) have performed two measurements. I figured I had to analyse these measurements separately in SPSS. Perhaps this is not the case? So in your example: suppose rater 1 and rater 2 both performed 2 BRG trials (trial 1 and 2). Would you calculate the inter-rater reliability separately for each trial, or would you analyse the data simultaneously?
@PsychEDD · 7 years ago
When I did the analysis for the data set included in the video, I was asked to do separate ICC scores for each trial (rater 1 trial 1, rater 2 trial 1, and so on). However, if the papers you are referring to reported just one score, I'm quite sure they used the ICC(3, k) model, which gives the average score for a set number of raters. So you would add all of your variables into the ICC box and choose that model. Hope this helps.
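If a single value is wanted, the average-measures two-way mixed ICC described in the reply (reported by pingouin as ICC3k) can be pulled out like this; the data frame and numbers below are hypothetical:

```python
import pandas as pd
import pingouin as pg

# Hypothetical: 5 patients, each measured by rater 1 and rater 2 on trial 1 and trial 2.
wide = pd.DataFrame({
    "patient": [1, 2, 3, 4, 5],
    "r1_t1": [12.1, 15.3, 11.8, 14.0, 13.2],
    "r1_t2": [12.4, 15.0, 12.0, 14.2, 13.0],
    "r2_t1": [11.9, 15.6, 11.5, 13.8, 13.4],
    "r2_t2": [12.2, 15.2, 11.9, 14.1, 13.1],
})

# Stack the four measurement columns so each row is one measurement of one patient.
long = wide.melt(id_vars="patient", var_name="measurement", value_name="score")

icc = pg.intraclass_corr(data=long, targets="patient",
                         raters="measurement", ratings="score")
# ICC3k is the two-way mixed, average-measures ("ICC(3, k)") estimate.
print(icc.loc[icc["Type"] == "ICC3k", ["Type", "ICC", "CI95%"]])
```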
@putrinurshahiraabdulrapar3175 · 7 years ago
Hi, I read a journal article that used Cohen's kappa for multiple raters. Do you have any tutorial on Cohen's kappa with multiple raters?
@PsychEDD · 7 years ago
As far as I am aware, Cohen's kappa is limited to 2 raters. However, if you are using ordinal data, check out en.wikipedia.org/wiki/Krippendorff%27s_alpha
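For nominal data from more than two raters, Fleiss' kappa is the standard extension (Krippendorff's alpha, linked above, also handles ordinal data). A minimal sketch with statsmodels on made-up ratings:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical: 8 subjects (rows) rated by 4 raters (columns) into categories 0/1/2.
data = np.array([
    [0, 0, 0, 1],
    [1, 1, 1, 1],
    [2, 2, 1, 2],
    [0, 0, 0, 0],
    [1, 2, 1, 1],
    [2, 2, 2, 2],
    [0, 1, 0, 0],
    [1, 1, 1, 2],
])

# aggregate_raters converts subject-by-rater codes into subject-by-category counts.
table, categories = aggregate_raters(data)
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))
```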
@August_Hall · 7 years ago
Hi, I have 41 raters from a psychology class who rated 19 different images of men's torsos for muscularity using a Likert scale from 1 (low muscularity) to 5 (hypermuscularity). I thought ICC was appropriate, but now I am not so sure, as each model has confused me. Thanks
@PsychEDD · 7 years ago
Perhaps have a read on the following: www.ncbi.nlm.nih.gov/pmc/articles/PMC3402032/ "The intra-class correlation (ICC) is one of the most commonly-used statistics for assessing IRR for ordinal, interval, and ratio variables. ICCs are suitable for studies with two or more coders, and may be used when all subjects in a study are rated by multiple coders, or when only a subset of subjects is rated by multiple coders and the rest are rated by one coder."
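For a design like this (41 raters, 19 images, 1-5 ratings), the data usually need to be reshaped to long format before an ICC run. A sketch in Python with pingouin rather than SPSS, under the assumption of a wide table with one row per rater and one made-up column per image:

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)

# Hypothetical wide layout: one row per rater (41), one column per image (19),
# ratings on the 1-5 muscularity scale.
wide = pd.DataFrame(
    rng.integers(1, 6, size=(41, 19)),
    columns=[f"image_{i + 1}" for i in range(19)],
)
wide.insert(0, "rater", [f"rater_{i + 1}" for i in range(41)])

# Long format: one row per (rater, image) pair, as pingouin expects.
long = wide.melt(id_vars="rater", var_name="image", value_name="rating")

# The images are the "targets" being rated; the students are the raters.
icc = pg.intraclass_corr(data=long, targets="image", raters="rater", ratings="rating")
print(icc[["Type", "ICC", "CI95%"]])
```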
@muhammadanshory · 8 years ago
#Ask What if I have 15 respondents and 3 trials with a categorical variable... how do I analyze that using kappa?
@PsychEDD · 8 years ago
Not sure you can - maybe a chi-square test for independence instead?
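If you follow the chi-square suggestion, the contingency-table test in scipy looks roughly like this (hypothetical counts of the categories chosen by the 15 respondents on each of the 3 trials; whether this really answers a reliability question is a separate matter):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = 3 trials, columns = counts of each
# category chosen by the 15 respondents on that trial.
observed = np.array([
    [9, 4, 2],   # trial 1
    [8, 5, 2],   # trial 2
    [10, 3, 2],  # trial 3
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
```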
@tanjica0102 · 6 years ago
Is it possible to calculate Kappa if you have 5 different categorical variables and two raters?
@PsychEDD · 6 years ago
Yes, but you would have to calculate the Kappa for each variable separately.
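As a sketch of the "one kappa per variable" approach in Python (hypothetical ratings from two raters on five categorical variables; sklearn's cohen_kappa_score is assumed):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical: two raters, five categorical variables, 8 subjects each.
rater1 = {
    "var1": [0, 1, 1, 0, 2, 2, 1, 0],
    "var2": [1, 1, 0, 0, 1, 0, 1, 1],
    "var3": [2, 0, 1, 2, 2, 1, 0, 0],
    "var4": [0, 0, 0, 1, 1, 1, 0, 0],
    "var5": [1, 2, 2, 1, 0, 0, 2, 1],
}
rater2 = {
    "var1": [0, 1, 2, 0, 2, 2, 1, 0],
    "var2": [1, 0, 0, 0, 1, 0, 1, 1],
    "var3": [2, 0, 1, 2, 1, 1, 0, 0],
    "var4": [0, 0, 1, 1, 1, 1, 0, 0],
    "var5": [1, 2, 2, 1, 0, 1, 2, 1],
}

# One Cohen's kappa per variable, reported separately.
for var in rater1:
    print(var, round(cohen_kappa_score(rater1[var], rater2[var]), 3))
```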
@tanjica0102 · 6 years ago
Thank you for the fast reply! I don't really get it. So, if we rate a certain characteristic with a scoring system from 1 to 4, where 1 is poor and 4 is excellent, how can I calculate it separately? Or did I explain it badly before, and it is rather one variable with 4 categories?
@PsychEDD · 6 years ago
So you have one variable with 4 levels (Likert scale)? I think calculating the ICC may be a better bet than Kappa, based on the scale used.
@tanjica0102 · 6 years ago
Yes, that's correct. All right. Thank you very much!
@akusukamu · 5 years ago
@@tanjica0102 I thought ICC is for continuous data and yours is categorical? Which means kappa would be a better choice?
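Since the 1-4 scale in this thread is ordered, a weighted kappa (which gives partial credit for near-misses) is a common middle ground between plain kappa and the ICC. A hedged sketch with made-up ratings:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical: two raters scoring 10 cases on the 1 (poor) to 4 (excellent) scale.
rater1 = [1, 2, 3, 4, 2, 3, 4, 1, 3, 2]
rater2 = [1, 3, 3, 4, 2, 2, 4, 2, 3, 2]

print("unweighted kappa:", cohen_kappa_score(rater1, rater2))
print("linear weighted kappa:", cohen_kappa_score(rater1, rater2, weights="linear"))
print("quadratic weighted kappa:", cohen_kappa_score(rater1, rater2, weights="quadratic"))
```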
@mariociencia12 · 7 years ago
The image quality of the video is too low
@indrayadigunardi4695 · 4 years ago
Cannot see the numbers clearly. Not useful at all.
@PsychEDD · 4 years ago
savage