
Complete single-cell RNAseq analysis walkthrough | Advanced introduction 

Sanbomics
13K subscribers · 83K views

Published: 29 Sep 2024
Comments: 316

@mocabeentrill 2 years ago
Shout out to you bro! After years of wet lab practice, I'm transitioning to bioinformatics and you're one of my inspirations.
@sanbomics 2 years ago
That's how I started out too, about 5 or so years ago. I started as the one doing the library prep/sequencing. I still step into the lab every once in a while xD
@remia5 1 year ago
I'm glad you followed through with your promise to make this long video. It's already received thousands of views in just a few months. Cool! Please, keep them coming!
@sanbomics 1 year ago
I've been on a bit of a hiatus, but more will be coming after the start of the new year!
@lizs7827 27 days ago
Incredible explanation of scRNA-seq analysis!!! You are incredibly talented!! Thank you!!!!
@2kvirag 2 years ago
Such a useful resource, especially because there are fewer tutorials covering Scanpy compared to Seurat. Great job and thanks a ton.
@sanbomics 2 years ago
Thanks for the kind words! Seurat is great too, but I personally like scanpy much more.
@2kvirag 2 years ago
@@sanbomics Absolutely. I think it is a question of Python versus R and not Scanpy versus Seurat :)
@sanbomics 2 years ago
haha yes you are right
@daffy_duck_phd 2 years ago
This video is so helpful. I'm just starting a PhD in bioinformatics and solid resources are scarce, so thank you! The only thing I would say is that a little more explanation as to why you are doing some of the things you are doing would be super helpful for a beginner like me. But honestly this is a great video and I'm looking forward to binge-watching the rest of your videos.
@sanbomics 2 years ago
Thank you! Unfortunately, I have to cut out a lot of explanations because nobody wants to watch a 4-hour video haha. (I also don't want to edit a 4-hour video xD) It might be helpful for you to go through it slowly: understand what I'm doing in each line, bring up the docs for each command, etc.
@daffy_duck_phd 2 years ago
@@sanbomics I totally understand! I went through the video again and made note of anything I didn't understand. Thanks again and looking forward to future videos!
@chrisjmolina 1 year ago
Great video! I would also love to see the 4-hour director’s cut :)
@olyabielska764 1 year ago
@@sanbomics I'd love to watch a 4-hour video with a lot of explanations :) I am a molecular biologist and I have a difficult relationship with bioinformatics...
@sapienthought1103 2 months ago
very very underrated channel
@Max-so2ij 1 year ago
thank you for your amazing video!
@sanbomics 1 year ago
No problem!
@demetronix 1 year ago
incredible resource, thank you so much for this!
@sanbomics 1 year ago
You're welcome :)
@AM-fw6jl 1 year ago
Thank you! Super appreciated.
@erinjane1200 11 months ago
Thank you sooo much💗💗💗 It's so helpful!
@shaolinma1273 2 months ago
Hello, when you run the cell counting analysis, where do the doublets come from? I thought the doublets were filtered out in the preprocessing steps. Thanks
@laloulymounia9266 6 months ago
Thanks! I had trouble at first because of the .T that transposed a file I downloaded from GEO that was already in the correct format!
@sanbomics 5 months ago
Ahh yeah, good catch! Unfortunately there is no standard, so one time you might have to transpose and another time you might not.
@MySanthush 5 months ago
Thank you. The best tutorial. I am new to this field. Could you please tell me how to split the UMAP by condition (after integration) to see a particular gene?
@sanbomics 4 months ago
You can do something like adata[adata.obs['Condition'] == 'Sick'], with Condition being the column name in obs.
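A minimal sketch of that suggestion, assuming the integrated object is called adata, the obs column is 'Condition' with values such as 'Healthy' and 'Sick', and 'CD8A' stands in for the gene of interest (all placeholder names):
import scanpy as sc
# Plot the same gene on the UMAP separately for each condition
for cond in ['Healthy', 'Sick']:
    subset = adata[adata.obs['Condition'] == cond]
    sc.pl.umap(subset, color='CD8A', title=f'CD8A ({cond})', frameon=False)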
@ryanreis5830 2 years ago
Fantastic video -- thank you for what you do for the community! Any plans to do something similar using R, or are you making the switch to Python?
@sanbomics 2 years ago
I don't know if I'll do a scRNA video in R. But I will probably do a scATAC or scATAC+RNA video in R in the near future.
@preciouschigamba1742 1 year ago
@@sanbomics please do one for R too, I'm having challenges
@irfanalahi380 1 year ago
I think this is the best video on scRNA-seq. Thank you so much. Wondering if it is possible to make a similar tutorial on WGBS data analysis?
@sanbomics 1 year ago
Thank you! It is possible but I might not get to something like that for a while, sorry :(
@irfanalahi380 1 year ago
@@sanbomics Thanks. If it takes a while, can you suggest some books or other resources with practical end-to-end code examples (like yours) on WGBS data?
@Amanda-re2vt 4 months ago
Hi Sam, do you have a video on how you download the data from NCBI (papers)? That is the part I don't understand.
@sanbomics 4 months ago
I don't have a video. But I tweet about it sometimes if you follow me on twitter. I may make something like this in the future. You can check out my most recent video series for another example from a different dataset.
@laloulymounia9266 5 months ago
Hi, thanks again for the tutorial. Would it be possible for you to make a tutorial on how to annotate the UMAP clusters automatically in Python? I ended up with 40 clusters on my breast cancer scRNA-seq dataset, which includes before- and after-treatment data. I'm having trouble annotating it manually, with CD4/CD8 being the benchmark for the resolution. I tried loading the data in R so I could get an idea of what SingleR would do, but I think I messed up during the conversion process. I'm not sure whether you have any useful tutorials online that could help? Thanks
@sanbomics 5 months ago
Actually, that will be in my next video. Sometime in the next couple of weeks.
@dardas15 1 year ago
Thanks, this is great! General question: is there a tutorial if you want to pick a specific cluster and reanalyze it specifically - for example, isolating the CD8+ T-cell cluster and then identifying subpopulations of CD8+ T-cells within that cluster? Thanks again!
@sanbomics 1 year ago
Yeah, you can definitely do that. What I would do is just subset the adata based on the CD8+ label, then reset it to adata.raw. Then you can reprocess it. Or if you need the true raw counts, reload the data and just use the cell IDs from the CD8+ labeled cells to subset the fresh data. I have a video on filtering adata if you don't know how to do these steps.
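A rough sketch of that subset-and-reprocess flow, assuming the cluster labels live in adata.obs['cell_type'] and that adata.raw holds the values you want to restart from (both assumptions; adjust the names to your own object):
import scanpy as sc
# Keep only the CD8+ cells, then go back to the stored raw slot before re-processing
cd8 = adata[adata.obs['cell_type'] == 'CD8+ T'].copy()
cd8 = cd8.raw.to_adata()
# Re-run feature selection, clustering and UMAP on the subset
sc.pp.highly_variable_genes(cd8, n_top_genes=2000, subset=True)
sc.pp.scale(cd8, max_value=10)
sc.tl.pca(cd8)
sc.pp.neighbors(cd8)
sc.tl.umap(cd8)
sc.tl.leiden(cd8, resolution=0.5)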
@dardas15 1 year ago
@@sanbomics perfect, thanks!
@Sub-C-160 9 months ago
@@sanbomics In the tutorial, after concatenation we saved the normalized and log-transformed counts to adata.raw. So by "reprocess it after subsetting the adata", do you mean starting with highly variable genes and training the scVI model again?
@katarinavalentincic9621 1 year ago
Great video, invaluable for beginners with coding like myself! What if you have adata which hasn't been filtered yet and you want to filter it to retain only the cells that are present in another AnnData object, previous_adata, because they have already been filtered with QC, and to do that based on the indices of those cells in previous_adata?
@sanbomics 1 year ago
You can filter it directly by passing the list of barcodes: adata[barcodes_to_keep], or by doing adata[adata.obs.index.isin(barcodes_to_keep)]. The latter won't reorder your data. In both cases they have to match 1:1, so if you did any concatenation you will have to remove the appended suffix.
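In code, the two options above look roughly like this (previous_adata is the already-QC'ed object from the question; pick whichever option suits your case):
# Barcodes that survived QC in the previously filtered object
barcodes_to_keep = previous_adata.obs.index
# Option 1: subset directly; this reorders adata to match the barcode list
adata_sub = adata[barcodes_to_keep].copy()
# Option 2: boolean mask with isin; this keeps adata's original order
adata_sub = adata[adata.obs.index.isin(barcodes_to_keep)].copy()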
@mst63th 2 years ago
Thanks for the comprehensive video. I'm wondering if it is possible to use the output of other technologies, such as Fluidigm C1, and do the same workflow you describe here?
@sanbomics 2 years ago
I'm not sure it will work with things like C1. Those are relative qPCR values, right?
@mst63th 2 years ago
@@sanbomics I'm not 100% sure. I want to reproduce the data from a study that was previously done with Seurat, and the RNA-seq data is publicly available on GEO NCBI.
@mst63th 2 years ago
the GEO id is GSE81608
@sanbomics 2 years ago
I took a look and it should work just fine! Basically anything that can be done in Seurat will also work here.
@mst63th 2 years ago
@@sanbomics Thanks a lot for the reply.
@romansmirnov2531 1 year ago
Thanks for the useful tutorial! I'm wondering, is it possible to use diffxpy for marker identification? If so, could you please give an example of how it can be done?
@sanbomics 1 year ago
Yup! You will just have to compare the subpopulation to the rest of the cells.
@dabinjeong9560 7 months ago
Thank you so much for your video! It was very helpful. I have one question about importing the ribosomal genes from the Broad. After I read the genes with pandas, the output showed something about copyright from the Broad. Do you have any advice on how to resolve this issue? Thank you!
@sanbomics 7 months ago
No idea, I haven't seen this issue yet. You can skip the ribosomal part for now. I'll check it out.
@efstratioskirtsios298 1 year ago
Any videos/help with scRNA-seq DEG analysis in R? It seems there is no robust consensus on which packages are best to use :( Any opinions + recommendations? Can someone use DESeq2 through Seurat directly?
@sanbomics 1 year ago
You can do pseudobulk and use edgeR or DESeq2. I think there should be a decent bit of stuff online to help you with that. Those are what I recommend. Good luck!
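For the aggregation half of that pseudobulk approach, a sketch in Python (the statistics would then be run in edgeR/DESeq2, or PyDESeq2 if you want to stay in Python). The 'counts' layer and the 'Sample'/'cell_type' columns are assumptions about how your object is set up:
import pandas as pd
from scipy.sparse import issparse
# Sum raw counts per sample within one cell type to get a pseudobulk table
sub = adata[adata.obs['cell_type'] == 'CD8+ T']
mat = sub.layers['counts']
mat = mat.toarray() if issparse(mat) else mat
counts = pd.DataFrame(mat, index=sub.obs_names, columns=sub.var_names)
pseudobulk = counts.groupby(sub.obs['Sample'].values).sum()   # samples x genes
pseudobulk.T.to_csv('cd8_pseudobulk_counts.csv')              # genes x samples, ready for DESeq2/edgeR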
@movex5822 5 months ago
Guys, I'm a business graduate. I studied finance, marketing, and programming, and ended up working as a secretary for a frozen-products export company. I'm very frustrated and thinking of switching careers. I currently live in KSA and want to start working in this field. What do you recommend?
@sanbomics 4 months ago
Pick a project and start trying to do it. Great way to learn the basics. Then once you understand the field a little you can refine your goals and also learn more about the fundamentals.
@ladenhudson2458 1 year ago
Does this routine take advantage of a multicore processor?
@sanbomics 1 year ago
The way I have it here relies on the GPU for computing. There isn't much need for multicore here unless you want to save a little time importing a bunch of samples, which most people won't be doing.
@ladenhudson2458 1 year ago
@@sanbomics Thanks, I am building my workstation. It helps a lot.
@hrisivanov3150 3 months ago
Man, you absolutely saved me! Suddenly, everything makes sense now. Subscribed!
@sanbomics 3 months ago
:)
@hyang333 2 years ago
This is the best tutorial on scRNA-seq analysis with Python I've ever seen!
@sanbomics 2 years ago
Thank you! :)
@vladi1475S 1 year ago
I agree 100%!! The best tutorial on scRNA-seq analysis in Python!!!! THANK YOU!!
@sergestsofack3376 5 months ago
Nice video. Where can I find the code so I can just copy and paste?
@mehdiraouine2979 5 months ago
It's in the description; there is a link to GitHub.
@Alano-mg6qh 18 days ago
Great tutorial. Will definitely be shared in a LinkedIn post. Thank you for the hard work and good documentation.
@saraalidadiani5881 4 months ago
Thank you for the nice video. Regarding the part about making the cell type fraction plot (from this part of the code until the end of this part: adata.obs.groupby(['sample']).count()), could you also please explain how to do it in R with the Seurat object? Thanks
@SanzidaAnee 1 month ago
Hi, my doublet distribution graph is different from yours. I did not get any value >1. Why?
@young-kookkim5031 3 months ago
Thank you so much! This video is perfect for those who want to analyze scRNA-seq data!
@梁一鹏-v7n 2 years ago
It was 11:00 p.m. in China when I clicked on this video, and it was already 2:00 a.m. when I finished. This is the most exciting single cell tutorial I've ever seen. You are so good!!!
@sanbomics 2 years ago
Thank you!!!
@梁一鹏-v7n 1 year ago
@@sanbomics Hello, I have a new question. If I do DE with scVI, how can I use the scVI result to do GSEA, which is mentioned in your 'easy gsea in python' video?
@AthensNwo 1 year ago
Is there a reason one uses the filtered feature matrix vs the raw feature matrix? From what I understand, the filtered feature file is data that has already been quality controlled by the Cell Ranger software; wouldn't it be better to use the raw feature matrix to do quality control?
@sanbomics 1 year ago
The 10x raw feature matrix includes all droplets, even ones that were not considered cells by cellranger. This is the first line of defense, but the thresholds they use are simple metrics that aren't going to catch everything. No point in using the much larger data file for no reason when the cells are already considered garbage by cellranger. There may be very niche reasons to use it, but not for typical analysis.
@AthensNwo 1 year ago
@@sanbomics Got it, thank you so much for your reply! Looking forward to your future videos :)
@emanueleraggi272 2 years ago
Thank you for this amazing video. Is there any possibility of a PCA and ANOVA analysis in the next tutorials? Thanks for sharing your knowledge!
@sanbomics 2 years ago
Sure! I'll keep this in mind for an upcoming video.
@ismailgumustop7527 9 months ago
Thank you so much for the tutorial! It's quite useful for hands-on learning of scRNA-seq. I have some questions related to the "integration" part. While I was working on my dataset, I realized the number of cells (observations) in some samples was quite different. For example: Sample 1 - 1600 cells, Sample 2 - 560 cells, Sample 3 - 3000 cells. 1. I would like to know whether this affects my analysis. 2. Do I need to apply something like scaling or oversampling or else? I would be grateful if you can help. Best
@sanbomics 8 months ago
For scvi the total number of cells in the training datasets matters more than the number of cells in individual samples. If you only have about 5k cells then you will have to keep the total number of features pretty low in the model (e.g.
@aytacoksuzoglu2975 3 months ago
It's also nice to see people from Turkey working on these topics :)
@GiwonCho-p7w 9 months ago
The video is soooo helpful. You are my life saver. However, I wanted to try using diffxpy, especially the Wald test. The error 'ZeroDivisionError: float division by zero' happens when I use this code: res = de.test.wald(data = subset, formula_loc= '~ 1 + cell_type', factor_loc_totest='cell_type'). I am using macOS, and on GitHub some people are running into the same problem, guessing that it occurs when using macOS. Do you have any other solution?
@sanbomics 9 months ago
I've basically given up on diffxpy because it always seems to throw errors for no reason sometimes. I recommend doing pseudobulk instead. Check out one of my more recent pseudobulk videos.
@YC-ut1ff 1 year ago
Thank you for this helpful video!!! I found that the loss of the model (37:17) is actually quite high. Would this influence the performance of the model?
@sanbomics 1 year ago
I wouldn't worry about it. I had a lot of cells, and to conserve time it automatically decreases the number of epochs. If you were more patient you could potentially increase the number and see if the loss also decreases. There are other parameters you could fine tune as well. But again, the data seem integrated well and I have never seen any issues arising with default settings.
@CaveCrack 5 months ago
Thank you very much for these tutorials. There seems to be a typo in the integration step (function pp): you are using mouse (mt-) mitochondrial genes instead of MT-. The same issue exists in the GitHub notebook.
@sanbomics 4 months ago
Hmm, let me look into this. Thanks for pointing it out.
@aytacoksuzoglu2975 3 months ago
Well, it was really awesome. I'm still an undergraduate, so the bioinformatics part was a little hard to understand, but the Python code part was clear. Have you done other integration methods, or could you record another video?
@sanbomics 3 months ago
Yeah, I actually have a video that compares multiple integration methods: ru-vid.com/video/%D0%B2%D0%B8%D0%B4%D0%B5%D0%BE-NFA2YGshATs.html
@mhmmdbduh 4 months ago
Great video, I learned a lot. But I was wondering what device you use? I did the same analysis but I couldn't load the model because I'm out of memory.
@sanbomics 4 months ago
This computer has 128 GB of memory. But you can try the analysis with fewer samples if you want to follow along still.
@gijs106 1 year ago
Thank you, this video is of great help to me. I was wondering: in the initial processing of only one sample, you used sc.pp.scale to scale the data. However, when processing/integrating multiple samples, I did not see explicit scaling of the data, only normalization and log-transformation. Is that correct? Or am I missing the step where this happens? Thanks again!
@sanbomics 1 year ago
You are correct. It is because I used scvi to integrate, which gives you normalized counts and embeddings for clustering. You will almost only ever use scaled data for clustering and UMAP. If you have embeddings from scvi (or something else) you will use those instead and don't need to scale.
@gijs106 1 year ago
@@sanbomics That clarifies things for me. Thanks for taking the time to reply, I appreciate it!
@abhayrastogi590 1 year ago
Hi Sam, amazing video. Do you plan to make a similar walkthrough for spatial as well? It would be very helpful.
@sanbomics 1 year ago
I have a brief video on spatial, but it is not nearly as in-depth. Maybe in the future I can do a more comprehensive one.
@刘奇-w5t 1 year ago
I have a problem at this step: SOLO = scvi.external.SOLO.from_scvi_model(vae). The error is: AttributeError: Can only use .str accessor with string values! I can't solve this problem. What is your suggestion? Thank you very much!
@mostafaismail4253 2 years ago
You are great, please don't stop, big support ❤️❤️❤️
@sanbomics 2 years ago
Thank you!
@LiptonTiptonTea 2 years ago
Agreed. You're a gifted teacher, keep it up.
@drumpdump1995 1 year ago
I feel R is a much handier language for scRNA analysis than Python. Is it just me?
@sanbomics 1 year ago
More of a preference, I think. Both can do just fine. I prefer Python. There are more random tools available in Bioconductor, but it's not a limiting factor anymore.
@aydin434 1 year ago
Thank you for this very helpful tutorial. I have a problem that I want to ask about. When I train the data, it takes too much time since I don't have a GPU. Is there any alternative step to remove doublets?
@sanbomics 1 year ago
Great question. While it isn't as robust at catching the doublets, when you filter the outliers during preprocessing you are theoretically catching some of the doublets. You can increase the cutoff for n_genes_by_counts to the top 5%. While it's not ideal, a lot of people don't end up removing doublets specifically (even though they should).
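A small sketch of that upper-percentile cutoff as a rough doublet filter (it assumes the standard QC metrics have been computed so that n_genes_by_counts exists in obs):
import numpy as np
import scanpy as sc
sc.pp.calculate_qc_metrics(adata, percent_top=None, log1p=False, inplace=True)
# Drop the top 5% of cells by number of genes detected as a crude doublet proxy
upper = np.quantile(adata.obs['n_genes_by_counts'], 0.95)
adata = adata[adata.obs['n_genes_by_counts'] < upper].copy()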
@aydin434 1 year ago
Thank you very much!
@blackmatti86 2 years ago
This is amazing! Is there a pipeline for scATAC-seq data analysis using Python? It seems there is a lot of info about scRNA-seq but not so much for ATAC. Thank you! 🙏
@sanbomics 2 years ago
I've actually been doing a decent bit of scATAC-seq analysis recently, but in R with Seurat/Signac. It can be done in scanpy too, but I haven't gotten around to trying it yet. I keep wanting to do a scATAC video, but I wasn't sure if it would get many views because scATAC is still performed much less often than scRNA. Though, I will probably do one in the next month or so.
@ilyasimutin 2 years ago
Episcanpy for Python usage, but in R the greatest option is ArchR
@blackmatti86 2 years ago
@@ilyasimutin Haven't heard good things about episcanpy tbh. Tried Signac and it was really good. Would like to give ArchR a go 👍🏼
@blackmatti86 2 years ago
@@sanbomics I think you'd get many views since there are a lot of scRNA tutorials out there but nothing for scATAC 🤷🏻‍♂️ Looking forward to it!
@sanbomics 2 years ago
That's a good point.. might just do one on Signac. Maybe even a multimodal dataset. Looking forward to a better alternative in Python.
@mehdiraouine2979 5 months ago
Is there a laptop you recommend for scRNA-seq analysis without having to use the cloud? Is an M1 Pro/Max chip MacBook a viable option?
@sanbomics 5 months ago
I would say the biggest limitation is going to be RAM. Other specs will just increase processing speed. I am not sure I would recommend a laptop. But you can set up a server wherever and just put it and your laptop on the same ZeroTier network, then just get a MacBook Air IMO. E.g., running a Jupyter notebook over the network is identical to running it on your local machine after you initialize it.
@mehdiraouine2979 5 months ago
@@sanbomics Thanks for the reply!
@asshimul1168 3 months ago
Please make this same tutorial for R🙏
@nadavyklein 1 year ago
Thank you for the video, really helpful and instructive :)
@sanbomics 1 year ago
Glad it was helpful!
@elianegracielapilan5816 8 months ago
I need to analyze a GEO dataset (GSE198896). How do I read the matrix from the GSE198896_raw file? Thanks
@sanbomics 8 months ago
What format are the raw files?
@elianegracielapilan5816 7 months ago
@@sanbomics When downloading the dataset, the files are compressed. When unzipping, several folders open, one for each sample, each of which is also compressed. After unzipping, we have the following files in each sample folder: barcodes (TSV), features (TSV) and matrix (MTX).
@xishengliu5290 1 year ago
Thank you for sharing. I got a problem showing that module 'scvi' has no attribute 'model'. Do you know what the possible problem could be? I reinstalled scvi, and it is the newest version, but it's still the same.
@that_guy4690 8 months ago
You are a legend! Thank you so much for your work
@giorgiatosoni2307 2 years ago
Thank you so much for the super clear videos, you are making my life much easier!! I was wondering, what are the (dis)advantages of scVI over, for example, Harmony? I find it very difficult to understand which tool would work better for data integration, especially for human samples in which the variability across individuals is huge!
@sanbomics 2 years ago
It's hard to say without doing a direct comparison. And likely one might perform better in a given context than the other. Both are good. I prefer scVI because it is in Python and does a lot more than just integration. scVI does a great job when there is big variation between individuals and even very large differences between the technology used to make the libraries. It will likely be a preference-based choice. Of course, if you want to follow along with the video you will have to use scvi xD
@lly6115 1 year ago
Let's just give this man a round of applause.
@sanbomics 1 year ago
Thank you! :)
@wumutcakir 2 years ago
Thank you for this amazing video. I encountered a problem with the sc.pp.highly_variable_genes function. When I run the code, a "No module named 'skmisc'" error appears. I tried to install the required package using pip install scikit-misc, but it does not solve my problem. What is your suggestion?
@sanbomics 2 years ago
Try following this thread: github.com/scverse/scanpy/issues/2073
@sanbomics 2 years ago
If you can't get it to work, I will run a pip freeze for you on my environment.. maybe it's some weird version issue or something.
@wumutcakir 2 years ago
Thank you very much. It solved my problem.
@jsm640 2 years ago
@@wumutcakir I met the same problem a day ago. The methods suggested by Sanbomics may be helpful, but I solved it by changing the order of the package installations. I ran the following:
conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
conda install scvi-tools -c conda-forge
conda install -c conda-forge scanpy python-igraph leidenalg
It works fine now.
@sanbomics 2 years ago
Nice! Thanks for posting your solution.
@zeeboo-w1v 9 months ago
Hi, can you help? I can't seem to make scvi work... is there a way to be in touch?
@repliedfob4165 6 months ago
I downloaded the tutorial from your GitHub and am running it line by line, but I have run into an issue. When running line 84, sc.tl.umap(adata), I get TypeError: unhashable type: 'list'. I haven't modified the code in any way. adata is of type ""
@Dalibenamor-j8f 1 year ago
Thank you man, you are such an inspiration, you saved my life. I have a question and I hope you answer me: in this script, should I always use 1e4 as the value? If not, how should I modify this script? What exactly should I eliminate and keep, please:
adata = sc.read_h5ad('/lab/user/notebooks/test_elyoum/combined_filtred.h5ad')
adata.layers['counts'] = adata.X.copy()
# Normalize every cell to 10,000 UMI
sc.pp.normalize_total(adata, target_sum = 1e4)
# convert to log counts
sc.pp.log1p(adata)
adata.raw = adata
adata.obs.head()
total_num_cells = adata.n_obs
total_num_genes = adata.n_vars
if total_num_genes > total_num_cells / 2:
    n_top_genes = int(0.40 * total_num_cells)
sc.pp.highly_variable_genes(adata, n_top_genes=n_top_genes, subset=True, layer='counts', flavor="seurat_v3", batch_key="Sample")
scvi.model.SCVI.setup_anndata(adata, layer = "counts", categorical_covariate_keys=["Sample"], continuous_covariate_keys=['pct_counts_mt', 'total_counts', 'pct_counts_ribo'])
model = scvi.model.SCVI(adata)
model.train()  # may take a while without GPU
# scvi clustering
adata.obsm['X_scVI'] = model.get_latent_representation()
adata.layers['scvi_normalized'] = model.get_normalized_expression(library_size = 1e4)
# find neighbors
sc.pp.neighbors(adata, use_rep = 'X_scVI')
sc.tl.umap(adata)
sc.tl.leiden(adata, resolution = 0.5)
@sanbomics 1 year ago
I am glad that I saved your life xD. Actually, it is better not to use any target_sum. Just remove the argument. Things are always evolving in the sc-sphere.
@yaseminsucu416 8 months ago
Hi Sam, thank you so much for the helpful tutorial! I am trying to replicate your analysis using the same data and following the code cell by cell, and it's been a great learning journey so far! I do have a question though: while training the data for doublet predictions, and when producing the UMAP, I got different results than yours, and the same thing happens when I re-run the code. I am assuming that since there is random initialization while training the models, the values are going to be slightly different each time, and that will eventually cause a different UMAP configuration, etc. I am curious though, how can I be sure that I am on the right track? 😄 Your feedback would be helpful! Cheers, and thanks so much for the tutorials!
@sanbomics 8 months ago
Hi! It sounds like you are doing things right. You are correct in assuming that each time you train the model it will be a little different. I think there is an option to set a specific seed, but I am not sure if that will keep it 100% consistent. You should hopefully see high overlap (>90%) when calling doublets multiple times.
@jliu1212 11 months ago
Hi! This is an amazing tutorial. Thank you for your comprehensive walk-through. I have one question: why are we filtering the genes twice, in lines 9 and 48? Is it because you accounted for the doublets? Follow-up question on that: I just recently started and have never considered processing for doublets. What impact does it have to do it or not do it?
@sanbomics 10 months ago
The first filtering is just to make the data smaller for faster/better processing for doublet removal. I then reload the raw counts so I have all the genes instead of a small subset. Some data have more doublets than others. Little clusters of just doublets will form and you will get contamination of random spurious genes in your other clusters. Better to remove them, but not always necessary depending on your technology and doublet rate.
@3stoogettes 9 months ago
you are a HERO
@AnkitYadav-v2v8z 11 months ago
"Hey there! I'm really curious about creating Seurat inputs without using Cell Ranger, as we've got our own sequencer and have done the sequencing in-house. Has anyone else tried this approach or can offer some guidance on how to go about it? Any help or insights would be greatly appreciated! 🙏🧬 #Seurat #SingleCellSequencing #DIYSequencing"
@sanbomics 11 months ago
Do you mean without 10x libraries? Or you have 10x libraries but you don't want to use Cell Ranger? Both are very feasible. See my most recent video for the latter.
@daehwankim4432 9 months ago
This is exactly what I need at this moment! Thank you so much for sharing your knowledge! I have a quick question: is there any good way to do this analysis on a GPU? How can I apply the GPU to this analysis? Thanks!
@sanbomics 9 months ago
A lot of these analyses are being sped up with a GPU already, especially if you are using scVI. There was a recent software release converting a lot of single cell functions in scanpy to the GPU, but I forget what it is called off the top of my head. Shouldn't be too hard to find though.
@fsh9134 10 months ago
Thank you very much for the great video. I wonder why you used a .csv data file. In most of your other videos you used .h5 or .mtx files. To my knowledge, CellRanger output does not include such a data file (.csv) as you are using for this demo...
@sanbomics 9 months ago
I am beholden to the data that is available. In this case, that is what the authors of the paper provided. Ideally, everyone would deposit h5ad files xD
@エルディープダリア 9 months ago
Do we have to get markers using scanpy in order to proceed with model.differential_expression in scVI? Also, is there a way to navigate the markers generated in both dataframes, for example to generate a csv file or something to see all the genes in all clusters?
@sanbomics 9 months ago
They are independent of each other, so you can do only one if you want. And you should be able to get a dataframe for both: sc.get.rank_genes_groups_df for scanpy, and the de dataframe I showed in the tutorial for scvi.
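To dump both sets of markers to csv files, something like this should work (it assumes sc.tl.rank_genes_groups has been run on adata, and that de is the dataframe returned by the scVI differential expression step in the tutorial):
import scanpy as sc
# All clusters in one dataframe; group=None returns every group at once
markers = sc.get.rank_genes_groups_df(adata, group=None)
markers.to_csv('scanpy_markers_all_clusters.csv', index=False)
# The scVI result is already a dataframe
de.to_csv('scvi_markers_all_clusters.csv')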
@Sub-C-160 9 months ago
Very good tutorial! I have one issue with concatenating adata objects. When I use sc.concat() I lose the mt labeling and also all the QC metrics in the adata.var object. Do you have an idea what the problem could be?
@sanbomics 9 months ago
Yes! You can change how it is merged. Changing outer/inner will change whether things are dropped when they don't exist in all the datasets. It should be an option in the sc.concat() function itself.
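If it helps, sc.concat (a thin wrapper around anndata.concat) exposes join and merge arguments, and merge controls what happens to the .var columns. A minimal example, using the out list of per-sample objects from the tutorial:
import scanpy as sc
# join='outer' keeps genes present in any sample; merge='first' carries .var columns
# (e.g. the mt flag and QC metrics) over from the first object that has them
adata = sc.concat(out, join='outer', merge='first')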
@sahilsukla 11 months ago
Wonderful explanation. Just one query: should we not remove MT and ribosomal genes before preparing models for identifying doublets?
@sanbomics 11 months ago
No, I don't think so. Doublet removal is independent of any biological knowledge. Removing specific genes because they are MT/ribosomal wouldn't affect anything and would only remove potentially useful features for identifying doublets.
@irfanalahi380 1 year ago
Thanks again for the awesome video. I have a question. In the def pp(csv_path) function you read a csv file twice. Can we avoid reading the same file twice? I think file reading takes some time. Thanks.
@sanbomics 1 year ago
Yup! You can make a copy instead after reading it in the first time. The way I have it will add an extra ~20 sec per sample.
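A sketch of what that could look like inside the helper; the body here is illustrative, not the exact function from the video:
import scanpy as sc
def pp(csv_path):
    adata = sc.read_csv(csv_path).T   # .T only if the file is genes x cells
    raw = adata.copy()                # keep a copy instead of reading the file again
    sc.pp.filter_cells(adata, min_genes=200)
    # ... doublet detection etc. on adata, then restore genes/cells from raw as needed
    return adata, raw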
@sadrahakim1272 10 months ago
Great tutorial!! Can we do this on a count matrix? I mean, instead of having an expression matrix, we have a count matrix (rows are cells and columns are genes).
@sanbomics 9 months ago
Hi! I think I may be misunderstanding the question. Those should be the same thing.
@jorge1869 1 year ago
A more complete analysis from beginning to end would be interesting. In other words, from the moment you receive the raw data until you analyze it with scanpy. That way it would be more useful for those who are starting out in this world. Greetings
@sanbomics 1 year ago
Hi! I do have a couple of introductory videos that go over running CellRanger, for example.
@victorassis9078 11 months ago
Hey! I'm running out of memory when integrating samples. How can I complete this tutorial? Is there any technique to reduce memory usage?
@sanbomics 10 months ago
Yes! Make sure to convert them to sparse matrices after loading in each dataset. This will reduce the memory required a lot. But you can also load in fewer cells if that still isn't enough. You are still going to run into issues if you run anything that requires converting the sparse matrix to dense, though.
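The conversion itself is a couple of lines per sample (scipy's CSR format is the usual choice):
from scipy.sparse import csr_matrix, issparse
# Store the expression matrix sparsely to cut memory use
if not issparse(adata.X):
    adata.X = csr_matrix(adata.X)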
@jalv1499 1 year ago
Thank you for this amazingly helpful video! In this video, you also gave an example of differential gene expression analysis between two conditions rather than cell types. What's the difference between DEG analysis across conditions and the pseudobulk DEG analysis in another video you created? Which approach would you recommend? Thank you!
@sanbomics 11 months ago
I personally use pseudobulk exclusively now, mostly because diffxpy can be a pain to work with sometimes and pseudobulk is a more recognizable technique (hard to say which is actually better though).
@chrisdoan3210 1 year ago
Hi Mark, my data is not in csv format like yours, so I can use read_10x_mtx() to read the data into a Jupyter notebook, is that correct?
@sanbomics 1 year ago
Yup! scanpy has like 5 different ways to read in the data.
@zainziad3915 1 year ago
Isn't using adata = sc.concat(out) going to throw away a lot of genes since it uses an inner join by default? Shouldn't we use join='outer'?
@sanbomics 1 year ago
If your data are similar, concat will only get rid of a small number of genes that are not expressed in both samples. If they are different (or if you are worried), don't filter genes until after concatenating. The latter is what I typically do now and it will throw away no genes.
@trinhthuylinh7719 1 year ago
Hi, thank you so much for doing this video! It is really helpful. I have a glitch in my practice with another dataset at the DE analysis with diffxpy. The dataset I used has negative values, so I assumed it was regressed during normalization. However, there are no .to_adata or .toarray attributes on my object. How should I deal with this situation?
@sanbomics 1 year ago
Unfortunately you will need the raw data. So there are a few options: 1) Regenerate the counts if you have access to the fastq files. 2) Use their processed data for analysis and just skip the preprocessing, scaling, and integration. Start at PCA, neighbors, UMAP, etc., which means you'll be stuck with basic DE analysis within scanpy or Seurat.
@theoreticalorigamiresearch186
I skipped the integration step, and now 'model' is not defined. How can I easily fix this while still skipping the integration process?
@sanbomics 1 year ago
Hi, the model is specific to the scvi integration. If you don't do the integration part you cannot recreate anything where I use the model. But I do show alternatives without the model for every step.
@AnhNguyen-x3o4o 1 year ago
Would you recommend using scvi differential expression over scanpy's rank_genes_groups for DE?
@sanbomics 1 year ago
I would recommend lots of things over rank_genes_groups in scanpy or FindMarkers in Seurat. scvi, diffxpy, or pseudobulk are probably your best three options. scvi is probably the easiest.
@johachinjoel2076 6 months ago
Thanks a lot. My PI asked me to analyze some single-cell data. I had no clue how to approach the problem as I am mostly a wet lab researcher. You helped me tremendously.
@sanbomics 5 months ago
Glad I could help!
@harryliu1005 1 year ago
This video is absolutely helpful! Thanks a lot! However, it's my first time learning RNA-seq. Where should I begin so that I can learn RNA-seq systematically? I have already learned about cells, DNA and RNA, and I have some programming experience too :)
@sanbomics 1 year ago
Maybe start with bulk RNA-seq analysis and go from there. Learning from experience and troubleshooting is a great way to start.
@ayaqz3144 8 months ago
Thank you very much, you are doing a great job.
@quentinchartreux6085 1 year ago
Very great video. I was wondering: after identifying the cell types, when you want to perform the differential expression between two conditions in only one cell type, should we subset and redo an scvi model?
@sanbomics 11 months ago
That's a great question and I am not sure I know the right answer. On one hand you will have fewer samples to train the model, but the model may be more specific for the cell types you are interested in. If you try both I would be interested to know how they compare.
@tarkkrloglu2406 10 months ago
Hi, thank you for the tutorial. I have one question. When I run model.train(), I get an error: ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 128]). If I change the batch size, it works. However, the default parameters don't work. What is wrong?
@HUIZHAN-h6i 10 months ago
I got the same error.
@sanbomics 10 months ago
Check out this and see if it solves it: discourse.scverse.org/t/solo-scvi-train-error-related-to-batch-size/1591
@mostafaismail4253 2 years ago
What about scATAC + scDNA-seq (CNV)? There are no resources for these topics and it would be great if you did a tutorial about them. ❤️
@sanbomics 2 years ago
I'll keep that in mind for an upcoming video. Definitely going to do some scATAC + RNA soon at least. So many things I want to do but so little time to actually make videos..
@HUIZHAN-h6i 11 months ago
I really like your videos! They're really helpful for beginners like me. Thank you so much! I have a question about the scvi model. I'm a little confused about PCA, tSNE, and UMAP, which are generally done in Seurat. So if we use scvi to do dimensionality reduction, then we don't have to do tSNE, right? In your video, you use scvi to correct for different covariates after integration. Does scvi also do dimensionality reduction?
@sanbomics 10 months ago
Yup, scvi will give you embeddings from which you can then compute the neighborhood graph. tSNE or UMAP are still necessary if you plan to visualize the data in that way; they use the neighborhood graph. scVI --> neighbors --> UMAP/tSNE. With Seurat you are used to variable genes --> PCA --> neighbors --> UMAP/tSNE.
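That scVI --> neighbors --> UMAP flow in code, matching the integration part of the tutorial (model is the trained scVI model):
import scanpy as sc
# The scVI latent space replaces PCA as the input to the neighborhood graph
adata.obsm['X_scVI'] = model.get_latent_representation()
sc.pp.neighbors(adata, use_rep='X_scVI')
sc.tl.umap(adata)
sc.tl.leiden(adata, resolution=0.5)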
@HUIZHAN-h6i 10 months ago
@@sanbomics thanks!
@エルディープダリア
Hey Sam, can't thank you enough for your tutorials, they're honestly life-saving. Following this one, I have trouble getting "creative" with the sample identifiers when integrating. I have 4 samples in H5 format, not CSV files like the samples you've used. I tried several times, but I failed to set the samples up the way you've done. Is there a way you can follow up on that?
@sanbomics 1 year ago
You should be able to load the h5 files with sc.read_10x_h5.
@chrisjmolina 1 year ago
@sanbomics Hi Sam, I'm having similar problems with h5 files. I'm on a Mac (Python 3.10) and used the code you presented here, then tried the code you shared in your scvi integration video, yet I'm still having the same problem. When I run this:
def pp(path):
    adata = sc.read_10x_h5(path)
    sc.pp.filter_cells(adata, min_genes=300)
    adata.obs['Sample'] = path.split('_')[0]  # D21r3_sample_filtered_feature_bc_matrix.h5
    return adata
out = []
for file in os.listdir('/Users/cm/Desktop/SingleCell/'):
    out.append(pp('/Users/cm/Desktop/SingleCell/' + file))
I get this error:
Cell In[98], line 3
    1 out = []
    2 for file in files:
----> 3 out.append(pp('/Users/chrismolina/Desktop/CM1CM3Force100k/' + file))
Cell In[91], line 2, in pp(path)
    1 def pp(path):
----> 2 adata = sc.read_10x_h5(path)
    3 adata.var_names_make_unique()
    5 sc.pp.filter_cells(adata, min_genes=300)
File /opt/homebrew/Caskroom/mambaforge/base/envs/NewEnv/lib/python3.10/site-packages/scanpy/readwrite.py:179, in read_10x_h5(filename, genome, gex_only, backup_url)
    177 if not is_present:
    178     logg.debug(f'... did not find original file {filename}')
--> 179 with h5py.File(str(filename), 'r') as f:
    180     v3 = '/matrix' in f
OSError: Unable to open file (file signature not found)
I'm able to open all the h5 files individually, so I don't think the files are corrupted. Do you have any advice on how to troubleshoot the code?
@ilyasimutin 2 years ago
Fantastic!!
@chrisdoan3210 2 years ago
Hi Mark. May I know which computer you used to run this analysis?
@sanbomics 2 years ago
Hey! Sure, an Ubuntu machine with 128 GB RAM, 24 CPU cores and an NVIDIA GPU. RAM will be the limiting factor depending on how many samples you are doing at once. Without a GPU it will just take longer.
@chrisdoan3210 2 years ago
@@sanbomics Thank you! So most laptops and desktops can't run this analysis. I have a Linux server which has more RAM, but it is run via the command line. How can I get a Jupyter Notebook interface as you did?
@sanbomics 2 years ago
Great question:
1) Start your notebook on the server with the --no-browser flag (do it in tmux so you can exit the terminal).
2) On your local machine do ssh port forwarding: ssh -i path/to/key/if/you/have/one.pem -NfL 9999:localhost:8888 username@address
3) localhost:9999 will bring up the server on your local machine.
To make it easier in the future, you can add the command as an alias in your bashrc.
@chrisdoan3210 1 year ago
@@sanbomics Hi Mark. I don't have root access to set up new things on the server, so I ran your code in a Python script:
python scRNA_seq.py
Traceback (most recent call last):
  File "scRNA_seq.py", line 2, in <module>
    import scanpy as sc
  File "/home/user/.pyenv/versions/3.8.0/lib/python3.8/site-packages/scanpy/__init__.py", line 8, in <module>
    check_versions()
  File "/home/user/.pyenv/versions/3.8.0/lib/python3.8/site-packages/scanpy/_utils/__init__.py", line 47, in check_versions
    umap_version = pkg_version("umap-learn")
  File "/home/user/.pyenv/versions/3.8.0/lib/python3.8/site-packages/scanpy/_compat.py", line 33, in pkg_version
    return version.parse(v(package))
  File "/home/user/.pyenv/versions/3.8.0/lib/python3.8/site-packages/packaging/version.py", line 49, in parse
    return Version(version)
  File "/home/user/.pyenv/versions/3.8.0/lib/python3.8/site-packages/packaging/version.py", line 264, in __init__
    match = self._regex.search(version)
TypeError: expected string or bytes-like object
Do you know how I can fix this error? Thank you so much.
@sanbomics 1 year ago
Hi Chris. Are you doing this in a miniconda environment?
@shreyaslabhsetwar6083 1 year ago
Hey, when we do markers.logfoldchanges > .5, are we only including the upregulated genes? If we wish to extract markers for downregulated genes, would it be markers.logfoldchanges < -0.5?
@sanbomics 1 year ago
Exactly. Or in your initial filtering you can do abs(data) > 0.5.
@shreyaslabhsetwar6083 1 year ago
@@sanbomics Got it. Thanks!!
@Saed7630 1 year ago
Great job! However, it's crucial to consider the significant questions we're aiming to address through these intricate scripting processes. Is the effort invested truly worthwhile?
@sanbomics 1 year ago
Is there any point to anything in life?
@MrSureshbob 6 months ago
This is an interesting comment. I guess you should not be watching this channel if you think so. These videos are lifesavers for beginners like me.
@sebastianmoreno9096 1 year ago
Thanks a lot! This video is brilliant :)! Really useful! I was just wondering why you regress out after selecting highly variable genes and not before. When I regress out cell cycle after HVG selection I still get a cell-cycle cluster when running leiden. I don't see that kind of cluster if I regress out before HVG selection. Is there any reason to regress out after HVG selection? Thanks a lot for this inspirational video!
@sanbomics 1 year ago
Good question and interesting observation. I have only ever seen regression after variable features. I'm guessing it's two things: 1) regression influences the finding of variable features, and 2) theoretically the variable features do a better job describing your data and you shouldn't see a difference.. but you do, so that is very interesting to me. What if you increase the number of variable features?
@sebastianmoreno9096 1 year ago
@@sanbomics Hi! Thanks for your response. I have used different numbers of variable features, always with the same results. If I regress out before HVG selection, I don't see any cluster related to what I'm regressing out. The problem is that the gene expression values changed and now I have negative values for some genes :/
@henryren2790 1 year ago
Thanks for the walkthrough. How did you get the pulldown manual for all the different filetypes under 'sc.read' at 5:04?
@henryren2790 1 year ago
I see, I used the "tab" key to get the pulldown manual... is that a common practice in Python, or is it just a trick in scanpy?
@sanbomics 1 year ago
It's a trick inside Jupyter notebooks. You can try to use tab to autocomplete anything you are typing, which includes modules you have loaded. Another nice trick is to use ? after a function to see its manual, for example: sc.read_h5ad?
@mst63th 1 year ago
Can we concatenate multiple samples and then run the pre-processing if we have multiple samples? Do you think it makes a difference compared to your approach?
@sanbomics 1 year ago
You technically can. But it's better to preprocess the samples individually before concatenation. For example, if you concatenate two samples, imagine one sample has a different MT% distribution than the other. If you now QC based on the combined distribution, you will only remove dead cells from the one with the overall higher MT%, which may just be due to technical differences. Also, the doublet removal procedure I use only works on individual samples.
@mst63th 1 year ago
@@sanbomics I got your point, sounds reasonable. Thanks
@nourlarifi1689 1 year ago
Thank you very much for this tutorial. I have 2 questions: 1/ should we always apply 1e4 for data normalization or can we pick other values? 2/ how can I store the plot generated by this command line in a file: sc.pl.umap(adata, color = ['leiden', 'Sample'], frameon = False)
@sanbomics 1 year ago
It's actually recommended to just use log1p now with no target value. So use the default normalize command with no target. There is a "save" argument you can add, e.g., save = 'thing.png'
@nourlarifi1689 1 year ago
@@sanbomics Thank you for responding. You mean I apply directly:
sc.pp.normalize_total(adata)
sc.pp.log1p(adata)
without adding target_sum = 1e4?
@sanbomics 1 year ago
exactly!
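Putting the two answers above together, a minimal sketch (the save string is just an example; by default scanpy prefixes it with the plot type and writes it into its figures directory, e.g. figures/umap_leiden_sample.png):
import scanpy as sc
# Current recommendation: no target_sum, then log1p
sc.pp.normalize_total(adata)
sc.pp.log1p(adata)
# Save the UMAP to a file instead of only showing it
sc.pl.umap(adata, color=['leiden', 'Sample'], frameon=False, save='_leiden_sample.png')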
@toshiyukiitai1067 1 year ago
Thank you for this great tutorial and your other videos. I have been learning a lot from your tutorials. I have one question about cell-cycle scoring and regression: is it a required step for scRNA-seq or optional? If it is better to do it, which is better, before or after integrating multiple datasets?
@sanbomics 1 year ago
I'm not sure it is necessary, but it can be useful in some situations, especially if you are looking at readily cycling cells. You can calculate the score on each individual sample and add it to obs. When you are training the scVI model you can include the scores as continuous covariates.
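A sketch of that covariate idea, assuming s_genes and g2m_genes are your cell-cycle gene lists (e.g. the commonly used Regev lab list) and that raw counts are stored in a 'counts' layer as in the tutorial; score_genes_cell_cycle adds S_score and G2M_score columns to obs:
import scanpy as sc
import scvi
# Score each cell for the cell cycle phase signatures
sc.tl.score_genes_cell_cycle(adata, s_genes=s_genes, g2m_genes=g2m_genes)
# Hand the scores to scVI as continuous covariates
scvi.model.SCVI.setup_anndata(
    adata,
    layer='counts',
    categorical_covariate_keys=['Sample'],
    continuous_covariate_keys=['S_score', 'G2M_score'],
)
model = scvi.model.SCVI(adata)
model.train()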
@toshiyukiitai1067 1 year ago
@@sanbomics Thank you for your reply! As you mentioned, I looked for the information, but I have not found a good answer on how to handle the cell-cycle score. Many thanks again for your advice and many videos. I am a physician-scientist and a newbie in bioinformatics. Your videos are so helpful!
@chrisjmolina 1 year ago
@@toshiyukiitai1067 I would also be interested in how to handle the cell cycle scores.
@cryan3240 1 year ago
Thank you dude, this really helps me a lot!
@mehdipourrostami5206 1 year ago
Thank you for your awesome videos. Can you suggest any software for automated cell annotation of mouse lung cells?
@sanbomics 1 year ago
Yeah, sure. I have some videos already for this, for both R and Python. Just be careful, because if the reference populations don't line up well you will introduce error.
@mehdipourrostami5206 1 year ago
@@sanbomics That is right. I followed one of your videos on the mouse data that I have and got some crazy UMAPs that did not make sense to me. Anyhow, thanks for all your effort. I have learned a lot.
@shilpasy 1 year ago
Great video, thank you so much. So many cool graphs in Python! However, there are too many issues while installing scvi-tools, and even after successful installation there is some debugging involved at almost every step. I am using Windows; is it related to the Windows OS specifically?
@shilpasy 1 year ago
Just an update on this one: I tried to install this in a Colab notebook. After installing many dependencies individually, it finally worked and I was able to import. But during the "solo.train()" step, I get this error: Monitored metric validation_loss = nan is not finite. Previous best value was inf. Signaling Trainer to stop. The df (solo.predict()) has NaN values.
@sanbomics 1 year ago
Yeah, it probably has to do with Windows. I have never had any trouble on Ubuntu 18+.
@sanbomics 1 year ago
Do you get that same solo error with a different dataset?
@hatchet646 1 year ago
Somehow I managed to get .obs and .var outputting each other's dataframe lol...
@sanbomics 1 year ago
Haha oops.. what happens if you transpose the adata?
@sanbomics 1 year ago
with a .T