
OpenAI Whisper Speaker Diarization - Transcription with Speaker Names 

1littlecoder
80K subscribers
55K views

High-level overview of what's happening with OpenAI Whisper Speaker Diarization:
Using OpenAI's Whisper model to separate audio into segments and generate transcripts.
Then generating speaker embeddings for each segment.
Then using agglomerative clustering on the embeddings to identify the speaker for each segment.
Speaker Identification or Speaker Labelling is very important for Podcast Transcription or Conversation Audio Transcription. This code helps you do that.
Dwarkesh Patel's Tweet Announcement - / 1579672641887408129
Colab - colab.research...
huggingface.co...
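A minimal sketch of the pipeline the description outlines, assuming the openai-whisper, pyannote.audio, and scikit-learn packages; the file name, model size, and num_speakers value are illustrative, and the exact Colab code may differ:

    import numpy as np
    import whisper
    from pyannote.audio import Audio
    from pyannote.core import Segment
    from pyannote.audio.pipelines.speaker_verification import PretrainedSpeakerEmbedding
    from sklearn.cluster import AgglomerativeClustering

    path = "audio.wav"   # mono WAV; see the stereo-to-mono fixes in the comments
    num_speakers = 2

    # 1. Whisper splits the audio into segments with "start", "end", "text".
    model = whisper.load_model("base")
    segments = model.transcribe(path)["segments"]

    # 2. A pretrained speaker-embedding model turns each segment into a vector.
    embedding_model = PretrainedSpeakerEmbedding("speechbrain/spkrec-ecapa-voxceleb")
    audio = Audio()
    embeddings = np.zeros((len(segments), 192))
    for i, seg in enumerate(segments):
        # Note: the last segment's "end" may need clamping to the file duration.
        waveform, sample_rate = audio.crop(path, Segment(seg["start"], seg["end"]))
        embeddings[i] = embedding_model(waveform[None])
    embeddings = np.nan_to_num(embeddings)

    # 3. Agglomerative clustering groups the vectors; one cluster = one speaker.
    labels = AgglomerativeClustering(num_speakers).fit_predict(embeddings)
    for seg, label in zip(segments, labels):
        print(f"SPEAKER {label + 1}: {seg['text'].strip()}")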

Published: 29 Aug 2024

Comments: 86
@DwarkeshPatel 1 year ago
Thanks so much for making this video and highlighting my code! Really cool to see it's useful to other people!
@integralogic 1 year ago
Thank you!
@kushagragupta149 3 months ago
Why is it down?
@bumblequiz 1 month ago
Mr. Patel, I can't find any online sites that do speaker diarization accurately. I don't have a GPU on my system. I am ready to pay if the speaker identification is absolutely accurate. Do you know of such services?
@estrangeiroemtodaparte 1 year ago
As always, delivering the goods! Thanks 1littlecoder!
@1littlecoder 1 year ago
Any time!
@stebe8271 1 year ago
I was working on a model to do this exact thing as we speak. Thanks for the resource; this will save me lots of time.
@1littlecoder 1 year ago
Glad to be helpful.
@kmanjunath5609 10 months ago
I almost did it manually: 1. Created an RTTM file using pyannote. 2. Sliced the full-length audio using the RTTM in/out times for each segment. 3. Ran each slice through Whisper for transcription. It was about 5 times slower, and I was thinking hard about how to do it the other way around: first generate the full transcript and then separate the segments. Somehow I saw your video and am impressed; the AgglomerativeClustering at the end blew my mind. Thanks for sharing the knowledge.
@ChopLabalagun 11 months ago
The code is DEAD, it no longer works :(
@serta5727 1 year ago
Whisper is very useful ❤
@1littlecoder 1 year ago
It's very useful ☺️
@IWLTFT 1 year ago
Hi everyone, thanks 1littlecoder and Dwarkesh, this is fantastic. I managed to get it working; it is helping me immensely and I am learning a lot. I am struggling with Google, as I always end up with 0 compute units, which causes all sorts of issues, and I am unable to complete the transcriptions (I have quite large files I am processing, several 1 hr coaching sessions). Does AWS have a better option going? And the next question would be: how easy would it be to port this to an AWS Linux environment, if that is an option?
@rrrila8851 1 year ago
It is interesting, although I think it would be way better to autodetect how many speakers there are, and then start the transcription.
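One way to approximate that with the same clustering step: let scikit-learn's AgglomerativeClustering derive the number of clusters from a distance threshold instead of a fixed num_speakers. The threshold value below is illustrative and would need tuning against your embeddings:

    from sklearn.cluster import AgglomerativeClustering

    # With n_clusters=None, clusters are merged until the linkage distance
    # exceeds distance_threshold; the number of distinct labels that remain
    # is the estimated speaker count.
    clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=1.0).fit(embeddings)
    estimated_speakers = len(set(clustering.labels_))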
@JohnHumphrey 1 year ago
I'm not much of a dev myself, but it seems like it might be simple to add a total time spoken for each speaker (see the sketch after this thread). I would love to be able to analyze podcasts in order to understand how much time the host is speaking relative to the guest. In fact, it would be very cool if someone built an app that would remove one of the speakers from a conversation and create a separate audio file consisting of only what the remaining speaker(s) said.
@Michallote 11 months ago
It sounds like an interesting project. Becoming a dev is more about trying even though you don't know what you are doing, googling stuff, and going step by step through what each line does, broadly. Progress is gradual! I encourage you to try it yourself 😊
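A minimal sketch of the talk-time idea from the comment above, assuming each Whisper segment carries "start" and "end" times plus a "speaker" label added after clustering:

    from collections import defaultdict

    # Sum segment durations per speaker label.
    talk_time = defaultdict(float)
    for seg in segments:
        talk_time[seg["speaker"]] += seg["end"] - seg["start"]

    total = sum(talk_time.values())
    for speaker, seconds in sorted(talk_time.items()):
        print(f"{speaker}: {seconds:.0f}s ({100 * seconds / total:.0f}% of the conversation)")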
@dreamypujara3384 1 year ago
[ERROR] I am getting an assertion error at embeddings[i] = segment_embedding(segment) in this cell:

    embeddings = np.zeros(shape=(len(segments), 192))
    for i, segment in enumerate(segments):
        embeddings[i] = segment_embedding(segment)
    embeddings = np.nan_to_num(embeddings)

I am using a Hindi audio clip and the base model, on Colab with a T4 GPU runtime.
@JoseMorenoofi 1 year ago
Change the audio to mono.
@yayaninick 1 year ago
@JoseMorenoofi Thanks, it worked.
@warrior_1309 1 year ago
@JoseMorenoofi Thanks.
@gauthierbayle1508 1 year ago
@JoseMorenoofi Hi! I get the assertion error @dreamypujara3384 mentioned above, and I'd like to know where you made that change from audio to mono... Thanks :)
@traveltastequest 1 year ago
@JoseMorenoofi Can you please explain how to do that?
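For those asking how to convert to mono: later comments in this thread give two working fixes. A minimal ffmpeg version, run before the embedding cell, looks like this (file names are illustrative):

    import subprocess

    # -ac 1 downmixes to a single channel; pyannote's embedding model
    # asserts num_channels == 1, which is what the AssertionError is about.
    subprocess.call(['ffmpeg', '-i', 'audio.wav', '-ac', '1', 'audio_mono.wav', '-y'])
    path = 'audio_mono.wav'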
@DestroyaDestroya 1 year ago
Anyone tried this recently? The code no longer works. Looks to me like some dependencies have been upgraded.
@gerhardburau 1 year ago
True, the code does not work. Please fix, I need it.
@geoffphillips5293 10 months ago
None of the ones I've played with cope particularly well with more complicated situations, for instance where one person interrupts another, or where there are three or more people. They can all cope with two very clearly different speakers, but then I figure I could do that with old-school techniques like simply averaging the frequency. It's weird, because the speech-to-text itself is enormously clever; it's just surprising that the AI can't distinguish voices well.
@user-ud7cq3lq4l 11 months ago
I started to love it when you used Bruce Wayne's clip.
@jordandunn4731 1 year ago
On the last cell I get UnicodeEncodeError: 'ascii' codec can't encode characters in position 5-6: ordinal not in range(128). Any ideas what's going wrong and how I can fix it?
@latabletagrafica 1 year ago
Same problem.
@humbertucho2724 1 year ago
Same problem; in my case I am using Spanish. It looks like the problem is with accented characters, e.g. "ó".
@humbertucho2724 1 year ago
I solved the problem by replacing the line

    f = open("transcript.txt", "w")

with the following:

    f = open("transcript.txt", "w", encoding="utf-8")
@latabletagrafica 1 year ago
@humbertucho2724 That worked for me, thanks.
@thijsdezeeuw8607 1 year ago
@humbertucho2724 Thanks mate!
@cho7official55 2 months ago
This video is one year old. Is there now an open-source way to do diarization this easily, as good as Krisp's product, and locally?
@jhoanlagunapena429 6 months ago
Hi! Thanks a lot for this video! I really liked it and I appreciate what you've shared here. I have a question: what if you are not quite sure about the number of speakers? Sometimes it's not so easy to distinguish one voice from another if there are a lot of people talking (like a focus group). What can I do in that case?
@nealbagai5388 6 months ago
Having this same issue - wondering if it's a more widespread issue or specific to this configuration of pyannote. Thinking about trying NeMo if this poses a problem.
@AyiteB 8 months ago
This kept crashing at the embeddings section for me. And the Hugging Face link isn't valid anymore.
@sarfxa9974 7 months ago
You just have to add

    if waveform.shape[0] > 1:
        waveform = waveform.mean(axis=0, keepdims=True)

right before the return of segment_embedding().
@serychristianrenaud 1 year ago
Thanks
@1littlecoder 1 year ago
You're welcome
@ebramsameh5349 1 year ago
Why does it give me an error on this line: embeddings[i] = segment_embedding(segment)?
@NielsPrzybilla 1 year ago
Same error here:

    ---------------------------------------------------------------------------
    AssertionError                            Traceback (most recent call last)
    in ()
          1 embeddings = np.zeros(shape=(len(segments), 192))
          2 for i, segment in enumerate(segments):
    ----> 3     embeddings[i] = segment_embedding(segment)
          4
          5 embeddings = np.nan_to_num(embeddings)

    1 frames
    /usr/local/lib/python3.9/dist-packages/pyannote/audio/pipelines/speaker_verification.py in __call__(self, waveforms, masks)
        316
        317         batch_size, num_channels, num_samples = waveforms.shape
    --> 318         assert num_channels == 1
        319
        320         waveforms = waveforms.squeeze(dim=1)

    AssertionError:

And yes :-) I am a totally new coder.
@geckoofglory1085 1 year ago
The same thing happened to me.
@Hiroyuki255 1 year ago
Same. I noted it gives this error with more or less big audio files; smaller ones worked fine. Not sure if that's the reason though.
@Hiroyuki255 1 year ago
Guess I found the answer! If you convert the initial audio file from stereo to mono, it works fine.
@guillemarmenteras8105 1 year ago
@Hiroyuki255 How do you do that?
@Labbsatr1 1 year ago
It is not working anymore.
@IgorGeraskin 10 months ago
Thank you for sharing your knowledge! Everything works fine, but an error started appearing:

    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behavior is the source of the following dependency conflicts.
    llmx 0.0.15a0 requires cohere, which is not installed.
    llmx 0.0.15a0 requires openai, which is not installed.

How do I fix it?
@jesusjim 8 months ago
Working with large files of 1 hr recordings seems to be a problem. I was hoping I could run this locally without Google Colab.
@micheleromanin7168 1 month ago
You can run this locally, but you need a decent GPU to do it. Just make a Python script out of the Colab.
@frosti7 1 year ago
It doesn't work well (it detects the language as Malay, and it does not offer custom names for speakers) - anyone have a better working solution?
@SustainaBIT 9 months ago
Thank you so much. By any chance, do you think there could be a method to do all of that in real time, during a call, let's say? Any ideas on where I could start would be very helpful ❤❤
@mjaym30 1 year ago
Amazing videos!! Keep going! I have a request though: could you please publish a video on customizing GPT-J-6B on Colab using the 8-bit version?
@user-kn8zd2qj9x 10 months ago
About the embedding issue: here is a replacement for the cell that converts to WAV using ffmpeg; I have included a conversion to mono. This will fix the issue with no problems. Simply replace the cell that converted to .wav with the following:

    # Convert to WAV format if not already in WAV
    if path[-3:] != 'wav':
        subprocess.call(['ffmpeg', '-i', path, 'audio.wav', '-y'])
        path = 'audio.wav'

    # Convert to mono
    subprocess.call(['ffmpeg', '-i', path, '-ac', '1', 'audio_mono.wav', '-y'])
    path = 'audio_mono.wav'
@MixwellSidechains 1 year ago
I'm running it locally in a Jupyter notebook, but I can't seem to find an offline model for PretrainedSpeakerEmbedding.
@divyanshusingh7893 6 months ago
Can we automatically detect the number of speakers in an audio file?
@rehou45 1 year ago
I tried to execute the code on Google Colab, but it buffers for more than 1 hour and still hasn't executed...
@raghvendra87 1 year ago
It does not work anymore.
@user-zw3xh2bo2z 1 year ago
Hello sir, I have a small doubt: if we have more than 2 speakers, how should the parameter num_speakers vary?
@mramilvideo 4 months ago
There are no speaker names. How can we identify speakers?
@klarinooo 1 year ago
I'm trying to upload a WAV file of 5 MB and I'm getting "RangeError: Maximum call stack size exceeded". Is this only for tiny file sizes?
@kmanjunath5609 10 months ago
What if a new speaker enters partway through - does the number of speakers become old + new, or stay old?
@Sanguen666 11 months ago
smart pajeeet!
@bumblequiz 1 month ago
The link isn't working anymore.
@vinsmokearifka 1 year ago
Thank you, Sir. Is it only for English?
@gaurav12376 1 year ago
The Colab notebook is not accessible. Can you share a new link?
@cluttercleaners 11 months ago
Is there an updated Hugging Face link?
@datasciencewithanirudh5405 1 year ago
Did anyone encounter this error?
@DRHASNAINS 10 months ago
How can I run the same on PyCharm? Can anyone guide me?
@olegchernov1329 1 year ago
Can this be done locally?
@mgomez00 11 months ago
If you have enough GPU power locally, yes. I also assume you have Python and all the required libraries installed.
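A minimal local-run sketch under those assumptions (openai-whisper is the actual pip package; the model size and file name are illustrative):

    # pip install openai-whisper   (also requires ffmpeg on the PATH)
    import whisper

    # Runs fully offline after the model weights are downloaded once.
    model = whisper.load_model("base")
    result = model.transcribe("audio.wav")
    print(result["text"])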
@JorgeLopez-gw9xc 1 year ago
For this code you use the Whisper model from OpenAI? Why are you not using an API key?
@1littlecoder 1 year ago
I'm using it from the model (not the API).
@JorgeLopez-gw9xc 1 year ago
@1littlecoder So the Whisper model here is not the one on the OpenAI site. In my case I am looking for a method based on OpenAI; the reason is to be able to ensure the privacy of the information, since I want to do it with company data. Do you know if that is possible?
@user-nl2ic1kb7v 4 months ago
@JorgeLopez-gw9xc Hi Jorge, I have the same question as you. I would really appreciate it if you have figured out a solution and are happy to share some ideas with me.
@ciaran8491 1 year ago
How can I use this on my local installation?
@yangwang9688 1 year ago
Does it still work if the speech contains overlapping speakers?
@1littlecoder 1 year ago
I'm not sure if it'd work fine then. I have not checked it.
@user-jf5ru5ow8u 7 months ago
Brother, you should have also added the MP3 site links.
@abinashnayak8132 1 year ago
How do I change Speaker 1 and Speaker 2 to the actual people's names?
@mllife7921 1 year ago
You need to store each person's voice embeddings, then compare (with some similarity measure) and map them to the ones in the generated segments.
@AndreAngelantoni 1 year ago
Export to text, then do a search and replace.
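A hedged sketch of the embedding-comparison idea from @mllife7921's reply, assuming you have precomputed one reference embedding per known speaker (e.g. from a short solo clip, using the same embedding model as the segments); the names are illustrative:

    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def name_for(segment_vec, reference_embeddings):
        # Pick the known speaker whose reference voice is closest.
        return max(reference_embeddings,
                   key=lambda name: cosine(segment_vec, reference_embeddings[name]))

    # Usage, with segments and embeddings as computed in the notebook:
    # reference_embeddings = {"Host": host_vec, "Guest": guest_vec}
    # for seg, vec in zip(segments, embeddings):
    #     seg["speaker"] = name_for(vec, reference_embeddings)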
@OttmarFlorez 1 year ago
Terrible.