sepinf-inc / IPED

IPED Digital Forensic Tool. It is open source software that can be used to process and analyze digital evidence, often seized at crime scenes by law enforcement or in corporate investigations by private examiners.

#1823 whisper transcription #2165

Closed lfcnassif closed 1 month ago

lfcnassif commented 2 months ago

When finished, this will close #1823.

Already tested on CPU. I still need to test on GPU, test the remote service and verify Wav2Vec2 backwards compatibility.

lfcnassif commented 2 months ago

I think this is finished. @marcus6n, I would very much appreciate it if you could test this on Monday, thank you.

marcus6n commented 2 months ago

@lfcnassif Yes, I can test it!

marcus6n commented 2 months ago

@lfcnassif I've run the tests and everything is working properly.

gfd2020 commented 2 months ago

I was waiting for this PR. Thank you. I will test this PR with GPU CUDA. I also found some audios that had encoding problems in the transcription. I'll test them too.

@lfcnassif, a suggestion. I had also suggested calculating the final score in Python with "numpy.average(probs)", but numpy.average is a weighted average; since no weights are passed as a parameter, it behaves the same as numpy.mean. Maybe numpy.mean is a little faster...

Another thing, does this PR also close issue #1335?

lfcnassif commented 2 months ago

I will test this PR with GPU CUDA. I also found some audios that had encoding problems in the transcription. I'll test them too.

Hi @gfd2020! Additional tests will be very welcome!

@lfcnassif, a suggestion. I had also suggested calculating the final score in Python with "numpy.average(probs)", but numpy.average is a weighted average; since no weights are passed as a parameter, it behaves the same as numpy.mean. Maybe numpy.mean is a little faster...

I took the final score computation from your previous code suggestion, thank you! Good to know, we can replace the function, but I think the time difference will not be noticeable.
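
For reference, a minimal sketch (the probability values are made up) showing that, without weights, the two calls give the same result:

import numpy as np

# Hypothetical per-token probabilities returned by the transcription model
probs = np.array([0.91, 0.87, 0.95, 0.78])

# With no weights passed, numpy.average reduces to the plain arithmetic mean
finalScore = np.average(probs)
assert finalScore == np.mean(probs)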

Another thing, does this PR also close issue https://github.com/sepinf-inc/IPED/issues/1335?

No, I'll keep it open, since I haven't finished all my planned tests. I'm integrating this because some users asked for it. Beyond Whisper.cpp, which improved a lot in the last months and added full CUDA support, I also found WhisperX (which uses Faster-Whisper under the hood) and Insanely-Fast-Whisper. Those last 2 libs break long audios into 30s parts and execute batch inference on the audio segments simultaneously, resulting in up to 10x speed up thanks to batching, at the cost of increased GPU memory usage. I did a quick test with them and they are really fast for long audios indeed! But their approach can decrease the final accuracy, since the default Whisper algorithm uses previously transcribed tokens to help transcribe the next ones. AFAIK, those libraries break the audio into parts and the transcription is done independently on the 30s audio segments. As I haven't measured WER for those libraries yet, I'm concerned about integrating them. If they could accept many different audios as input and transcribe them using batch inference instead of breaking the audios, that would be a safer approach. But that would require more work on our side, to group audios with similar duration before transcription, decide whether or not to wait to group audios, signal the last audio, etc.

lfcnassif commented 2 months ago

Using float16 precision instead of int8 gave almost a 50% speed up on RTX 3090.

gfd2020 commented 2 months ago

Using float16 precision instead of int8 gave almost a 50% speed up on RTX 3090.

On CPU too?

lfcnassif commented 2 months ago

On CPU too?

Possibly not, I'll check and report back.

lfcnassif commented 2 months ago

@gfd2020 thanks for asking about the effect of float16 on CPU. Actually it doesn't work on CPU at all; I just pushed a commit fixing it. About float32 vs int8 speed on CPU, testing with ~160 audios on a 48-thread CPU with the medium Whisper model:

lfcnassif commented 2 months ago

Speed numbers of other implementations over a single 442s audio using 1 RTX 3090, medium model, float16 precision (except for whisper.cpp, where the precision couldn't be set):

Running over the dataset of 160 real-world small audios above (total duration of 2758s):

PS: Whisper.cpp seems to parallelize better than the others using multiple processes, so its last number could be improved.
PS2: For inference on CPU, Whisper.cpp is faster than Faster-Whisper by ~35%; not sure if I will time all of them on CPU...
PS3: Using the large-v3 model with Whisper.cpp, it produced hallucinations (repeated texts and a few non-existing texts); this was also observed with Faster-Whisper at a lower level.

gfd2020 commented 2 months ago

Hi, @lfcnassif

I don't have a very powerful GPU, but it has tensor cores, and the following error occurred: "Requested float16 compute type, but the target device or backend does not support efficient float16 computation."

So I changed it to float32 and it gave the following error: "CUDA failed with error out of memory"

Finally, I changed it to int8 and it worked fine on GPU.

So, I have two suggestions: 1) Print the error message if it falls back to computing on the CPU. 2) Leave int8 as the default and make the compute type a parameter in audiotranscripttask.txt

I'm still doing other tests.

lfcnassif commented 2 months ago

So, I have two suggestions:

  1. Print the error message if it falls back to computing on the CPU.
  2. Leave int8 as the default and make the compute type a parameter in audiotranscripttask.txt

Thanks for testing @gfd2020! Both are good suggestions, and I was already planning to externalize the compute_type (precision) parameter, and also the batch_size if we switch to WhisperX. I'm running accuracy tests and should post the results soon. About float16 not being supported, what is your installed CUDA Toolkit version?
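
For illustration only, a minimal sketch of such a fallback using faster-whisper's WhisperModel constructor; the order of attempts and the messages are assumptions, not the code in this PR:

from faster_whisper import WhisperModel

def load_model_with_fallback(model_name="medium"):
    # Try the fastest supported precision first, then fall back and tell the user why.
    for device, compute_type in (("cuda", "float16"), ("cuda", "int8"), ("cpu", "int8")):
        try:
            model = WhisperModel(model_name, device=device, compute_type=compute_type)
            print("Loaded", model_name, "on", device, "with compute_type =", compute_type)
            return model
        except Exception as e:
            print("Failed on", device, "/", compute_type, ":", e)
    raise RuntimeError("Could not load the Whisper model on any device")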

gfd2020 commented 2 months ago

Thanks for testing @gfd2020! Both are good suggestions, and I was already planning to externalize the compute_type (precision) parameter, and also the batch_size if we switch to WhisperX. I'm running accuracy tests and should post the results soon. About float16 not being supported, what is your installed CUDA Toolkit version?

NVIDIA CUDA 11.7.99 driver on Quadro P620
torch==1.12.1+cu116
torchvision==0.13.1+cu116
torchaudio==0.12.1

This was the only version that I managed to make work on these weaker GPUs (Quadro P620 and T400)

gfd2020 commented 2 months ago

I don't know if it's a proxy failure, but I couldn't download the 'medium' model. It starts to download and gives an error at around 150 MB. So I'm using the medium model created in the topic https://github.com/sepinf-inc/IPED/issues/1335#issuecomment-1645622285. I placed the model in the IPED models folder and changed the whisperModel variable to 'models/models--dwhoelz--whisper-medium-pt-ct2'.

lfcnassif commented 2 months ago

I don't know if it's a proxy failure, but I couldn't download the 'medium' model. It starts to download and gives an error at around 150 MB.

Seems like a network issue; you can try cleaning the local cache, AFAIK it is located in user_home/.cache

I'm using the medium model created in the topic #1335 (comment).

Based on my tests, if possible, I would suggest the default medium model over that one. That one was significantly better on Common Voice, but it was fine-tuned on it and worse on other datasets, so possibly there is a bias here. It also returns inconsistent results for numbers, sometimes returning Arabic numerals and sometimes numbers as written text, while default Whisper always returns Arabic numerals; that seems like an issue in the fine-tuning to me. And fine-tuning, if not properly done, can make the model generalize worse to unseen audios.

PS: Jonatas Grosman's fine-tuning, for example, always returns numbers as written text. It was fine-tuned on Common Voice, but results improved on many other datasets.

gfd2020 commented 2 months ago

I finished my tests here and everything works great, including remote transcription.

I was only able to download the medium model directly from the website. I compared each one manually and each has advantages and disadvantages.

@lfcnassif, will this PR be part of version 4.2?

lfcnassif commented 2 months ago

Thank you @gfd2020 for testing!

@lfcnassif, will this PR be part of version 4.2?

For sure. And also #1341.

lfcnassif commented 2 months ago

Just updated the code to use WhisperX, since it is much much faster for long audios when using a GPU and its accuracy is very similar to Faster-Whisper (see #1335). It is also faster than Faster-Whisper on CPU, because it uses a VAD filter to ignore audio parts without human speech.

But, to get probability scores returned, I had to apply this pending PR to WhisperX: https://github.com/m-bain/whisperX/pull/413

To use the patched version, install our WhisperX fork with the command below inside the IPED embedded python:
pip install git+https://github.com/sepinf-inc/whisperx.git@confidence_score

You must also put FFmpeg on PATH and install gputil inside the IPED python:
pip install gputil

Before merging this, I'll externalize compute_type and batch_size params to the config file.

In the future, we should patch WhisperX even more to transcribe many audios at the same time using batches (see #1539).
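
For context, a minimal sketch of how WhisperX is typically driven (the model name, file name and parameter values below are illustrative, not IPED's defaults):

import whisperx

device = "cuda"            # or "cpu"
compute_type = "float16"   # "int8" is safer on CPUs and low-memory GPUs
batch_size = 16            # number of 30s segments transcribed per batch

# Loads the Faster-Whisper backend plus the VAD filter used to skip silence
model = whisperx.load_model("medium", device, compute_type=compute_type, language="pt")
audio = whisperx.load_audio("audio.wav")
result = model.transcribe(audio, batch_size=batch_size)
for segment in result["segments"]:
    print(segment["start"], segment["end"], segment["text"])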

lfcnassif commented 2 months ago

I just finished my planned changes and pushed a new python package including the docopt-0.6.2 lib, which was causing issues with the WhisperX installation; now it should be easier.

Since I made several changes, I would appreciate a lot if you @marcus6n and @gfd2020 could test this again before merging, thanks in advance!

gfd2020 commented 2 months ago

Since I made several changes, I would appreciate a lot if you @marcus6n and @gfd2020 could test this again before merging, thanks in advance!

My tests will take a while because I'm having trouble installing WhisperX on GPU. On CPU it works. The torch version must be greater than 2.0. I'm working around it, but it will take a while.

I think some additional tips could be added to the wiki for installing on the GPU. I believe that for an average user it will be very difficult to install...

lfcnassif commented 2 months ago

My tests will take a while because I'm having trouble installing WhisperX on GPU. On CPU it works. The torch version must be greater than 2.0. I'm working around it, but it will take a while.

I think some additional tips could be added to the wiki for installing on the GPU. I believe that for an average user it will be very difficult to install...

Thanks @gfd2020!

The only issue I had was with the docopt dependency not installing into the IPED embedded python; did you face it before I included it in the package?

The GPU steps were the same as for faster-whisper: after installing whisperx, I just overwrote pytorch with one of the commands at https://pytorch.org/get-started/locally/ because I already had the CUDA toolkit and cuDNN installed and set on PATH.

Is your issue related to some incompatibility between Torch 2 and your GPU card?

gfd2020 commented 2 months ago

The only issue I had was with the docopt dependency not installing into the IPED embedded python; did you face it before I included it in the package?

Yes. I was only able to install it with the Python from this PR. In the python that comes with that separate package, I couldn't do it.

The GPU steps were the same as for faster-whisper: after installing whisperx, I just overwrote pytorch with one of the commands at https://pytorch.org/get-started/locally/ because I already had the CUDA toolkit and cuDNN installed and set on PATH.

I have to do more tests here. I'll try to install it with this link you sent me.

Is your issue related to some incompatibility between Torch 2 and your GPU card?

I managed to get it to work with torch 2.0, even with an old GPU. WhisperX works OK after I fixed error 1 below.

Now I want to report some things that came up.

1) It seems that ffmpeg is a prerequisite for whisperX to work. I ran it on a computer without ffmpeg in the PATH and it gave the following error:
[ERROR] [task.transcript.AbstractTranscriptTask] Unexpected exception while transcribing: audios/9.wav
java.lang.RuntimeException: Transcription failed, returned: FileNotFoundError(2, 'The system cannot find the file specified', None, 2, None)
at iped.engine.task.transcript.Wav2Vec2TranscriptTask.transcribeWavPart(Wav2Vec2TranscriptTask.java:271) ~[iped-engine-4.2-snapshot.jar:?]

I found this link with a tip and it really worked after putting ffmpeg in the path. https://stackoverflow.com/questions/73845566/openai-whisper-filenotfounderror-winerror-2-the-system-cannot-find-the-file

2) These two warnings appeared with the medium model:

[WARN] [task.transcript.WhisperTranscriptTask$1] Model was trained with pyannote.audio 0.0.1, yours is 3.1.1. Bad things might happen unless you revert pyannote.audio to 0.x.
[WARN] [task.transcript.WhisperTranscriptTask$1] Model was trained with torch 1.10.0+cu102, yours is 2.3.0+cu118. Bad things might happen unless you revert torch to 1.x.

lfcnassif commented 2 months ago
  1. It seems that ffmpeg is a prerequisite for whisperX to work. I ran it on a computer without ffmpeg in the PATH and it gave the following error:
[ERROR] [task.transcript.AbstractTranscriptTask] Unexpected exception while transcribing: audios/9.wav
java.lang.RuntimeException: Transcription failed, returned: FileNotFoundError(2, 'The system cannot find the file specified', None, 2, None)
at iped.engine.task.transcript.Wav2Vec2TranscriptTask.transcribeWavPart(Wav2Vec2TranscriptTask.java:271) ~[iped-engine-4.2-snapshot.jar:?]

That's sad news; I removed FFmpeg from PATH and just reproduced it. Yesterday I tested both faster-whisper and whisperX in a VM with a fresh Windows 10 install and, strangely, that error didn't happen; the only dependency needed was the MS Visual C++ Redistributable 2015-2019 package.

  2. These two warnings appeared with the medium model:

[WARN] [task.transcript.WhisperTranscriptTask$1] Model was trained with pyannote.audio 0.0.1, yours is 3.1.1. Bad things might happen unless you revert pyannote.audio to 0.x.
[WARN] [task.transcript.WhisperTranscriptTask$1] Model was trained with torch 1.10.0+cu102, yours is 2.3.0+cu118. Bad things might happen unless you revert torch to 1.x.

Those warnings also happen here. But all tests done on #1335 were with those warnings present.

gfd2020 commented 2 months ago

That's sad news; I removed FFmpeg from PATH and just reproduced it. Yesterday I tested both faster-whisper and whisperX in a VM with a fresh Windows 10 install and, strangely, that error didn't happen; the only dependency needed was the MS Visual C++ Redistributable 2015-2019 package.

Couldn't you put ffmpeg.exe in the IPED tools folder? Is the problem putting it on the PATH?

Those warnings also happen here. But all tests done on #1335 were with those warnings present.

Ok, just to let you know about them.

lfcnassif commented 2 months ago

Couldn't you put ffmpeg.exe in the IPED tools folder? Is the problem putting it on the PATH?

It's possible, but in #1267 @wladimirleite did a good job removing ffmpeg as a dependency, since we already use mplayer for video-related stuff...

Ok, just to let you know about them.

Thanks!

lfcnassif commented 2 months ago

Yesterday I tested both faster-whisper and whisperX in a VM with a fresh Windows 10 install and, strangely, that error didn't happen

My fault: I tested again in the VM and WhisperX does return an error without FFmpeg. I just added an explicit check and a better error message for the user if it is not found.
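
For illustration, a minimal sketch of such a check (the actual message added in this PR may differ):

import shutil

# Fail early with a clear message instead of the cryptic FileNotFoundError
if shutil.which("ffmpeg") is None:
    raise RuntimeError("FFmpeg not found on PATH; WhisperX needs it to decode audio files. "
                       "Install FFmpeg and add it to PATH before enabling the whisperx implementation.")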

gfd2020 commented 2 months ago

My fault: I tested again in the VM and WhisperX does return an error without FFmpeg. I just added an explicit check and a better error message for the user if it is not found.

Is there no way to modify the Python code to search for ffmpeg in a relative path within iped?

lfcnassif commented 2 months ago

Is there no way to modify the Python code to search for ffmpeg in a relative path within iped?

We can set the PATH env var of the main IPED process from the startup process and point it to an embedded ffmpeg. But I'm not sure we should embed ffmpeg, and actually I'm thinking about offering both faster-whisper and whisperx, as suggested by @rafael844, because faster-whisper doesn't have the ffmpeg dependency and whisperx has many dependencies that may cause conflicts with other modules (now or in the future).

gfd2020 commented 2 months ago

Can I do a small step by step guide to install the requirements on the GPU? I did some tests here and everything worked.

I had to make some modifications to the code to be able to use it in an environment without an internet connection and point to the local model.

So the modelName parameter accepts a model name, a path relative to the IPED folder, or an absolute path.

Examples:
whisperModel = medium
whisperModel = models/my_model
whisperModel = C:/my_model

try:
    import os

    # Decide whether modelName points to a local model folder (absolute path or
    # path relative to the current working directory) or is a plain model name.
    localModel = False
    localPath = os.path.join(os.getcwd(), modelName)
    if os.path.exists(modelName) and os.path.isabs(modelName):
        localModel = True
        localPath = modelName
    elif os.path.exists(localPath):
        localModel = True
    if localModel:
        # Load the VAD model from the same local folder, so nothing is downloaded.
        import torch
        from whisperx.vad import load_vad_model
        model_fp = os.path.join(localPath, "whisperx-vad-segmentation.bin")
        vad_model = load_vad_model(torch.device(deviceNum), vad_onset=0.500, vad_offset=0.363, use_auth_token=None, model_fp=model_fp)
        model = whisperx.load_model(localPath, device=deviceId, device_index=deviceNum, threads=threads, compute_type=compute_type, language=language, vad_model=vad_model)
    else:
        # Plain model name: let WhisperX download it (needs internet access on the first run).
        model = whisperx.load_model(modelName, device=deviceId, device_index=deviceNum, threads=threads, compute_type=compute_type, language=language)

lfcnassif commented 2 months ago

Can I do a small step by step guide to install the requirements on the GPU?

If it is independent of user environment or hardware, for sure! The wiki is publicly editable.

Maybe the above code won't work if IPED is executed from outside its folder. For that case, we use System.getProperty('iped.root') to get IPED's root folder.

lfcnassif commented 2 months ago

Without the code above, does it always need to be connected to the Internet, or just on the first run to download the models?

gfd2020 commented 2 months ago

Can I do a small step by step guide to install the requirements on the GPU?

If it is independent of user environment or hardware, for sure! The wiki is publicly editable.

Windows only, any graphics card.

Maybe the above code won't work if IPED is executed from outside its folder. For that case, we use System.getProperty('iped.root') to get IPED's root folder.

Thanks. I'll try.

Without the code above, does it always need to be connected to the Internet, or just on the first run to download the models?

Just the first run. But my idea is to create a customized IPED package with the models. This way, you would just install this package, without needing internet access.
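
For reference, one possible way to pre-download a CTranslate2 Whisper model into a local folder for such an offline package, assuming the huggingface_hub library and the Systran/faster-whisper-medium repository (both are assumptions, not part of this PR):

from huggingface_hub import snapshot_download

# Downloads the converted CTranslate2 "medium" model into a folder that can be
# shipped with the package and referenced by the whisperModel parameter.
snapshot_download(repo_id="Systran/faster-whisper-medium",
                  local_dir="models/faster-whisper-medium")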

lfcnassif commented 2 months ago

Windows only, any graphics card.

That would be totally enough, thank you @gfd2020 for trying to improve the manual!

lfcnassif commented 2 months ago

@gfd2020, out of curiosity, have you played with the batchSize parameter? I know your GPUs are quite old and have a limited amount of memory, but I wonder if you got some speed up with it.

gfd2020 commented 2 months ago

@gfd2020, out of curiosity, have you played with the batchSize parameter? I know your GPUs are quite old and have a limited amount of memory, but I wonder if you got some speed up with it.

Not yet, thanks for reminding me.

gfd2020 commented 2 months ago

Hi @lfcnassif, I did some tests with the batchSize values. Regarding the speedup, I didn't notice a big difference, but I have to test it with a larger case with several audios; then I'll run those tests.
Pytorch version: 2.3.0+cu121
CUDA driver version: 12.4.89

Offboard card - NVIDIA Quadro P620 - 2 GB VRAM
int8: batchSize = 3 (does not work with values greater than this)
float16: batchSize = 1 (does not work at all)
float32: batchSize = 2 (does not work with values greater than this)

Offboard card - NVIDIA T400 - 4 GB VRAM
int8: batchSize = 25 (works up to this value, but I didn't go beyond that)
float16: batchSize = 25 (works up to this value, but I didn't go beyond that)
float32: batchSize = 20 (does not work with values greater than this)

gfd2020 commented 2 months ago

Maybe the above code won't work if IPED is executed from outside its folder. For that case, we use System.getProperty('iped.root') to get IPED's root folder.

I just tried the code and it does not work: "No module named 'java'"

from java.lang import System
ipedRoot = System.getProperty('iped.root')

@lfcnassif , Something I didn't do right?

lfcnassif commented 2 months ago

Hi @lfcnassif, I did some tests with the batchSize values. Regarding the speedup, I didn't notice a big difference, but I have to test it with a larger case with several audios; then I'll run those tests.
Pytorch version: 2.3.0+cu121
CUDA driver version: 12.4.89

It should make a difference just with audios longer than 30s, the longer the better.

from java.lang import System
ipedRoot = System.getProperty('iped.root')

@lfcnassif , Something I didn't do right?

Sorry, my mistake, that only works inside python tasks; the current python code runs as a separate, independent python process, so it won't see Java classes or objects.
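
As a purely illustrative alternative, the launcher could export the root folder as an environment variable before spawning the transcription process, and the script could read it instead of os.getcwd() (the IPED_ROOT variable name is hypothetical, not something IPED sets today):

import os

# Hypothetical: resolve relative model paths against an exported IPED root,
# falling back to the current working directory if the variable is not set.
ipedRoot = os.environ.get("IPED_ROOT", os.getcwd())
localPath = os.path.join(ipedRoot, modelName)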

gfd2020 commented 2 months ago

About the wiki part below:

cd IPED_ROOT/python
python get-pip.py
set PATH=%PATH%;IPED_ROOT_ABSOLUTE_PATH/python/Scripts
cd Scripts

I did it a little differently, so I didn't need to set the PATH or interfere with another installed python. Some warnings may appear saying that Python is not in the PATH, but it works normally.

Go to the standalone IPED python folder and install the packages (example):
cd c:\iped-4.2\python
c:\iped-4.2\python>.\python.exe get-pip.py
c:\iped-4.2\python>.\Scripts\pip.exe install numpy
c:\iped-4.2\python>.\Scripts\pip.exe install whisperx
etc.

@lfcnassif , what do you think?

lfcnassif commented 2 months ago

About the wiki part below:

cd IPED_ROOT/python
python get-pip.py
set PATH=%PATH%;IPED_ROOT_ABSOLUTE_PATH/python/Scripts
cd Scripts

I did it a little differently, so I didn't need to set the PATH or interfere with another installed python. Some warnings may appear saying that Python is not in the PATH, but it works normally.

Go to the standalone IPED python folder and install the packages (example):
cd c:\iped-4.2\python
c:\iped-4.2\python>.\python.exe get-pip.py
c:\iped-4.2\python>.\Scripts\pip.exe install numpy
c:\iped-4.2\python>.\Scripts\pip.exe install whisperx
etc.

@lfcnassif , what do you think?

It's better! I also thought about changing it in the past, exactly to avoid mixing with an env-installed python; those warnings never caused issues for me either.

lfcnassif commented 1 month ago

@wladimirleite, what do you think about embedding ffmpeg? In the long run, we should stay with WhisperX, since we should be able to parallelize small audios transcription on the GPU with an improved version of it.

wladimirleite commented 1 month ago

@wladimirleite, what do you think about embedding ffmpeg? In the long run, we should stay with WhisperX, since we should be able to parallelize small audios transcription on the GPU with an improved version of it.

I think it is perfectly fine! As far as I remember, I suggested removing it because it was being used for something that could be achieved with MPlayer, which we already use. So it would be possible to avoid another dependency. But in the present case FFmpeg is used directly by WhisperX, so it is better to include it than requiring additional installation steps.

lfcnassif commented 1 month ago

We were using it to break wav audios on 60s boundaries, it was not possible with mplayer, but you came up with a 100% java solution for that usage.

wladimirleite commented 1 month ago

We were using it to break wav audios on 60s boundaries, it was not possible with mplayer, but you came up with a 100% java solution for that usage.

You are right, I completely forgot about that :-)

lfcnassif commented 1 month ago

Just pushed changes to support both whisperx and faster_whisper, as @rafael844 suggested. Most users won't benefit from whisperx, since it needs a GPU with good VRAM to speed up transcribing long audios. For CPU users, faster_whisper is enough: it doesn't need FFmpeg and it is much smaller.

Thanks @gfd2020 and @marcus6n for testing this! If you find any issues with my last commits, please let me know.