20240131 Configuring Whisper on Windows 10


2024/1/31 18:25

1. First, you need an NVIDIA graphics card. I used a second-hand GTX 1080 bought on Pinduoduo (PDD) for 800 CNY. [And it is quite possibly an ex-mining card!]

2. Properly install NVIDIA's latest 545-series driver and CUDA (a quick check is shown right after this list).

3. Install PyTorch.

4. Install and configure Whisper.
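Before going further, it is worth confirming that the driver and the CUDA toolkit are actually visible from the command line. A quick check (nvidia-smi ships with the driver, nvcc with the CUDA toolkit; both should be on PATH after a normal install):

nvidia-smi
nvcc --version

If nvcc reports a CUDA release (12.2 in my case, see the log below), the toolkit side is fine.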

Reference: How to complete a simple deployment of Whisper on your own computer
https://blog.csdn.net/m0_52156129/article/details/129263703

[Depending on your location and network speed, the download may be very slow or get interrupted; just run it again! ^_^]
https://pytorch.org/get-started/locally/
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121
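After whichever of the commands above finishes, a one-liner confirms whether the CUDA build of PyTorch was actually installed (this is the same check I run interactively later in the log):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

It should print a version ending in +cu118 or +cu121 and True; if it prints False, the wheel that got installed is a CPU-only build.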

START LOCALLY

Select your preferences and run the install command. Stable represents the most currently tested and supported version of PyTorch. This should be suitable for many users. Preview is available if you want the latest, not fully tested and supported, builds that are generated nightly. Please ensure that you have met the prerequisites below (e.g., numpy), depending on your package manager. Anaconda is our recommended package manager since it installs all dependencies. You can also install previous versions of PyTorch. Note that LibTorch is only available for C++.

NOTE: Latest PyTorch requires Python 3.8 or later. For more details, see Python section below.

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Tue_Jun_13_19:42:34_Pacific_Daylight_Time_2023
Cuda compilation tools, release 12.2, V12.2.91
Build cuda_12.2.r12.2/compiler.32965470_0

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121
C:\Users\wb491>pip install -U openai-whisper
C:\Users\wb491>whisper -h
C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>whisper Utopia.AU.S01E04.Onwards.and.Upwards.1080p.WEB-DL.AAC2.0.H.264-ABH.mkv --model small --language Chinese
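The CLI is enough for subtitles, but openai-whisper can also be driven from Python. A minimal sketch using the same model and episode as the command above (load_model and transcribe are the package's public API; language="zh" corresponds to --language Chinese):

import whisper

# same as --model small on the command line
model = whisper.load_model("small")

# fp16 is used on GPU and falls back to FP32 on CPU
result = model.transcribe(
    "Utopia.AU.S01E04.Onwards.and.Upwards.1080p.WEB-DL.AAC2.0.H.264-ABH.mkv",
    language="zh",
)

print(result["text"])            # full transcript
for seg in result["segments"]:   # per-segment timestamps, like the CLI output below
    print(seg["start"], seg["end"], seg["text"])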

LOG:

Microsoft Windows [版本 10.0.19045.3930]

(c) Microsoft Corporation。保留所有权利。

C:\Users\wb491>pip install -U openai-whisper

Collecting openai-whisper

Downloading openai-whisper-20231117.tar.gz (798 kB)

---------------------------------------- 798.6/798.6 kB 2.2 MB/s eta 0:00:00

Installing build dependencies ... done

Getting requirements to build wheel ... done

Preparing metadata (pyproject.toml) ... done

Collecting numba (from openai-whisper)

Downloading numba-0.58.1-cp38-cp38-win_amd64.whl.metadata (2.8 kB)

Requirement already satisfied: numpy in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from openai-whisper) (1.24.4)

Requirement already satisfied: torch in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from openai-whisper) (1.8.1)

Collecting tqdm (from openai-whisper)

Downloading tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)

---------------------------------------- 57.6/57.6 kB ? eta 0:00:00

Collecting more-itertools (from openai-whisper)

Downloading more_itertools-10.2.0-py3-none-any.whl.metadata (34 kB)

Collecting tiktoken (from openai-whisper)

Downloading tiktoken-0.5.2-cp38-cp38-win_amd64.whl.metadata (6.8 kB)

Collecting llvmlite<0.42,>=0.41.0dev0 (from numba->openai-whisper)

Downloading llvmlite-0.41.1-cp38-cp38-win_amd64.whl.metadata (4.9 kB)

Collecting importlib-metadata (from numba->openai-whisper)

Downloading importlib_metadata-7.0.1-py3-none-any.whl.metadata (4.9 kB)

Collecting regex>=2022.1.18 (from tiktoken->openai-whisper)

Downloading regex-2023.12.25-cp38-cp38-win_amd64.whl.metadata (41 kB)

---------------------------------------- 42.0/42.0 kB ? eta 0:00:00

Collecting requests>=2.26.0 (from tiktoken->openai-whisper)

Downloading requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)

Requirement already satisfied: typing-extensions in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from torch->openai-whisper) (4.9.0)

Collecting colorama (from tqdm->openai-whisper)

Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)

Collecting charset-normalizer<4,>=2 (from requests>=2.26.0->tiktoken->openai-whisper)

Downloading charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl.metadata (34 kB)

Collecting idna<4,>=2.5 (from requests>=2.26.0->tiktoken->openai-whisper)

Downloading idna-3.6-py3-none-any.whl.metadata (9.9 kB)

Collecting urllib3<3,>=1.21.1 (from requests>=2.26.0->tiktoken->openai-whisper)

Downloading urllib3-2.2.0-py3-none-any.whl.metadata (6.4 kB)

Collecting certifi>=2017.4.17 (from requests>=2.26.0->tiktoken->openai-whisper)

Downloading certifi-2023.11.17-py3-none-any.whl.metadata (2.2 kB)

Collecting zipp>=0.5 (from importlib-metadata->numba->openai-whisper)

Downloading zipp-3.17.0-py3-none-any.whl.metadata (3.7 kB)

Downloading more_itertools-10.2.0-py3-none-any.whl (57 kB)

---------------------------------------- 57.0/57.0 kB 2.9 MB/s eta 0:00:00

Downloading numba-0.58.1-cp38-cp38-win_amd64.whl (2.6 MB)

---------------------------------------- 2.6/2.6 MB 15.2 MB/s eta 0:00:00

Downloading tiktoken-0.5.2-cp38-cp38-win_amd64.whl (786 kB)

---------------------------------------- 786.4/786.4 kB 48.5 MB/s eta 0:00:00

Downloading tqdm-4.66.1-py3-none-any.whl (78 kB)

---------------------------------------- 78.3/78.3 kB 4.3 MB/s eta 0:00:00

Downloading llvmlite-0.41.1-cp38-cp38-win_amd64.whl (28.1 MB)

---------------------------------------- 28.1/28.1 MB 40.9 MB/s eta 0:00:00

Downloading regex-2023.12.25-cp38-cp38-win_amd64.whl (269 kB)

---------------------------------------- 269.5/269.5 kB 16.2 MB/s eta 0:00:00

Downloading requests-2.31.0-py3-none-any.whl (62 kB)

---------------------------------------- 62.6/62.6 kB ? eta 0:00:00

Downloading importlib_metadata-7.0.1-py3-none-any.whl (23 kB)

Downloading certifi-2023.11.17-py3-none-any.whl (162 kB)

---------------------------------------- 162.5/162.5 kB 10.2 MB/s eta 0:00:00

Downloading charset_normalizer-3.3.2-cp38-cp38-win_amd64.whl (99 kB)

---------------------------------------- 99.6/99.6 kB ? eta 0:00:00

Downloading idna-3.6-py3-none-any.whl (61 kB)

---------------------------------------- 61.6/61.6 kB 3.2 MB/s eta 0:00:00

Downloading urllib3-2.2.0-py3-none-any.whl (120 kB)

---------------------------------------- 120.9/120.9 kB 7.4 MB/s eta 0:00:00

Downloading zipp-3.17.0-py3-none-any.whl (7.4 kB)

Building wheels for collected packages: openai-whisper

Building wheel for openai-whisper (pyproject.toml) ... done

Created wheel for openai-whisper: filename=openai_whisper-20231117-py3-none-any.whl size=801375 sha256=0b59001c7b0cf9b553836246ea71e0c10b01936089a7a2ee3e5c031eba9277df

Stored in directory: c:\users\wb491\appdata\local\pip\cache\wheels\d2\33\5e\ab7fe45178ca9489707f18a89fd9a22611b656edf804b3cf53

Successfully built openai-whisper

Installing collected packages: zipp, urllib3, regex, more-itertools, llvmlite, idna, colorama, charset-normalizer, certifi, tqdm, requests, importlib-metadata, tiktoken, numba, openai-whisper

Successfully installed certifi-2023.11.17 charset-normalizer-3.3.2 colorama-0.4.6 idna-3.6 importlib-metadata-7.0.1 llvmlite-0.41.1 more-itertools-10.2.0 numba-0.58.1 openai-whisper-20231117 regex-2023.12.25 requests-2.31.0 tiktoken-0.5.2 tqdm-4.66.1 urllib3-2.2.0 zipp-3.17.0

C:\Users\wb491>

C:\Users\wb491>

C:\Users\wb491>whisper -h

usage: whisper [-h] [--model MODEL] [--model_dir MODEL_DIR] [--device DEVICE] [--output_dir OUTPUT_DIR] [--output_format {txt,vtt,srt,tsv,json,all}] [--verbose VERBOSE] [--task {transcribe,translate}]

[--language {af,am,ar,as,az,ba,be,bg,bn,bo,br,bs,ca,cs,cy,da,de,el,en,es,et,eu,fa,fi,fo,fr,gl,gu,ha,haw,he,hi,hr,ht,hu,hy,id,is,it,ja,jw,ka,kk,km,kn,ko,la,lb,ln,lo,lt,lv,mg,mi,mk,ml,mn,mr,ms,mt,my,ne,nl,nn,no,oc,pa,pl,ps,pt,ro,ru,sa,sd,si,sk,sl,sn,so,sq,sr,su,sv,sw,ta,te,tg,th,tk,tl,tr,tt,uk,ur,uz,vi,yi,yo,yue,zh,Afrikaans,Albanian,Amharic,Arabic,Armenian,Assamese,Azerbaijani,Bashkir,Basque,Belarusian,Bengali,Bosnian,Breton,Bulgarian,Burmese,Cantonese,Castilian,Catalan,Chinese,Croatian,Czech,Danish,Dutch,English,Estonian,Faroese,Finnish,Flemish,French,Galician,Georgian,German,Greek,Gujarati,Haitian,Haitian Creole,Hausa,Hawaiian,Hebrew,Hindi,Hungarian,Icelandic,Indonesian,Italian,Japanese,Javanese,Kannada,Kazakh,Khmer,Korean,Lao,Latin,Latvian,Letzeburgesch,Lingala,Lithuanian,Luxembourgish,Macedonian,Malagasy,Malay,Malayalam,Maltese,Mandarin,Maori,Marathi,Moldavian,Moldovan,Mongolian,Myanmar,Nepali,Norwegian,Nynorsk,Occitan,Panjabi,Pashto,Persian,Polish,Portuguese,Punjabi,Pushto,Romanian,Russian,Sanskrit,Serbian,Shona,Sindhi,Sinhala,Sinhalese,Slovak,Slovenian,Somali,Spanish,Sundanese,Swahili,Swedish,Tagalog,Tajik,Tamil,Tatar,Telugu,Thai,Tibetan,Turkish,Turkmen,Ukrainian,Urdu,Uzbek,Valencian,Vietnamese,Welsh,Yiddish,Yoruba}]

[--temperature TEMPERATURE] [--best_of BEST_OF] [--beam_size BEAM_SIZE] [--patience PATIENCE] [--length_penalty LENGTH_PENALTY] [--suppress_tokens SUPPRESS_TOKENS] [--initial_prompt INITIAL_PROMPT]

[--condition_on_previous_text CONDITION_ON_PREVIOUS_TEXT] [--fp16 FP16] [--temperature_increment_on_fallback TEMPERATURE_INCREMENT_ON_FALLBACK] [--compression_ratio_threshold COMPRESSION_RATIO_THRESHOLD]

[--logprob_threshold LOGPROB_THRESHOLD] [--no_speech_threshold NO_SPEECH_THRESHOLD] [--word_timestamps WORD_TIMESTAMPS] [--prepend_punctuations PREPEND_PUNCTUATIONS] [--append_punctuations APPEND_PUNCTUATIONS]

[--highlight_words HIGHLIGHT_WORDS] [--max_line_width MAX_LINE_WIDTH] [--max_line_count MAX_LINE_COUNT] [--max_words_per_line MAX_WORDS_PER_LINE] [--threads THREADS]

audio [audio ...]

positional arguments:

audio audio file(s) to transcribe

optional arguments:

-h, --help show this help message and exit

--model MODEL name of the Whisper model to use (default: small)

--model_dir MODEL_DIR

the path to save model files; uses ~/.cache/whisper by default (default: None)

--device DEVICE device to use for PyTorch inference (default: cpu)

--output_dir OUTPUT_DIR, -o OUTPUT_DIR

directory to save the outputs (default: .)

--output_format {txt,vtt,srt,tsv,json,all}, -f {txt,vtt,srt,tsv,json,all}

format of the output file; if not specified, all available formats will be produced (default: all)

--verbose VERBOSE whether to print out the progress and debug messages (default: True)

--task {transcribe,translate}

whether to perform X->X speech recognition ('transcribe') or X->English translation ('translate') (default: transcribe)

--language {af,am,ar,as,az,ba,be,bg,bn,bo,br,bs,ca,cs,cy,da,de,el,en,es,et,eu,fa,fi,fo,fr,gl,gu,ha,haw,he,hi,hr,ht,hu,hy,id,is,it,ja,jw,ka,kk,km,kn,ko,la,lb,ln,lo,lt,lv,mg,mi,mk,ml,mn,mr,ms,mt,my,ne,nl,nn,no,oc,pa,pl,ps,pt,ro,ru,sa,sd,si,sk,sl,sn,so,sq,sr,su,sv,sw,ta,te,tg,th,tk,tl,tr,tt,uk,ur,uz,vi,yi,yo,yue,zh,Afrikaans,Albanian,Amharic,Arabic,Armenian,Assamese,Azerbaijani,Bashkir,Basque,Belarusian,Bengali,Bosnian,Breton,Bulgarian,Burmese,Cantonese,Castilian,Catalan,Chinese,Croatian,Czech,Danish,Dutch,English,Estonian,Faroese,Finnish,Flemish,French,Galician,Georgian,German,Greek,Gujarati,Haitian,Haitian Creole,Hausa,Hawaiian,Hebrew,Hindi,Hungarian,Icelandic,Indonesian,Italian,Japanese,Javanese,Kannada,Kazakh,Khmer,Korean,Lao,Latin,Latvian,Letzeburgesch,Lingala,Lithuanian,Luxembourgish,Macedonian,Malagasy,Malay,Malayalam,Maltese,Mandarin,Maori,Marathi,Moldavian,Moldovan,Mongolian,Myanmar,Nepali,Norwegian,Nynorsk,Occitan,Panjabi,Pashto,Persian,Polish,Portuguese,Punjabi,Pushto,Romanian,Russian,Sanskrit,Serbian,Shona,Sindhi,Sinhala,Sinhalese,Slovak,Slovenian,Somali,Spanish,Sundanese,Swahili,Swedish,Tagalog,Tajik,Tamil,Tatar,Telugu,Thai,Tibetan,Turkish,Turkmen,Ukrainian,Urdu,Uzbek,Valencian,Vietnamese,Welsh,Yiddish,Yoruba}

language spoken in the audio, specify None to perform language detection (default: None)

--temperature TEMPERATURE

temperature to use for sampling (default: 0)

--best_of BEST_OF number of candidates when sampling with non-zero temperature (default: 5)

--beam_size BEAM_SIZE

number of beams in beam search, only applicable when temperature is zero (default: 5)

--patience PATIENCE optional patience value to use in beam decoding, as in https://arxiv.org/abs/2204.05424, the default (1.0) is equivalent to conventional beam search (default: None)

--length_penalty LENGTH_PENALTY

optional token length penalty coefficient (alpha) as in https://arxiv.org/abs/1609.08144, uses simple length normalization by default (default: None)

--suppress_tokens SUPPRESS_TOKENS

comma-separated list of token ids to suppress during sampling; '-1' will suppress most special characters except common punctuations (default: -1)

--initial_prompt INITIAL_PROMPT

optional text to provide as a prompt for the first window. (default: None)

--condition_on_previous_text CONDITION_ON_PREVIOUS_TEXT

if True, provide the previous output of the model as a prompt for the next window; disabling may make the text inconsistent across windows, but the model becomes less prone to getting stuck in a failure loop

(default: True)

--fp16 FP16 whether to perform inference in fp16; True by default (default: True)

--temperature_increment_on_fallback TEMPERATURE_INCREMENT_ON_FALLBACK

temperature to increase when falling back when the decoding fails to meet either of the thresholds below (default: 0.2)

--compression_ratio_threshold COMPRESSION_RATIO_THRESHOLD

if the gzip compression ratio is higher than this value, treat the decoding as failed (default: 2.4)

--logprob_threshold LOGPROB_THRESHOLD

if the average log probability is lower than this value, treat the decoding as failed (default: -1.0)

--no_speech_threshold NO_SPEECH_THRESHOLD

if the probability of the <|nospeech|> token is higher than this value AND the decoding has failed due to `logprob_threshold`, consider the segment as silence (default: 0.6)

--word_timestamps WORD_TIMESTAMPS

(experimental) extract word-level timestamps and refine the results based on them (default: False)

--prepend_punctuations PREPEND_PUNCTUATIONS

if word_timestamps is True, merge these punctuation symbols with the next word (default: "'"¿([{-)

--append_punctuations APPEND_PUNCTUATIONS

if word_timestamps is True, merge these punctuation symbols with the previous word (default: "'.。,,!!??::")]}、)

--highlight_words HIGHLIGHT_WORDS

(requires --word_timestamps True) underline each word as it is spoken in srt and vtt (default: False)

--max_line_width MAX_LINE_WIDTH

(requires --word_timestamps True) the maximum number of characters in a line before breaking the line (default: None)

--max_line_count MAX_LINE_COUNT

(requires --word_timestamps True) the maximum number of lines in a segment (default: None)

--max_words_per_line MAX_WORDS_PER_LINE

(requires --word_timestamps True, no effect with --max_line_width) the maximum number of words in a segment (default: None)

--threads THREADS number of threads used by torch for CPU inference; supercedes MKL_NUM_THREADS/OMP_NUM_THREADS (default: 0)
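For the subtitle use case in this post, the options above combine naturally. A hedged example (the output directory name "subs" is just an illustration):

whisper Utopia.AU.S01E04.Onwards.and.Upwards.1080p.WEB-DL.AAC2.0.H.264-ABH.mkv --model small --language Chinese --task transcribe --output_format srt --output_dir subs

Without --output_format, all formats (txt, vtt, srt, tsv, json) are written, as the help says.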

C:\Users\wb491>cd C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>dir

驱动器 C 中的卷是 WIN10

卷的序列号是 9273-D6A8

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB 的目录

2024/01/31 00:02 <DIR> .

2024/01/31 00:02 <DIR> ..

2024/01/30 22:50 111,189 04.srt

2024/01/30 22:50 113,309 05.srt

2024/01/30 22:51 107,750 06.srt

2024/01/30 22:51 101,014 07.srt

2024/01/30 22:51 111,620 08.srt

2024/01/30 19:28 124,714 161426695262720.7z

2024/01/30 21:12 447,089 2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB 2.7z

2024/01/30 22:45 287,154 4[内置字幕]字幕1+台湾.ssa

2024/01/30 22:46 281,620 5[内置字幕]字幕1+台湾.ssa

2024/01/30 22:46 276,722 6[内置字幕]字幕1 (1)+台湾.ssa

2024/01/30 22:47 255,284 7[内置字幕]字幕1 (2)+台湾.ssa

2024/01/30 22:48 293,888 8[内置字幕]字幕1 (3)+台湾.ssa

2024/01/30 18:43 31 RARBG.txt

2024/01/30 18:43 1,082,562,938 Utopia.AU.S01E04.Onwards.and.Upwards.1080p.WEB-DL.AAC2.0.H.264-ABH.mkv

2024/01/30 18:43 1,068,829,082 Utopia.AU.S01E05.Arts.and.Minds.1080p.WEB-DL.AAC2.0.H.264-ABH.mkv

2024/01/30 18:43 1,065,442,786 Utopia.AU.S01E06.Then.We.Can.Build.It.1080p.WEB-DL.AAC2.0.H.264-ABH.mkv

2024/01/30 18:43 1,041,821,540 Utopia.AU.S01E07.The.First.Project.1080p.WEB-DL.AAC2.0.H.264-ABH.mkv

2024/01/30 18:43 1,065,084,003 Utopia.AU.S01E08.The.Whole.Enchilada.1080p.WEB-DL.AAC2.0.H.264-ABH.mkv

18 个文件 5,326,251,733 字节

2 个目录 260,072,566,784 可用字节

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>whisper Utopia.AU.S01E04.Onwards.and.Upwards.1080p.WEB-DL.AAC2.0.H.264-ABH.mkv --model small --language Chinese

100%|███████████████████████████████████████| 461M/461M [00:41<00:00, 11.5MiB/s]

c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\transcribe.py:115: UserWarning: FP16 is not supported on CPU; using FP32 instead

warnings.warn("FP16 is not supported on CPU; using FP32 instead")

Traceback (most recent call last):

File "c:\users\wb491\appdata\local\programs\python\python38\lib\runpy.py", line 192, in _run_module_as_main

return _run_code(code, main_globals, None,

File "c:\users\wb491\appdata\local\programs\python\python38\lib\runpy.py", line 85, in _run_code

exec(code, run_globals)

File "C:\Users\wb491\AppData\Local\Programs\Python\Python38\Scripts\whisper.exe\main.py", line 7, in <module>

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\transcribe.py", line 478, in cli

result = transcribe(model, audio_path, temperature=temperature, **args)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\transcribe.py", line 240, in transcribe

result: DecodingResult = decode_with_fallback(mel_segment)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\transcribe.py", line 170, in decode_with_fallback

decode_result = model.decode(segment, options)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context

return func(*args, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\decoding.py", line 824, in decode

result = DecodingTask(model, options).run(mel)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context

return func(*args, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\decoding.py", line 737, in run

tokens, sum_logprobs, no_speech_probs = self._main_loop(audio_features, tokens)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\decoding.py", line 687, in _main_loop

logits = self.inference.logits(tokens, audio_features)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\decoding.py", line 163, in logits

return self.model.decoder(tokens, audio_features, kv_cache=self.kv_cache)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl

result = self.forward(*input, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\model.py", line 211, in forward

x = block(x, xa, mask=self.mask, kv_cache=kv_cache)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl

result = self.forward(*input, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\model.py", line 138, in forward

x = x + self.cross_attn(self.cross_attn_ln(x), xa, kv_cache=kv_cache)[0]

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl

result = self.forward(*input, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\model.py", line 90, in forward

wv, qk = self.qkv_attention(q, k, v, mask)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\model.py", line 108, in qkv_attention

return (w @ v).permute(0, 2, 1, 3).flatten(start_dim=2), qk.detach()

KeyboardInterrupt
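I interrupted the run above (hence the KeyboardInterrupt) because it was running on the CPU: the torch 1.8.1 that was already on this machine has no CUDA support, which is exactly what the "FP16 is not supported on CPU" warning is telling us. Once a CUDA build of PyTorch is installed (below), Whisper defaults to the GPU whenever torch.cuda.is_available() is True; the device can also be forced explicitly, for example:

whisper Utopia.AU.S01E04.Onwards.and.Upwards.1080p.WEB-DL.AAC2.0.H.264-ABH.mkv --model small --language Chinese --device cuda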

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>nvcc --versuib

nvcc fatal : Unknown option '--versuib'

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>nvcc --version

nvcc: NVIDIA (R) Cuda compiler driver

Copyright (c) 2005-2023 NVIDIA Corporation

Built on Tue_Jun_13_19:42:34_Pacific_Daylight_Time_2023

Cuda compilation tools, release 12.2, V12.2.91

Build cuda_12.2.r12.2/compiler.32965470_0

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121

Looking in indexes: https://download.pytorch.org/whl/nightly/cu121

Requirement already satisfied: torch in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (1.8.1)

Collecting torchvision

Downloading https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240130%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

---------------------------------------- 5.8/5.8 MB 10.3 MB/s eta 0:00:00

Collecting torchaudio

Downloading https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.2.0.dev20240130%2Bcu121-cp38-cp38-win_amd64.whl (4.1 MB)

---------------------------------------- 4.1/4.1 MB 43.2 MB/s eta 0:00:00

Requirement already satisfied: typing-extensions in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from torch) (4.9.0)

Requirement already satisfied: numpy in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from torch) (1.24.4)

Requirement already satisfied: requests in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from torchvision) (2.31.0)

Collecting torch

Downloading https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240130%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

---------------------------------------- 2.4/2.4 GB 2.9 MB/s eta 0:00:00

Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)

Downloading https://download.pytorch.org/whl/nightly/Pillow-9.3.0-cp38-cp38-win_amd64.whl (2.5 MB)

---------------------------------------- 2.5/2.5 MB 437.3 kB/s eta 0:00:00

Collecting filelock (from torch)

Downloading https://download.pytorch.org/whl/nightly/filelock-3.9.0-py3-none-any.whl (9.7 kB)

Collecting sympy (from torch)

Downloading https://download.pytorch.org/whl/nightly/sympy-1.11.1-py3-none-any.whl (6.5 MB)

---------------------------------------- 6.5/6.5 MB 51.7 MB/s eta 0:00:00

Collecting networkx (from torch)

Downloading https://download.pytorch.org/whl/nightly/networkx-3.0rc1-py3-none-any.whl (2.0 MB)

---------------------------------------- 2.0/2.0 MB 43.7 MB/s eta 0:00:00

Collecting jinja2 (from torch)

Downloading https://download.pytorch.org/whl/nightly/Jinja2-3.1.2-py3-none-any.whl (133 kB)

---------------------------------------- 133.1/133.1 kB 8.2 MB/s eta 0:00:00

Collecting fsspec (from torch)

Downloading https://download.pytorch.org/whl/nightly/fsspec-2023.4.0-py3-none-any.whl (153 kB)

---------------------------------------- 154.0/154.0 kB ? eta 0:00:00

INFO: pip is looking at multiple versions of torch to determine which version is compatible with other requirements. This could take a while.

Collecting torchvision

Downloading https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240129%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

---------------------------------------- 5.8/5.8 MB 4.3 MB/s eta 0:00:00

Collecting torch

Downloading https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240129%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

---------------------------------------- 2.4/2.4 GB 2.7 MB/s eta 0:00:00

Collecting torchvision

Downloading https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240128%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

---------------------------------------- 5.8/5.8 MB 531.9 kB/s eta 0:00:00

Collecting torch

Downloading https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240128%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

---------------------------------------- 2.4/2.4 GB 2.8 MB/s eta 0:00:00

Collecting torchvision

Downloading https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240127%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

---------------------------------------- 5.8/5.8 MB 4.2 MB/s eta 0:00:00

Collecting torch

Downloading https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240127%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

--------- ------------------------------ 0.6/2.4 GB 459.1 kB/s eta 1:06:27

ERROR: Exception:

Traceback (most recent call last):

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 438, in _error_catcher

yield

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 561, in read

data = self._fp_read(amt) if not fp_closed else b""

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 527, in _fp_read

return self._fp.read(amt) if amt is not None else self._fp.read()

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py", line 98, in read

data: bytes = self.__fp.read(amt)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\http\client.py", line 454, in read

n = self.readinto(b)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\http\client.py", line 498, in readinto

n = self.fp.readinto(b)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\socket.py", line 669, in readinto

return self._sock.recv_into(b)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\ssl.py", line 1241, in recv_into

return self.read(nbytes, buffer)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\ssl.py", line 1099, in read

return self._sslobj.read(len, buffer)

socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\cli\base_command.py", line 180, in exc_logging_wrapper

status = run_func(*args)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\cli\req_command.py", line 245, in wrapper

return func(self, options, args)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\commands\install.py", line 377, in run

requirement_set = resolver.resolve(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 95, in resolve

result = self._result = resolver.resolve(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 546, in resolve

state = resolution.resolve(requirements, max_rounds=max_rounds)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 427, in resolve

failure_causes = self._attempt_to_pin_criterion(name)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 239, in _attempt_to_pin_criterion

criteria = self._get_updated_criteria(candidate)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 230, in _get_updated_criteria

self._add_to_criteria(criteria, requirement, parent=candidate)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 173, in _add_to_criteria

if not criterion.candidates:

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\structs.py", line 156, in bool

return bool(self._sequence)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 155, in bool

return any(self)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 143, in <genexpr>

return (c for c in iterator if id(c) not in self._incompatible_ids)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 47, in _iter_built

candidate = func()

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 182, in _make_candidate_from_link

base: Optional[BaseCandidate] = self._make_base_candidate_from_link(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 228, in _make_base_candidate_from_link

self._link_candidate_cache[link] = LinkCandidate(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 293, in init

super().__init__(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 156, in init

self.dist = self._prepare()

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 225, in _prepare

dist = self._prepare_distribution()

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 304, in _prepare_distribution

return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 525, in prepare_linked_requirement

return self._prepare_linked_requirement(req, parallel_builds)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 596, in _prepare_linked_requirement

local_file = unpack_url(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 168, in unpack_url

file = get_http_url(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 109, in get_http_url

from_path, content_type = download(link, temp_dir.path)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\network\download.py", line 147, in call

for chunk in chunks:

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\cli\progress_bars.py", line 53, in _rich_progress_bar

for chunk in iterable:

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\network\utils.py", line 63, in response_chunks

for chunk in response.raw.stream(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 622, in stream

data = self.read(amt=amt, decode_content=decode_content)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 587, in read

raise IncompleteRead(self._fp_bytes_read, self.length_remaining)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\contextlib.py", line 131, in exit

self.gen.throw(type, value, traceback)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 443, in _error_catcher

raise ReadTimeoutError(self._pool, None, "Read timed out.")

pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='download.pytorch.org', port=443): Read timed out.
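The 2.4 GB torch wheel is the painful part: the read timed out partway through. Two things that usually help on a slow or unstable connection (a sketch; --default-timeout is pip's standard timeout option, and pip can install a locally downloaded .whl directly):

pip3 install --default-timeout=100 --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121

rem or: fetch the exact wheel with a resumable downloader first, then install the local file
pip3 install torch-2.3.0.dev20240130+cu121-cp38-cp38-win_amd64.whl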

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121

Looking in indexes: https://download.pytorch.org/whl/nightly/cu121

Requirement already satisfied: torch in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (1.8.1)

Collecting torchvision

Using cached https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240130%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

Collecting torchaudio

Using cached https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.2.0.dev20240130%2Bcu121-cp38-cp38-win_amd64.whl (4.1 MB)

Requirement already satisfied: typing-extensions in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from torch) (4.9.0)

Requirement already satisfied: numpy in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from torch) (1.24.4)

Requirement already satisfied: requests in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from torchvision) (2.31.0)

Collecting torch

Using cached https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240130%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)

Using cached https://download.pytorch.org/whl/nightly/Pillow-9.3.0-cp38-cp38-win_amd64.whl (2.5 MB)

Collecting filelock (from torch)

Using cached https://download.pytorch.org/whl/nightly/filelock-3.9.0-py3-none-any.whl (9.7 kB)

Collecting sympy (from torch)

Using cached https://download.pytorch.org/whl/nightly/sympy-1.11.1-py3-none-any.whl (6.5 MB)

Collecting networkx (from torch)

Using cached https://download.pytorch.org/whl/nightly/networkx-3.0rc1-py3-none-any.whl (2.0 MB)

Collecting jinja2 (from torch)

Using cached https://download.pytorch.org/whl/nightly/Jinja2-3.1.2-py3-none-any.whl (133 kB)

Collecting fsspec (from torch)

Using cached https://download.pytorch.org/whl/nightly/fsspec-2023.4.0-py3-none-any.whl (153 kB)

INFO: pip is looking at multiple versions of torch to determine which version is compatible with other requirements. This could take a while.

Collecting torchvision

Using cached https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240129%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

Collecting torch

Using cached https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240129%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

Collecting torchvision

Using cached https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240128%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

Collecting torch

Using cached https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240128%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

Collecting torchvision

Using cached https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240127%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

Collecting torch

Downloading https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240127%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

---------------- ----------------------- 1.0/2.4 GB 56.0 kB/s eta 6:55:44

ERROR: Exception:

Traceback (most recent call last):

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 438, in _error_catcher

yield

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 561, in read

data = self._fp_read(amt) if not fp_closed else b""

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 527, in _fp_read

return self._fp.read(amt) if amt is not None else self._fp.read()

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\cachecontrol\filewrapper.py", line 98, in read

data: bytes = self.__fp.read(amt)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\http\client.py", line 454, in read

n = self.readinto(b)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\http\client.py", line 498, in readinto

n = self.fp.readinto(b)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\socket.py", line 669, in readinto

return self._sock.recv_into(b)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\ssl.py", line 1241, in recv_into

return self.read(nbytes, buffer)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\ssl.py", line 1099, in read

return self._sslobj.read(len, buffer)

socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\cli\base_command.py", line 180, in exc_logging_wrapper

status = run_func(*args)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\cli\req_command.py", line 245, in wrapper

return func(self, options, args)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\commands\install.py", line 377, in run

requirement_set = resolver.resolve(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 95, in resolve

result = self._result = resolver.resolve(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 546, in resolve

state = resolution.resolve(requirements, max_rounds=max_rounds)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 427, in resolve

failure_causes = self._attempt_to_pin_criterion(name)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 239, in _attempt_to_pin_criterion

criteria = self._get_updated_criteria(candidate)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 230, in _get_updated_criteria

self._add_to_criteria(criteria, requirement, parent=candidate)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 173, in _add_to_criteria

if not criterion.candidates:

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\resolvelib\structs.py", line 156, in bool

return bool(self._sequence)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 155, in bool

return any(self)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 143, in <genexpr>

return (c for c in iterator if id(c) not in self._incompatible_ids)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 47, in _iter_built

candidate = func()

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 182, in _make_candidate_from_link

base: Optional[BaseCandidate] = self._make_base_candidate_from_link(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 228, in _make_base_candidate_from_link

self._link_candidate_cache[link] = LinkCandidate(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 293, in init

super().__init__(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 156, in init

self.dist = self._prepare()

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 225, in _prepare

dist = self._prepare_distribution()

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 304, in _prepare_distribution

return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 525, in prepare_linked_requirement

return self._prepare_linked_requirement(req, parallel_builds)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 596, in _prepare_linked_requirement

local_file = unpack_url(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 168, in unpack_url

file = get_http_url(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\operations\prepare.py", line 109, in get_http_url

from_path, content_type = download(link, temp_dir.path)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\network\download.py", line 147, in call

for chunk in chunks:

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\cli\progress_bars.py", line 53, in _rich_progress_bar

for chunk in iterable:

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_internal\network\utils.py", line 63, in response_chunks

for chunk in response.raw.stream(

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 622, in stream

data = self.read(amt=amt, decode_content=decode_content)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 587, in read

raise IncompleteRead(self._fp_bytes_read, self.length_remaining)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\contextlib.py", line 131, in exit

self.gen.throw(type, value, traceback)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\pip\_vendor\urllib3\response.py", line 443, in _error_catcher

raise ReadTimeoutError(self._pool, None, "Read timed out.")

pip._vendor.urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='download.pytorch.org', port=443): Read timed out.

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121

Looking in indexes: https://download.pytorch.org/whl/nightly/cu121

Requirement already satisfied: torch in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (1.8.1)

Collecting torchvision

Using cached https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240130%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

Collecting torchaudio

Using cached https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.2.0.dev20240130%2Bcu121-cp38-cp38-win_amd64.whl (4.1 MB)

Requirement already satisfied: typing-extensions in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from torch) (4.9.0)

Requirement already satisfied: numpy in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from torch) (1.24.4)

Requirement already satisfied: requests in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from torchvision) (2.31.0)

Collecting torch

Using cached https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240130%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)

Using cached https://download.pytorch.org/whl/nightly/Pillow-9.3.0-cp38-cp38-win_amd64.whl (2.5 MB)

Collecting filelock (from torch)

Using cached https://download.pytorch.org/whl/nightly/filelock-3.9.0-py3-none-any.whl (9.7 kB)

Collecting sympy (from torch)

Using cached https://download.pytorch.org/whl/nightly/sympy-1.11.1-py3-none-any.whl (6.5 MB)

Collecting networkx (from torch)

Using cached https://download.pytorch.org/whl/nightly/networkx-3.0rc1-py3-none-any.whl (2.0 MB)

Collecting jinja2 (from torch)

Using cached https://download.pytorch.org/whl/nightly/Jinja2-3.1.2-py3-none-any.whl (133 kB)

Collecting fsspec (from torch)

Using cached https://download.pytorch.org/whl/nightly/fsspec-2023.4.0-py3-none-any.whl (153 kB)

INFO: pip is looking at multiple versions of torch to determine which version is compatible with other requirements. This could take a while.

Collecting torchvision

Using cached https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240129%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

Collecting torch

Using cached https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240129%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

Collecting torchvision

Using cached https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240128%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

Collecting torch

Using cached https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240128%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

Collecting torchvision

Using cached https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240127%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

Collecting torch

Downloading https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240127%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

---------------------------------------- 2.4/2.4 GB 2.4 MB/s eta 0:00:00

Collecting torchvision

Downloading https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240126%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

---------------------------------------- 5.8/5.8 MB 4.2 MB/s eta 0:00:00

Collecting torch

Downloading https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240126%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

---------------------------------------- 2.4/2.4 GB 2.5 MB/s eta 0:00:00

Collecting torchvision

Downloading https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240125%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

---------------------------------------- 5.8/5.8 MB 4.3 MB/s eta 0:00:00

Collecting torch

Downloading https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240125%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

---------------------------------------- 2.4/2.4 GB 3.0 MB/s eta 0:00:00

Collecting torchvision

Downloading https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240124%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

---------------------------------------- 5.8/5.8 MB 1.1 MB/s eta 0:00:00

Collecting torch

Downloading https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240124%2Bcu121-cp38-cp38-win_amd64.whl (2413.5 MB)

---------------------------------------- 2.4/2.4 GB 2.9 MB/s eta 0:00:00

Collecting torchvision

Downloading https://download.pytorch.org/whl/nightly/cu121/torchvision-0.18.0.dev20240123%2Bcu121-cp38-cp38-win_amd64.whl (5.8 MB)

---------------------------------------- 5.8/5.8 MB 4.3 MB/s eta 0:00:00

Collecting torch

Downloading https://download.pytorch.org/whl/nightly/cu121/torch-2.3.0.dev20240122%2Bcu121-cp38-cp38-win_amd64.whl (2465.0 MB)

---------------------------------------- 2.5/2.5 GB 2.7 MB/s eta 0:00:00

INFO: pip is looking at multiple versions of torchaudio to determine which version is compatible with other requirements. This could take a while.

Collecting torchaudio

Downloading https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.2.0.dev20240129%2Bcu121-cp38-cp38-win_amd64.whl (4.1 MB)

---------------------------------------- 4.1/4.1 MB 3.2 MB/s eta 0:00:00

Downloading https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.2.0.dev20240128%2Bcu121-cp38-cp38-win_amd64.whl (4.1 MB)

---------------------------------------- 4.1/4.1 MB 647.8 kB/s eta 0:00:00

Downloading https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.2.0.dev20240127%2Bcu121-cp38-cp38-win_amd64.whl (4.1 MB)

---------------------------------------- 4.1/4.1 MB 1.4 MB/s eta 0:00:00

Downloading https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.2.0.dev20240126%2Bcu121-cp38-cp38-win_amd64.whl (4.1 MB)

---------------------------------------- 4.1/4.1 MB 3.1 MB/s eta 0:00:00

Downloading https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.2.0.dev20240125%2Bcu121-cp38-cp38-win_amd64.whl (4.1 MB)

---------------------------------------- 4.1/4.1 MB 3.2 MB/s eta 0:00:00

Downloading https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.2.0.dev20240124%2Bcu121-cp38-cp38-win_amd64.whl (4.1 MB)

---------------------------------------- 4.1/4.1 MB 3.3 MB/s eta 0:00:00

Downloading https://download.pytorch.org/whl/nightly/cu121/torchaudio-2.2.0.dev20240123%2Bcu121-cp38-cp38-win_amd64.whl (4.1 MB)

---------------------------------------- 4.1/4.1 MB 3.0 MB/s eta 0:00:00

Collecting MarkupSafe>=2.0 (from jinja2->torch)

Downloading https://download.pytorch.org/whl/nightly/MarkupSafe-2.1.3-cp38-cp38-win_amd64.whl (17 kB)

Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from requests->torchvision) (3.3.2)

Requirement already satisfied: idna<4,>=2.5 in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from requests->torchvision) (3.6)

Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from requests->torchvision) (2.2.0)

Requirement already satisfied: certifi>=2017.4.17 in c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages (from requests->torchvision) (2023.11.17)

Collecting mpmath>=0.19 (from sympy->torch)

Downloading https://download.pytorch.org/whl/nightly/mpmath-1.2.1-py3-none-any.whl (532 kB)

---------------------------------------- 532.6/532.6 kB 8.4 MB/s eta 0:00:00

Installing collected packages: mpmath, sympy, pillow, networkx, MarkupSafe, fsspec, filelock, jinja2, torch, torchvision, torchaudio

Attempting uninstall: torch

Found existing installation: torch 1.8.1

Uninstalling torch-1.8.1:

Successfully uninstalled torch-1.8.1

Successfully installed MarkupSafe-2.1.3 filelock-3.9.0 fsspec-2023.4.0 jinja2-3.1.2 mpmath-1.2.1 networkx-3.0rc1 pillow-9.3.0 sympy-1.11.1 torch-2.3.0.dev20240122+cu121 torchaudio-2.2.0.dev20240123+cu121 torchvision-0.18.0.dev20240123+cu121

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>python

Python 3.8.0 (tags/v3.8.0:fa919fd, Oct 14 2019, 19:37:50) [MSC v.1916 64 bit (AMD64)] on win32

Type "help", "copyright", "credits" or "license" for more information.

>>>

>>> import torch

>>> print(torch.__version__)

2.3.0.dev20240122+cu121

>>> print(torch.cuda.is_available())

True

>>>

>>> exit()

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>nvcc --version

nvcc: NVIDIA (R) Cuda compiler driver

Copyright (c) 2005-2023 NVIDIA Corporation

Built on Tue_Jun_13_19:42:34_Pacific_Daylight_Time_2023

Cuda compilation tools, release 12.2, V12.2.91

Build cuda_12.2.r12.2/compiler.32965470_0

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>whisper Utopia.AU.S01E04.Onwards.and.Upwards.1080p.WEB-DL.AAC2.0.H.264-ABH.mkv --model small --language Chinese

[00:30.000 --> 00:31.000] Katey

[00:32.000 --> 00:33.000] 我找不到咖啡

[00:33.000 --> 00:34.000] 我們找到了

[00:34.000 --> 00:35.000] 為什麼

[00:35.000 --> 00:36.000] 健康的選擇

[00:36.000 --> 00:37.000] 只有一個月而已

[00:37.000 --> 00:38.000] 我們在做工作

[00:38.000 --> 00:41.000] 免費咖啡、糖、醬汁

[00:41.000 --> 00:43.000] 那是四個基本食物群嗎

[00:44.000 --> 00:45.000] 不

[00:46.000 --> 00:47.000] 我會喝一杯

[00:47.000 --> 00:48.000] 有CAMMANMile和Ginger

[00:49.000 --> 00:50.000] 那是誰

[00:50.000 --> 00:51.000] Toni

[00:51.000 --> 00:52.000] 那是Lauren

[00:52.000 --> 00:53.000] 她是一名記者

[00:53.000 --> 00:54.000] 我們在調查

[00:54.000 --> 00:55.000] 不,是一名記者

[00:55.000 --> 00:56.000] 是一名記者

[00:56.000 --> 00:57.000] 是一名記者

[00:57.000 --> 00:58.000] 她是一名記者

[00:58.000 --> 00:59.000] 我們在調查

[00:59.000 --> 01:00.000] 不,是一名記者

[01:00.000 --> 01:01.000] 25年代澳洲人

[01:01.000 --> 01:02.000] 誰在調查我們的未來

[01:02.000 --> 01:03.000] 她在調查我們的未來

[01:03.000 --> 01:04.000] 她在調查我們的未來

[01:04.000 --> 01:05.000] 他答應了

[01:05.000 --> 01:06.000] Ronda在他的旁邊

[01:06.000 --> 01:07.000] 要停止

[01:07.000 --> 01:08.000] 小政治的立場

[01:08.000 --> 01:09.000] 要不然

[01:09.000 --> 01:10.000] 要不然

[01:10.000 --> 01:11.000] 對

[01:11.000 --> 01:12.000] 對不起,我還不確定

[01:12.000 --> 01:13.000] 那是甚麼

[01:13.000 --> 01:14.000] 是Rose Hep

[01:14.000 --> 01:15.000] 是嗎

[01:15.000 --> 01:16.000] 是

[01:16.000 --> 01:17.000] 那些小小的

[01:17.000 --> 01:18.000] 所以

[01:18.000 --> 01:20.000] 這些項目都已經完成了

[01:20.000 --> 01:21.000] 已經完成了

[01:21.000 --> 01:22.000] 還沒結束

[01:22.000 --> 01:23.000] 沒有,他們...

[01:23.000 --> 01:25.000] 他們在各種程度

[01:25.000 --> 01:26.000] 他們是一種技術

[01:26.000 --> 01:27.000] 技術技術

[01:27.000 --> 01:28.000] 還有一種長 term vision

[01:28.000 --> 01:29.000] 對

[01:29.000 --> 01:30.000] 步步步步步步步步

[01:30.000 --> 01:31.000] 很棒,很棒

[01:31.000 --> 01:32.000] 謝謝

[01:32.000 --> 01:33.000] 對不起,我...

[01:33.000 --> 01:34.000] 對不起

[01:34.000 --> 01:35.000] 我看你很熱心

[01:35.000 --> 01:36.000] 我們在討論長 term vision

[01:36.000 --> 01:38.000] 我希望我們可以給你一點點

[01:40.000 --> 01:41.000] Katy

[01:41.000 --> 01:42.000] 你用了甚麼手機

[01:42.000 --> 01:43.000] 我用了

[01:43.000 --> 01:44.000] 為甚麼

[01:44.000 --> 01:45.000] 健康的選擇

[01:45.000 --> 01:46.000] 但所有的食物都在

[01:46.000 --> 01:47.000] 對

[01:47.000 --> 01:48.000] 那是甚麼選擇

[01:48.000 --> 01:49.000] 你可以用雞肉

[01:49.000 --> 01:50.000] 或雞肉

[01:50.000 --> 01:52.000] 這是甚麼選擇

[01:52.000 --> 01:53.000] 這裡

[01:53.000 --> 01:54.000] 你好,Jim

[01:54.000 --> 01:55.000] 你現在在做甚麼

[01:55.000 --> 01:56.000] 我正在做巧克力

[01:56.000 --> 01:57.000] 你現在在做甚麼

[01:57.000 --> 01:58.000] 做甚麼

[01:58.000 --> 02:00.000] 我正在做NHP

[02:00.000 --> 02:01.000] NHP

[02:01.000 --> 02:03.000] National Highways Program

[02:03.000 --> 02:04.000] Connecting Australia

[02:04.000 --> 02:05.000] 27 Billion Dollar

[02:05.000 --> 02:06.000] Kate Zabrano

[02:06.000 --> 02:07.000] 在發展

[02:07.000 --> 02:08.000] 對

[02:08.000 --> 02:09.000] 對

[02:09.000 --> 02:10.000] 對

[02:10.000 --> 02:11.000] 對

[02:11.000 --> 02:12.000] 我可能會把那一個

[02:12.000 --> 02:14.000] 放在背後

[02:14.000 --> 02:15.000] 你對Clerk Priority

[02:15.000 --> 02:16.000] 第一

[02:16.000 --> 02:17.000] 對,國際戰鬥

[02:17.000 --> 02:18.000] 我們可能要把

[02:18.000 --> 02:19.000] 一半的氣勢

[02:19.000 --> 02:20.000] 滑倒了

[02:20.000 --> 02:21.000] 我只是半小時

[02:21.000 --> 02:22.000] 告訴你一件事

[02:22.000 --> 02:24.000] 我們在討論長 term project

[02:24.000 --> 02:25.000] 那聲音很棒

[02:25.000 --> 02:26.000] 我意思是

[02:26.000 --> 02:27.000] 你不要放在自己身上

[02:27.000 --> 02:28.000] 我不是放在自己身上

[02:28.000 --> 02:29.000] 我是放在你身上

[02:31.000 --> 02:32.000] 他不願意喝咖啡

[02:32.000 --> 02:34.000] 不願意

[02:35.000 --> 02:37.000] 那些大男人在討論你

[02:37.000 --> 02:38.000] 那些大男人

[02:38.000 --> 02:39.000] 他曾經在樓下工作

[02:39.000 --> 02:40.000] 但他移動了

[02:40.000 --> 02:41.000] 在這裡

[02:41.000 --> 02:42.000] 他在哪裡

[02:42.000 --> 02:43.000] 在那邊

[02:43.000 --> 02:44.000] 旁邊

[02:44.000 --> 02:45.000] 是否安全

[02:45.000 --> 02:46.000] 當然

[02:49.000 --> 02:50.000] 沒有人在

[02:50.000 --> 02:51.000] 他在附近

[02:51.000 --> 02:52.000] 那為什麼我們在說

[02:52.000 --> 02:53.000] 我不知道

[02:54.000 --> 02:55.000] 他在問我們

[02:55.000 --> 02:56.000] 他在問我們

[02:56.000 --> 02:57.000] 他的表演表演

[02:57.000 --> 02:58.000] 什麼

[02:58.000 --> 02:59.000] 我不知道

[02:59.000 --> 03:00.000] 他在前面

[03:00.000 --> 03:01.000] 他在前面

[03:01.000 --> 03:02.000] 所以希望你能做到

[03:02.000 --> 03:03.000] 他在這裡

[03:03.000 --> 03:04.000] 我怎麼應該

[03:04.000 --> 03:05.000] 在表演表演

[03:05.000 --> 03:06.000] 在我前面

[03:06.000 --> 03:07.000] 我認為我們必須

[03:07.000 --> 03:08.000] 為什麼

[03:08.000 --> 03:09.000] 這是一件事

[03:09.000 --> 03:10.000] 你的表演

[03:10.000 --> 03:11.000] 好

[03:11.000 --> 03:12.000] 你給我一個 Summary

[03:12.000 --> 03:13.000] 我看他做什麼

[03:13.000 --> 03:14.000] 我不知道

[03:14.000 --> 03:15.000] 你找到嗎

[03:15.000 --> 03:16.000] 我問他

[03:16.000 --> 03:17.000] 你不要問他

[03:17.000 --> 03:18.000] 為什麼我們在說

[03:18.000 --> 03:19.000] 他在討論

[03:19.000 --> 03:20.000] 他會在討論

[03:20.000 --> 03:21.000] 當然

[03:21.000 --> 03:22.000] 你怎麼會這樣

[03:22.000 --> 03:23.000] 你喜歡他

Traceback (most recent call last):

File "c:\users\wb491\appdata\local\programs\python\python38\lib\runpy.py", line 192, in _run_module_as_main

return _run_code(code, main_globals, None,

File "c:\users\wb491\appdata\local\programs\python\python38\lib\runpy.py", line 85, in _run_code

exec(code, run_globals)

File "C:\Users\wb491\AppData\Local\Programs\Python\Python38\Scripts\whisper.exe\main.py", line 7, in <module>

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\transcribe.py", line 478, in cli

result = transcribe(model, audio_path, temperature=temperature, **args)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\transcribe.py", line 240, in transcribe

result: DecodingResult = decode_with_fallback(mel_segment)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\transcribe.py", line 170, in decode_with_fallback

decode_result = model.decode(segment, options)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context

return func(*args, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\decoding.py", line 824, in decode

result = DecodingTask(model, options).run(mel)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context

return func(*args, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\decoding.py", line 737, in run

tokens, sum_logprobs, no_speech_probs = self._main_loop(audio_features, tokens)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\decoding.py", line 687, in _main_loop

logits = self.inference.logits(tokens, audio_features)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\decoding.py", line 163, in logits

return self.model.decoder(tokens, audio_features, kv_cache=self.kv_cache)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl

return self._call_impl(*args, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl

return forward_call(*args, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\model.py", line 211, in forward

x = block(x, xa, mask=self.mask, kv_cache=kv_cache)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl

return self._call_impl(*args, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl

return forward_call(*args, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\model.py", line 136, in forward

x = x + self.attn(self.attn_ln(x), mask=mask, kv_cache=kv_cache)[0]

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl

return self._call_impl(*args, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl

return forward_call(*args, **kwargs)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\model.py", line 90, in forward

wv, qk = self.qkv_attention(q, k, v, mask)

File "c:\users\wb491\appdata\local\programs\python\python38\lib\site-packages\whisper\model.py", line 108, in qkv_attention

return (w @ v).permute(0, 2, 1, 3).flatten(start_dim=2), qk.detach()

KeyboardInterrupt

C:\2014[乌托邦(澳洲版) 第一季]Utopia.AU.S01.1080p.WEB-DL.AAC2.0.H.264-ABH[rartv]-7.83GB>
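To finish the season, the same command can be looped over the remaining episodes from the cmd prompt. A sketch (interactive cmd syntax; inside a .bat file the variable would be %%f):

for %f in (*.mkv) do whisper "%f" --model small --language Chinese --output_format srt --device cuda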
