Code:
import whisper

# Load the small Whisper model and transcribe with per-word timestamps
model = whisper.load_model("small")
transcript = model.transcribe(
    audio="path/to/audio",
    word_timestamps=True,
)
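Locally, with word_timestamps=True, the per-word timings can be read from the returned segments. A minimal sketch, assuming the standard openai-whisper result layout:

# Each segment carries a "words" list when word_timestamps=True
for segment in transcript["segments"]:
    for word in segment.get("words", []):
        print(f'{word["start"]:.2f}-{word["end"]:.2f}: {word["word"]}')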
In Databricks, I query the model as follows:
from databricks.sdk import WorkspaceClient

workspace_client = WorkspaceClient()

# base64_audio_chunks holds base64-encoded audio chunks keyed by chunk name
response = workspace_client.serving_endpoints.query(
    name="whisperv3",
    inputs=[base64_audio_chunks[first_audio_key]]
)
print(response.predictions[0])