Conversation

@akharlamov
Contributor

The change allows building whisper.cpp with any BLAS implementation, which in our case is required for Intel oneMKL.
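As a rough illustration, a configure step against oneMKL could look like the sketch below. The `WHISPER_BLAS` / `WHISPER_BLAS_VENDOR` option names are assumptions for illustration only (check the project's CMakeLists for the actual switches); `Intel10_64lp` is the standard vendor value understood by CMake's FindBLAS module for 64-bit oneMKL with the LP64 interface.

```sh
# Hypothetical sketch: configure whisper.cpp against a generic BLAS (here Intel oneMKL).
# The WHISPER_BLAS* option names are illustrative assumptions, not confirmed flags.
source /opt/intel/oneapi/setvars.sh   # typical oneAPI install path, makes MKL discoverable

cmake -B build \
      -DWHISPER_BLAS=ON \
      -DWHISPER_BLAS_VENDOR=Intel10_64lp   # standard CMake FindBLAS vendor selector
cmake --build build --config Release
```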

ggerganov merged commit 041be06 into ggml-org:master on May 20, 2023
jacobwu-b pushed a commit to jacobwu-b/Transcriptify-by-whisper.cpp that referenced this pull request Oct 24, 2023
* Build with any BLAS library

* ci: Removed explicit CUDA nvcc path
landtanin pushed a commit to landtanin/whisper.cpp that referenced this pull request Dec 16, 2023
* Build with any BLAS library

* ci: Removed explicit CUDA nvcc path
iThalay pushed a commit to iThalay/whisper.cpp that referenced this pull request Sep 23, 2024
* Build with any BLAS library

* ci: Removed explicit CUDA nvcc path