Fix CUDA build for llm-chain-llama-sys #261

Closed
mhlakhani wants to merge 1 commit from the fix-cuda-build branch

Conversation

mhlakhani

This fixes building llm-chain-llama-sys with CUDA enabled (by setting CARGO_FEATURE_CUDA).

Without this PR, builds fail with errors similar to ggml-org/llama.cpp#1728.

I initially spent some time on a solution that happened to work on my machine, before reading the comment at the top of the file referencing https://github.com/tazz4843/whisper-rs/blob/master/sys/build.rs. That file already had a cleaner cross-platform solution, so I copied it.

With this PR I can successfully build llm-chain-llama-sys with CUDA support (confirmed by setting the environment flag and running a test app on my machine).
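
For context, here is a minimal sketch of what that cross-platform approach can look like, modeled loosely on the whisper-rs build.rs linked above. The library search paths and OS handling are assumptions for a standard CUDA install, not the exact diff in this PR:

fn main() {
    // Cargo sets CARGO_FEATURE_CUDA for build scripts when the crate's
    // `cuda` feature is enabled, so gate all CUDA linking on it.
    if std::env::var("CARGO_FEATURE_CUDA").is_ok() {
        // CARGO_CFG_TARGET_OS describes the target, which matters when
        // cross-compiling (cfg!(target_os) in build.rs describes the host).
        let target_os = std::env::var("CARGO_CFG_TARGET_OS").unwrap();
        if target_os == "windows" {
            // On Windows the CUDA toolkit installer exports CUDA_PATH.
            let cuda_path = std::env::var("CUDA_PATH")
                .expect("CUDA_PATH must be set when building with CUDA");
            println!("cargo:rustc-link-search={}/lib/x64", cuda_path);
        } else {
            // Typical toolkit locations on Linux (assumed; adjust per distro).
            println!("cargo:rustc-link-search=/usr/local/cuda/lib64");
            println!("cargo:rustc-link-search=/opt/cuda/lib64");
        }
        for lib in ["cublas", "cudart", "cublasLt"] {
            println!("cargo:rustc-link-lib={}", lib);
        }
    }
}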

@andychenbruce
Contributor

For me on Ubuntu, having this as my build.rs works:

fn main() {
    // CUDA BLAS and runtime libraries, plus the system libraries they depend on.
    let libs: &[&str] = &[
        "cublas", "culibos", "cudart", "cublasLt", "pthread", "dl", "rt",
    ];
    for lib in libs {
        // Pass -l<lib> straight through to the linker.
        println!("cargo:rustc-link-arg=-l{}", lib);
    }
}
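
(Worth noting: this passes raw -l flags to the linker via cargo:rustc-link-arg; cargo's dedicated cargo:rustc-link-lib=<name> directive expresses the same linkage and is the more conventional choice for libraries.)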

@mhlakhani
Author

#266 does this a little better

mhlakhani closed this Feb 11, 2024
mhlakhani deleted the fix-cuda-build branch February 11, 2024 21:32