We have hosted the application llm so that it can be run on our online workstations, either with Wine or directly.
Quick description of llm:
llm is an ecosystem of Rust libraries for working with large language models, built on top of the fast, efficient GGML library for machine learning. The primary entry point for developers is the llm crate, which wraps llm-base and the supported model crates. Documentation for the released version is available on Docs.rs.

For end users, there is a CLI application, llm-cli, which provides a convenient interface for interacting with supported models. Text generation can be done as a one-off based on a prompt, or interactively through REPL or chat modes. The CLI can also be used to serialize (print) decoded models, quantize GGML files, or compute the perplexity of a model. It can be downloaded from the latest GitHub release or by installing it from crates.io.
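For programmatic use, the llm crate exposes model loading and inference directly. The following is a minimal sketch of one-off generation, adapted from the crate's documented usage and assuming a 0.1.x release of llm together with the rand crate; exact signatures and type names vary between releases, and the model path and prompt are placeholders.

    use llm::Model; // brings start_session into scope
    use std::io::Write;

    fn main() {
        // Load a GGML model from disk (the path is a placeholder).
        let llama = llm::load::<llm::models::Llama>(
            std::path::Path::new("/path/to/ggml-model.bin"),
            // Use the tokenizer embedded in the model file.
            llm::TokenizerSource::Embedded,
            // Default model parameters.
            Default::default(),
            // Report loading progress on stdout.
            llm::load_progress_callback_stdout,
        )
        .unwrap_or_else(|err| panic!("Failed to load model: {err}"));

        // One-off text generation from a prompt.
        let mut session = llama.start_session(Default::default());
        let res = session.infer::<std::convert::Infallible>(
            &llama,
            &mut rand::thread_rng(),
            &llm::InferenceRequest {
                prompt: "Rust is a cool programming language because".into(),
                parameters: &llm::InferenceParameters::default(),
                play_back_previous_tokens: false,
                maximum_token_count: None,
            },
            // No extra output (logits/embeddings) requested.
            &mut Default::default(),
            // Stream each token to stdout as it is produced.
            |r| match r {
                llm::InferenceResponse::PromptToken(t)
                | llm::InferenceResponse::InferredToken(t) => {
                    print!("{t}");
                    std::io::stdout().flush().unwrap();
                    Ok(llm::InferenceFeedback::Continue)
                }
                _ => Ok(llm::InferenceFeedback::Continue),
            },
        );

        match res {
            Ok(stats) => println!("\n\nInference stats:\n{stats}"),
            Err(err) => println!("\n{err}"),
        }
    }

The REPL and chat modes, quantization, and perplexity computation mentioned above are provided by llm-cli rather than by this library API.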
Features:
- llm is powered by the ggml tensor library, and aims to bring the robustness and ease of use of Rust to the world of large language models
- This project depends on Rust v1.65.0 or above and a modern C toolchain
- The llm library is engineered to take advantage of hardware accelerators such as CUDA and Metal for optimized performance
- To enable llm to harness these accelerators, some preliminary configuration steps are necessary
- The easiest way to get started with llm-cli is to download a pre-built executable from a released version of llm
- By default, llm builds with support for remotely fetching the tokenizer from Hugging Face's model hub
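Which tokenizer is used is chosen when the model is loaded. Below is a minimal sketch, assuming the TokenizerSource variant names of the llm 0.1.x releases; the repository name is a placeholder, and the Docs.rs documentation for the version in use should be consulted.

    /// Choose where llm::load gets its tokenizer from. Variant names
    /// are as of the llm 0.1.x releases; check Docs.rs for your version.
    fn tokenizer_source(offline: bool) -> llm::TokenizerSource {
        if offline {
            // Read the tokenizer embedded in the GGML model file itself.
            llm::TokenizerSource::Embedded
        } else {
            // Fetch the tokenizer from Hugging Face's model hub at load
            // time; the repository name below is only a placeholder.
            llm::TokenizerSource::HuggingFaceRemote("example-org/example-model".to_owned())
        }
    }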
Programming Language: Rust.