📄️ LLM inference
WasmEdge now supports running open-source Large Language Models (LLMs) in Rust. We will use this example project to show how to make AI inference with the llama-3.1-8B model in WasmEdge and Rust.
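As a preview, the sketch below shows the WASI-NN call sequence such an example follows, assuming the GGUF model has been preloaded under the alias `default` with WasmEdge's GGML plug-in; the buffer size and prompt are illustrative, and a real application would format the prompt with the model's chat template.

```rust
use wasmedge_wasi_nn::{ExecutionTarget, GraphBuilder, GraphEncoding, TensorType};

fn main() {
    // Assumes the model was preloaded, e.g.
    //   wasmedge --nn-preload default:GGML:AUTO:Meta-Llama-3.1-8B-Instruct-Q5_K_M.gguf app.wasm
    let graph = GraphBuilder::new(GraphEncoding::Ggml, ExecutionTarget::AUTO)
        .build_from_cache("default")
        .expect("failed to load the preloaded model");
    let mut ctx = graph
        .init_execution_context()
        .expect("failed to create an execution context");

    // The prompt is passed to the model as a UTF-8 byte tensor at input index 0.
    let prompt = "What is the capital of France?";
    ctx.set_input(0, TensorType::U8, &[1], prompt.as_bytes())
        .expect("failed to set the input tensor");

    // Run inference and read the generated text from output index 0.
    ctx.compute().expect("inference failed");
    let mut output = vec![0u8; 4096];
    let size = ctx.get_output(0, &mut output).expect("failed to read the output");
    println!("{}", String::from_utf8_lossy(&output[..size]));
}
```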
📄️ Mediapipe solutions
Mediapipe is a collection of highly popular AI models developed by Google. They focus on intelligent processing of media files and streams. The mediapipe-rs crate is a Rust library for data processing using the Mediapipe suite of models. The crate provides Rust APIs to pre-process the data in media files or streams, run AI model inference to analyze the data, and then post-process or manipulate the media data based on the AI output.
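For example, an object-detection task with mediapipe-rs roughly follows the builder pattern sketched below (adapted from the crate's documented usage; exact method names may vary between crate versions, and the model and image paths are placeholders).

```rust
use mediapipe_rs::tasks::vision::ObjectDetectorBuilder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder paths: a MediaPipe object-detection TFLite model and an input image.
    let model_path = "object_detection.tflite";
    let img_path = "input.jpg";

    // Build the task, run inference on the decoded image (via the `image` crate),
    // and print the detection results.
    let detection_result = ObjectDetectorBuilder::new()
        .max_results(3)
        .build_from_file(model_path)?
        .detect(&image::open(img_path)?)?;

    println!("{}", detection_result);
    Ok(())
}
```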
📄️ OpenVINO Backend
We will use this example project to show how to make AI inference with an OpenVINO model in WasmEdge and Rust.
📄️ TensorFlow Lite Backend
We will use this example project to show how to make AI inference with a TensorFlow Lite model in WasmEdge and Rust.
📄️ PyTorch Backend
We will use this example project to show how to make AI inference with a PyTorch model in WasmEdge and Rust.
📄️ Piper Backend
We will use this example project to show how to make AI inference with a Piper model in WasmEdge and Rust.
📄️ Whisper Backend
We will use this example project to show how to make AI inference with a Whisper model in WasmEdge and Rust.
📄️ TensorFlow And TensorFlow-Lite Plug-in For WasmEdge
Developers can use WASI-NN to run model inference. However, for TensorFlow and TensorFlow-Lite users, the WASI-NN APIs could be friendlier when it comes to retrieving the input and output tensors. Therefore, WasmEdge provides TensorFlow-related plug-ins and a Rust SDK for running model inference in Wasm.
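A rough sketch of inference through the TensorFlow-Lite Rust SDK (the wasmedge_tensorflow_interface crate) is shown below; the model file, tensor names, and shape are placeholders for a MobileNet-style classifier, and the exact API may differ between SDK versions.

```rust
use wasmedge_tensorflow_interface::{ModelType, Session};

fn main() {
    // Placeholder model and input: a TensorFlow-Lite image classifier and a
    // flattened 224x224 RGB image tensor. Tensor names and shapes depend on the model.
    let model_data: Vec<u8> = std::fs::read("mobilenet_v2.tflite").expect("missing model file");
    let flat_img: Vec<u8> = vec![0u8; 224 * 224 * 3];

    // Create a session backed by the WasmEdge TensorFlow-Lite plug-in,
    // bind the named input tensor, run inference, and read the named output.
    let mut session = Session::new(&model_data, ModelType::TensorFlowLite);
    session
        .add_input("input", &flat_img, &[1, 224, 224, 3])
        .run();
    let res_vec: Vec<u8> = session.get_output("MobilenetV2/Predictions/Softmax");
    println!("the output tensor has {} elements", res_vec.len());
}
```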