📄️ Llama 2 inference
WasmEdge now supports running the Llama 2 series of models in Rust. We will use this example project to show how to make AI inference with a Llama 2 model in WasmEdge and Rust.
📄️ Mediapipe solutions
Mediapipe is a collection of highly popular AI models developed by Google. They focus on intelligent processing of media files and streams. The mediapipe-rs crate is a Rust library for data processing using the Mediapipe suite of models. The crate provides Rust APIs to pre-process the data in media files or streams, run AI model inference to analyze the data, and then post-process or manipulate the media data based on the AI output.
📄️ PyTorch Backend
We will use this example project to show how to make AI inference with a PyTorch model in WasmEdge and Rust.
📄️ TensorFlow Lite Backend
We will use this example project to show how to make AI inference with a TensorFlow Lite model in WasmEdge and Rust.
📄️ OpenVINO Backend
We will use this example project to show how to make AI inference with an OpenVINO model in WasmEdge and Rust.
📄️ TensorFlow Plug-in For WasmEdge
Developers can use WASI-NN to run model inference. However, for TensorFlow and TensorFlow-Lite users, the WASI-NN APIs are not the most convenient way to retrieve the input and output tensors. Therefore, WasmEdge provides a TensorFlow-related plug-in and a Rust SDK for running model inference in WASM.