Demo Video: https://youtu.be/yk4MmO7PUzM
Use RootKit's Setup guide for the initial prerequisites.
- Nvidia RTX 980 🆙, higher or equivalent
- And one of the following:
  - Nvidia CUDA Toolkit 11.8 DOWNLOAD HERE
- Download and Unzip the AI Aimbot and stash the folder somewhere handy 🗂️.
- Ensure you've got Python installed (like a pet python 🐍) – grab version 3.11 HERE.
- 🛑 Facing a `python is not recognized...` error? WATCH THIS!
- 🛑 Is it a `pip is not recognized...` error? WATCH THIS!
- Fire up `PowerShell` or `Command Prompt` on Windows 🔍.
- To install PyTorch, select the appropriate command based on your GPU.
  - Nvidia:
    ```
    pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
    ```
  - AMD or CPU:
    ```
    pip install torch torchvision torchaudio
    ```
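After the install finishes, you can sanity-check which PyTorch build pip grabbed: CUDA wheels from the cu118 index carry a `+cu118` suffix in `torch.__version__`, while plain CPU wheels don't. A minimal sketch (the helper below is ours for illustration, not part of the aimbot):

```python
# Sketch: verify which PyTorch build you installed by inspecting its
# version string. CUDA wheels from the cu118 index report a "+cu118"
# suffix (e.g. "2.0.1+cu118"); plain CPU wheels report just "2.0.1".
def is_cuda_build(version: str) -> bool:
    """Return True if a torch version string indicates a CUDA build."""
    return "+cu" in version

# After installing, you can check your real install like this:
#   import torch
#   print(torch.__version__)  # e.g. "2.0.1+cu118"
print(is_cuda_build("2.0.1+cu118"))  # True
print(is_cuda_build("2.0.1"))        # False
```

If the suffix is missing on an Nvidia setup, pip most likely pulled the CPU wheel — rerun the Nvidia command above with the `--index-url` flag included.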
- 📦 Run the file below to install all the libraries:
  ```
  Install Requirements.bat
  ```
Follow these steps after Python and all packages have been installed:
- Tweak the `onnxChoice` variable in the menu to correspond with your hardware specs:
  ```python
  onnxChoice = 1  # CPU ONLY 🖥
  onnxChoice = 2  # AMD/NVIDIA ONLY 🎮
  onnxChoice = 3  # NVIDIA ONLY 🏎️
  ```
- IF you have an NVIDIA setup, run the following:
  ```
  pip install onnxruntime-gpu
  pip install cupy-cuda11x
  ```
- Follow the same steps as for the Fast 🏃♂️ Version above, except for step 4, where you will run `python main_onnx.py` instead.
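Under the hood, `onnxChoice` roughly selects an ONNX Runtime execution provider. The mapping below is a hedged sketch — the exact provider selection inside `main_onnx.py` is our assumption, not taken from the project's source:

```python
# Sketch: how an onnxChoice value could map to ONNX Runtime execution
# providers. Illustrative only; main_onnx.py's actual mapping may differ.
def providers_for(onnx_choice: int) -> list[str]:
    mapping = {
        1: ["CPUExecutionProvider"],                           # CPU ONLY 🖥
        2: ["DmlExecutionProvider"],                           # AMD/NVIDIA via DirectML 🎮
        3: ["CUDAExecutionProvider", "CPUExecutionProvider"],  # NVIDIA CUDA, CPU fallback 🏎️
    }
    return mapping[onnx_choice]

print(providers_for(3))
# Such a list would typically be passed to
# onnxruntime.InferenceSession(model_path, providers=providers_for(onnxChoice)).
```

This is why the `onnxruntime-gpu` package is only needed for choice 3 — the CUDA provider lives in that package.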
Follow these sparkly steps to get your TensorRT ready for action! 🛠️✨
- **Introduction** 🎬 Watch the TensorRT section of the setup video 🎥 before you begin. It's loaded with useful tips!
- **Oops! Don't Forget the Environment** 🌱 We forgot to mention adding environment variable paths in the video. Make sure to do this part!
- **Get Support If You're Stumped** 🤔 If you ever feel lost, you can always `@Wonder` your questions in our Discord 💬. Wonder is here to help!
- **Install Cupy** Run the following:
  ```
  pip install cupy-cuda11x
  ```
- **CUDNN Installation** 🧩 Click to install CUDNN 📥. You'll need an Nvidia account to proceed. Don't worry, it's free.
- **Unzip and Relocate** 📁➡️ Open the .zip CuDNN file and move all the folders/files to where the CUDA Toolkit is on your machine, usually at `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8`.
- **Get TensorRT 8.6 GA** 🔽 Fetch TensorRT 8.6 GA 🛒.
- **Unzip and Relocate** 📁➡️ Open the .zip TensorRT file and move all the folders/files to where the CUDA Toolkit is on your machine, usually at `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8`.
- **Python TensorRT Installation** 🎡 Once you have all the files copied over, you should have a folder at `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\python`. If you do, run the following command to install TensorRT in Python:
  ```
  pip install "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\python\tensorrt-8.6.1-cp311-none-win_amd64.whl"
  ```
  🚨 If this step didn't work, don't stress out! 😅 The labeling of the files corresponds with the Python version you have installed on your machine. We're not looking for the 'lean' or 'dispatch' versions. 🔍 Just locate the correct file and replace the path with your new one. 🔄 You've got this! 💪
- **Set Your Environment Variables** 🌎 Add these paths to your environment:
  ```
  C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\lib
  C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp
  C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin
  ```
- **Download Pre-trained Models** 🤖 You can use one of the `.engine` models we supply. But if it doesn't work, then you will need to re-export it. Grab the `.pt` file for the model you want. We recommend `yolov5s.pt` or `yolov5m.pt` HERE 🔗.
- **Run the Export Script** 🏃♂️💻 Time to run `BUILD_ENGINE.bat`.

Note: You can pick a different YOLOv5 model size. TensorRT's power allows for larger models if desired!

If you've followed these steps, you should be all set with TensorRT! ⚙️🚀
Don't forget to select your model in the menu!
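The `cp311` in the wheel name is the CPython version tag the installation note above refers to. This small hypothetical helper (not part of `BUILD_ENGINE.bat` or the project's scripts) shows how the expected file name lines up with your Python version:

```python
import sys

# Sketch: the TensorRT wheel name encodes your Python version as a
# "cpXY" tag (cp311 = Python 3.11). This builds the file name to look
# for under ...\CUDA\v11.8\python. Illustrative helper only.
def tensorrt_wheel_name(major: int, minor: int, trt_version: str = "8.6.1") -> str:
    return f"tensorrt-{trt_version}-cp{major}{minor}-none-win_amd64.whl"

# For the Python you're running right now:
print(tensorrt_wheel_name(sys.version_info.major, sys.version_info.minor))
print(tensorrt_wheel_name(3, 11))  # tensorrt-8.6.1-cp311-none-win_amd64.whl
```

If the name printed for your interpreter doesn't match the file in the `python` folder, that mismatch is exactly the "labeling" problem the note warns about.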
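To sanity-check the environment-variable step, here is a small hypothetical helper (ours, not part of the project) that reports which of the three CUDA directories are missing from a PATH-style string:

```python
import os

# The three directories the TensorRT setup asks you to add to PATH.
REQUIRED_DIRS = [
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\lib",
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp",
    r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin",
]

# Sketch: return the required directories not present in a PATH string.
# Comparison is case-insensitive and ignores trailing backslashes.
def missing_from_path(path_value: str, sep: str = ";") -> list[str]:
    entries = {p.rstrip("\\").lower() for p in path_value.split(sep)}
    return [d for d in REQUIRED_DIRS if d.lower() not in entries]

# On Windows you would call it on your real PATH:
#   missing_from_path(os.environ.get("PATH", ""), sep=os.pathsep)
print(missing_from_path(";".join(REQUIRED_DIRS)))  # []
```

An empty list means all three paths are set; anything returned still needs to be added before the engine build will find the CUDA DLLs.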