NVIDIA Triton Inference Server as a service
"NVIDIA Triton™ Inference Server simplifies the deployment of AI models at scale in production. Open-source inference serving software, it lets teams deploy trained AI models from any framework (TensorFlow, NVIDIA® TensorRT®, PyTorch, ONNX Runtime, or custom) from local storage or cloud platform on any GPU- or CPU-based infrastructure (cloud, data center, or edge)." - https://developer.nvidia.com/nvidia-triton-inference-server