Has anyone succeeded in converting a saved_model.pb to TensorRT INT8 format? The saved_model.pb is a TensorFlow model file exported for container deployment by AutoML. I want to run this model efficiently on an NVIDIA Jetson. I could convert it to FP16 and FP32 modes, but not to INT8 mode.
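One common reason FP16/FP32 conversion works but INT8 fails with TF-TRT is that INT8 additionally requires calibration data: the converter needs representative inputs to compute quantization ranges. Below is a minimal sketch using TensorFlow's `TrtGraphConverterV2`; the paths, input shape, and dtype are assumptions and must be adapted to your AutoML export (check your model's signature with `saved_model_cli show`):

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Assumed paths -- replace with your actual SavedModel directories.
INPUT_DIR = "saved_model"
OUTPUT_DIR = "saved_model_trt_int8"

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir=INPUT_DIR,
    precision_mode=trt.TrtPrecisionMode.INT8,
    use_calibration=True,  # INT8 needs calibration, unlike FP16/FP32
)

def calibration_input_fn():
    # Yield representative inputs matching the model's signature.
    # Shape and dtype here are assumptions (e.g. 224x224 RGB images);
    # ideally feed a few hundred real samples from your dataset.
    for _ in range(10):
        yield (np.random.random((1, 224, 224, 3)).astype(np.float32),)

# convert() runs calibration; build() optionally pre-builds TRT engines.
converter.convert(calibration_input_fn=calibration_input_fn)
converter.save(OUTPUT_DIR)
```

On a Jetson, this must run with a TensorFlow build that includes TensorRT support (e.g. NVIDIA's JetPack TensorFlow wheel). If conversion still fails, the error message from `convert()` usually names the unsupported op or the signature mismatch.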
Great question! Following along to see what members share about their experiences.