# flake8: noqa: F401
r"""
This file is in the process of migration to `torch/ao/quantization`, and
is kept here for compatibility while the migration process is ongoing.
If you are adding a new entry/functionality, please add it to
`torch/ao/quantization/fake_quantize.py`, while adding an import statement
here.
"""

from torch.ao.quantization.fake_quantize import (
    _is_fake_quant_script_module,
    _is_per_channel,
    _is_per_tensor,
    _is_symmetric_quant,
    default_fake_quant,
    default_fixed_qparams_range_0to1_fake_quant,
    default_fixed_qparams_range_neg1to1_fake_quant,
    default_fused_act_fake_quant,
    default_fused_per_channel_wt_fake_quant,
    default_fused_wt_fake_quant,
    default_histogram_fake_quant,
    default_per_channel_weight_fake_quant,
    default_weight_fake_quant,
    disable_fake_quant,
    disable_observer,
    enable_fake_quant,
    enable_observer,
    FakeQuantize,
    FakeQuantizeBase,
    FixedQParamsFakeQuantize,
    FusedMovingAvgObsFakeQuantize,
)