Deep Network Quantization via Error Compensation.


Journal

IEEE Transactions on Neural Networks and Learning Systems
ISSN: 2162-2388
Abbreviated title: IEEE Trans Neural Netw Learn Syst
Country: United States
NLM ID: 101616214

Publication information

Publication date:
Sep 2022
History:
pubmed: 15 Apr 2021
medline: 15 Apr 2021
entrez: 14 Apr 2021
Status: ppublish

Abstract

For portable devices with limited resources, it is often difficult to deploy deep networks due to the prohibitive computational overhead. Numerous approaches have been proposed to quantize weights and/or activations to speed up the inference. Loss-aware quantization has been proposed to directly formulate the impact of weight quantization on the model's final loss. However, we discover that, under certain circumstances, such a method may not converge and end up oscillating. To tackle this issue, we introduce a novel loss-aware quantization algorithm to efficiently compress deep networks with low bit-width model weights. We provide a more accurate estimation of gradients by leveraging the Taylor expansion to compensate for the quantization error, which leads to better convergence behavior. Our theoretical analysis indicates that the gradient mismatch issue can be fixed by the newly introduced quantization error compensation term. Experimental results for both linear models and convolutional networks verify the effectiveness of our proposed method.
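The abstract's core idea — compensating the quantization error in the gradient via a Taylor expansion — can be illustrated with a minimal sketch. This is not the paper's algorithm; the quantizer, the diagonal-Hessian assumption, and all names below are illustrative assumptions. The point is that the gradient evaluated at the quantized weights, plus a first-order correction term H(w − w_q), better approximates the gradient at the full-precision weights (and is exact for a quadratic loss):

```python
import numpy as np

def quantize(w, bits=2):
    # Illustrative uniform symmetric quantizer (not the paper's exact scheme).
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def compensated_gradient(grad_at_q, hess_diag, w, w_q):
    # First-order Taylor correction of the gradient mismatch:
    #   g(w) ≈ g(w_q) + H (w - w_q), with H approximated by its diagonal.
    return grad_at_q + hess_diag * (w - w_q)

# Toy check on a quadratic loss L(w) = 0.5 * ||w - t||^2, where the
# correction is exact: gradient is (w - t) and the Hessian diagonal is 1.
t = np.array([0.1, 0.2])
w = np.array([0.3, -0.7])          # full-precision weights
w_q = quantize(w, bits=2)          # low bit-width weights
grad_at_q = w_q - t                # gradient evaluated at quantized point
g = compensated_gradient(grad_at_q, np.ones_like(w), w, w_q)
```

For this quadratic toy problem, `g` recovers the exact full-precision gradient `w - t`, whereas the uncompensated `grad_at_q` does not — a small instance of the gradient-mismatch issue the abstract refers to.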

Identifiers

pubmed: 33852390
doi: 10.1109/TNNLS.2021.3064293

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

4960-4970

Authors

MeSH classifications