[[https://www.youtube.com/watch?v=AgezOkBTV90|Pruning and Model Compression]]

[[https://www.youtube.com/watch?v=0WZmuryQdgg|Model Compression]]

[[https://www.youtube.com/watch?v=-796LBBvipw|The Knowledge Within: Methods for Data-Free Model Compression]]

[[https://analyticsindiamag.com/model-compression-is-the-big-ml-flavour-of-2021/|Model Compression Is The Big ML Flavour Of 2021]]

[[https://nni.readthedocs.io/en/stable/model_compression.html|Model Compression (NNI documentation)]]

[[https://arxiv.org/pdf/1710.09282.pdf|A Survey of Model Compression and Acceleration for Deep Neural Networks]]

[[https://medium.com/gsi-technology/an-overview-of-model-compression-techniques-for-deep-learning-in-space-3fd8d4ce84e5|An Overview of Model Compression Techniques for Deep Learning in Space]]

[[https://arxiv.org/pdf/1802.03494v4.pdf|AMC: AutoML for Model Compression and Acceleration on Mobile Devices]]

[[https://arxiv.org/pdf/1908.08962v2.pdf|Well-Read Students Learn Better: On the Importance of Pre-training Compact Models]]

[[https://arxiv.org/pdf/1602.07360v4.pdf|SqueezeNet: AlexNet-Level Accuracy with 50x Fewer Parameters and <0.5MB Model Size]]

[[https://arxiv.org/pdf/1802.05668v1.pdf|Model Compression via Distillation and Quantization]]

[[https://arxiv.org/pdf/1902.09574v1.pdf|The State of Sparsity in Deep Neural Networks]]

[[https://towardsdatascience.com/three-model-compression-methods-you-need-to-know-in-2021-1adee49cc35a|Three Model Compression Methods You Need To Know in 2021]]