A Survey of Quantization Methods for Efficient Neural Network Inference

Introduction: directions for achieving efficient neural networks

Designing efficient NN models

Micro-architecture optimization

  • Depthwise convolutions
  • Low-rank factorization
  • Inception modules
  • Residual structures
  • Automated machine learning (AutoML)

Macro-architecture optimization

Automated architecture search (EfficientNet)

•     Aims to automatically find the right model architecture under given constraints on model size, depth, and/or width

  • Neural architecture search (NAS)
  • Unstructured pruning: removing insignificant neurons

Jointly considering the NN architecture and the hardware

Adapting the NN architecture to a specific target hardware platform

Pruning

Neurons with small saliency are removed, yielding a sparse computational graph

•     Advantage: aggressive unstructured pruning can remove most of the NN parameters with very little impact on the model's generalization performance

•     Disadvantage: it results in sparse matrix operations, which are notoriously hard to accelerate and are typically memory-bound [21,65]

  • Structured pruning: removing a whole group of parameters (e.g., an entire convolutional filter)

•     Advantage: it changes the input and output shapes of layers and weight matrices, so dense matrix operations can still be used.

•     Disadvantage: aggressive structured pruning often leads to a significant accuracy drop

Knowledge distillation

Train a large model and then use it as a teacher to train a more compact student model

Quantization

Quantization for inference is the focus of this survey

Quantization and neuroscience

Information stored in continuous form is inevitably corrupted by noise, whereas discrete signal representations are more robust to noise

Basic concepts of quantization

Problem setup and notation

Basic quantization concepts

Uniform quantization

  • Quantization function

•     Q(r) = Int(r / S) − Z, where r is a real-valued input (weight or activation), S is a real-valued scaling factor, Z is the integer zero point, and Int maps to an integer through rounding

  • De-quantization function

•     r̃ = S (Q(r) + Z); the recovered r̃ only approximates r because of the rounding error

  • The most important choice is the scaling factor S, which essentially partitions the range of the real values r into a given number of intervals

•     S = (β − α) / (2^b − 1)

  • [α, β] is the clipping range of the real values and b is the quantization bit width; to determine S we first have to determine [α, β], and the process of choosing this clipping range is called calibration (a small sketch follows below)
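A minimal numpy sketch of the two formulas above (asymmetric uniform quantization with min/max calibration); the function names and the 8-bit default are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def minmax_calibrate(x, bits=8):
    # min/max calibration: clipping range [alpha, beta] = [x.min(), x.max()]
    alpha, beta = float(x.min()), float(x.max())
    S = (beta - alpha) / (2 ** bits - 1)   # scaling factor S = (beta - alpha) / (2^b - 1)
    Z = int(round(alpha / S))              # zero point, so that alpha maps to level 0
    return S, Z

def quantize(r, S, Z, bits=8):
    # Q(r) = clip(round(r / S) - Z) onto the b-bit unsigned integer grid
    q = np.round(r / S).astype(np.int64) - Z
    return np.clip(q, 0, 2 ** bits - 1)

def dequantize(q, S, Z):
    # r~ = S * (q + Z); only approximates r because of the rounding error
    return S * (q.astype(np.float64) + Z)

x = np.random.randn(10_000).astype(np.float32)
S, Z = minmax_calibrate(x)
x_hat = dequantize(quantize(x, S, Z), S, Z)
print("scale:", S, "max reconstruction error:", np.abs(x - x_hat).max())
```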
Symmetric and asymmetric quantization

  • Asymmetric quantization

•     The clipping range satisfies −α != β; a common choice is α = r_min, β = r_max

•     Compared with symmetric quantization, asymmetric quantization yields a tighter clipping range

•     This is especially important when the target weights or activations are imbalanced, e.g., activations after ReLU are always non-negative

  • Symmetric quantization

•     The clipping range satisfies −α = β; a common choice is −α = β = max(|r_max|, |r_min|)

•     Two ways of choosing the scaling factor S

•     Full-range symmetric quantization

•     Uses the full INT8 range [-128, 127]

•     S = 2 max(|r|) / (2^n − 1)

•     Restricted-range symmetric quantization

•     The quantization range is [-127, 127], and the value of S is

•     S = max(|r|) / (2^(n−1) − 1)

•     The full-range approach is more accurate

  • Different calibration methods for choosing (α, β)

•     Min/max is the most common

•     This method is susceptible to outliers in the activations, which enlarge the quantization range and reduce the quantization resolution

•     Use a percentile of the values instead of the raw min/max (see the sketch after this list)

•     Choose α and β so that the KL divergence (i.e., the information loss) between the real values and the quantized values is minimized
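A small illustrative sketch of percentile calibration; the 0.1% / 99.9% thresholds are arbitrary assumptions, the point is only that a single outlier no longer dictates the clipping range.

```python
import numpy as np

def percentile_range(x, lower_pct=0.1, upper_pct=99.9):
    # clip at percentiles instead of the raw min/max to ignore rare outliers
    return np.percentile(x, lower_pct), np.percentile(x, upper_pct)

acts = np.concatenate([np.random.randn(10_000), [80.0]])    # activations with one huge outlier
print("min/max range:   ", acts.min(), acts.max())          # blown up by the outlier
print("percentile range:", percentile_range(acts))          # much tighter, better resolution
```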

  • Comparing symmetric and asymmetric quantization

•     Symmetric quantization is widely adopted in practice for quantizing weights, because a zero zero-point reduces the computational cost during inference [247] and makes the implementation more straightforward.

•     When the clipping range can be skewed or asymmetric, symmetric quantization is inferior to asymmetric quantization


Static and dynamic quantization

  • Two ways of quantizing activations

•     Dynamic quantization

•     The range is computed dynamically at runtime for each activation map

•     Larger computational overhead, better accuracy

•     Static quantization (the common choice)

•     The range is pre-computed and stays fixed during inference

•     No extra computational overhead, somewhat lower accuracy

•     Ways of pre-computing the range

•     Run a series of calibration inputs to compute the typical range of the activations [112,259]

•     (Common) Minimize the mean squared error (MSE) between the original unquantized distribution and the corresponding quantized values [40,214,221,273], or the entropy [184] (see the sketch after this list)

•     Learn the clipping range during NN training [36,144,268,278]

•     Notable works here are LQNets [268], PACT [36], LSQ [56], and LSQ+ [15], which jointly optimize the clipping range and the NN weights during training.
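A sketch of static range pre-computation by MSE minimization, in the spirit of the MSE-based calibration cited above (not any specific paper's implementation): a handful of candidate symmetric clipping values are scanned and the one with the lowest reconstruction error is kept.

```python
import numpy as np

def quant_dequant_sym(x, clip, bits=8):
    # symmetric (restricted-range) quantize/de-quantize with clipping value `clip`
    scale = clip / (2 ** (bits - 1) - 1)
    q = np.clip(np.round(x / scale), -(2 ** (bits - 1) - 1), 2 ** (bits - 1) - 1)
    return q * scale

def mse_calibrate(x, bits=8, steps=100):
    best_clip, best_mse = None, np.inf
    for frac in np.linspace(0.1, 1.0, steps):        # candidate clips as fractions of max|x|
        clip = frac * np.abs(x).max()
        mse = np.mean((x - quant_dequant_sym(x, clip, bits)) ** 2)
        if mse < best_mse:
            best_clip, best_mse = clip, mse
    return best_clip

x = np.random.standard_cauchy(100_000)               # heavy-tailed values, so clipping helps
print("max |x|:", np.abs(x).max(), " MSE-optimal clip:", mse_calibrate(x))
```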

Quantization granularity

  • The granularity at which the clipping range [α, β] of the weights is computed

•     Layerwise quantization

•     The clipping range is determined by considering all the weights in the convolutional filters of a layer [131]

•     Groupwise quantization

•     Multiple different channels inside a layer are grouped to compute the clipping range (of the activations or the convolution kernels)

•     Channelwise quantization

•     Each convolutional filter gets its own clipping range, independently of the other channels [104,112,131,215,268,276] (a small sketch follows below)
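A minimal sketch contrasting a layerwise (per-tensor) scale with channelwise scales for a conv weight tensor; the shapes and the symmetric INT8 scale formula are illustrative assumptions.

```python
import numpy as np

# conv weights of shape [out_channels, in_channels, kH, kW], with per-filter magnitudes
W = np.random.randn(64, 32, 3, 3) * np.random.uniform(0.1, 2.0, size=(64, 1, 1, 1))

scale_layerwise   = np.abs(W).max() / 127                         # one scale for the whole layer
scale_channelwise = np.abs(W).reshape(64, -1).max(axis=1) / 127   # one scale per filter

print(scale_layerwise)          # dominated by the filter with the largest weights
print(scale_channelwise[:4])    # each filter gets a scale matched to its own range
```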

Non-uniform quantization

  • The quantization steps and quantization levels are non-uniformly spaced

•     Q(r) = X_i,  if r ∈ [Δ_i, Δ_{i+1})

•     X_i are the discrete quantization levels and Δ_i the quantization steps (thresholds); neither the Δ_i nor the X_i are uniformly spaced

•     For a fixed bit width, non-uniform quantization may achieve higher accuracy, because the distribution can be captured better by focusing more on important value regions or by finding an appropriate dynamic range

•     Non-uniform quantization methods

•     A typical non-uniform quantization uses a logarithmic distribution [175,274]

•     where the quantization steps and levels increase exponentially rather than linearly (see the sketch at the end of this subsection)

•     Binary-code-based quantization

•     r ≈ α_1 b_1 + ··· + α_m b_m, where each b_i ∈ {−1, +1}^n is a binary vector and the α_i are real-valued scaling factors

•     The non-uniform quantizer can be formulated as an optimization problem

•     min_Q ||Q(r) − r||^2, i.e., the quantizer Q is adjusted to minimize the distance between the quantized values and the original real values
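A minimal numpy sketch of the logarithmic (power-of-two) quantization mentioned above; the 4-bit exponent range and the small epsilon are illustrative assumptions.

```python
import numpy as np

def log2_quantize(x, bits=4):
    # snap each value to the nearest signed power of two: levels are spaced exponentially
    sign = np.sign(x)
    exp = np.round(np.log2(np.abs(x) + 1e-12))
    exp = np.clip(exp, -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return sign * 2.0 ** exp

x = np.array([0.011, 0.12, 0.9, 3.7, -0.06])
print(log2_quantize(x))   # -> [0.0078125, 0.125, 1.0, 4.0, -0.0625]
```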

 

Fine-tuning methods

Quantization-aware training (QAT)

  • Retrain the model after quantization to adjust the NN parameters

•     How to differentiate the quantization operator during backpropagation

•     Straight-through estimator (STE) [13]

•     Essentially ignores the rounding operation and approximates it with the identity function (see the sketch at the end of this section)

•     Stochastic neurons as an alternative to the STE [13]

•     Combinatorial optimization [65]

•     Target propagation [138]

•     Gumbel-Softmax [115]

•     Removes the need for the non-differentiable quantization operator in Eq. 2

•     Non-STE methods [4,8,39,98,142,179,274]

•     Use pulse training to approximate the derivative at the discontinuities [45]

•     AdaRound [177]

•     An adaptive rounding method that replaces round-to-nearest
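A rough PyTorch sketch of QAT with the STE described above: the forward pass applies simulated (fake) 8-bit symmetric quantization to the weights, while the backward pass treats the rounding as the identity. Class and variable names are illustrative; this is not the implementation of any particular paper.

```python
import torch

class FakeQuantSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, scale):
        # simulated 8-bit symmetric quantization in the forward pass
        return torch.clamp(torch.round(w / scale), -127, 127) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # STE: ignore the rounding and pass the gradient straight through to w
        return grad_output, None

w = torch.randn(64, 32, requires_grad=True)
scale = w.detach().abs().max() / 127
loss = (FakeQuantSTE.apply(w, scale) ** 2).sum()
loss.backward()          # w.grad is defined despite the non-differentiable rounding
```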

Post-training quantization (PTQ)

  • Quantize and adjust the NN parameters without retraining the model

•     Representative papers: [11,24,40,60,61,68,69,88,107,140,146,170,178,273]

•     Pros and cons

•     PTQ has the additional advantage that it can be applied when data is limited or unlabeled, at the cost of somewhat lower accuracy

•     Directions for improvement

•     Bias correction

•     [11,63] observe an inherent bias in the mean and variance of the weights after quantization and propose bias correction methods (see the sketch at the end of this section)

•     Cross-layer/channel equalization

•     [170,178] show that equalizing the weight ranges (and implicitly the activation ranges) across different layers or channels reduces the quantization error

•     Optimal clipping range and channel-wise bit-width settings

•     ACIQ [11] analytically computes the optimal clipping range and the channel-wise bit-width settings for PTQ.

•     ACIQ can achieve a low accuracy drop, but the channel-wise activation quantization it uses is hard to implement efficiently on hardware

•     The OMSE method [40] removes channel-wise quantization on the activations

•     and proposes performing PTQ by optimizing the L2 distance between the quantized tensor and the corresponding floating-point tensor

•     To mitigate the adverse effect of outliers on PTQ, an outlier channel splitting (OCS) method is proposed in [273]

•     which duplicates the channels containing outliers and halves their values.

•     Rounding

•     AdaRound [177]

•     Shows that the naive round-to-nearest scheme can counter-intuitively lead to suboptimal solutions, and proposes an adaptive rounding method that better reduces the final loss.

•     AdaQuant [107]

•     A more general method than AdaRound that allows the quantized weights to change as needed.
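A toy numpy sketch of the bias-correction idea from the list above, for a single fully-connected layer and assuming a small calibration batch; all names, shapes, and the calibration data are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(16, 64)), rng.normal(size=16)
X_calib = rng.normal(loc=1.0, size=(512, 64))             # calibration batch (non-zero mean)

scale = np.abs(W).max() / 127
W_q = np.clip(np.round(W / scale), -127, 127) * scale     # de-quantized weights
eps = W_q - W                                             # weight quantization error

b_corr = b - eps @ X_calib.mean(axis=0)                   # fold the induced output shift into the bias

out_fp = X_calib @ W.T + b
print(np.abs((X_calib @ W_q.T + b).mean(0) - out_fp.mean(0)).max())        # shifted output means
print(np.abs((X_calib @ W_q.T + b_corr).mean(0) - out_fp.mean(0)).max())   # ~0 after correction
```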

Zero-shot quantization (ZSQ)

  • Level 1: no data and no fine-tuning (ZSQ + PTQ)

•     The work of [178], which relies on equalizing the weight ranges and correcting bias errors

•     However, since the method is based on the scale-equivariance property of (piecewise) linear activation functions, it can be suboptimal for NNs with non-linear activations

  • Level 2: no data but fine-tuning required (ZSQ + QAT)

•     Use a GAN to generate synthetic data similar to the real data

•     Problem

•     Synthetic data generated without accounting for the internal statistics may not correctly represent the real data distribution

•     Solutions

•     Use the statistics stored in the batch normalization (BatchNorm) layers [111], i.e., the per-channel means and variances, to generate more realistic synthetic data (a rough sketch follows below).

•     [84] generates data by directly minimizing the KL divergence of the internal statistics

•     The ZeroQ approach, still to look into
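A rough PyTorch sketch of the BatchNorm-statistics-matching idea (in the spirit of ZeroQ / "The Knowledge Within", not their actual code): a random batch is optimized so that the statistics it induces at every BatchNorm layer match the stored running statistics. The model choice, batch size, learning rate, and step count are arbitrary assumptions; in practice a pretrained network would be used.

```python
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18().eval()           # assumed model; normally pretrained
bn_inputs = {}

def make_hook(name):
    def hook(module, inputs):                          # forward pre-hook: record the BN input
        bn_inputs[name] = inputs[0]
    return hook

for name, m in model.named_modules():
    if isinstance(m, nn.BatchNorm2d):
        m.register_forward_pre_hook(make_hook(name))

x = torch.randn(16, 3, 224, 224, requires_grad=True)   # the synthetic batch being optimized
opt = torch.optim.Adam([x], lr=0.1)

for step in range(50):
    opt.zero_grad()
    model(x)
    loss = 0.0
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            act = bn_inputs[name]
            mean, var = act.mean(dim=(0, 2, 3)), act.var(dim=(0, 2, 3))
            # match the batch statistics to the stored running statistics
            loss = loss + ((mean - m.running_mean) ** 2).mean() + ((var - m.running_var) ** 2).mean()
    loss.backward()
    opt.step()
# x can now be used as synthetic calibration data for PTQ or distillation-based fine-tuning
```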

Stochastic quantization

Stochastic quantization maps a floating-point number up or down with a probability related to the magnitude of the weight update

  • In [29,78], the Int operator in Eq. 2 is defined as

•     Int(x) = ⌊x⌋ with probability ⌈x⌉ − x, and ⌈x⌉ with probability x − ⌊x⌋

  • [42] extends this to binary quantization

•     Binary(x) = −1 with probability 1 − σ(x), and +1 with probability σ(x), where σ is the sigmoid function

  • QuantNoise [59]: random subsets of the weights

•     QuantNoise quantizes a different random subset of the weights during each forward pass and trains the model with unbiased gradients

  • A major challenge of stochastic quantization methods is the overhead of generating random numbers for every single weight update, so they have not yet been widely adopted in practice (a small sketch of stochastic rounding follows below)
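A minimal numpy sketch of the stochastic Int operator above: round up with probability equal to the fractional part and down otherwise, so the rounding is unbiased in expectation.

```python
import numpy as np

def stochastic_round(x, rng=np.random.default_rng()):
    floor = np.floor(x)
    return floor + (rng.random(x.shape) < (x - floor))   # round up with prob. x - floor(x)

x = np.full(100_000, 0.3)
print(stochastic_round(x).mean())   # ~0.3 on average, whereas deterministic rounding gives 0.0
```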

Advanced concepts: quantization below 8 bits

Simulated vs. integer-only quantization

Simulated quantization (also known as fake quantization)

 

•     In simulated quantization the model parameters are stored in low precision, but the operations (e.g., matrix multiplications and convolutions) are still carried out with floating-point arithmetic, so the quantized parameters must be de-quantized before those operations

•     Simulated quantization is beneficial for problems that are bandwidth-bound rather than compute-bound

Integer-only quantization (also known as fixed-point quantization)

  • All operations are performed with low-precision integer arithmetic, so the entire inference can be carried out without any floating-point de-quantization of parameters or activations

•     When the activation function is ReLU

•     [151] fuses batch normalization into the preceding convolutional layer

•     [112] proposes an integer-only computation method for residual networks with batch normalization.

•     Going beyond the ReLU-only restriction

•     Recent work [130] addresses this limitation by approximating GELU [93], Softmax, and layer normalization [6] with integer arithmetic, and further extends integer-only quantization to the Transformer [235] architecture.

•     Dyadic quantization

•     All scaling is performed with dyadic numbers, i.e., rational numbers whose numerator is an integer and whose denominator is a power of 2 [259] (see the sketch after this list)

•     All additions (e.g., residual connections) are forced to have the same dyadic scale, which makes the addition logic simpler and more efficient
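A tiny sketch of dyadic rescaling as used in integer-only pipelines: a real-valued multiplier is approximated as b / 2^c, so re-quantization needs only an integer multiply and a bit shift. The multiplier value and the 16-bit shift are arbitrary assumptions.

```python
def dyadic_approx(M, c=16):
    # approximate the real multiplier M by the dyadic number b / 2^c
    return round(M * (1 << c)), c

def requantize(acc, b, c):
    # integer multiply + right shift, no floating point involved
    return (acc * b) >> c

M = 0.0123                                        # e.g. M = S_in * S_w / S_out
b, c = dyadic_approx(M)
print(requantize(1000, b, c), round(1000 * M))    # both are about 12
```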

Mixed-precision quantization

Each layer is quantized at a different bit precision [51,81,101,182,193,204,231,238,241,255,277]

  • Challenge

•     The search space of possible bit settings grows exponentially with the number of layers

  • Solutions

•     Reinforcement-learning based

•     [238] proposes automatically determining the quantization policy with reinforcement learning (RL), using a hardware simulator to incorporate feedback from the hardware accelerator into the RL agent's reward

•     Neural architecture search (NAS) based

•     [246] casts the mixed-precision configuration search as a neural architecture search (NAS) problem and explores the search space efficiently with differentiable NAS (DNAS)

•     Periodic-function regularization based

•     Automatically distinguishes the different layers and their different importance with respect to accuracy, while learning their respective bit widths [179]

•     Hessian based

•     HAWQ

•     Determines the quantization precision and ordering from the top Hessian eigenvalue of each layer [50]

•     HAWQv2

•     Selects the bit precision of each layer automatically, without any manual intervention

•     HAWQv3

•     Proposes a fast integer linear programming method to find the optimal bit precision under a given application-specific constraint (e.g., model size or latency); a toy sketch of such a bit-allocation search follows below.
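Not HAWQv3 itself, but a toy sketch of the underlying bit-allocation problem: choose per-layer bit widths that minimize a precomputed sensitivity cost under a model-size budget. The sensitivities, layer sizes, and budget are made-up numbers, and the exhaustive search stands in for the integer linear program.

```python
import itertools

sizes_mb_fp32 = [2.0, 4.0, 8.0]            # hypothetical per-layer FP32 sizes
sensitivity = {4: [0.9, 0.5, 0.1],         # hypothetical accuracy-impact proxy per layer,
               8: [0.2, 0.1, 0.02]}        # e.g. Hessian trace x quantization perturbation
budget_mb = 2.2                            # target size of the quantized model

best = None
for bits in itertools.product([4, 8], repeat=len(sizes_mb_fp32)):
    size = sum(s * b / 32 for s, b in zip(sizes_mb_fp32, bits))
    cost = sum(sensitivity[b][i] for i, b in enumerate(bits))
    if size <= budget_mb and (best is None or cost < best[0]):
        best = (cost, bits, size)
print(best)    # -> lowest total sensitivity that still fits the budget, e.g. bits (8, 4, 4)
```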

Hardware-aware quantization

Motivation: not every piece of hardware provides the same speedup after a given layer/operation is quantized [86,90,238,242,248,257,259]

The work in [238] uses a reinforcement learning agent to determine a hardware-aware mixed-precision quantization setting

  • The method is based on a lookup table of latencies for different layers at different bit widths, but the hardware latencies it uses are simulated
  • Model distillation [3,94,148,173,189,200,260,262,280] is a method that uses a large, higher-accuracy model as a teacher to help train a compact student model.

The work in [259] instead deploys the quantized operations directly on the hardware and measures the actual deployment latency of each layer for different quantization bit precisions

Distillation-assisted quantization

Idea

•     During training of the student model, model distillation proposes to exploit the soft probabilities produced by the teacher, which may contain more information about the input than the ground-truth class labels alone

•     That is, the total loss function combines the student loss and the distillation loss, usually formulated as

•     L = α H(y, σ(z_s)) + β H(σ(z_t; T), σ(z_s; T)), where z_s and z_t are the student and teacher logits, σ is the softmax (with temperature T), H is the cross-entropy, and y are the ground-truth labels
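A small PyTorch sketch of this combined loss: cross-entropy on the true labels plus a temperature-softened KL term against the teacher's soft probabilities; the temperature and weighting are illustrative choices.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    ce = F.cross_entropy(student_logits, labels)                       # student loss
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),            # distillation loss
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1.0 - alpha) * kd

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```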

 

Approaches

  • Previous knowledge distillation methods focus on exploring different sources of knowledge
  • [94,148,187] use the logits (soft probabilities) as the source of knowledge
  • [3,200,261] try to exploit knowledge from the intermediate layers
  • [227,265] use multiple teacher models to jointly supervise the student model
  • [43,269] apply self-distillation without any teacher model
  • Open issues

Extreme low-bit quantization

Binary quantization [18,25,47,52,77,82,91,92,118,120,122,127,129,133,139,147,152,157,190,192,210,241,243,254,279,281]

•     Extreme quantization gives excellent speedups, but naive binarization methods cause a significant accuracy degradation

  • Approaches

•     BinaryConnect [42] constrains the weights to +1 or −1

•     The weights are kept as real values and are binarized only during the forward and backward passes, to simulate the effect of binarization

•     During the forward pass the real-valued weights are converted to +1 or −1 with the sign function

•     The network is then trained with the standard STE recipe, propagating gradients through the non-differentiable sign function.

•     Binarized NN (BNN) [106] binarizes both the activations and the weights

•     Jointly binarizing weights and activations has the additional benefit of improved latency, because the expensive floating-point matrix multiplications can be replaced by lightweight XNOR operations followed by bit counting.

•     Binary Weight Networks and XNOR-Net proposed in [45]

•     achieve higher accuracy by adding a scaling factor to the weights and using +α or −α instead of +1 and −1; here α is the scaling factor chosen to minimize the distance between the real-valued weights and the resulting binarized weights.

•     The real-valued weight matrix W can be expressed as W ≈ αB, where B is a binary weight matrix that solves the following optimization problem:

•     α*, B* = argmin_{α,B} ||W − αB||^2
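A small numpy sketch of the closed-form solution of this problem (as in Binary Weight Networks / XNOR-Net): B* = sign(W) and α* = mean(|W|); the matrix size is arbitrary.

```python
import numpy as np

W = np.random.randn(64, 128)
B = np.sign(W)                    # binary weight matrix in {-1, +1}
alpha = np.abs(W).mean()          # scaling factor minimizing ||W - alpha * B||^2

naive  = np.mean((W - B) ** 2)            # plain +1/-1 binarization
scaled = np.mean((W - alpha * B) ** 2)    # scaled binarization has a lower error
print(naive, scaled)
```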

 

•     Ternarization (e.g., +1, 0, and −1) is realized by explicitly allowing the quantized value to be zero [143,156]

•     Ternary-Binary Networks (TBN) [236] show that combining binary network weights with ternary activations can achieve an optimal trade-off between accuracy and computational efficiency

  • Approaches for reducing the accuracy drop of extreme quantization [191]

•     Quantization error minimization

•     HORQ [149] and ABC-Net [155] use a linear combination of multiple binary matrices, i.e., W ≈ α_1 B_1 + ··· + α_m B_m, to reduce the quantization error.

•     Improved loss functions

•     [97,98] perform loss-aware binarization: the quantization is chosen to directly minimize the final model loss, rather than only the weight approximation error

•     Knowledge distillation: a full-precision teacher model also helps improve the accuracy of binary/ternary models [33,173,189,252]

•     Improved training methods

•     Motivation

•     Limitation of the STE when backpropagating gradients through the sign function: the STE only propagates gradients for weights and/or activations in the range [−1, 1].

•     Solutions

•     Approximations of the sign function (and its derivative)

•     BNN+ [44]: introduces a continuous approximation of the derivative of the sign function

•     [192,253,264] replace the sign function with smooth, differentiable functions that are gradually sharpened towards the sign function.

•     Bi-Real Net [160]: adds shortcut connections that feed the real-valued activations of one block into the activations of the following block, which increases the representational capability

•     DoReFa-Net [276]: quantizes the gradients as well, in addition to the weights and activations

Vector quantization

K-means-based quantization: the works in [1,30,74,83,116,166,176,184,248] cluster the weights into different groups and use the centroid of each group as the quantized value during inference

•     min_{c_1,…,c_k} Σ_i min_j ||w_i − c_j||^2

•     where the w_i are the weights and c_1,…,c_k are the k cluster centroids; after clustering, each weight is replaced by its nearest centroid

K-means-based vector quantization combined with pruning and Huffman coding can further reduce the model size [83]

Product quantization [74,219,248] extends vector quantization by partitioning the weight matrix into sub-matrices and applying vector quantization to each sub-matrix (a small k-means sketch follows below)
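A minimal numpy sketch of k-means weight quantization with hand-written Lloyd iterations (illustrative, not a library call): the weights are clustered into 2^b groups and each weight is replaced by its centroid, so only a b-bit index per weight plus a small codebook need to be stored.

```python
import numpy as np

def kmeans_quantize(w, bits=3, iters=20):
    k = 2 ** bits
    centroids = np.linspace(w.min(), w.max(), k)     # simple linear initialization
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = w[assign == j].mean()
    return centroids[assign], assign, centroids

w = np.random.randn(4096)
w_q, idx, codebook = kmeans_quantize(w)
print("codebook size:", codebook.size, " MSE:", np.mean((w - w_q) ** 2))
```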

Remaining questions

Which of these algorithms are currently used most in industry?

How exactly is backpropagation done during fine-tuning?

Pruning and structured pruning: how are the layers to remove chosen, and is it feature maps or kernels that get removed?

Non-uniform distributions (personal thought: model the distribution with a probability-density-based scheme)

Full-range vs. restricted-range symmetric quantization

Dynamic vs. static quantization: during fine-tuning, isn't dynamic quantization the only option?

Does simulated quantization de-quantize back to floating point for the actual computation? Which is used more in practice, integer-only or simulated quantization?

Where are the real difficulties of quantization? What are the pain points in practical work?

[1]EirikurAgustsson,FabianMentzer,MichaelTschannen,LukasCavigelli,RaduTimofte,LucaBenini,andLucVanGool.Soft-to-hardvectorquantizationforend-to-endlearningcompressiblerepresentations.arXivpreprintarXiv:1704.00648,2017.[2]EirikurAgustssonandLucasTheis.Universallyquantizedneuralcompression.Advancesinneuralinformationprocessingsystems,2020.[3]SungsooAhn,ShellXuHu,AndreasDamianou,NeilDLawrence,andZhenwenDai.Variationalinformationdistillationforknowledgetransfer.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages9163–9171,2019.[4]MiladAlizadeh,ArashBehboodi,MartvanBaalen,ChristosLouizos,TijmenBlankevoort,andMaxWelling.Gradientl1regularizationforquantizationrobustness.arXivpreprintarXiv:2002.07520,2020.[5]MiladAlizadeh,JavierFernández-Marqués,NicholasDLane,andYarinGal.Anempiricalstudyofbinaryneuralnetworks’optimisation.InInternationalConferenceonLearningRepresentations,2018.[6]JimmyLeiBa,JamieRyanKiros,andGeoffreyEHinton.Layernormalization.arXivpreprintarXiv:1607.06450,2016.[7]HaoliBai,WeiZhang,LuHou,LifengShang,JingJin,XinJiang,QunLiu,MichaelLyu,andIrwinKing.Binarybert:Pushingthelimitofbertquantization.arXivpreprintarXiv:2012.15701,2020.[8]YuBai,Yu-XiangWang,andEdoLiberty.Proxquant:Quantizedneuralnetworksviaproximaloperators.arXivpreprintarXiv:1810.00861,2018.[9]DanaHarryBallard.Anintroductiontonaturalcomputation.MITpress,1999.[10]RonBanner,ItayHubara,EladHoffer,andDanielSoudry.Scalablemethodsfor8-bittrainingofneuralnetworks.Advancesinneuralinformationprocessingsystems,2018.[11]RonBanner,YuryNahshan,EladHoffer,andDanielSoudry.Post-training4-bitquantizationofconvolutionnetworksforrapid-deployment.arXivpreprintarXiv:1810.05723,2018.[12]ChaimBaskin,EliSchwartz,EvgeniiZheltonozhskii,NatanLiss,RajaGiryes,AlexMBronstein,andAviMendelson.Uniq:Uniformnoiseinjectionfornon-uniformquantizationofneuralnetworks.arXivpreprintarXiv:1804.10969,2018.[13]YoshuaBengio,NicholasLéonard,andAaronCourville.Estimatingorpropagatinggradientsthroughstochasticneuronsforconditionalcomputation.arXivpreprintarXiv:1308.3432,2013.[14]WilliamRalphBennett.Spectraofquantizedsignals.TheBellSystemTechnicalJournal,27(3):446–472,1948.[15]AishwaryaBhandare,VamsiSripathi,DeepthiKarkada,VivekMenon,SunChoi,KushalDatta,andVikramSaletore.Efficient8-bitquantizationoftransformerneuralmachinelanguagetranslationmodel.arXivpreprintarXiv:1906.00532,2019.[16]DavisBlalock,JoseJavierGonzalezOrtiz,JonathanFrankle,andJohnGuttag.Whatisthestateofneuralnetworkpruning?arXivpreprintarXiv:2003.03033,2020.[17]TomBBrown,BenjaminMann,NickRyder,MelanieSubbiah,JaredKaplan,PrafullaDhariwal,ArvindNeelakantan,PranavShyam,GirishSastry,AmandaAskell,etal.Languagemodelsarefew-shotlearners.arXivpreprintarXiv:2005.14165,2020.[18]AdrianBulat,BraisMartinez,andGeorgiosTzimiropoulos.High-capacityexpertbinarynetworks.InternationalConferenceonLearningRepresentations,2021.[19]AdrianBulatandGeorgiosTzimiropoulos.Xnornet++:Improvedbinaryneuralnetworks.arXivpreprintarXiv:1909.13863,2019.[20]AdrianBulat,GeorgiosTzimiropoulos,JeanKossaifi,andMajaPantic.Improvedtrainingofbinarynetworksforhumanposeestimationandimagerecognition.arXivpreprintarXiv:1904.05868,2019.[21]AydinBulucandJohnRGilbert.Challengesandadvancesinparallelsparsematrix-matrixmultiplication.In200837thInternationalConferenceonParallelProcessing,pages503–510.IEEE,2008.[22]HanCai,ChuangGan,TianzheWang,ZhekaiZhang,andSongHan.Once-for-all:Trainonenetworkandspecializeitforefficientdeployment.arXivpreprintarXiv:1908.09791,2019.[23]HanCai,LigengZhu,andSongHan.Proxylessnas:Directneuralarchitecturesearchon
targettaskandhardware.arXivpreprintarXiv:1812.00332,2018.[24]YaohuiCai,ZheweiYao,ZhenDong,AmirGholami,MichaelWMahoney,andKurtKeutzer.Zeroq:Anovelzeroshotquantizationframework.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages13169–13178,2020.[25]ZhaoweiCai,XiaodongHe,JianSun,andNunoVasconcelos.Deeplearningwithlowprecisionbyhalf-wavegaussianquantization.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages5918–5926,2017.[26]LéopoldCambier,AnahitaBhiwandiwalla,TingGong,MehranNekuii,OguzHElibol,andHanlinTang.Shiftedandsqueezed8-bitfloatingpointformatforlow-precisiontrainingofdeepneuralnetworks.arXivpreprintarXiv:2001.05674,2020.[27]RishidevChaudhuriandIlaFiete.Computationalprinciplesofmemory.Natureneuroscience,19(3):394,2016.[28]HantingChen,YunheWang,ChangXu,ZhaohuiYang,ChuanjianLiu,BoxinShi,ChunjingXu,ChaoXu,andQiTian.Data-freelearningofstudentnetworks.InProceedingsoftheIEEE/CVFInternationalConferenceonComputerVision,pages3514–3522,2019.[29]JianfeiChen,YuGai,ZheweiYao,MichaelWMahoney,andJosephEGonzalez.Astatisticalframeworkforlowbitwidthtrainingofdeepneuralnetworks.arXivpreprintarXiv:2010.14298,2020.[30]KuilinChenandChi-GuhnLee.Incrementalfew-shotlearningviavectorquantizationindeepembeddedspace.InInternationalConferenceonLearningRepresentations,2021.[31]ShangyuChen,WenyaWang,andSinnoJialinPan.Metaquant:Learningtoquantizebylearningtopenetratenon-differentiablequantization.InH.Wallach,H.Larochelle,A.Beygelzimer,F.d\’Alché-Buc,E.Fox,andR.Garnett,editors,AdvancesinNeuralInformationProcessingSystems,volume32.Curran19Associates,Inc.,2019.[32]TianqiChen,ThierryMoreau,ZihengJiang,LianminZheng,EddieYan,HaichenShen,MeghanCowan,LeyuanWang,YuweiHu,LuisCeze,etal.TVM:Anautomatedend-to-endoptimizingcompilerfordeeplearning.In13th{USENIX}SymposiumonOperatingSystemsDesignandImplementation({OSDI}18),pages578–594,2018.[33]XiuyiChen,GuangcanLiu,JingShi,JiamingXu,andBoXu.Distilledbinaryneuralnetworkformonauralspeechseparation.In2018InternationalJointConferenceonNeuralNetworks(IJCNN),pages1–8.IEEE,2018.[34]Ting-WuChin,PierceI-JenChuang,VikasChandra,andDianaMarculescu.Oneweightbitwidthtorulethemall.ProceedingsoftheEuropeanConferenceonComputerVision(ECCV),2020.[35]BrianChmiel,LiadBen-Uri,MoranShkolnik,EladHoffer,RonBanner,andDanielSoudry.Neuralgradientsarenearlognormal:improvedquantizedandsparsetraining.InInternationalConferenceonLearningRepresentations,2021.[36]JungwookChoi,ZhuoWang,SwagathVenkataramani,PierceI-JenChuang,VijayalakshmiSrinivasan,andKailashGopalakrishnan.Pact:Parameterizedclippingactivationforquantizedneuralnetworks.arXivpreprintarXiv:1805.06085,2018.[37]YoojinChoi,JihwanChoi,MostafaEl-Khamy,andJungwonLee.Data-freenetworkquantizationwithadversarialknowledgedistillation.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognitionWorkshops,pages710–711,2020.[38]YoojinChoi,MostafaEl-Khamy,andJungwonLee.Towardsthelimitofnetworkquantization.arXivpreprintarXiv:1612.01543,2016.[39]YoojinChoi,MostafaEl-Khamy,andJungwonLee.Learninglowprecisiondeepneuralnetworksthroughregularization.arXivpreprintarXiv:1809.00095,2,2018.[40]YoniChoukroun,EliKravchik,FanYang,andPavelKisilev.Low-bitquantizationofneuralnetworksforefficientinference.InICCVWorkshops,pages3009–3018,2019.[41]MatthieuCourbariaux,YoshuaBengio,andJean-PierreDavid.Trainingdeepneuralnetworkswithlowprecisionmultiplications.arXivpreprintarXiv:1412.7024,2014.[42]MatthieuCourbariaux,YoshuaBengio,andJean-PierreDavid.BinaryConnect:Trainingdeepneuralnetworkswithbinaryweightsduringpropagations.InAdvanc
esinneuralinformationprocessingsystems,pages3123–3131,2015.[43]ElliotJCrowley,GavinGray,andAmosJStorkey.Moonshine:Distillingwithcheapconvolutions.InNeurIPS,pages2893–2903,2018.[44]SajadDarabi,MouloudBelbahri,MatthieuCourbariaux,andVahidPartoviNia.Bnn+:Improvedbinarynetworktraining.2018.[45]LeiDeng,PengJiao,JingPei,ZhenzhiWu,andGuoqiLi.Gxnor-net:Trainingdeepneuralnetworkswithternaryweightsandactivationswithoutfull-precisionmemoryunderaunifieddiscretizationframework.NeuralNetworks,100:49–58,2018.[46]JacobDevlin,Ming-WeiChang,KentonLee,andKristinaToutanova.Bert:Pre-trainingofdeepbidirectionaltransformersforlanguageunderstanding.arXivpreprintarXiv:1810.04805,2018.[47]JamesDiffenderferandBhavyaKailkhura.Multi-prizelotterytickethypothesis:Findingaccuratebinaryneuralnetworksbypruningarandomlyweightednetwork.InInternationalConferenceonLearningRepresentations,2021.[48]RuizhouDing,Ting-WuChin,ZeyeLiu,andDianaMarculescu.Regularizingactivationdistributionfortrainingbinarizeddeepnetworks.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages11408–11417,2019.[49]XinDong,ShangyuChen,andSinnoJialinPan.Learningtoprunedeepneuralnetworksvialayer-wiseoptimalbrainsurgeon.arXivpreprintarXiv:1705.07565,2017.[50]ZhenDong,ZheweiYao,DaiyaanArfeen,AmirGholami,MichaelW.Mahoney,andKurtKeutzer.HAWQ-V2:Hessianawaretrace-weightedquantizationofneuralnetworks.Advancesinneuralinformationprocessingsystems,2020.[51]ZhenDong,ZheweiYao,AmirGholami,MichaelWMahoney,andKurtKeutzer.Hawq:Hessianawarequantizationofneuralnetworkswithmixed-precision.InProceedingsoftheIEEE/CVFInternationalConferenceonComputerVision,pages293–302,2019.[52]YueqiDuan,JiwenLu,ZiweiWang,JianjiangFeng,andJieZhou.Learningdeepbinarydescriptorwithmulti-quantization.InProceedingsoftheIEEEconferenceoncomputervisionandpatternrecognition,pages1183–1192,2017.[53]JGDunn.Theperformanceofaclassofndimensionalquantizersforagaussiansource.InProc.ColumbiaSymp.SignalTransmissionProcessing,pages76–81,1965.[54]ThomasElsken,JanHendrikMetzen,FrankHutter,etal.Neuralarchitecturesearch:Asurvey.J.Mach.Learn.Res.,20(55):1–21,2019.[55]WilliamHEquitz.Anewvectorquantizationclusteringalgorithm.IEEEtransactionsonacoustics,speech,andsignalprocessing,37(10):1568–1575,1989.[56]FartashFaghri,ImanTabrizian,IliaMarkov,DanAlistarh,DanielRoy,andAliRamezani-Kebrya.Adaptivegradientquantizationfordata-parallelsgd.Advancesinneuralinformationprocessingsystems,2020.[57]AAldoFaisal,LucPJSelen,andDanielMWolpert.Noiseinthenervoussystem.Naturereviewsneuroscience,9(4):292–303,2008.[58]AngelaFan,PierreStock,BenjaminGraham,EdouardGrave,RémiGribonval,HervéJégou,andArmandJoulin.Trainingwithquantizationnoiseforextrememodelcompression.arXive-prints,pagesarXiv–2004,2020.[59]JunFang,AliShafiee,HamzahAbdel-Aziz,DavidThorsley,GeorgiosGeorgiadis,andJosephHassoun.Near-losslessposttrainingquantizationofdeepneuralnetworksviaapiecewiselinearapproximation.arXivpreprintarXiv:2002.00104,2020.[60]JunFang,AliShafiee,HamzahAbdel-Aziz,DavidThorsley,GeorgiosGeorgiadis,andJosephHHassoun.Post-trainingpiecewiselinearquantizationfordeepneuralnetworks.InEuropeanConferenceonComputerVision,pages69–86.Springer,2020.[61]JulianFaraone,NicholasFraser,MichaelaBlott,andPhilipHWLeong.Syq:Learningsymmetricquantizationforefficientdeepneuralnetworks.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages4300–4309,2018.[62]AlexanderFinkelstein,UriAlmog,andMarkGrobman.Fightingquantizationbiaswithbias.arXivpreprintarXiv:1906.03193,2019.[63]EricFlamand,DavideRossi,FrancescoConti,IgorLoi,AntonioPullini,Floren
tRotenberg,andLucaBenini.Gap8:Arisc-vsocforaiattheedgeoftheiot.In2018IEEE29thInternationalConferenceonApplication-specificSystems,ArchitecturesandProcessors(ASAP),pages1–4.IEEE,2018.[64]AbramLFriesenandPedroDomingos.Deeplearningasamixedconvex-combinatorialoptimizationproblem.arXiv20preprintarXiv:1710.11573,2017.[65]TrevorGale,ErichElsen,andSaraHooker.Thestateofsparsityindeepneuralnetworks.arXivpreprintarXiv:1902.09574,2019.[66]AEGamal,LHemachandra,ItzhakShperling,andVWei.Usingsimulatedannealingtodesigngoodcodes.IEEETransactionsonInformationTheory,33(1):116–123,1987.[67]SahajGarg,AnirudhJain,JoeLou,andMitchellNahmias.Confoundingtradeoffsforneuralnetworkquantization.arXivpreprintarXiv:2102.06366,2021.[68]SahajGarg,JoeLou,AnirudhJain,andMitchellNahmias.Dynamicprecisionanalogcomputingforneuralnetworks.arXivpreprintarXiv:2102.06365,2021.[69]AmirGholami,KiseokKwon,BichenWu,ZizhengTai,XiangyuYue,PeterJin,SichengZhao,andKurtKeutzer.SqueezeNext:Hardware-awareneuralnetworkdesign.WorkshoppaperinCVPR,2018.[70]AmirGholami,MichaelWMahoney,andKurtKeutzer.Anintegratedapproachtoneuralnetworkdesign,training,andinference.Univ.California,Berkeley,Berkeley,CA,USA,Tech.Rep,2020.[71]BorisGinsburg,SergeiNikolaev,AhmadKiswani,HaoWu,AmirGholaminejad,SlawomirKierat,MichaelHouston,andAlexFit-Florea.Tensorprocessingusinglowprecisionformat,December282017.USPatentApp.15/624,577.[72]RuihaoGong,XianglongLiu,ShenghuJiang,TianxiangLi,PengHu,JiazhenLin,FengweiYu,andJunjieYan.Differentiablesoftquantization:Bridgingfull-precisionandlow-bitneuralnetworks.InProceedingsoftheIEEE/CVFInternationalConferenceonComputerVision,pages4852–4861,2019.[73]YunchaoGong,LiuLiu,MingYang,andLubomirBourdev.Compressingdeepconvolutionalnetworksusingvectorquantization.arXivpreprintarXiv:1412.6115,2014.[74]IanJGoodfellow,JeanPouget-Abadie,MehdiMirza,BingXu,DavidWarde-Farley,SherjilOzair,AaronCourville,andYoshuaBengio.Generativeadversarialnetworks.arXivpreprintarXiv:1406.2661,2014.[75]RobertM.GrayandDavidL.Neuhoff.Quantization.IEEEtransactionsoninformationtheory,44(6):2325–2383,1998.[76]YiwenGuo,AnbangYao,HaoZhao,andYurongChen.Networksketching:Exploitingbinarystructureindeepcnns.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages5955–5963,2017.[77]SuyogGupta,AnkurAgrawal,KailashGopalakrishnan,andPritishNarayanan.Deeplearningwithlimitednumericalprecision.InInternationalconferenceonmachinelearning,pages1737–1746.PMLR,2015.[78]PhilippGysel,MohammadMotamedi,andSoheilGhiasi.Hardware-orientedapproximationofconvolutionalneuralnetworks.arXivpreprintarXiv:1604.03168,2016.[79]PhilippGysel,JonPimentel,MohammadMotamedi,andSoheilGhiasi.Ristretto:Aframeworkforempiricalstudyofresource-efficientinferenceinconvolutionalneuralnetworks.IEEEtransactionsonneuralnetworksandlearningsystems,29(11):5784–5789,2018.[80]HaiVictorHabi,RoyHJennings,andArnonNetzer.Hmq:Hardwarefriendlymixedprecisionquantizationblockforcnns.arXivpreprintarXiv:2007.09952,2020.[81]KaiHan,YunheWang,YixingXu,ChunjingXu,EnhuaWu,andChangXu.Trainingbinaryneuralnetworksthroughlearningwithnoisysupervision.InInternationalConferenceonMachineLearning,pages4017–4026.PMLR,2020.[82]SongHan,HuiziMao,andWilliamJDally.Deepcompression:Compressingdeepneuralnetworkswithpruning,trainedquantizationandhuffmancoding.arXivpreprintarXiv:1510.00149,2015.[83]MatanHaroush,ItayHubara,EladHoffer,andDanielSoudry.Theknowledgewithin:Methodsfordata-freemodelcompression.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages8494–8502,2020.[84]BabakHassibiandDavidGStork.Secondorderderivative
sfornetworkpruning:Optimalbrainsurgeon.MorganKaufmann,1993.[85]BenjaminHawks,JavierDuarte,NicholasJFraser,AlessandroPappalardo,NhanTran,andYamanUmuroglu.Psandqs:Quantization-awarepruningforefficientlowlatencyneuralnetworkinference.arXivpreprintarXiv:2102.11289,2021.[86]KaimingHe,XiangyuZhang,ShaoqingRen,andJianSun.Deepresiduallearningforimagerecognition.InProceedingsoftheIEEEconferenceoncomputervisionandpatternrecognition,pages770–778,2016.[87]XiangyuHeandJianCheng.Learningcompressionfromlimitedunlabeleddata.InProceedingsoftheEuropeanConferenceonComputerVision(ECCV),pages752–769,2018.[88]XiangyuHe,QinghaoHu,PeisongWang,andJianCheng.Generativezero-shotnetworkquantization.arXivpreprintarXiv:2101.08430,2021.[89]YihuiHe,JiLin,ZhijianLiu,HanruiWang,Li-JiaLi,andSongHan.Amc:Automlformodelcompressionandaccelerationonmobiledevices.InProceedingsoftheEuropeanConferenceonComputerVision(ECCV),pages784–800,2018.[90]ZhezhiHeandDeliangFan.Simultaneouslyoptimizingweightandquantizerofternaryneuralnetworkusingtruncatedgaussianapproximation.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages11438–11446,2019.[91]KoenHelwegen,JamesWiddicombe,LukasGeiger,ZechunLiu,Kwang-TingCheng,andRoelandNusselder.Latentweightsdonotexist:Rethinkingbinarizedneuralnetworkoptimization.Advancesinneuralinformationprocessingsystems,2019.[92]DanHendrycksandKevinGimpel.Gaussianerrorlinearunits(GELUs).arXivpreprintarXiv:1606.08415,2016.[93]GeoffreyHinton,OriolVinyals,andJeffDean.Distillingtheknowledgeinaneuralnetwork.arXivpreprintarXiv:1503.02531,2015.[94]TorstenHoefler,DanAlistarh,TalBen-Nun,NikoliDryden,andAlexandraPeste.Sparsityindeeplearning:Pruningandgrowthforefficientinferenceandtraininginneuralnetworks.arXivpreprintarXiv:2102.00554,2021.[95]MarkHorowitz.1.1computing’senergyproblem(andwhatwecandoaboutit).In2014IEEEInternationalSolid-StateCircuitsConferenceDigestofTechnicalPapers(ISSCC),pages10–14.IEEE,2014.[96]LuHouandJamesTKwok.Loss-awareweightquantizationofdeepnetworks.arXivpreprintarXiv:1802.08635,2018.[97]LuHou,QuanmingYao,andJamesTKwok.Loss-awarebinarizationofdeepnetworks.arXivpreprintarXiv:1611.01600,2016.[98]AndrewHoward,MarkSandler,GraceChu,Liang-ChiehChen,BoChen,MingxingTan,WeijunWang,YukunZhu,RuomingPang,VijayVasudevan,etal.SearchingforMobilenetV3.InProceedingsoftheIEEEInternationalConferenceonComputerVision,pages1314–1324,2019.[99]AndrewGHoward,MenglongZhu,BoChen,Dmitry21Kalenichenko,WeijunWang,TobiasWeyand,MarcoAndreetto,andHartwigAdam.MobileNets:Efficientconvolutionalneuralnetworksformobilevisionapplications.arXivpreprintarXiv:1704.04861,2017.[100]PengHu,XiPeng,HongyuanZhu,MohamedMSabryAly,andJieLin.Opq:Compressingdeepneuralnetworkswithone-shotpruning-quantization.2021.[101]QinghaoHu,PeisongWang,andJianCheng.Fromhashingtocnns:Trainingbinaryweightnetworksviahashing.InProceedingsoftheAAAIConferenceonArtificialIntelligence,volume32,2018.[102]GaoHuang,ZhuangLiu,LaurensVanDerMaaten,andKilianQWeinberger.Denselyconnectedconvolutionalnetworks.InProceedingsoftheIEEEconferenceoncomputervisionandpatternrecognition,pages4700–4708,2017.[103]QijingHuang,DequanWang,ZhenDong,YizhaoGao,YaohuiCai,TianLi,BichenWu,KurtKeutzer,andJohnWawrzynek.Codenet:Efficientdeploymentofinput-adaptiveobjectdetectiononembeddedfpgas.InThe2021ACM/SIGDAInternationalSymposiumonField-ProgrammableGateArrays,pages206–216,2021.[104]ZehaoHuangandNaiyanWang.Data-drivensparsestructureselectionfordeepneuralnetworks.InProceedingsoftheEuropeanconferenceoncomputervision(ECCV),pages304–320,2018.[105]ItayHubara,MatthieuCourbariaux,DanielSoudry
,RanElYaniv,andYoshuaBengio.Binarizedneuralnetworks.InAdvancesinneuralinformationprocessingsystems,pages4107–4115,2016.[106]ItayHubara,YuryNahshan,YairHanani,RonBanner,andDanielSoudry.Improvingposttrainingneuralquantization:Layer-wisecalibrationandintegerprogramming.arXivpreprintarXiv:2006.10518,2020.[107]DavidAHuffman.Amethodfortheconstructionofminimumredundancycodes.ProceedingsoftheIRE,40(9):1098–1101,1952.[108]ForrestNIandola,SongHan,MatthewWMoskewicz,KhalidAshraf,WilliamJDally,andKurtKeutzer.SqueezeNet:Alexnet-levelaccuracywith50xfewerparametersand<0.5mbmodelsize.arXivpreprintarXiv:1602.07360,2016.[109]YaniIoannou,DuncanRobertson,RobertoCipolla,andAntonioCriminisi.Deeproots:Improvingcnnefficiencywithhierarchicalfiltergroups.InProceedingsoftheIEEEconferenceoncomputervisionandpatternrecognition,pages1231–1240,2017.[110]SergeyIoffeandChristianSzegedy.Batchnormalization:Acceleratingdeepnetworktrainingbyreducinginternalcovariateshift.InInternationalconferenceonmachinelearning,pages448–456.PMLR,2015.[111]BenoitJacob,SkirmantasKligys,BoChen,MenglongZhu,MatthewTang,AndrewHoward,HartwigAdam,andDmitryKalenichenko.Quantizationandtrainingofneuralnetworksforefficientinteger-arithmetic-onlyinference.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition(CVPR),2018.[112]AnimeshJain,ShoubhikBhattacharya,MasahiroMasuda,VinSharma,andYidaWang.Efficientexecutionofquantizeddeeplearningmodels:Acompilerapproach.arXivpreprintarXiv:2006.10226,2020.[113]ShubhamJain,SwagathVenkataramani,VijayalakshmiSrinivasan,JungwookChoi,KailashGopalakrishnan,andLelandChang.Biscaled-dnn:Quantizinglong-taileddatastructureswithtwoscalefactorsfordeepneuralnetworks.In201956thACM/IEEEDesignAutomationConference(DAC),pages1–6.IEEE,2019.[114]EricJang,ShixiangGu,andBenPoole.Categoricalreparameterizationwithgumbel-softmax.arXivpreprintarXiv:1611.01144,2016.[115]HerveJegou,MatthijsDouze,andCordeliaSchmid.Productquantizationfornearestneighborsearch.IEEEtransactionsonpatternanalysisandmachineintelligence,33(1):117–128,2010.[116]YongkweonJeon,BaeseongPark,SeJungKwon,ByeongwookKim,JeonginYun,andDongsooLee.Biqgemm:matrixmultiplicationwithlookuptableforbinary-coding-basedquantizeddnns.arXivpreprintarXiv:2005.09904,2020.[117]KaiJiaandMartinRinard.Efficientexactverificationofbinarizedneuralnetworks.Advancesinneuralinformationprocessingsystems,2020.[118]JingJin,CaiLiang,TianchengWu,LiqinZou,andZhiliangGan.Kdlsq-bert:Aquantizedbertcombiningknowledgedistillationwithlearnedstepsizequantization.arXivpreprintarXiv:2101.05938,2021.[119]QingJin,LinjieYang,andZhenyuLiao.Adabits:Neuralnetworkquantizationwithadaptivebit-widths.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages2146–2156,2020.[120]JeffJohnson.Rethinkingfloatingpointfordeeplearning.arXivpreprintarXiv:1811.01721,2018.[121]FelixJuefei-Xu,VishnuNareshBoddeti,andMariosSavvides.Localbinaryconvolutionalneuralnetworks.InProceedingsoftheIEEEconferenceoncomputervisionandpatternrecognition,pages19–28,2017.[122]SangilJung,ChangyongSon,SeohyungLee,JinwooSon,JaeJoonHan,YoungjunKwak,SungJuHwang,andChangkyuChoi.Learningtoquantizedeepnetworksbyoptimizingquantizationintervalswithtaskloss.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages4350–4359,2019.[123]PradKadambi,KarthikeyanNatesanRamamurthy,andVisarBerisha.Comparingfisherinformationregularizationwithdistillationfordnnquantization.Advancesinneuralinformationprocessingsystems,2020.[124]PPKanjilal,PKDey,andDNBanerjee.Reduced-sizeneuralnetworksthroughsingularvaluedecompositionandsubsets
election.ElectronicsLetters,29(17):1516–1518,1993.[125]MelWinKhaw,LuminitaStevens,andMichaelWoodford.Discreteadjustmenttoachangingenvironment:Experimentalevidence.JournalofMonetaryEconomics,91:88–103,2017.[126]HyungjunKim,KyungsuKim,JinseokKim,andJae-JoonKim.Binaryduo:Reducinggradientmismatchinbinaryactivationnetworkbycouplingbinaryactivations.InternationalConferenceonLearningRepresentations,2020.[127]JanghoKim,KiYoonYoo,andNojunKwak.Position-basedscaledgradientformodelquantizationandsparsetraining.Advancesinneuralinformationprocessingsystems,2020.[128]MinjeKimandParisSmaragdis.Bitwiseneuralnetworks.arXivpreprintarXiv:1601.06071,2016.[129]SehoonKim,AmirGholami,ZheweiYao,MichaelWMahoney,andKurtKeutzer.I-bert:Integer-onlybertquantization.arXivpreprintarXiv:2101.01321,2021.[130]RaghuramanKrishnamoorthi.Quantizingdeepconvolutionalnetworksforefficientinference:Awhitepaper.arXivpreprintarXiv:1806.08342,2018.[131]SeJungKwon,DongsooLee,ByeongwookKim,ParichayKapoor,BaeseongPark,andGu-YeonWei.Structured22compressionbyweightencryptionforunstructuredpruningandquantization.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages1909–1918,2020.[132]LiangzhenLai,NaveenSuda,andVikasChandra.CMSIS-NN:Efficientneuralnetworkkernelsforarmcortex-mcpus.arXivpreprintarXiv:1801.06601,2018.[133]HamedFLangroudi,ZachariahCarmichael,DavidPastuch,andDhireeshaKudithipudi.Cheetah:Mixedlow-precisionhardware&softwareco-designframeworkfordnnsontheedge.arXivpreprintarXiv:1908.02386,2019.[134]KennethWLatimer,JacobLYates,MiriamLRMeister,AlexanderCHuk,andJonathanWPillow.Single-trialspiketrainsinparietalcortexrevealdiscretestepsduringdecisionmaking.Science,349(6244):184–187,2015.[135]YannLeCun,JohnSDenker,andSaraASolla.Optimalbraindamage.InAdvancesinneuralinformationprocessingsystems,pages598–605,1990.[136]Dong-HyunLee,SaizhengZhang,AsjaFischer,andYoshuaBengio.Differencetargetpropagation.InJointeuropeanconferenceonmachinelearningandknowledgediscoveryindatabases,pages498–515.Springer,2015.[137]DongsooLee,SeJungKwon,ByeongwookKim,YongkweonJeon,BaeseongPark,andJeonginYun.Flexor:Trainablefractionalquantization.Advancesinneuralinformationprocessingsystems,2020.[138]JunHaengLee,SangwonHa,SaeromChoi,Won-JoLee,andSeungwonLee.Quantizationforrapiddeploymentofdeepneuralnetworks.arXivpreprintarXiv:1810.05488,2018.[139]NamhoonLee,ThalaiyasingamAjanthan,andPhilipHSTorr.Snip:Single-shotnetworkpruningbasedonconnectionsensitivity.arXivpreprintarXiv:1810.02340,2018.[140]CongLeng,ZeshengDou,HaoLi,ShenghuoZhu,andRongJin.Extremelylowbitneuralnetwork:Squeezethelastbitoutwithadmm.InProceedingsoftheAAAIConferenceonArtificialIntelligence,volume32,2018.[141]FengfuLi,BoZhang,andBinLiu.Ternaryweightnetworks.arXivpreprintarXiv:1605.04711,2016.[142]RundongLi,YanWang,FengLiang,HongweiQin,JunjieYan,andRuiFan.Fullyquantizednetworkforobjectdetection.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition(CVPR),2019.[143]YuhangLi,XinDong,andWeiWang.Additivepowers-of-twoquantization:Anefficientnon-uniformdiscretizationforneuralnetworks.arXivpreprintarXiv:1909.13144,2019.[144]YuhangLi,RuihaoGong,XuTan,YangYang,PengHu,QiZhang,FengweiYu,WeiWang,andShiGu.Brecq:Pushingthelimitofpost-trainingquantizationbyblockreconstruction.InternationalConferenceonLearningRepresentations,2021.[145]YuhangLi,RuihaoGong,FengweiYu,XinDong,andXianglongLiu.Dms:Differentiabledimensionsearchforbinaryneuralnetworks.InternationalConferenceonLearningRepresentations,2020.[146]YunchengLi,JianchaoYang,YaleSong,LiangliangCao,JieboLuo,andLi-JiaLi.Learningfromnoisyl
abelswithdistillation.InProceedingsoftheIEEEInternationalConferenceonComputerVision,pages1910–1918,2017.[147]ZefanLi,BingbingNi,WenjunZhang,XiaokangYang,andWenGao.Performanceguaranteednetworkaccelerationviahigh-orderresidualquantization.InProceedingsoftheIEEEinternationalconferenceoncomputervision,pages2584–2592,2017.[148]ZhenyuLiao,RomainCouillet,andMichaelWMahoney.Sparsequantizedspectralclustering.InternationalConferenceonLearningRepresentations,2021.[149]DarrylLin,SachinTalathi,andSreekanthAnnapureddy.Fixedpointquantizationofdeepconvolutionalnetworks.InInternationalconferenceonmachinelearning,pages2849–2858.PMLR,2016.[150]MingbaoLin,RongrongJi,ZihanXu,BaochangZhang,YanWang,YongjianWu,FeiyueHuang,andChia-WenLin.Rotatedbinaryneuralnetwork.Advancesinneuralinformationprocessingsystems,2020.[151]ShaohuiLin,RongrongJi,YuchaoLi,YongjianWu,FeiyueHuang,andBaochangZhang.Acceleratingconvolutionalnetworksviaglobal&dynamicfilterpruning.InIJCAI,pages2425–2432,2018.[152]WuweiLin.Automatingoptimizationofquantizeddeeplearningmodelsoncuda:https://tvm.apache.org/2019/04/29/optcuda-quantized,2019.[153]XiaofanLin,CongZhao,andWeiPan.Towardsaccuratebinaryconvolutionalneuralnetwork.arXivpreprintarXiv:1711.11294,2017.[154]ZhouhanLin,MatthieuCourbariaux,RolandMemisevic,andYoshuaBengio.Neuralnetworkswithfewmultiplications.arXivpreprintarXiv:1510.03009,2015.[155]ChunleiLiu,WenruiDing,XinXia,BaochangZhang,JiaxinGu,JianzhuangLiu,RongrongJi,andDavidDoermann.Circulantbinaryconvolutionalnetworks:Enhancingtheperformanceof1-bitdcnnswithcirculantbackpropagation.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages2691–2699,2019.[156]HanxiaoLiu,KarenSimonyan,andYimingYang.Darts:Differentiablearchitecturesearch.arXivpreprintarXiv:1806.09055,2018.[157]YinhanLiu,MyleOtt,NamanGoyal,JingfeiDu,MandarJoshi,DanqiChen,OmerLevy,MikeLewis,LukeZettlemoyer,andVeselinStoyanov.RoBERTa:Arobustlyoptimizedbertpretrainingapproach.arXivpreprintarXiv:1907.11692,2019.[158]ZechunLiu,BaoyuanWu,WenhanLuo,XinYang,WeiLiu,andKwang-TingCheng.Bi-realnet:Enhancingtheperformanceof1-bitcnnswithimprovedrepresentationalcapabilityandadvancedtrainingalgorithm.InProceedingsoftheEuropeanconferenceoncomputervision(ECCV),pages722–737,2018.[159]Zhi-GangLiuandMatthewMattina.Learninglow-precisionneuralnetworkswithoutstraight-throughestimator(STE).arXivpreprintarXiv:1903.01061,2019.[160]Jian-HaoLuo,JianxinWu,andWeiyaoLin.Thinet:Afilterlevelpruningmethodfordeepneuralnetworkcompression.InProceedingsoftheIEEEinternationalconferenceoncomputervision,pages5058–5066,2017.[161]NingningMa,XiangyuZhang,Hai-TaoZheng,andJianSun.ShufflenetV2:Practicalguidelinesforefficientcnnarchitecturedesign.InProceedingsoftheEuropeanConferenceonComputerVision(ECCV),pages116–131,2018.[162]FranckMamaletandChristopheGarcia.Simplifyingconvnetsforfastlearning.InInternationalConferenceonArtificialNeuralNetworks,pages58–65.Springer,2012.[163]BraisMartinez,JingYang,AdrianBulat,andGeorgiosTzimiropoulos.Trainingbinaryneuralnetworkswithreal-tobinaryconvolutions.arXivpreprintarXiv:2003.11535,2020.[164]JulietaMartinez,ShobhitZakhmi,HolgerHHoos,andJamesJLittle.Lsq++:Lowerrunningtimeandhigherrecallinmulti-codebookquantization.InProceedingsoftheEuropeanConferenceonComputerVision(ECCV),pages491–506,2018.23[165]WarrenSMcCullochandWalterPitts.Alogicalcalculusoftheideasimmanentinnervousactivity.Thebulletinofmathematicalbiophysics,5(4):115–133,1943.[166]JeffreyLMcKinstry,StevenKEsser,RathinakumarAppuswamy,DeepikaBablani,JohnVArthur,IzzetBYildiz,andDharmendraSModha.Discoveringlow-precisionn
etworksclosetofull-precisionnetworksforefficientembeddedinference.arXivpreprintarXiv:1809.04191,2018.[167]NaveenMellempudi,SudarshanSrinivasan,DipankarDas,andBharatKaul.Mixedprecisiontrainingwith8-bitfloatingpoint.arXivpreprintarXiv:1905.12334,2019.[168]EldadMeller,AlexanderFinkelstein,UriAlmog,andMarkGrobman.Same,samebutdifferent:Recoveringneuralnetworkquantizationerrorthroughweightfactorization.InInternationalConferenceonMachineLearning,pages4486–4495.PMLR,2019.[169]PauliusMicikevicius,SharanNarang,JonahAlben,GregoryDiamos,ErichElsen,DavidGarcia,BorisGinsburg,MichaelHouston,OleksiiKuchaiev,GaneshVenkatesh,etal.Mixedprecisiontraining.arXivpreprintarXiv:1710.03740,2017.[170]SzymonMigacz.Nvidia8-bitinferencewithtensorrt.GPUTechnologyConference,2017.[171]AsitMishraandDebbieMarr.Apprentice:Usingknowledgedistillationtechniquestoimprovelow-precisionnetworkaccuracy.arXivpreprintarXiv:1711.05852,2017.[172]AsitMishra,ErikoNurvitadhi,JeffreyJCook,andDebbieMarr.Wrpn:Widereduced-precisionnetworks.arXivpreprintarXiv:1709.01134,2017.[173]DaisukeMiyashita,EdwardHLee,andBorisMurmann.Convolutionalneuralnetworksusinglogarithmicdatarepresentation.arXivpreprintarXiv:1603.01025,2016.[174]LopamudraMukherjee,SathyaNRavi,JimingPeng,andVikasSingh.Abiresolutionspectralframeworkforproductquantization.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages3329–3338,2018.[175]MarkusNagel,RanaAliAmjad,MartVanBaalen,ChristosLouizos,andTijmenBlankevoort.Upordown?adaptiveroundingforpost-trainingquantization.InInternationalConferenceonMachineLearning,pages7197–7206.PMLR,2020.[176]MarkusNagel,MartvanBaalen,TijmenBlankevoort,andMaxWelling.Data-freequantizationthroughweightequalizationandbiascorrection.InProceedingsoftheIEEE/CVFInternationalConferenceonComputerVision,pages1325–1334,2019.[177]MaximNaumov,UtkuDiril,JongsooPark,BenjaminRay,JedrzejJablonski,andAndrewTulloch.Onperiodicfunctionsasregularizersforquantizationofneuralnetworks.arXivpreprintarXiv:1811.09862,2018.[178]MaximNaumov,DheevatsaMudigere,Hao-JunMichaelShi,JianyuHuang,NarayananSundaraman,JongsooPark,XiaodongWang,UditGupta,Carole-JeanWu,AlissonGAzzolini,etal.Deeplearningrecommendationmodelforpersonalizationandrecommendationsystems.arXivpreprintarXiv:1906.00091,2019.[179]RenkunNi,Hong-minChu,OscarCastañeda,Ping-yehChiang,ChristophStuder,andTomGoldstein.Wrapnet:Neuralnetinferencewithultra-low-resolutionarithmetic.arXivpreprintarXiv:2007.13242,2020.[180]LinNing,GuoyangChen,WeifengZhang,andXipengShen.Simpleaugmentationgoesalongway:{ADRL}for{dnn}quantization.InInternationalConferenceonLearningRepresentations,2021.[181]BMOliver,JRPierce,andClaudeEShannon.Thephilosophyofpcm.ProceedingsoftheIRE,36(11):1324–1331,1948.[182]EunhyeokPark,JunwhanAhn,andSungjooYoo.Weightedentropy-basedquantizationfordeepneuralnetworks.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages5456–5464,2017.[183]EunhyeokPark,SungjooYoo,andPeterVajda.Value-awarequantizationfortrainingandinferenceofneuralnetworks.InProceedingsoftheEuropeanConferenceonComputerVision(ECCV),pages580–595,2018.[184]SejunPark,JaehoLee,SangwooMo,andJinwooShin.Lookahead:afar-sightedalternativeofmagnitude-basedpruning.arXivpreprintarXiv:2002.04809,2020.[185]WonpyoPark,DongjuKim,YanLu,andMinsuCho.Relationalknowledgedistillation.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages3967–3976,2019.[186]HieuPham,MelodyGuan,BarretZoph,QuocLe,andJeffDean.Efficientneuralarchitecturesearchviaparameterssharing.InInternationalConferenceonMachineLearning,pages4095–4104.PM
LR,2018.[187]AntonioPolino,RazvanPascanu,andDanAlistarh.Modelcompressionviadistillationandquantization.arXivpreprintarXiv:1802.05668,2018.[188]HaotongQin,ZhongangCai,MingyuanZhang,YifuDing,HaiyuZhao,ShuaiYi,XianglongLiu,andHaoSu.Bipointnet:Binaryneuralnetworkforpointclouds.InternationalConferenceonLearningRepresentations,2021.[189]HaotongQin,RuihaoGong,XianglongLiu,XiaoBai,JingkuanSong,andNicuSebe.Binaryneuralnetworks:Asurvey.PatternRecognition,105:107281,2020.[190]HaotongQin,RuihaoGong,XianglongLiu,MingzhuShen,ZiranWei,FengweiYu,andJingkuanSong.Forwardandbackwardinformationretentionforaccuratebinaryneuralnetworks.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages2250–2259,2020.[191]ZhongnanQu,ZimuZhou,YunCheng,andLotharThiele.Adaptiveloss-awarequantizationformulti-bitnetworks.InIEEE/CVFConferenceonComputerVisionandPatternRecognition(CVPR),June2020.[192]AlecRadford,KarthikNarasimhan,TimSalimans,andIlyaSutskever.Improvinglanguageunderstandingbygenerativepre-training,2018.[193]AlecRadford,JeffreyWu,RewonChild,DavidLuan,DarioAmodei,andIlyaSutskever.Languagemodelsareunsupervisedmultitasklearners.OpenAIblog,1(8):9,2019.[194]PrajitRamachandran,BarretZoph,andQuocVLe.Swish:aself-gatedactivationfunction.arXivpreprintarXiv:1710.05941,7:1,2017.[195]MohammadRastegari,VicenteOrdonez,JosephRedmon,andAliFarhadi.Xnor-net:Imagenetclassificationusingbinaryconvolutionalneuralnetworks.InEuropeanconferenceoncomputervision,pages525–542.Springer,2016.[196]BernhardRiemann.UeberdieDarstellbarkeiteinerFunctiondurcheinetrigonometrischeReihe,volume13.Dieterich,1867.[197]AdrianaRomero,NicolasBallas,SamiraEbrahimiKahou,AntoineChassang,CarloGatta,andYoshuaBengio.Fitnets:Hintsforthindeepnets.arXivpreprintarXiv:1412.6550,2014.[198]KennethRose,EitanGurewitz,andGeoffreyFox.Adeterministicannealingapproachtoclustering.PatternRecognitionLetters,11(9):589–594,1990.[199]FrankRosenblatt.Theperceptron,aperceivingandrecognizing24automatonProjectPara.CornellAeronauticalLaboratory,1957.[200]FrankRosenblatt.Principlesofneurodynamics.perceptronsandthetheoryofbrainmechanisms.Technicalreport,CornellAeronauticalLabIncBuffaloNY,1961.[201]ManueleRusci,MarcoFariselli,AlessandroCapotondi,andLucaBenini.Leveragingautomatedmixed-low-precisionquantizationfortinyedgemicrocontrollers.InIoTStreamsforData-DrivenPredictiveMaintenanceandIoT,Edge,andMobileforEmbeddedMachineLearning,pages296–308.Springer,2020.[202]TaraNSainath,BrianKingsbury,VikasSindhwani,EbruArisoy,andBhuvanaRamabhadran.Low-rankmatrixfactorizationfordeepneuralnetworktrainingwithhigh-dimensionaloutputtargets.In2013IEEEinternationalconferenceonacoustics,speechandsignalprocessing,pages6655–6659.IEEE,2013.[203]DaveSalvator,HaoWu,MilindKulkarni,andNiallEmmart.Int4precisionforaiinference:https://developer.nvidia.com/blog/int4-for-ai-inference/,2019.[204]MarkSandler,AndrewHoward,MenglongZhu,AndreyZhmoginov,andLiang-ChiehChen.MobilenetV2:Invertedresidualsandlinearbottlenecks.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages4510–4520,2018.[205]ClaudeEShannon.Amathematicaltheoryofcommunication.TheBellsystemtechnicaljournal,27(3):379–423,1948.[206]ClaudeEShannon.Codingtheoremsforadiscretesourcewithafidelitycriterion.IRENat.Conv.Rec,4(142-163):1,1959.[207]AlexanderShekhovtsov,ViktorYanush,andBorisFlach.Pathsample-analyticgradientestimatorsforstochasticbinarynetworks.Advancesinneuralinformationprocessingsystems,2020.[208]MingzhuShen,XianglongLiu,RuihaoGong,andKaiHan.Balancedbinaryneuralnetworkswithgatedresidual.InICASSP2020-2020IEEEInternational
ConferenceonAcoustics,SpeechandSignalProcessing(ICASSP),pages4197–4201.IEEE,2020.[209]ShengShen,ZhenDong,JiayuYe,LinjianMa,ZheweiYao,AmirGholami,MichaelWMahoney,andKurtKeutzer.QBERT:Hessianbasedultralowprecisionquantizationofbert.InAAAI,pages8815–8821,2020.[210]WilliamFleetwoodSheppard.Onthecalculationofthemostprobablevaluesoffrequency-constants,fordataarrangedaccordingtoequidistantdivisionofascale.ProceedingsoftheLondonMathematicalSociety,1(1):353–380,1897.[211]MoranShkolnik,BrianChmiel,RonBanner,GilShomron,YuriNahshan,AlexBronstein,andUriWeiser.Robustquantization:Onemodeltorulethemall.Advancesinneuralinformationprocessingsystems,2020.[212]K.SimonyanandA.Zisserman.Verydeepconvolutionalnetworksforlarge-scaleimagerecognition.InInternationalConferenceonLearningRepresentations,2015.[213]S.M.Stigler.TheHistoryofStatistics:TheMeasurementofUncertaintybefore1900.HarvardUniversityPress,Cambridge,1986.[214]PierreStock,AngelaFan,BenjaminGraham,EdouardGrave,RémiGribonval,HerveJegou,andArmandJoulin.Trainingwithquantizationnoiseforextrememodelcompression.InInternationalConferenceonLearningRepresentations,2021.[215]PierreStock,ArmandJoulin,RémiGribonval,BenjaminGraham,andHervéJégou.Andthebitgoesdown:Revisitingthequantizationofneuralnetworks.arXivpreprintarXiv:1907.05686,2019.[216]JohnZSun,GraceIWang,VivekKGoyal,andLavRVarshney.Aframeworkforbayesianoptimalityofpsychophysicallaws.JournalofMathematicalPsychology,56(6):495–501,2012.[217]ChristianSzegedy,VincentVanhoucke,SergeyIoffe,JonShlens,andZbigniewWojna.RethinkingtheInceptionarchitectureforcomputervision.InProceedingsoftheIEEEconferenceoncomputervisionandpatternrecognition,pages2818–2826,2016.[218]ShyamATailor,JavierFernandez-Marques,andNicholasDLane.Degree-quant:Quantization-awaretrainingforgraphneuralnetworks.InternationalConferenceonLearningRepresentations,2021.[219]MingxingTan,BoChen,RuomingPang,VijayVasudevan,MarkSandler,AndrewHoward,andQuocVLe.Mnasnet:Platform-awareneuralarchitecturesearchformobile.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages2820–2828,2019.[220]MingxingTanandQuocVLe.EfficientNet:Rethinkingmodelscalingforconvolutionalneuralnetworks.arXivpreprintarXiv:1905.11946,2019.[221]WeiTang,GangHua,andLiangWang.Howtotrainacompactbinaryneuralnetworkwithhighaccuracy?InProceedingsoftheAAAIConferenceonArtificialIntelligence,volume31,2017.[222]AnttiTarvainenandHarriValpola.Meanteachersarebetterrolemodels:Weight-averagedconsistencytargetsimprovesemi-superviseddeeplearningresults.arXivpreprintarXiv:1703.01780,2017.[223]JamesTeeandDesmondPTaylor.Isinformationinthebrainrepresentedincontinuousordiscreteform?IEEETransactionsonMolecular,BiologicalandMulti-ScaleCommunications,6(3):199–209,2020.[224]L.N.TrefethenandD.BauIII.NumericalLinearAlgebra.SIAM,Philadelphia,1997.[225]FrederickTungandGregMori.Clip-q:Deepnetworkcompressionlearningbyin-parallelpruning-quantization.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages7873–7882,2018.[226]MartvanBaalen,ChristosLouizos,MarkusNagel,RanaAliAmjad,YingWang,TijmenBlankevoort,andMaxWelling.Bayesianbits:Unifyingquantizationandpruning.Advancesinneuralinformationprocessingsystems,2020.[227]RufinVanRullenandChristofKoch.Isperceptiondiscreteorcontinuous?Trendsincognitivesciences,7(5):207–213,2003.[228]LavRVarshney,PerJesperSjöström,andDmitriBChklovskii.Optimalinformationstorageinnoisysynapsesunderresourceconstraints.Neuron,52(3):409–423,2006.[229]LavRVarshneyandKushRVarshney.Decisionmakingwithquantizedpriorsleadstodiscrimination.ProceedingsoftheIEEE,105(2):241–2
55,2016.[230]AshishVaswani,NoamShazeer,NikiParmar,JakobUszkoreit,LlionJones,AidanNGomez,ŁukaszKaiser,andIlliaPolosukhin.Attentionisallyouneed.InAdvancesinneuralinformationprocessingsystems,pages5998–6008,2017.[231]DiwenWan,FuminShen,LiLiu,FanZhu,JieQin,LingShao,andHengTaoShen.Tbn:Convolutionalneuralnetworkwithternaryinputsandbinaryweights.InProceedingsoftheEuropeanConferenceonComputerVision(ECCV),pages315–332,2018.[232]DilinWang,MengLi,ChengyueGong,andVikasChandra.Attentivenas:Improvingneuralarchitecturesearchviaattentivesampling.arXivpreprintarXiv:2011.09011,2020.25[233]KuanWang,ZhijianLiu,YujunLin,JiLin,andSongHan.HAQ:Hardware-awareautomatedquantization.InProceedingsoftheIEEEconferenceoncomputervisionandpatternrecognition,2019.[234]NaigangWang,JungwookChoi,DanielBrand,Chia-YuChen,andKailashGopalakrishnan.Trainingdeepneuralnetworkswith8-bitfloatingpointnumbers.Advancesinneuralinformationprocessingsystems,2018.[235]PeisongWang,QinghaoHu,YifanZhang,ChunjieZhang,YangLiu,andJianCheng.Two-stepquantizationforlow-bitneuralnetworks.InProceedingsoftheIEEEConferenceoncomputervisionandpatternrecognition,pages4376–4384,2018.[236]TianzheWang,KuanWang,HanCai,JiLin,ZhijianLiu,HanruiWang,YujunLin,andSongHan.Apq:Jointsearchfornetworkarchitecture,pruningandquantizationpolicy.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages2078–2087,2020.[237]YingWang,YadongLu,andTijmenBlankevoort.Differentiablejointpruningandquantizationforhardwareefficiency.InEuropeanConferenceonComputerVision,pages259–277.Springer,2020.[238]ZiweiWang,JiwenLu,ChenxinTao,JieZhou,andQiTian.Learningchannel-wiseinteractionsforbinaryconvolutionalneuralnetworks.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages568–577,2019.[239]BichenWu,XiaoliangDai,PeizhaoZhang,YanghanWang,FeiSun,YimingWu,YuandongTian,PeterVajda,YangqingJia,andKurtKeutzer.FBNet:Hardware-awareefficientconvnetdesignviadifferentiableneuralarchitecturesearch.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages10734–10742,2019.[240]BichenWu,AlvinWan,XiangyuYue,PeterJin,SichengZhao,NoahGolmant,AmirGholaminejad,JosephGonzalez,andKurtKeutzer.Shift:Azeroflop,zeroparameteralternativetospatialconvolutions.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages9127–9135,2018.[241]BichenWu,YanghanWang,PeizhaoZhang,YuandongTian,PeterVajda,andKurtKeutzer.Mixedprecisionquantizationofconvnetsviadifferentiableneuralarchitecturesearch.arXivpreprintarXiv:1812.00090,2018.[242]HaoWu,PatrickJudd,XiaojieZhang,MikhailIsaev,andPauliusMicikevicius.Integerquantizationfordeeplearninginference:Principlesandempiricalevaluation.arXivpreprintarXiv:2004.09602,2020.[243]JiaxiangWu,CongLeng,YuhangWang,QinghaoHu,andJianCheng.Quantizedconvolutionalneuralnetworksformobiledevices.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages4820–4828,2016.[244]XiaXiao,ZigengWang,andSanguthevarRajasekaran.Autoprune:Automaticnetworkpruningbyregularizingauxiliaryparameters.InAdvancesinNeuralInformationProcessingSystems,pages13681–13691,2019.[245]ChenXu,JianqiangYao,ZhouchenLin,WenwuOu,YuanbinCao,ZhirongWang,andHongbinZha.Alternatingmulti-bitquantizationforrecurrentneuralnetworks.arXivpreprintarXiv:1802.00150,2018.[246]ShoukaiXu,HaokunLi,BohanZhuang,JingLiu,JiezhangCao,ChuangrunLiang,andMingkuiTan.Generativelowbitwidthdatafreequantization.InEuropeanConferenceonComputerVision,pages1–17.Springer,2020.[247]YinghaoXu,XinDong,YudianLi,andHaoSu.Amain/subsidiarynetworkframeworkforsimplifyingbinaryneuralnetworks.In
ProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages7154–7162,2019.[248]ZheXuandRayCCCheung.Accurateandcompactconvolutionalneuralnetworkswithtrainedbinarization.arXivpreprintarXiv:1909.11366,2019.[249]HaichuanYang,ShupengGui,YuhaoZhu,andJiLiu.Automaticneuralnetworkcompressionbysparsity-quantizationjointlearning:Aconstrainedoptimization-basedapproach.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages2178–2188,2020.[250]HuanruiYang,LinDuan,YiranChen,andHaiLi.Bsq:Exploringbit-levelsparsityformixed-precisionneuralnetworkquantization.arXivpreprintarXiv:2102.10462,2021.[251]JiweiYang,XuShen,JunXing,XinmeiTian,HouqiangLi,BingDeng,JianqiangHuang,andXian-shengHua.Quantizationnetworks.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages7308–7316,2019.[252]Tien-JuYang,AndrewHoward,BoChen,XiaoZhang,AlecGo,MarkSandler,VivienneSze,andHartwigAdam.Netadapt:Platform-awareneuralnetworkadaptationformobileapplications.InProceedingsoftheEuropeanConferenceonComputerVision(ECCV),pages285–300,2018.[253]ZhaohuiYang,YunheWang,KaiHan,ChunjingXu,ChaoXu,DachengTao,andChangXu.Searchingforlow-bitweightsinquantizedneuralnetworks.Advancesinneuralinformationprocessingsystems,2020.[254]ZheweiYao,ZhenDong,ZhangchengZheng,AmirGholami,JialiYu,EricTan,LeyuanWang,QijingHuang,YidaWang,MichaelWMahoney,etal.Hawqv3:Dyadicneuralnetworkquantization.arXivpreprintarXiv:2011.10680,2020.[255]JianmingYe,ShiliangZhang,andJingdongWang.Distillationguidedresiduallearningforbinaryconvolutionalneuralnetworks.arXivpreprintarXiv:2007.05223,2020.[256]JunhoYim,DonggyuJoo,JihoonBae,andJunmoKim.Agiftfromknowledgedistillation:Fastoptimization,networkminimizationandtransferlearning.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages4133–4141,2017.[257]HongxuYin,PavloMolchanov,JoseMAlvarez,ZhizhongLi,ArunMallya,DerekHoiem,NirajKJha,andJanKautz.Dreamingtodistill:Data-freeknowledgetransferviadeepinversion.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages8715–8724,2020.[258]PenghangYin,JianchengLyu,ShuaiZhang,StanleyOsher,YingyongQi,andJackXin.Understandingstraight-throughestimatorintrainingactivationquantizedneuralnets.arXivpreprintarXiv:1903.05662,2019.[259]PenghangYin,ShuaiZhang,JianchengLyu,StanleyOsher,YingyongQi,andJackXin.Blendedcoarsegradientdescentforfullquantizationofdeepneuralnetworks.ResearchintheMathematicalSciences,6(1):14,2019.[260]ShanYou,ChangXu,ChaoXu,andDachengTao.Learningfrommultipleteachernetworks.InProceedingsofthe23rdACMSIGKDDInternationalConferenceonKnowledgeDiscoveryandDataMining,pages1285–1294,2017.[261]RuichiYu,AngLi,Chun-FuChen,Jui-HsinLai,VladIMorariu,XintongHan,MingfeiGao,Ching-YungLin,andLarrySDavis.Nisp:Pruningnetworksusingneuronimportance26scorepropagation.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages9194–9203,2018.[262]ShixingYu,ZheweiYao,AmirGholami,ZhenDong,MichaelWMahoney,andKurtKeutzer.Hessian-awarepruningandoptimalneuralimplant.arXivpreprintarXiv:2101.08940,2021.[263]OfirZafrir,GuyBoudoukh,PeterIzsak,andMosheWasserblat.Q8BERT:Quantized8bitbert.arXivpreprintarXiv:1910.06188,2019.[264]DongqingZhang,JiaolongYang,DongqiangziYe,andGangHua.Lq-nets:Learnedquantizationforhighlyaccurateandcompactdeepneuralnetworks.InEuropeanconferenceoncomputervision(ECCV),2018.[265]LinfengZhang,JieboSong,AnniGao,JingweiChen,ChenglongBao,andKaishengMa.Beyourownteacher:Improvetheperformanceofconvolutionalneuralnetworksviaselfdistillation.InProceedingsoftheIEEE/CVFInternationalConferen
ceonComputerVision,pages3713–3722,2019.[266]WeiZhang,LuHou,YichunYin,LifengShang,XiaoChen,XinJiang,andQunLiu.Ternarybert:Distillation-awareultra-lowbitbert.arXivpreprintarXiv:2009.12812,2020.[267]ChenglongZhao,BingbingNi,JianZhang,QiweiZhao,WenjunZhang,andQiTian.Variationalconvolutionalneuralnetworkpruning.InProceedingsoftheIEEEConferenceonComputerVisionandPatternRecognition,pages2780–2789,2019.[268]QibinZhao,MasashiSugiyama,LonghaoYuan,andAndrzejCichocki.Learningefficienttensorrepresentationswithringstructurednetworks.InICASSP2019-2019IEEEInternationalConferenceonAcoustics,SpeechandSignalProcessing(ICASSP),pages8608–8612.IEEE,2019.[269]RitchieZhao,YuweiHu,JordanDotzel,ChristopherDeSa,andZhiruZhang.Improvingneuralnetworkquantizationwithoutretrainingusingoutlierchannelsplitting.ProceedingsofMachineLearningResearch,2019.[270]AojunZhou,AnbangYao,YiwenGuo,LinXu,andYurongChen.Incrementalnetworkquantization:Towardslosslesscnnswithlow-precisionweights.arXivpreprintarXiv:1702.03044,2017.[271]AojunZhou,AnbangYao,KuanWang,andYurongChen.Explicitloss-error-awarequantizationforlow-bitdeepneuralnetworks.InProceedingsoftheIEEEconferenceoncomputervisionandpatternrecognition,pages9426–9435,2018.[272]ShuchangZhou,YuxinWu,ZekunNi,XinyuZhou,HeWen,andYuhengZou.Dorefa-net:Traininglowbitwidthconvolutionalneuralnetworkswithlowbitwidthgradients.arXivpreprintarXiv:1606.06160,2016.[273]YirenZhou,Seyed-MohsenMoosavi-Dezfooli,Ngai-ManCheung,andPascalFrossard.Adaptivequantizationfordeepneuralnetwork.arXivpreprintarXiv:1712.01048,2017.[274]ChenzhuoZhu,SongHan,HuiziMao,andWilliamJDally.Trainedternaryquantization.arXivpreprintarXiv:1612.01064,2016.[275]ShilinZhu,XinDong,andHaoSu.Binaryensembleneuralnetwork:Morebitspernetworkormorenetworksperbit?InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages4923–4932,2019.[276]BohanZhuang,ChunhuaShen,MingkuiTan,LingqiaoLiu,andIanReid.Towardseffectivelow-bitwidthconvolutionalneuralnetworks.InProceedingsoftheIEEEconferenceoncomputervisionandpatternrecognition,pages7920–7928,2018.[277]BohanZhuang,ChunhuaShen,MingkuiTan,LingqiaoLiu,andIanReid.Structuredbinaryneuralnetworksforaccurateimageclassificationandsemanticsegmentation.InProceedingsoftheIEEE/CVFConferenceonComputerVisionandPatternRecognition,pages413–422,2019.[278]BarretZophandQuocVLe.Neuralarchitecturesearchwithreinforcementlearning.arXivpreprintarXiv:1611.01578,2016.27

Copyright notice: this is an original article by zukang, released under the CC 4.0 BY-SA license; please include the original source link and this notice when reposting.
Original link: https://www.cnblogs.com/zukang/p/14805278.html