Charbel Sakr
Research Scientist, NVIDIA
Verified email at nvidia.com
Analytical guarantees on numerical precision of deep neural networks
C Sakr, Y Kim, N Shanbhag
International Conference on Machine Learning, 3007-3016, 2017
Cited by 108 · 2017
PredictiveNet: An energy-efficient convolutional neural network via zero prediction
Y Lin, C Sakr, Y Kim, N Shanbhag
2017 IEEE International Symposium on Circuits and Systems (ISCAS), 1-4, 2017
Cited by 87 · 2017
Per-Tensor Fixed-Point Quantization of the Back-Propagation Algorithm
C Sakr, N Shanbhag
International Conference on Learning Representations, 2019
Cited by 50 · 2019
An analytical method to determine minimum per-layer precision of deep neural networks
C Sakr, N Shanbhag
2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018
Cited by 45 · 2018
Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
C Sakr, N Wang, CY Chen, J Choi, A Agrawal, N Shanbhag, ...
International Conference on Learning Representations, 2019
Cited by 38 · 2019
HarDNN: Feature map vulnerability evaluation in CNNs
A Mahmoud, SKS Hari, CW Fletcher, SV Adve, C Sakr, N Shanbhag, ...
arXiv preprint arXiv:2002.09786, 2020
Cited by 36 · 2020
Optimizing Selective Protection for CNN Resilience
A Mahmoud, SKS Hari, CW Fletcher, SV Adve, C Sakr, NR Shanbhag, ...
IEEE International Symposium on Software Reliability Engineering (ISSRE), 127-138, 2021
Cited by 34 · 2021
Fundamental limits on the precision of in-memory architectures
SK Gonugondla, C Sakr, H Dbouk, NR Shanbhag
Proceedings of the 39th International Conference on Computer-Aided Design, 1-9, 2020
Cited by 31 · 2020
True gradient-based training of deep binary activated neural networks via continuous binarization
C Sakr, J Choi, Z Wang, K Gopalakrishnan, N Shanbhag
2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018
Cited by 28 · 2018
Minimum precision requirements for the SVM-SGD learning algorithm
C Sakr, A Patil, S Zhang, Y Kim, N Shanbhag
2017 IEEE International Conference on Acoustics, Speech and Signal …, 2017
Cited by 24 · 2017
Optimal clipping and magnitude-aware differentiation for improved quantization-aware training
C Sakr, S Dai, R Venkatesan, B Zimmer, W Dally, B Khailany
International Conference on Machine Learning, 19123-19138, 2022
Cited by 23 · 2022
A 0.44-μJ/dec, 39.9-μs/dec, Recurrent Attention In-Memory Processor for Keyword Spotting
H Dbouk, SK Gonugondla, C Sakr, NR Shanbhag
IEEE Journal of Solid-State Circuits 56 (7), 2234-2244, 2020
Cited by 22 · 2020
KeyRAM: A 0.34-μJ/decision 18 k decisions/s recurrent attention in-memory processor for keyword spotting
H Dbouk, SK Gonugondla, C Sakr, NR Shanbhag
2020 IEEE Custom Integrated Circuits Conference (CICC), 1-4, 2020
Cited by 18 · 2020
Signal processing methods to enhance the energy efficiency of in-memory computing architectures
C Sakr, NR Shanbhag
IEEE Transactions on Signal Processing 69, 6462-6472, 2021
Cited by 13 · 2021
Fundamental limits on energy-delay-accuracy of in-memory architectures in inference applications
SK Gonugondla, C Sakr, H Dbouk, NR Shanbhag
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2021
Cited by 13 · 2021
A 95.6-TOPS/W deep learning inference accelerator with per-vector scaled 4-bit quantization in 5 nm
B Keller, R Venkatesan, S Dai, SG Tell, B Zimmer, C Sakr, WJ Dally, ...
IEEE Journal of Solid-State Circuits 58 (4), 1129-1141, 2023
Cited by 11 · 2023
Facilitating neural network efficiency
J Choi, K Gopalakrishnan, C Sakr, S Venkataramani, Z Wang
US Patent 11,195,096, 2021
Cited by 6 · 2021
Understanding the energy and precision requirements for online learning
C Sakr, A Patil, S Zhang, Y Kim, N Shanbhag
arXiv preprint arXiv:1607.00669, 2016
Cited by 6 · 2016
Minimum precision requirements for deep learning with biomedical datasets
C Sakr, N Shanbhag
2018 IEEE Biomedical Circuits and Systems Conference (BioCAS), 1-4, 2018
Cited by 3 · 2018
Reducing the energy cost of inference via in-sensor information processing
S Zhang, M Kang, C Sakr, N Shanbhag
arXiv preprint arXiv:1607.00667, 2016
Cited by 3 · 2016