Hardware meets AI!
Topics often include:
- faster ways to do things (better algorithms)
- faster ways to implement things (better hardware and systems)
- combinations of the above
Schedule and reading list:
- 2/10 - Geoff Hinton
- 2/17 - Karthik
- 2/24 - Prof. Deming Chen (UIUC)
- assignment 1
- Pruning
    - [x] [1] Lee, N., Ajanthan, T. and Torr, P.H., 2018. SNIP: Single-shot network pruning based on connection sensitivity. arXiv preprint arXiv:1810.02340.
- [x] [2] Wang, C., Zhang, G. and Grosse, R., 2020. Picking winning tickets before training by preserving gradient flow. arXiv preprint arXiv:2002.07376.
- [x] [3] Tanaka, H., Kunin, D., Yamins, D.L. and Ganguli, S., 2020. Pruning neural networks without any data by iteratively conserving synaptic flow. arXiv preprint arXiv:2006.05467.
    - [x] [4] Frankle, J., Dziugaite, G.K., Roy, D.M. and Carbin, M., 2020. Pruning neural networks at initialization: Why are we missing the mark? arXiv preprint arXiv:2009.08576.
- [x] Bloom Filter: [1] Dai, Z. and Shrivastava, A., 2019. Adaptive learned Bloom filter (Ada-BF): Efficient utilization of the classifier. arXiv preprint arXiv:1910.09131.
- 3/3 - Jonathan Frankle (Mosaic)
- 3/10 - Dr. Safeen Huda (Google Brain)
- 3/24 - Danqi
- 3/31 - Entropy-Learned Hashing & Learnable Caching
- 4/7 - Mixture of Experts & DLRM Scaling
- 4/14 - TinyML and DataMUX
- 4/21 - Snorkel / weakly-supervised learning
- 4/28 - Federated Learning
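
For the pruning readings, a minimal numpy sketch of the saliency criterion from SNIP ([1] under Pruning): score each connection by |w · dL/dw| on one minibatch, keep the top fraction, and zero the rest before training. The helper name `snip_mask` and the toy weights/gradients are illustrative, not from the paper.

```python
import numpy as np

def snip_mask(weights, grads, sparsity):
    """Single-shot pruning mask in the spirit of SNIP (Lee et al., 2018):
    rank connections by saliency |w * dL/dw| from one minibatch, keep the
    top (1 - sparsity) fraction of connections, and zero the rest."""
    saliency = np.abs(weights * grads).ravel()
    k = int(round((1.0 - sparsity) * saliency.size))  # connections to keep
    if k == 0:
        return np.zeros_like(weights)
    threshold = np.partition(saliency, -k)[-k]        # k-th largest saliency
    return (np.abs(weights * grads) >= threshold).astype(weights.dtype)

# toy example: prune 75% of a 2x4 weight matrix
w = np.array([[0.5, -0.1, 0.3, 0.05],
              [-0.4, 0.2, 0.01, 0.6]])
g = np.ones_like(w)  # stand-in for a real minibatch gradient
mask = snip_mask(w, g, sparsity=0.75)  # keeps only the 2 largest |w * g|
```

GraSP [2] and SynFlow [3] change only the saliency score (gradient flow and iteratively conserved synaptic flow, respectively); the top-k masking step stays the same.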
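
For the Bloom filter reading, a sketch of the learned-Bloom-filter idea that Ada-BF [1] builds on: a classifier score gates membership queries, and a small backup Bloom filter stores the classifier's false negatives so the structure still never misses a true key. All class names, parameters, and the placeholder score function below are mine; Ada-BF itself goes further, adaptively tuning thresholds and hash counts per score region.

```python
import hashlib

class BloomFilter:
    """Standard Bloom filter: k hash positions over an m-bit array.
    Queries may give false positives, never false negatives."""
    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits)  # one byte per bit, for simplicity

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = 1

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))


class LearnedBloomFilter:
    """Learned Bloom filter: score_fn stands in for a trained classifier.
    High-score queries are answered directly; keys the classifier would
    miss go into a backup Bloom filter, preserving zero false negatives."""
    def __init__(self, score_fn, threshold, keys, m_bits=1024, k_hashes=3):
        self.score_fn, self.t = score_fn, threshold
        self.backup = BloomFilter(m_bits, k_hashes)
        for key in keys:
            if score_fn(key) < threshold:  # classifier false negative
                self.backup.add(key)

    def __contains__(self, item):
        return self.score_fn(item) >= self.t or item in self.backup

# toy usage: a fake score that grows with the key value
keys = [2, 4, 6, 8, 10]
lbf = LearnedBloomFilter(lambda x: x / 10.0, threshold=0.5, keys=keys)
assert all(k in lbf for k in keys)  # no false negatives by construction
```

The design point worth noticing: a better classifier shrinks the backup filter (fewer false negatives to store), which is where the memory savings over a plain Bloom filter come from.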