Deep Compressive Offloading: Speeding up Neural Network Inference by Trading Edge Computation for Network Latency

Publication
Proceedings of the 18th Conference on Embedded Networked Sensor Systems