Maxwell Strange, an undergraduate researcher in our group, will be joining the PhD program at Stanford. We wish you all the best in your future endeavors, Max! We hope to see you often at future conferences.
With support from the WARF Accelerator Program, Prof. Li's latest project is developing a deep learning accelerator in the cloud. The goal: faster, smarter, and more energy-efficient systems for deep learning, with applications such as improved speech recognition.
Jialiang Zhang, Soroosh Khoram and Jing Li, “Efficient Large-scale Approximate Nearest Neighbor Search on the OpenCL-FPGA”, Conference on Computer Vision and Pattern Recognition (CVPR), 2018
Soroosh Khoram and Jing Li, “Adaptive Quantization of Neural Networks”, International Conference on Learning Representations (ICLR), 2018
Prof. Li’s project, entitled “Associative In-Memory Graph Processing Paradigm: Towards Tera-TEPS Graph Traversal In a Box”, was recommended for a 2018 NSF CAREER Award.
Soroosh Khoram, Yue Zha and Jing Li, “An Alternative Analytical Approach to Associative Processing,” in IEEE Computer Architecture Letters.
Yue Zha and Jing Li, “Liquid Silicon: A data-centric reconfigurable architecture enabled by RRAM technology”, FPGA’18
Jialiang Zhang and Jing Li, “Degree-aware hybrid graph traversal on FPGA-HMC platform”, FPGA’18
Soroosh Khoram, Jialiang Zhang, and Jing Li, “Accelerating graph analytics by co-optimizing storage and access on an FPGA-HMC platform”, FPGA’18
Yue Zha and Jing Li, “Liquid Silicon-Monona: A reconfigurable memory-oriented computing fabric with scalable multi-context support,” ASPLOS’18 (Acceptance Rate: 18.2%, 56 out of 307)