
Program Overview

09:00-09:30   Wednesday: Registration | Thursday & Friday: Tutorial Part 1, "LLMs at the Edge: Hype or Hope?"
09:30-09:45   Wednesday: Conference Opening
09:45-10:30   Wednesday: Opening Talk, Prof. Ronald Tetzlaff
10:30-11:00   All days: Networking Coffee Break
11:00-12:15   Wednesday: Industrial Invited Talk, Dr. Andrea Redaelli | Thursday: Keynote, Prof. Wei Lu | Friday: Keynote, Prof. Thomas Mikolajick
12:15-13:15   All days: Lunch
13:15-15:15   Wednesday: Session 1, Performance at Scale | Thursday: Session 3, Domain Specialization | Friday: Session 5, In-Memory Operations
15:15-15:45   All days: Networking Coffee Break
15:45-17:15   Wednesday: Session 2, Edge Computing | Thursday: Session 4, Modeling | Friday: Session 6, AI at Scale
17:15-17:30   Friday: Closing
17:30-18:30   Wednesday & Thursday: Break
18:30-20:00   Wednesday: Welcome Dinner | Thursday: Gala Dinner

Session Details

Session 1: Performance at Scale (Oct. 08)
13:15  ID 81   Area-Efficient Heterogeneous MRAM for High-Performing AI Acceleration (Long Paper)
13:40  ID 35   SPIDER: A Sparsity-Aware High-Density Compute-in-ROM Architecture for Large-Scale Neural Networks On-Chip Deployment (Short Paper)
14:00  ID 59   Exploiting LPDDR6 Metadata to Cache Byte-Addressable Non-Volatile Main Memories (Short Paper)
14:20  ID 63   CMOS probabilistic computer using voltage-controlled magnetic tunnel junctions as its entropy source (Extended Abstract)
14:25  ID 82   Boosting Memory Throughput with a Strided Access Pattern on Disruptive Memory Systems (Extended Abstract)
14:30  ID 134  rMMU: Disaggregating Virtual Memory (Extended Abstract)
14:35  Q&A

Session 2: Edge Computing (Oct. 08)
15:45  ID 28   A Blueprint for Accurate, Energy-Efficient DNN Inference via Capacitive In-Memory Computing (Long Paper)
16:10  ID 21   A 28nm 26.9 Mb/mm² x 43.1 TOPS/W Fully Digital Task-Flexible Compute-in-ROM/SRAM Macro for Energy-Efficient Edge AI Inference (Short Paper)
16:30  ID 75   APX-DREAM-CIM: An Approximate Digital SRAM-based CIM Accelerator for Edge AI (Short Paper)
16:50  ID 88   HW/SW Co-Design Methodology for Near-Memory Computing with TensorFlow Lite Integration (Extended Abstract)
16:55  ID 115  Bit-Flip-Aware Regularization for Enhancing Fault Resilience in Deep Neural Networks (Extended Abstract)
17:00  Q&A

Session 3: Domain Specialization (Oct. 09)
13:15  ID 45   CryptoSRAM: Enabling High-Throughput Cryptography on MCUs via In-SRAM Computing (Long Paper)
13:40  ID 30   Energy-convergence trade off for the training of neural networks on bio-inspired hardware (Short Paper)
14:00  ID 41   KSPiM: 65nm Processing-near-Memory State Space based Accelerator for Keyword Spotting (Short Paper)
14:20  ID 37   Hardware-Software Co-Design of Iterative Filter-Update Numerical Methods Using Processing-In-Memory (Extended Abstract)
14:25  ID 73   Extended-variable probabilistic computing with p-dits (Extended Abstract)
14:30  ID 80   GAPiM: Discovering Genetic Variations on a Real Processing-in-Memory System (Extended Abstract)
14:35  Q&A

Session 4: Modeling (Oct. 09)
15:45  ID 29   Overhead Prediction for PIM-Enabled Applications with Performance-Aware Behaviour Models (Short Paper)
16:05  ID 64   Performance and Power Analysis of LPDDR6 (Short Paper)
16:25  ID 96   Design Space Exploration of a Direct Cached Memory Access Controller Optimized for HBM Memory Systems using TAPRE-HBM (Short Paper)
16:45  ID 56   Bridging Ideal and Real: Toward a Realistic Behavioral Model of Memristors (Extended Abstract)
16:50  ID 132  A Compact Memristor Model for Hafnium-based 1T1R ReRAM Devices (Extended Abstract)
16:55  Q&A

Session 5: In-Memory Operations (Oct. 10)
13:15  ID 98   CIMple: Standard-cell SRAM-based CIM with LUT-based split softmax for attention acceleration (Long Paper)
13:40  ID 02   An Efficient Robust Serial IMPLY-based In-Memristor Adder (Short Paper)
14:00  ID 91   Fast and Scalable MAGIC-Based Wallace Tree Multiplier for In-Memory Computing (Short Paper)
14:20  ID 55   Fast and Energy-Efficient Approximate Memristive Multipliers (Extended Abstract)
14:25  ID 68   ISPA: In-Situ Processing within Associative Processor for Energy-Efficient Computations (Extended Abstract)
14:30  Q&A

Session 6: AI at Scale (Oct. 10)
15:45  ID 07   PiC-BNN: A 128-kbit 65nm Processing-in-CAM-Based End-to-End Binary Neural Network Accelerator (Short Paper)
16:05  ID 72   Efficient In-Memory Acceleration of Sparse Block Diagonal LLMs (Short Paper)
16:25  ID 65   UPMEM Unleashed: The Road to High-Performance and Adaptive PIM Research (Extended Abstract)
16:30  ID 105  Forward-Forward Learning on RRAM: Algorithm and Low-Voltage Reset Co-Optimization (Extended Abstract)
16:35  ID 148  The Logarithmic Memristor-Based Bayesian Machine (Extended Abstract)
16:40  ID 150  A Reconfigurable Complete V/R-R Logic Scheme Based on Binary Memristors (Extended Abstract)
16:45  Q&A

** Authors should upload their presentations and set up their Q&A stand with any supporting materials (posters, printouts, etc.) during the morning coffee break of their presentation day.

Our Sponsors

We thank our sponsors for making CCMCC possible.

Technical Sponsors

IEEE
CEDA
CAS

Financial Sponsors

Special Sponsor: DFG
Platinum: witmem
Silver: CDG, ST
Bronze: spinncloud