
Event

Embedded Machine Learning [WS212400137]

Type
seminar (S)
On-site/online blended
Term
WS 21/22
SWS
Language
German/English
Appointments
0
Links
ILIAS

Lecturers

Organisation

  • KIT-Fakultät für Informatik

Part of

Note

In our seminars, students learn about cutting-edge research in the research fields presented below. Students are offered topics by the supervisors, but can also suggest their own topics in these fields. The seminar is offered in both English and German.

Machine Learning on On-Chip Systems

Machine learning and on-chip systems form a symbiosis in which each research area benefits from advances in the other. In this seminar, students review cutting-edge research in both areas.

Machine learning (ML) is gaining importance in all aspects of information systems. From high-level algorithms such as image recognition down to low-level intelligent CPU management, ML is ubiquitous. On-chip systems benefit from advances in ML techniques; examples include adaptive resource management and workload prediction. Conversely, ML techniques also benefit from advances in on-chip systems, a prominent example being the acceleration of neural networks in recent desktop GPUs and even smartphone chips.

In this seminar, students will review state-of-the-art research publications on a specific topic related to ML on on-chip systems. The findings will be summarized in a seminar report and presented to the other members of the course. Students are welcome to suggest their own topics, but this is not required. The seminar can be held in English or German.

DNN Pruning and Quantization
As DNNs become more computationally demanding, their hardware implementation becomes more challenging, since embedded devices have limited resources. DNN compression techniques such as pruning and quantization can be applied for efficient utilization of computational resources. While pruning removes unimportant elements of a DNN's structure (connections, filters, channels, etc.), quantization decreases the precision used to represent DNN-related tensors (weights and activations). Both promise to trade off some of the application's accuracy for lower energy consumption and a reduced memory footprint. Students will review state-of-the-art research works on hardware-aware DNN pruning and quantization. The findings will be summarized in a seminar report and presented to the other members of the course.
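The two compression techniques can be illustrated on a single weight tensor. Below is a minimal NumPy sketch, assuming unstructured magnitude-based pruning and symmetric per-tensor uniform quantization; the function names and parameters are illustrative, not part of any specific paper or library covered in the seminar.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def uniform_quantize(weights, bits=8):
    """Symmetric per-tensor quantization of float weights to signed ints."""
    qmax = 2 ** (bits - 1) - 1               # e.g. 127 for 8-bit
    scale = np.max(np.abs(weights)) / qmax   # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale                          # dequantize via q * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.5)    # at least half the entries are zero
q, scale = uniform_quantize(pruned, bits=8)  # 8-bit integers plus one float scale
```

In practice, hardware-aware variants of both steps (structured pruning of whole filters or channels, per-channel scales, quantization-aware training) are what the reviewed publications focus on; the sketch only shows the basic idea.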