Event

Embedded Machine Learning [SS222400137]

Type
seminar (S)
Mixed in-person/online
Term
SS 2022
SWS
Language
German/English
Appointments
0
Links
ILIAS

Lecturers

Organisation

  • KIT-Fakultät für Informatik

Part of

Note

In our seminars, students learn about cutting-edge research in the fields presented below. Supervisors offer topics, but students can also suggest their own topics in these fields. The seminar is offered in both English and German.

Machine learning on on-chip systems

Machine learning and on-chip systems form a symbiosis in which each research area benefits from advances in the other. In this seminar, students review cutting-edge research in both areas.

Machine learning (ML) is gaining importance in all aspects of information systems. From high-level algorithms such as image recognition to low-level intelligent CPU management, ML is ubiquitous. On-chip systems benefit from advances in ML techniques; examples include adaptive resource management and workload prediction. Conversely, ML techniques also benefit from advances in on-chip systems, a prominent example being the acceleration of neural networks in recent desktop GPUs and even in smartphone chips.

In this seminar, students will review state-of-the-art research publications on a specific topic related to ML on on-chip systems. The findings will be summarized in a seminar report and presented to the other members of the course. Students are welcome to suggest their own topics, but this is not required. The seminar can be held in English or German.

Approximate Computing for Efficient Machine Learning

Nowadays, energy efficiency is a first-class design constraint in the ICT sector, and approximate computing has emerged as a design paradigm for building energy-efficient computing systems. A large body of resource-hungry applications (e.g., image processing and machine learning) exhibits an intrinsic resilience to errors: their outputs remain useful and of acceptable quality for the users even when the underlying computations are performed approximately. By exploiting this inherent error tolerance, approximate computing trades computational accuracy for savings in other metrics, e.g., energy consumption and performance.

Machine learning, a common and top-trending workload in both data centers and embedded systems, is a perfect candidate for approximate computing since, by definition, it delivers approximate results. Performance as well as energy efficiency (especially in the case of embedded systems) are crucial for machine learning applications, and approximate computing techniques are therefore widely adopted in machine learning hardware (e.g., the TPU) to improve both its energy profile and its performance.
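
As a minimal, hypothetical illustration of this accuracy-for-efficiency trade, the sketch below approximates a dot product by quantizing its operands to 8-bit integers, the kind of reduced-precision arithmetic that accelerators such as the TPU use because integer multiply-accumulate is far cheaper in energy than floating point. The quantization scheme and error bound here are illustrative assumptions, not a description of any specific hardware.

```python
import numpy as np

def quantize(x, bits=8):
    """Map float values to signed integers using a per-tensor scale factor."""
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    return np.round(x / scale).astype(np.int32), scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)  # e.g., neural-network weights
a = rng.standard_normal(1000)  # e.g., activations

exact = float(w @ a)  # full-precision reference result

# Approximate version: quantize both operands, do an integer
# multiply-accumulate, then rescale back to the float domain.
wq, ws = quantize(w)
aq, as_ = quantize(a)
approx = float(wq @ aq) * ws * as_

print(f"exact={exact:.4f}  approx={approx:.4f}  "
      f"abs_err={abs(approx - exact):.4f}")
```

The result is close to the exact value but not identical; error-resilient workloads like neural-network inference tolerate this deviation, which is exactly the property approximate computing exploits.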

Machine Learning methods for DNN compilation and mapping
Deep neural networks (DNNs) have achieved great success in challenging tasks such as image classification and object detection. There is a great demand for deploying these networks on a wide range of devices, from cloud servers to embedded devices.
Mapping DNNs to these devices is a challenging task, since each device has different characteristics in terms of memory organization, compute units, and so on. There have been efforts to automate the process of mapping and compiling DNNs to hardware with such diverse characteristics.
In this seminar, we will discuss work on mapping and compiling DNNs to hardware using machine learning methods.
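
To make the mapping-search problem concrete, the sketch below shows its simplest form for one decision: choosing a tile size for a blocked matrix multiply (a core DNN operator) by measuring each candidate on the target machine. This is an illustrative assumption about how such searches look; ML-based compilers replace the costly measurements with a learned cost model that predicts performance, letting them explore far larger mapping spaces.

```python
import time
import numpy as np

def blocked_matmul(A, B, tile):
    """Matrix multiply computed block by block with a given tile size."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, tile):
        for j in range(0, n, tile):
            for k in range(0, n, tile):
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
                )
    return C

n = 256
A = np.random.rand(n, n)
B = np.random.rand(n, n)

# Naive search: time every candidate mapping and keep the fastest.
timings = {}
for tile in (16, 32, 64, 128):
    t0 = time.perf_counter()
    C = blocked_matmul(A, B, tile)
    timings[tile] = time.perf_counter() - t0

assert np.allclose(C, A @ B)  # every mapping must compute the same result
best = min(timings, key=timings.get)
print("best tile size:", best)
```

All candidate mappings produce the same numerical result; they differ only in performance, which is why the search can be driven purely by a (measured or predicted) cost signal.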