Congratulations to the group of CTTT2021 and CTTT2020 students and an HTTT (Information Systems) graduate student on their paper at the Multi-Disciplinary International Conference on Artificial Intelligence (MIWAI 2025)
The Multi-Disciplinary International Conference on Artificial Intelligence (MIWAI), formerly called The Multi-Disciplinary International Workshop on Artificial Intelligence, is a well-established scientific venue in the field of Artificial Intelligence. MIWAI was established more than 17 years ago with the aim of being a meeting place where excellence in AI research meets the need to solve dynamic and complex real-world problems.
MIWAI focuses on multidisciplinary applications of AI in real-world problems such as control, planning and scheduling, pattern recognition, knowledge mining, software applications, and strategy games. The conference provides opportunities for academic researchers, developers, and industrial practitioners to present their work, share experiences, and exchange ideas.
The main purposes of MIWAI are:
- To provide a meeting place for AI researchers and practitioners.
- To inform research students about cutting-edge AI research via the presence of outstanding international invited speakers.
- To raise the standards of practice of AI research by providing researchers and students with feedback from an internationally-renowned program committee.
This year, MIWAI 2025 will be held in Ho Chi Minh City from December 3–5, 2025.
Conference website: https://miwai25.miwai.org/
Paper title: “Distilling Temporal Knowledge into a Spatially Efficient Network for Fire Segmentation: An Approach involving Kolmogorov-Arnold Networks”
Student authors:
- 21521911, Lê Bá Đắc, CTTT2021
- 21521531, Nguyễn Thanh Quỳnh Tiên, CTTT2021
- 20521175, Phạm Thành Đạt, CTTT2020
- 220104018, Nguyễn Minh Nhựt, Information Systems (HTTT) master's student, 2022 cohort.
Supervisor: Assoc. Prof. Dr. Nguyễn Đình Thuân
Abstract: Early and accurate fire detection is important, making fire detection systems essential for reducing damage and ensuring timely response. This work focuses on the use of semantic segmentation for accurate fire detection while meeting the real-time performance requirement. We employ a teacher-student framework, where the teacher model involves MobileNetV2 as a lightweight pretrained encoder, Kolmogorov-Arnold Networks (KAN) for adaptive representation learning, and a Long Short-Term Memory (LSTM) module for temporal awareness, all within a U-Net framework; and the same model, excluding the LSTM module, serves as the student to enable efficient deployment. We also create a new dataset with 1,723 labeled fire images extracted from 57 videos and 605 non-fire images, all collected from diverse scenes and sources. Assessed on the Chino et al. [2] dataset, the proposed architecture proves its leading performance compared to previous methods, reaching 81.63 in Mean IoU. Additionally, real-time detection also comes into play where it operates at 147.02 frames per second (FPS). This study reveals that through effective knowledge distillation, a lightweight student model achieves superior performance while being dramatically more efficient than its teacher, requiring 94% fewer parameters (0.73M vs 12.61M) and 97% fewer MFLOPs (1349.30 vs 51407.73).
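To illustrate the teacher-student idea mentioned in the abstract, the sketch below shows a standard (Hinton-style) knowledge-distillation objective in NumPy: the teacher's per-pixel logits are softened with a temperature T, and the student is trained to match the resulting distribution. This is only a generic illustration of the technique, not the paper's actual loss; all function names and the toy logits are hypothetical.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    class distributions, averaged over pixels (generic KD objective,
    not the loss from the paper). The T*T factor is the usual gradient
    rescaling used with temperature-softened targets."""
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kl = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1)
    return float(kl.mean()) * T * T

# Toy per-pixel logits, shape (num_pixels, num_classes=2): fire / non-fire.
teacher = np.array([[2.0, -1.0], [0.5, 0.5]])
student = np.array([[1.5, -0.5], [0.4, 0.6]])
loss = distillation_loss(student, teacher)
```

In practice this distillation term is combined with an ordinary supervised segmentation loss on the ground-truth masks, so the student learns both from the labels and from the teacher's softened predictions.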