Mendez R, Maier A, Emmert J (2026). Comparison of Input-Data Matrix Representations Used for Continual Learning with Orthogonal Weight Modification on Edge Devices.
Publication Type: Journal article
Publication year: 2026
Journal Volume: 26
Article Number: 425
Journal Issue: 2
DOI: 10.3390/s26020425
Highlights:
What are the main findings?
- The Fisher matrix is suitable as a complexity-reduction approach for continual learning with Orthogonal Weight Modification on edge devices.
- NEig-OWM is better suited for smaller models, typically used in distributed-process optimization approaches such as the Artificial Neural Twin.
What are the implications of the main findings?
- It is possible to deploy an IIoT network of autonomously and continually learning devices.
- The distributed IIoT network can be automatically optimized through backpropagation.
Abstract: The number of industrial processes in which smart devices are employed rises every day; these devices perform tasks related to the automation, digitization, or optimization of the process. Generally, for these tasks, the devices need to communicate with each other and with a central unit monitored by humans, which is where the Industrial Internet of Things (IIoT) comes into play, allowing a network to be built between the devices. Communication might be enough for monitoring purposes, but the optimization and automation of the process have yet to be addressed. In this study, we use an object detection sensor as an initial test subject to explore the Artificial Neural Twin (ANT) as a distributed-process optimization tool in combination with Orthogonal Weight Modification (OWM), a continual learning (CL) method used to augment self-operating devices (i.e., microcontrollers used for machine-vision sensors) with the capacity to learn new tasks autonomously. Some of these devices lack the hardware capacity to run a CL algorithm, which also motivated the comparison of the Fisher matrix, NEig-OWM, and LoRA as matrix approximations to reduce the complexity of the operations between them. Among the compared matrices, we found the Fisher matrix to be the least expensive solution, with a negligible reduction in the model's performance after CL, which makes it a viable option for large AI models, while NEig-OWM is better suited for smaller models that require fewer hardware resources but more control over the CL algorithm.
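Illustrative sketch (not taken from the paper): in Orthogonal Weight Modification, each layer keeps a projector P onto the orthogonal complement of the input subspace of earlier tasks and projects new gradients through it before applying them; the per-layer input-data matrix behind this projector is presumably the object the paper approximates (e.g., with the Fisher matrix) to reduce complexity on edge hardware. The minimal NumPy sketch below assumes the standard recursive OWM projector update; the function and parameter names (make_owm_state, alpha, lr) are hypothetical, and it does not reproduce the Fisher-matrix, NEig-OWM, or LoRA approximations compared in the paper.

import numpy as np

def make_owm_state(n_in, alpha=1e-3):
    # Projector P starts as the identity: no previous-task inputs observed yet.
    return {"P": np.eye(n_in), "alpha": alpha}

def owm_observe_input(state, x):
    # Recursive update so that P projects onto the orthogonal complement
    # of the subspace spanned by the inputs seen so far (old tasks).
    P, alpha = state["P"], state["alpha"]
    x = x.reshape(-1, 1)
    k = P @ x / (alpha + x.T @ P @ x)   # (n_in, 1) gain vector
    state["P"] = P - k @ (x.T @ P)
    return state

def owm_apply(W, grad, state, lr=1e-2):
    # Project the backprop gradient through P before the weight update,
    # so the update (approximately) leaves outputs on old inputs unchanged.
    return W - lr * grad @ state["P"]

# Toy usage: an 8-input, 4-output linear layer that first "sees" an old task.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))
state = make_owm_state(n_in=8)
for _ in range(20):
    state = owm_observe_input(state, rng.normal(size=8))
W_new = owm_apply(W, grad=rng.normal(size=(4, 8)), state=state)

The point of the sketch is only to show where the matrix cost arises: P is n_in x n_in per layer, which is what makes cheaper representations attractive on microcontrollers.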
APA:
Mendez, R., Maier, A., & Emmert, J. (2026). Comparison of Input-Data Matrix Representations Used for Continual Learning with Orthogonal Weight Modification on Edge Devices. Sensors, 26(2). https://doi.org/10.3390/s26020425
MLA:
Mendez, Ronald, Andreas Maier, and Johannes Emmert. "Comparison of Input-Data Matrix Representations Used for Continual Learning with Orthogonal Weight Modification on Edge Devices." Sensors 26.2 (2026).