Today, the Internet of Things (IoT) is hardly a term reserved for experts. The number of IoT devices is constantly increasing, and with it the resulting network traffic. This traffic incurs a significant cost: network bandwidth is a valuable public good, and wireless communication consumes IoT devices' limited energy. By leveraging machine learning (ML) models for data prediction, we can reduce the amount of communication required.
However, data distributions can change over time, and ML models must adapt to maintain their performance. Continual learning is an emerging research area targeting such long-term ML deployment scenarios. Still, model training and deployment are expensive and involve a non-trivial cost-performance trade-off.
This thesis examines in detail the scenario of continual learning for data reduction via data prediction. Using a simulation framework we implement, we show how parameter configurations affect the required communication and the achieved accuracy across various application scenarios.