People doing Machine Learning [ML] experiments on their own Linux PCs or laptops know that numerical training runs put a heavy load on the graphics card and, as a direct consequence, consume a lot of energy. Especially in a hot summer like the one we are currently having in Germany, cooling your systems may become a problem. And as energy carries a high price tag here, any method to reduce the load and/or the power consumption is welcome.
But I think that caring about energy consumption is a topic which we as Linux and ML enthusiasts should keep in mind in general. Some big tech companies will probably not care, as long as their money machinery keeps working and some heads chase fantasies of building small nuclear power plants for their big AI data centers. But we Open Source people would like to see more AI and ML services become independent of the monopolists and their infrastructure anyway, and not only for reasons of data and privacy protection.
However, as soon as we proclaim and work for a development that favors local, resource-optimized installations of AI and ML tools, both for private people and for companies, we have to care about side effects: in parallel, we must substantially bring down the energy consumption of these many local installations. Otherwise, centralized solutions may end up with a better energy efficiency than decentralized ones.
For me as a retired person in Germany, the general financial pressure is high enough to enforce a careful use of my private resources. With this post I want to draw your attention to two points which may help you, too, to save energy during your ML experiments. These come in addition to standard measures like saving certain model states (checkpoints) during training runs to get better starting points for new runs.
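As a side note, such checkpointing is cheap to set up. Below is a minimal sketch, assuming a Keras/TensorFlow setup; the toy model, data, and file path are placeholders of my own choosing, not anything specific to your training pipeline:

```python
import numpy as np
import tensorflow as tf

# Toy data and model as stand-ins for a real training setup.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = np.random.randint(0, 2, size=(1000, 1))
x_val = np.random.rand(200, 20).astype("float32")
y_val = np.random.randint(0, 2, size=(200, 1))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Save the weights of the best epoch seen so far; a later run can
# resume from this state instead of training from scratch.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="best_model.keras",  # hypothetical path
    monitor="val_loss",
    save_best_only=True,
)

model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=10,
          callbacks=[checkpoint_cb])

# In a new run: reload the saved state as an improved starting point,
# which shortens the remaining training time and thus saves energy.
model = tf.keras.models.load_model("best_model.keras")
```

The point of `save_best_only=True` is that only states which improve the monitored validation loss get written to disk, so a later run always restarts from the best configuration reached so far rather than from random initial weights.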