High increase in energy efficiency possible for AI applications
05.04.2024
Source: Energy & Management Powernews
Energy savings of between 31 and 65 percent were identified by the Future Energy Lab of the German Energy Agency (Dena) in the "Energy-efficient artificial intelligence" pilot project.
The Future Energy Lab of the German Energy Agency (Dena) in Berlin investigated how energy efficiency can be increased when training and using artificial intelligence (AI). The focus was on image-classifying AI applications, which are trained to recognize certain objects in images. AI is computationally demanding and is being used in more and more areas.
According to a scientific study at the University of Amsterdam, the use of AI in standard search engines alone could consume up to 29.2 billion kWh of electricity per year, an amount equivalent to Ireland's current annual consumption. According to Dena, it is therefore all the more important to exploit the potential of AI while keeping its ecological footprint within acceptable limits.
Two options investigated
The Future Energy Lab, in collaboration with the Fraunhofer Heinrich Hertz Institute (HHI), investigated two approaches to increasing the energy efficiency of common AI applications: first, training an AI with the federated learning method, and second, running an AI on a new type of interconnection of special computer chips for use in data centers.
Federated learning refers to an AI training method in which the AI model is not trained on a central server but on various end devices. In terms of data protection, this has the advantage that the training data remains with its respective owners. A small AI model is trained on each end device and then sent to a central server, where the models are merged into one large model.
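The pattern described above can be sketched in a few lines. This is a minimal illustration with made-up weights and gradients, not the project's actual training setup: each device updates its own copy of the model, and only those small models (never the raw data) travel to the server, which merges them by averaging.

```python
# Minimal federated-averaging sketch. Model weights and gradients here
# are hypothetical values chosen purely for illustration.

def local_update(weights, gradient, lr=0.1):
    """Simulate one local training step on an end device."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_models):
    """Server side: merge per-device models by element-wise averaging."""
    n = len(client_models)
    return [sum(ws) / n for ws in zip(*client_models)]

# Three devices start from the same global model and train locally
# on their private data; only the resulting models are uploaded.
global_model = [0.0, 0.0]
gradients = [[1.0, -2.0], [3.0, 0.0], [-1.0, 2.0]]
client_models = [local_update(global_model, g) for g in gradients]

new_global = federated_average(client_models)
print([round(w, 3) for w in new_global])
```

In practice each device would run many local steps on its own dataset before uploading, and the averaging round would repeat until the global model converges.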
Sending the models to the central server consumes electricity. In the project, various model compression methods that reduce the volume of data to be sent were examined for their energy efficiency. A compression method was identified that can reduce the power consumption of the training process by 65 percent.
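The study does not name the specific compression method here, but the underlying idea can be illustrated with one common technique: quantizing 32-bit float weights to 8-bit integers before upload shrinks the payload roughly fourfold. The weights below are hypothetical.

```python
# Hypothetical sketch of compressing a model update before upload:
# 8-bit uniform quantization. This is one common method, not necessarily
# the one identified in the Dena/HHI project.
import struct

def quantize(weights, bits=8):
    """Map floats onto a uniform integer grid over [min, max]."""
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = [round((w - lo) / scale) for w in weights]
    return q, lo, scale

def dequantize(q, lo, scale):
    """Server side: recover approximate float weights."""
    return [lo + i * scale for i in q]

weights = [0.12, -0.5, 0.33, 0.0, 0.9]
q, lo, scale = quantize(weights)

full_size = len(struct.pack(f"{len(weights)}f", *weights))  # 4 bytes/weight
compressed_size = len(bytes(q))                             # 1 byte/weight
print(full_size, compressed_size)

restored = dequantize(q, lo, scale)
```

Sending one byte per weight instead of four cuts the transmission volume, and hence the transmission energy, at the cost of a small, bounded rounding error in the recovered weights.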
Optimal chipset saves power
Various computer chips are particularly suitable for running AI. So-called field-programmable gate arrays (FPGAs) are especially well suited to image-processing AI applications. The project succeeded in using a new hardware architecture to interconnect the chips in such a way that less computing power is required to control and coordinate them. This reduced the power consumption of the AI applications investigated by 31 percent.
The results are summarized in the study "Energy-efficient artificial intelligence for a climate-friendly future", so that interested parties, AI developers and data center operators can view and apply them. The project was funded by the Federal Ministry for Economic Affairs and Climate Protection (BMWK). According to Dena, the results are also suitable as a basis for the development of energy efficiency standards for AI applications, similar to the energy efficiency standards for electrical appliances.
The study "Energy-efficient artificial intelligence" is available online.
Author: Susanne Harmsen