Using Reinforcement Learning to Reduce Energy Consumption of Ultra-Dense Networks With 5G Use Cases Requirements
dc.contributor.author | Malta, S.
dc.contributor.author | Pinto, P.
dc.contributor.author | Fernandez-Veiga, M.
dc.date.accessioned | 2023-08-10T09:53:10Z
dc.date.available | 2023-08-10T09:53:10Z
dc.date.issued | 2023
dc.description | [Policy Implication] Using machine learning, this work shows that it is possible to optimize energy consumption through cognitive management of network elements. Reinforcement learning enables policies in which sleep mode techniques gradually deactivate or activate components of base stations. In this work, a sleep mode management scheme is proposed that uses specific metrics to find the best tradeoff between energy reduction and Quality of Service constraints. The simulation results show that, depending on the target 5G use case, in low traffic load scenarios and when a reduction in energy consumption is preferred over QoS, energy savings of up to 80% are achievable.
dc.description.abstract | In mobile networks, 5G Ultra-Dense Networks (UDNs) have emerged because they effectively increase network capacity through cell splitting and densification. A Base Station (BS) is a fixed transceiver that is the main communication point for one or more wireless mobile client devices. As UDNs are densely deployed, the number of BSs and communication links is high, raising resource management concerns regarding energy efficiency, since BSs account for much of the total energy cost of a cellular network. Next-generation 6G mobile networks are expected to include technologies such as artificial intelligence as a service and to focus on energy efficiency. Using machine learning, it is possible to optimize energy consumption with cognitive management of the dormant, inactive, and active states of network elements. Reinforcement learning enables policies in which sleep mode techniques gradually deactivate or activate components of BSs and thereby decrease BS energy consumption. In this work, a sleep mode management scheme based on State Action Reward State Action (SARSA) is proposed, which uses specific metrics to find the best tradeoff between energy reduction and Quality of Service (QoS) constraints. The simulation results show that, depending on the target 5G use case, in low traffic load scenarios and when a reduction in energy consumption is preferred over QoS, it is possible to achieve energy savings of up to 80% with 50 ms latency, 75% with 20 ms and 10 ms latencies, and 20% with 1 ms latency. If QoS is preferred instead, the energy savings reach a maximum of 5% with minimal impact in terms of latency.
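As a reading aid, the sketch below illustrates the kind of SARSA-based sleep mode controller the abstract describes: an on-policy agent that chooses sleep-mode transitions for a BS and is rewarded for energy savings minus a latency (QoS) penalty. The state and action definitions, the tradeoff weight W_ENERGY, and the reward shape are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import random
from collections import defaultdict

# Hypothetical, simplified model of SARSA-driven sleep mode management.
# States, actions, and the reward weighting are assumptions, not the
# authors' exact setup.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # learning rate, discount, exploration
ACTIONS = ["deeper_sleep", "hold", "wake_up"]  # assumed sleep-mode transitions
W_ENERGY = 0.8  # assumed tradeoff weight: >0.5 prefers energy savings over QoS

def reward(energy_saved, latency_ms, latency_budget_ms):
    """Trade normalized energy savings against a latency (QoS) penalty."""
    qos_penalty = max(0.0, latency_ms - latency_budget_ms) / latency_budget_ms
    return W_ENERGY * energy_saved - (1.0 - W_ENERGY) * qos_penalty

Q = defaultdict(float)  # Q-values keyed by (state, action)

def epsilon_greedy(state):
    """Explore with probability EPSILON, otherwise act greedily."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_step(state, action, r, next_state):
    """On-policy update: Q(s,a) += alpha * (r + gamma * Q(s',a') - Q(s,a))."""
    next_action = epsilon_greedy(next_state)
    td_target = r + GAMMA * Q[(next_state, next_action)]
    Q[(state, action)] += ALPHA * (td_target - Q[(state, action)])
    return next_action  # SARSA executes the action it just evaluated
```

In such a setup, tightening the latency budget (e.g., 1 ms versus 50 ms) makes the QoS penalty dominate the reward sooner, which is consistent with the abstract's trend of smaller achievable energy savings at stricter latency targets.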
dc.identifier.other | 10.1109/ACCESS.2023.3236980
dc.identifier.uri | https://repositorio.inesctec.pt/handle/123456789/14324
dc.language.iso | en
dc.publisher | IEEE Access
dc.title | Using Reinforcement Learning to Reduce Energy Consumption of Ultra-Dense Networks With 5G Use Cases Requirements
dc.type | Article
Files

Original bundle
- Name: Malta_2023_ReiforcementLearning_IEEEAccess.pdf
- Size: 2.26 MB
- Format: Adobe Portable Document Format

License bundle
- Name: license.txt
- Size: 1.59 KB
- Format: Item-specific license agreed upon to submission