1. INTRODUCTION

Energy security and safety are closely tied to the national economy and people's livelihood and cannot be neglected, being a matter of great national importance. State Grid Corporation of China has firmly shouldered the responsibility and mission of safeguarding national energy security: strengthening planning and overall guidance, fully serving the high-quality development of new energy, accelerating the construction of the ultra-high-voltage and main grids, reinforcing the distribution network, improving the grid's capacity for disaster prevention and resistance as well as its digital and intelligent level, and striving to write a new chapter of high-quality power grid development that better supports and serves the Chinese path to modernization.

2. DISTRIBUTED COMPUTING FOR GRID HIGH-QUALITY DEVELOPMENT

2.1 Distributed computing technology

Data-parallel distributed training of large models is a widely used technique in machine learning and deep learning. With the continuous growth of data volume and model complexity, traditional single-machine training can no longer meet the demand for efficient and fast training, so distributed training has become the standard way to accelerate the training process.

Data parallelism is a strategy that divides a large dataset into smaller sub-datasets and trains on them simultaneously across multiple computing nodes. Each node holds the same model parameters but operates on a different subset of the data. By computing gradients separately on each node and then aggregating them, the global model parameters can be updated quickly. This approach significantly improves training speed and removes the bottleneck of single-machine training.

Distributed training of large-scale models applies this data-parallel idea to models whose network structure and parameter count are so large that they require substantial computing resources and storage. To achieve distributed training of large models, distributed storage systems and computing frameworks are usually needed to support parallel data processing and the training process.

The principle of data parallelism is based on parallel computing: multiple nodes perform computation simultaneously. Each node keeps its own model replica and locally computes the gradient of the data batch it is responsible for. After each mini-batch, gradients are aggregated across nodes and the model parameters are updated with their average or weighted average. This iterative process repeats until the training convergence condition is reached [1-3] (see the code sketch below).

The development and evolution of new energy systems and new power systems exhibit prominent characteristics of long time spans and strong uncertainty. Coordinating a secure energy and power supply with a clean, low-carbon transformation is a complex systematic project, and multi-objective coordination during the transition period is crucial. The new power grid is the core platform for building new power systems and new energy systems; it focuses on the construction of backbone network structures, coordination of power grids at all levels, information-technology-driven development, and digital-intelligent empowerment.
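The following is a minimal, single-process simulation of the data-parallel gradient-averaging scheme described in Section 2.1. It is a sketch for illustration only, not the training framework used in practice: the toy linear model, the number of simulated workers, and all function names are assumptions introduced here.

```python
import numpy as np

# Minimal single-process simulation of data-parallel training (illustrative only).
# K "workers" each hold a shard of the data and a copy of the same parameters;
# per step, each worker computes a local gradient on its shard and the gradients
# are averaged before the shared parameters are updated.

rng = np.random.default_rng(0)
K = 4                                  # number of simulated workers
w_true = np.array([2.0, -3.0, 0.5])    # ground-truth weights of a toy linear model

# Build a toy dataset and split it into K shards (the data-parallel part).
X = rng.normal(size=(4000, 3))
y = X @ w_true + 0.01 * rng.normal(size=4000)
X_shards = np.array_split(X, K)
y_shards = np.array_split(y, K)

def local_gradient(w, X_k, y_k):
    """Mean-squared-error gradient computed locally on one worker's shard."""
    residual = X_k @ w - y_k
    return 2.0 * X_k.T @ residual / len(y_k)

w = np.zeros(3)                        # every worker starts from the same parameters
lr = 0.1
for step in range(200):
    # Each worker computes its gradient independently (in a real system, in parallel).
    grads = [local_gradient(w, X_shards[k], y_shards[k]) for k in range(K)]
    # Aggregate: average the per-worker gradients (an all-reduce in a real framework).
    g = np.mean(grads, axis=0)
    w -= lr * g                        # identical update applied on every worker

print("estimated weights:", np.round(w, 3))   # approximately [ 2.  -3.   0.5]
```

In a production system the averaging step would be carried out by a collective communication primitive (an all-reduce) provided by a distributed training framework rather than by a Python loop, but the structure of the update is the same.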
Continuously consolidating its four foundations is the key to building a strong digital power grid [4,5]. Strengthen the construction of the ultra-high-voltage backbone network, guided by the concept of big energy and based on the energy demand and resource endowment characteristics of high-quality economic and social development; continuously improve the backbone network so that it adapts to the large-scale optimization and flexible scheduling requirements of multiple energy resources; and consolidate the physical foundation of digital power grid construction. Continuously improve the backbone network structure and fully leverage the energy allocation platform value of the ultra-high-voltage power grid. Implement the requirements and deployment of the national plan for developing large-scale wind and photovoltaic bases focused on desert, Gobi, and arid wasteland areas; adhere to the "trinity" of large-scale wind and photovoltaic bases, advanced coal-fired power, and ultra-high-voltage channels; and thoroughly study base transmission and the power grid development pattern. Extend the main grid structure of North China and Northwest China to the desert bases, improve the main grid structure of Northwest and Northeast China, accelerate the construction of the Sichuan-Chongqing ultra-high-voltage AC main grid, and support the safe and efficient operation of cross-regional DC, as depicted in Formulas 1-2.

2.2 Distributed computing technology for the power grid

Before exploring the coupling, coordination, measurement, and evaluation of the digital economy and high-quality development, we first need a deep understanding of the theoretical foundations of both. The digital economy, as a new economic form of the new era, mainly relies on the development and utilization of data resources and on the deep integration and innovative application of information and communication technology. Its core lies in achieving digital, networked, and intelligent development of the economy through advanced technologies such as big data, cloud computing, and the Internet of Things. The digital economy has not only changed the production mode, organizational form, and business model of the traditional economy, but has also promoted the leapfrog development of social productivity, as depicted in Formula 3.

The main elements that constitute a comprehensive evaluation are as follows. The mathematical model for fuzzy comprehensive evaluation is established as follows: 1) Let the triplet (U, V, R) be the comprehensive evaluation space, where U is the factor set, representing the set of evaluation factors in the comprehensive evaluation; V is the rating (comment) set, representing the grades into which the range of variation of the evaluated object is divided; and R is the fuzzy relationship matrix, i.e., the result of single-factor evaluation (the single-factor evaluation matrix), which is also the object of the fuzzy comprehensive evaluation. A small numerical sketch of this model is given at the end of Section 3.

3. DISTRIBUTED COMPUTING SYSTEM MODEL DIAGRAM FOR POWER GRID HIGH-QUALITY DEVELOPMENT

The system consists of a database, a data warehouse, a data warehouse management module, data mining tools, a knowledge base, a knowledge discovery module, and a human-computer interaction module. The main inputs of the system are data from the database and knowledge and experience from the knowledge base. The data warehouse management module creates the data warehouse, performs operations such as data synthesis and extraction within it, and is responsible for the operation of the entire system. Data mining tools [10-15] are used to complete queries, multidimensional data analysis, and data mining in practical decision-making problems. The knowledge discovery module controls and manages the knowledge discovery process, using data input and information from the knowledge base to drive the data selection process, the knowledge discovery engine, and the evaluation of discovered results. The human-computer interaction module provides an integrated interface that connects users and the system through natural language processing and semantic queries.

Data warehouses provide a viable way of organizing data for decision support systems. The data involved in information systems mainly comes from specific daily business data within a department, which can be processed by general databases; this type of data is also known as basic data. The information required for decision-making is the overall trend reflected by the basic data or its variation over time, so the basic data must be classified, extracted, summarized, and processed to obtain this information. Logically, a complete data warehouse is defined by four parts: 1) Warehouse design, responsible for defining and setting up the data warehouse environment. 2) Data acquisition, which extracts and transforms data from external data sources, organizing and storing them in data warehouse form. 3) Data management, which completes data updates, routine warehouse maintenance, and management of distributed data. 4) Data access, which is aimed at end users and provides decision information and analysis reports to decision-makers in the decision support system, as depicted in Fig. 1.

The process of establishing the decision support system can be described as follows: 1) Analyze decision requirements, determine decision themes, and describe and represent the decision problem. 2) Determine the data sources, reorganize the operable data records, databases, or file systems in heterogeneous environments, and establish a data warehouse. 3) Design or select effective data mining algorithms and implement them according to the category of the task to be discovered.
4) Call the data mining function to extract comprehensive data from ordinary historical data, and interact and collaborate with end users to obtain macro-level data and trend knowledge. 5) Test and evaluate the discovered knowledge, and apply consistency and utility processing to it. 6) Based on the requirements of end users, establish an integrated interface and applications suitable for decision support, enabling users to apply the discovered knowledge in decision-making [16,17].

Project success evaluation refers to the comprehensive evaluation of various indicators based on the experience of evaluation experts, or the use of scoring methods to draw qualitative conclusions about the success of the project. The specific approach is to conduct a comprehensive and systematic evaluation, with project goals and benefits as the core, based on the conclusions of the goal-achievement and economic-benefit analyses obtained with the logical framework method.

1. Standards for project success. The success of a project can be divided into five levels:
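To make the fuzzy comprehensive evaluation space (U, V, R) of Section 2.2 concrete, the following is a small numerical sketch. The factor weight vector A, the max-min composition used to combine A with the single-factor evaluation matrix R, and all numerical values below are assumptions introduced for illustration; they are not taken from this paper.

```python
import numpy as np

# Illustrative fuzzy comprehensive evaluation (all values are assumed).
# U: factor set, V: rating set, R: single-factor evaluation matrix,
# A: factor weight vector; B = A o R is the comprehensive evaluation vector.

U = ["supply security", "clean transition", "grid digitalization"]   # evaluation factors
V = ["excellent", "good", "fair", "poor"]                            # rating grades

# R[i][j]: degree to which factor U[i] belongs to grade V[j] (rows sum to 1 here).
R = np.array([
    [0.5, 0.3, 0.2, 0.0],
    [0.2, 0.5, 0.2, 0.1],
    [0.3, 0.4, 0.2, 0.1],
])

A = np.array([0.4, 0.35, 0.25])   # assumed factor weights, summing to 1

# Max-min composition: B[j] = max_i min(A[i], R[i][j]).
B = np.array([np.max(np.minimum(A, R[:, j])) for j in range(len(V))])
B = B / B.sum()                   # normalize so the grade memberships are comparable

for grade, degree in zip(V, B):
    print(f"{grade}: {degree:.3f}")
print("overall grade:", V[int(np.argmax(B))])   # maximum-membership principle
```

A weighted-average operator (B = A @ R) is an equally common alternative to the max-min composition when no single factor should dominate the result.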
4. SCENARIOS FOR POWER SYSTEM DEVELOPMENT WITH DISTRIBUTED COMPUTING

Distributed computing is a method of splitting a large computing task into multiple small computing tasks, distributing these tasks across several machines for computation, and then summarizing the results of each machine to obtain the final result. This computing approach is the opposite of centralized computing, which processes the entire task directly on one or a few computers. The design methods and techniques of distributed computing differ significantly from centralized algorithms, mainly because there are essential differences between the system models and structures of distributed and centralized systems, as depicted in Fig. 2. Specifically, the process of distributed computing can be summarized in the following steps:

1) Task splitting. A large computing task is first split into multiple small computing tasks, which can be processed in parallel to improve computing efficiency.

2) Task allocation. The small tasks are assigned to multiple machines, each responsible for processing a portion of them.

3) Local computing. Each machine performs local computation on the tasks assigned to it; this can be done in parallel to speed up the computation.

4) Result summary. After each machine completes its local calculation, the results are sent back to a central node, or the results of all machines are aggregated over the network, to obtain the solution of the entire task, as depicted in Table 1. A minimal code sketch of these four steps follows Table 1.

Table 1. Computational parameters.
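The following is a minimal sketch of the split / allocate / compute locally / summarize cycle described above, using Python's standard multiprocessing module to stand in for separate machines. The example task (summing squares over a large range), the number of workers, and the function names are assumptions introduced here for illustration.

```python
from multiprocessing import Pool

def local_compute(chunk):
    """Local computing: each worker processes only its own sub-task."""
    lo, hi = chunk
    return sum(i * i for i in range(lo, hi))

def split_task(n, parts):
    """Task splitting: divide the range [0, n) into contiguous chunks."""
    step = n // parts
    return [(k * step, (k + 1) * step if k < parts - 1 else n) for k in range(parts)]

if __name__ == "__main__":
    N, WORKERS = 10_000_000, 4
    chunks = split_task(N, WORKERS)                   # 1) task splitting
    with Pool(WORKERS) as pool:                       # 2) task allocation to workers
        partials = pool.map(local_compute, chunks)    # 3) local computing in parallel
    total = sum(partials)                             # 4) result summary at the central node
    print(total == sum(i * i for i in range(N)))      # sanity check against the serial result
```

In a real grid-analytics deployment, the worker processes would be replaced by separate machines and the in-memory result list by a network aggregation step, but the four-step structure is the same.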
5. CONCLUSION

Distributed computing splits a large computing task into multiple small tasks, distributes them across several machines, and then summarizes the results of each machine to obtain the final result. It is the opposite of centralized computing, which processes the entire task directly on one or a few computers, and its design methods and techniques differ significantly from centralized algorithms because the system models and structures of distributed and centralized systems are essentially different. This process applies not only to numerical calculation but is also widely used in fields such as data processing and machine learning: by utilizing the computing power of multiple computers or servers, it solves large-scale problems that a single computer cannot handle.

6. ACKNOWLEDGMENT

This work was financially supported by the Science and Technology Project of State Grid Economic and Technological Research Institute Co., Ltd.: Research support for key technologies for high-quality development of power grid (No. B3441324F00100ZQ0000003).

REFERENCES
[1] Cambiucci W., Silveira R. M., Ruggiero W. V., "Hypergraphic partitioning of quantum circuits for distributed quantum computing," (2023). https://doi.org/10.1109/QCE57702.2023.10237
[2] Yang K., Sun P., Yang A. S. L., "A novel hierarchical distributed vehicular edge computing framework for supporting intelligent driving," Ad Hoc Networks, 153, 103343 (2024). https://doi.org/10.1016/j.adhoc.2023.103343
[3] Zhang S., Zhang W., Du J., et al., "Distributed reservoir computing based nonlinear equalizer for VCSEL based optical interconnects," Optics Communications, 563, 130574 (2024). https://doi.org/10.1016/j.optcom.2024.130574
[4] F. Y., "A two-level network-on-chip architecture with multicast support," Journal of Parallel and Distributed Computing, 172, 114-130 (2023). https://doi.org/10.1016/j.jpdc.2022.10.011
[5] Guoliang Z., Zexu D., Yi Z., et al., "Real-time performance evaluation and optimization of electrical substation equipment inspection algorithm based on distributed computing," International Journal of Low-Carbon Technologies (2024). https://doi.org/10.1093/ijlct/ctae136
[6] Wu C., Xu Z., He X., et al., "Proactive Caching With Distributed Deep Reinforcement Learning in 6G Cloud-Edge Collaboration Computing," IEEE Transactions on Parallel and Distributed Systems (2024). https://doi.org/10.1109/TPDS.2024.3406027
[7] Li J., Xie Z., Li Y., et al., "Heralded entanglement between error-protected logical qubits for fault-tolerant distributed quantum computing," Science China Physics, Mechanics & Astronomy, (2) (2024). https://doi.org/10.1007/s11433-023-2245-9
[8] Shaheer S., Jayaraj P. B., "Distributed H∞ Method Design and Operation using Distributed Computing," ITM Web of Conferences (2023). https://doi.org/10.1051/itmconf/20235402003
[9] Adelman-McCarthy J., Boccali T., Caspart R., et al., "Extending the distributed computing infrastructure of the CMS experiment with HPC resources," Journal of Physics: Conference Series, 2438, 012039 (2023). https://doi.org/10.1088/1742-6596/2438/1/012039
[10] Li N., "Research on Health Management Information Sharing Mechanism in Distributed Computing Environment," Applied Mathematics and Nonlinear Sciences, 9(1) (2024). https://doi.org/10.2478/amns-2024-2276
[11] Yang J., Sun F., Wang H., "Distributed collaborative optimal economic dispatch of integrated energy system based on edge computing," Energy, 284 (2023).
[12] He J., Zhang D., Liu S., et al., "Managing Information Updating with Edge Computing: A Distributed and Learning Approach," ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2023). https://doi.org/10.1109/ICASSP49357.2023.10095129
[13] Frank S., Hoch E. N., "Distributed file system for virtualized computing clusters," US Patent Application US20230085566A1 (2024).
[14] Cuomo D., Caleffi M., Krsulich K., et al., "Optimized Compiler for Distributed Quantum Computing," ACM Transactions on Quantum Computing (2023). https://doi.org/10.1145/3579367
[15] Jones G. M., Jacobsen H. A., "Distributed Quantum Computing for Chemical Applications," (2024). https://doi.org/10.1109/QCE60285.2024.10270
[16] Sazontev V. V., Stupnikov S. A., "An Extensible Approach to Searching and Selecting Data Sources for Materialized Big Data Integration in Distributed Computing Environments," Pattern Recognition and Image Analysis, 33(2), 147-156 (2023). https://doi.org/10.1134/S1054661823020141
[17] "Demonstrating interactive, large-scale High Energy Physics data analysis workflows in distributed computing environments," (2024).