Baidu Researchers Propose ‘HETERPS’ for Distributed Deep Learning with Reinforcement Learning-Based Scheduling in Heterogeneous Environments


Deep neural networks (DNNs) have achieved great success in a variety of fields, including advertising systems, computer vision, and natural language processing. Large models with many layers, neurons, and parameters are typically trained on massive amounts of data, which greatly increases the final accuracy. For example, Click-Through Rate (CTR) prediction models, BERT, and ERNIE use many parameters; BERT alone uses between 110 million and 340 million parameters. Large models often contain layers that are both data-intensive and compute-intensive. CTR models, for instance, deal with highly dimensional input data.

The input data is high-dimensional and contains many sparse features. Only a small fraction of the data is non-zero, and this fraction is processed through an embedding layer to produce a low-dimensional embedding, referred to as light features. The embedding layer handles enormous volumes of data, such as 10 TB or even more, resulting in high input/output (IO) costs and data-intensive processing. Other deep neural network layers, such as fully connected layers, are instead computationally expensive to train because of their high computational requirements. For the distributed training of large-scale DNN models, it is therefore essential to make full use of heterogeneous computing resources, as processing units such as CPUs, various types of GPUs, and AI processors become increasingly heterogeneous.
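To make the data-intensive nature of the embedding layer concrete, here is a minimal NumPy sketch (not from the paper; the table size and embedding dimension are made-up assumptions) of how a sparse, high-dimensional input is reduced to a low-dimensional dense embedding. The arithmetic itself is cheap; the cost comes from the size of the table being read.

```python
import numpy as np

# Hypothetical sizes for illustration only: a real CTR embedding table
# can hold billions of rows and reach terabytes in size.
VOCAB_SIZE = 1_000_000     # number of distinct sparse feature ids
EMBED_DIM = 16             # low-dimensional dense embedding

rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(VOCAB_SIZE, EMBED_DIM)).astype(np.float32)

def embed(sparse_ids: list[int]) -> np.ndarray:
    """Look up the few non-zero feature ids and average their embeddings.

    The computation is trivial (a gather plus a mean), so the cost is
    dominated by reading rows of the huge table, i.e. it is IO-bound.
    """
    rows = embedding_table[np.asarray(sparse_ids)]   # gather: data-intensive
    return rows.mean(axis=0)                         # compute: negligible

# One sample activates only a handful of the million possible features.
sample = [42, 137, 999_999]
print(embed(sample).shape)   # (16,)
```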

Data-intensive tasks are better suited to some computing resources, such as CPUs, while compute-intensive tasks are better suited to others, such as GPUs. For distributed training in this setting, the scheduling of tasks onto the different computing resources is critical. Although scheduling is a classical NP-hard problem, some simple solutions already exist. For example, the first layer in this study, which typically handles large volumes of data, could be scheduled to CPUs, while the remaining layers could be scheduled to GPUs. This approach may not work for other DNN structures, since not all DNN models share the same architecture. Genetic and greedy algorithms can be directly applied to solve the layer scheduling problem, but they may fall into a local optimum, which corresponds to high cost. Additionally, Bayesian Optimization (BO)-based scheduling can be used as a black-box optimization technique; however, BO can suffer from significant variance, which often translates to high cost. While pipeline parallelism is emerging as a promising way to handle large DNN models, data parallelism is commonly used to parallelize the training of large-scale DNN models. Parallelism can speed up the training process once the tasks are assigned to the appropriate heterogeneous computing resources.
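The simple baseline mentioned above, sending data-intensive layers to CPUs and compute-intensive layers to GPUs, can be captured in a few lines. This is a hedged illustration of that heuristic only, not the paper's scheduler, and the layer names and labels are assumptions:

```python
# A toy version of the naive heuristic: data-intensive layers (e.g. embedding)
# go to CPU, every other layer goes to GPU. It ignores the DNN structure,
# which is exactly why it can fail for models that differ from CTR-style nets.
LAYER_KINDS = {
    "embedding": "data_intensive",
    "fc1": "compute_intensive",
    "fc2": "compute_intensive",
    "output": "compute_intensive",
}

def naive_schedule(layers: dict[str, str]) -> dict[str, str]:
    return {
        name: ("CPU" if kind == "data_intensive" else "GPU")
        for name, kind in layers.items()
    }

print(naive_schedule(LAYER_KINDS))
# {'embedding': 'CPU', 'fc1': 'GPU', 'fc2': 'GPU', 'output': 'GPU'}
```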

To achieve fine-grained parallelism, data parallelism and pipeline parallelism can be combined. With data parallelism, the training data is partitioned to match the number of computing resources, and each computing resource uses the same DNN model to process a separate portion of the dataset. With pipeline parallelism, the stages of the DNN model can be processed in parallel, as each computing resource handles the training data with a portion of the model. A DNN stage consists of several contiguous layers, and two separate stages may have data dependencies, where one stage's output serves as the input to the other.
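The following sketch (hypothetical, not the Paddle-HeterPS implementation) illustrates how the two forms of parallelism compose: the dataset is split into shards for data parallelism, while the layers are split into contiguous stages whose outputs feed the next stage, as in pipeline parallelism.

```python
from typing import Callable, List

# Each "stage" stands for a group of contiguous layers placed on one resource.
Stage = Callable[[float], float]

def make_pipeline(stages: List[Stage]) -> Callable[[float], float]:
    """Chain stages: each stage's output is the next stage's input."""
    def run(x: float) -> float:
        for stage in stages:
            x = stage(x)
        return x
    return run

def data_parallel_run(pipeline: Callable[[float], float],
                      dataset: List[float], num_replicas: int) -> List[float]:
    """Data parallelism: each replica runs the same pipeline on its own shard."""
    shards = [dataset[i::num_replicas] for i in range(num_replicas)]
    # In a real system each shard would run on a different worker in parallel;
    # here we simply loop to show the partitioning.
    return [pipeline(x) for shard in shards for x in shard]

pipeline = make_pipeline([lambda x: x * 2,    # stage 1 (e.g. embedding, on CPU)
                          lambda x: x + 1])   # stage 2 (e.g. fully connected, on GPU)
print(data_parallel_run(pipeline, [1.0, 2.0, 3.0, 4.0], num_replicas=2))
```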

Although using many computing resources can result in higher cost, parallelism shortens the training time. The training procedure typically has a throughput requirement so that a DNN model can be trained within a reasonable time. It is therefore desirable to reduce the monetary cost while respecting the throughput constraint. The elasticity of the computing resources can be exploited to meet the throughput constraint while lowering the monetary cost, since the number of computing resources can be scaled up or down on demand. Choosing how many computing resources to use for distributed training in this setting is therefore critical.
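This elasticity trade-off can be framed as a small optimization problem: pick the number of each kind of resource that meets a throughput floor at the lowest price. The throughput and price figures below are illustrative assumptions, not numbers from the paper:

```python
# Hypothetical per-resource throughput (samples/s) and price ($/hour).
RESOURCES = {"cpu": {"throughput": 2_000, "price": 0.10},
             "gpu": {"throughput": 20_000, "price": 1.50}}

def cheapest_provisioning(required_throughput: float, max_count: int = 16):
    """Brute-force the cheapest (num_cpu, num_gpu) meeting the throughput limit.

    A real provisioning plan also has to respect the layer schedule and
    data-transfer overheads; this only shows the cost/throughput trade-off.
    """
    best = None
    for n_cpu in range(max_count + 1):
        for n_gpu in range(max_count + 1):
            throughput = (n_cpu * RESOURCES["cpu"]["throughput"]
                          + n_gpu * RESOURCES["gpu"]["throughput"])
            if throughput < required_throughput:
                continue
            cost = (n_cpu * RESOURCES["cpu"]["price"]
                    + n_gpu * RESOURCES["gpu"]["price"])
            if best is None or cost < best[0]:
                best = (cost, n_cpu, n_gpu)
    return best

print(cheapest_provisioning(required_throughput=50_000))
```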

In this work, the researchers propose Paddle-HeterPS, a Paddle-based heterogeneous parameter server that uses elastic heterogeneous computing resources to enable distributed training of large-scale DNNs. Paddle-HeterPS consists of three components: the DNN layer scheduling module, the data management module, and the distributed training module. The DNN layer scheduling module generates a scheduling plan and a provisioning plan. The scheduling plan assigns each layer to the appropriate type of computing resource, while the provisioning plan specifies the number of computing resources of each type required for the distributed training process. The data management module manages the movement of data across multiple servers or clusters, where a cluster is a group of connected computing resources.

The distributed training module parallelizes the model's training process by combining data parallelism and pipeline parallelism. The scheduling module provides a DNN layer scheduling method that exploits heterogeneous computing resources. The layers of a DNN model can have different characteristics, such as being data-intensive or compute-intensive; each layer is assigned to the appropriate computing resource, such as CPUs or GPUs, to reduce training time. A fully connected layer is generally compute-intensive because of its high processing load, while an embedding layer is typically data-intensive. Consecutive layers scheduled to the same kind of computing resource are then merged into a single stage to reduce the time spent moving data across different computing resources. This produces the scheduling plan. To perform load balancing and minimize cost while still meeting the throughput constraint, a provisioning plan is then built that adjusts the number of computing resources of each kind. Finally, pipeline and data parallelism are used to parallelize the training process.
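Merging consecutive layers scheduled to the same kind of resource into one stage is the step that cuts cross-device data transfer. Below is a minimal sketch of that grouping, with assumed layer names and assignments rather than the paper's code:

```python
from itertools import groupby

# A scheduling plan maps each layer, in order, to a resource type.
scheduling_plan = [("embedding", "CPU"), ("fc1", "GPU"),
                   ("fc2", "GPU"), ("output", "GPU")]

def merge_into_stages(plan):
    """Group consecutive layers with the same resource type into one stage.

    Each stage boundary is a point where activations must cross devices,
    so fewer stages means less data movement between heterogeneous resources.
    """
    stages = []
    for device, group in groupby(plan, key=lambda item: item[1]):
        stages.append({"device": device,
                       "layers": [layer for layer, _ in group]})
    return stages

print(merge_into_stages(scheduling_plan))
# -> [{'device': 'CPU', 'layers': ['embedding']},
#     {'device': 'GPU', 'layers': ['fc1', 'fc2', 'output']}]
```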

Their most important contributions can be summarized as follows:

• They provide a framework called Paddle-HeterPS that enables the distributed training of large-scale DNNs with elastic heterogeneous computing resources. The framework manages data sharing across distributed computing resources and their storage.

• They present a reinforcement learning-based layer scheduling method that schedules each layer to the right kind of computing resource while reducing the overall cost and ensuring the throughput requirement (a toy sketch of this idea appears after this list). They also provide a method to choose the appropriate number of computing resources for distributed training based on the scheduling plan.

• They conduct extensive experiments on DNN models with various structures to demonstrate the advantages of their approach compared with standard approaches.
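The sketch below illustrates the general idea of reinforcement learning-based layer scheduling with a simple REINFORCE-style policy over per-layer CPU/GPU placements. It is a self-contained toy with made-up per-layer costs and a made-up transfer penalty, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-layer execution times (CPU, GPU) and a penalty for each
# CPU<->GPU stage boundary. All numbers are illustrative assumptions.
LAYERS = ["embedding", "fc1", "fc2", "output"]
TIME = {"embedding": (1.0, 3.0),   # data-intensive: cheaper on CPU
        "fc1": (4.0, 1.0),         # compute-intensive: cheaper on GPU
        "fc2": (4.0, 1.0),
        "output": (2.0, 0.5)}
TRANSFER_COST = 0.7

def total_cost(assignment):
    """assignment[i] = 0 for CPU, 1 for GPU."""
    cost = sum(TIME[layer][a] for layer, a in zip(LAYERS, assignment))
    cost += TRANSFER_COST * sum(a != b for a, b in zip(assignment, assignment[1:]))
    return cost

# REINFORCE over independent per-layer Bernoulli placement probabilities.
logits = np.zeros(len(LAYERS))          # start at 50/50 CPU vs GPU
lr, baseline = 0.1, None
for step in range(2000):
    probs = 1.0 / (1.0 + np.exp(-logits))            # P(layer -> GPU)
    actions = (rng.random(len(LAYERS)) < probs).astype(int)
    reward = -total_cost(tuple(actions))             # lower cost = higher reward
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    # Score-function gradient of log-prob w.r.t. logits for a Bernoulli policy.
    grad = (actions - probs) * (reward - baseline)
    logits += lr * grad

final_probs = 1.0 / (1.0 + np.exp(-logits))
best = tuple(int(p > 0.5) for p in final_probs)
print({layer: ("GPU" if a else "CPU") for layer, a in zip(LAYERS, best)},
      "cost:", total_cost(best))
```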


Check out the paper and code. All credit for this research goes to the researchers on this project. Also, don't forget to join our Reddit page and Discord channel, where we share the latest AI research news, cool AI projects, and more.


Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence from the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.


