Please use this identifier to cite or link to this item:
http://elar.urfu.ru/handle/10995/90693
Title: | Survey on software tools that implement deep learning algorithms on Intel/x86 and IBM/Power8/Power9 platforms |
Authors: | Shaikhislamov, D.; Sozykin, A.; Voevodin, V. |
Publication date: | 2019 |
Publisher: | South Ural State University, Publishing Center |
Bibliographic citation: | Shaikhislamov, D. Survey on software tools that implement deep learning algorithms on Intel/x86 and IBM/Power8/Power9 platforms / D. Shaikhislamov, A. Sozykin, V. Voevodin. — DOI 10.14529/jsfi190404 // Supercomputing Frontiers and Innovations. — 2019. — Vol. 6. — Iss. 4. — P. 57-83. |
Abstract: | Neural networks are becoming more and more popular both in science and in industry, mostly because solutions based on neural networks show state-of-the-art results in domains previously dominated by traditional methods, e.g. computer vision, speech recognition, etc. But to achieve these results, neural networks become progressively more complex and therefore require much more training; training a neural network today can take weeks. This problem can be solved by parallelizing neural network training on modern clusters and supercomputers, which can significantly reduce the training time. Faster training is essential for data scientists, because it allows them to obtain results sooner and make the next decision. In this paper we provide an overview of the distributed learning capabilities offered by popular modern deep learning frameworks, in terms of both functionality and performance. We consider multiple hardware configurations: training on multiple GPUs and on multiple computing nodes. © The Authors 2019. |
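As a concrete illustration of the kind of distributed, data-parallel training the survey compares, below is a minimal, hypothetical sketch using Horovod with Keras — one popular combination for multi-GPU and multi-node training in this period, though the paper itself evaluates several frameworks. The model, dataset, and hyperparameters are placeholders chosen only to keep the example self-contained; nothing here is taken from the paper.

```python
# Hypothetical sketch (not from the paper): data-parallel training of a small
# Keras model with Horovod. Launch with e.g.:  horovodrun -np 4 python train.py
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # one process per rank

# Pin each process to its own GPU.
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# Placeholder dataset and model, only to keep the sketch self-contained.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype('float32') / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Scale the learning rate with the number of workers and wrap the optimizer
# so that gradients are averaged across all processes (allreduce).
opt = tf.keras.optimizers.Adam(0.001 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(optimizer=opt,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

callbacks = [
    # Broadcast initial weights from rank 0 so all workers start identically.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

model.fit(x_train, y_train,
          batch_size=128,
          epochs=1,
          callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```

Launched this way, each process trains on its own GPU and gradients are averaged across processes after every batch; this data-parallel scheme is what allows cluster and supercomputer training to cut the weeks-long training times mentioned in the abstract.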
Keywords: | DEEP LEARNING FRAMEWORKS; DISTRIBUTED TRAINING; HPC; NEURAL NETWORKS; DEEP NEURAL NETWORKS; LEARNING ALGORITHMS; PROGRAM PROCESSORS; SPEECH RECOGNITION; SUPERCOMPUTERS; COMPUTING NODES; DISTRIBUTED LEARNING; LEARNING FRAMEWORKS; NEURAL NETWORKS TRAININGS; NEW SOLUTIONS; PARALLELIZATIONS; SCIENTIFIC FIELDS; STATE OF THE ART; DEEP LEARNING |
URI: | http://elar.urfu.ru/handle/10995/90693 |
Access rights: | info:eu-repo/semantics/openAccess cc-by |
RSCI (РИНЦ) ID: | 42316501 |
SCOPUS ID: | 85079862860 |
PURE ID: | 12222306 |
ISSN: | 2409-6008 |
DOI: | 10.14529/jsfi190404 |
Funding information: | Council on grants of the President of the Russian Federation: MK-2330.2019.9. The results described in this paper were obtained with the financial support of the grant from the Russian Federation President Fund (MK-2330.2019.9). |
Appears in collections: | Scientific publications of UrFU researchers indexed in SCOPUS and WoS CC |
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
10.14529-jsfi190404.pdf | | 1.06 MB | Adobe PDF | View/Open |
All items in the electronic archive are protected by copyright; all rights reserved.