Self-organizing cooperative neural network experts
Neural networks are generally considered function approximation models that map a set of input features to their target outputs. Their approximation capability can be improved through "ensemble learning". An ensemble of neural networks decreases the error correlation of the group by having each network compensate for the performance of the others. One ensembling technique is the Mixture-of-Experts model, which consists of a set of independently trained expert neural networks that each specialize on their own subset of the dataset, and a gating network that manages the specialization of the experts. In this model, all the neural networks are trained concurrently, but each expert is only trained on cases in which it performs well. The major components of the architecture proposed in this thesis are the Cooperative Ensemble, which trains its neural networks concurrently instead of independently, and the k-Winners-Take-All activation function, which drives specialization among the expert networks on subsets of the input features. This way, there is no longer a need for a centralized gating network to manage the specialization of the experts. We further improve upon the k-Winners-Take-All ensemble by training an additional neural network with the designated task of learning useful feature representations for the networks in the ensemble. To learn such representations, this network uses the Soft Nearest Neighbor Loss, which yields a simpler function approximation task for the ensemble members. We call the resulting full architecture "Self-Organizing Cooperative Neural Network Experts" (SOCONNE), in which a set of neural networks gain the right to specialize on their own subsets of the dataset without the use of a centralized gating network.
Numerous experiments on a variety of test datasets show that the novel architecture (1) takes advantage of the learned representations for the input features by capturing their underlying structure, and (2) uses these representations to simplify the task of the neural networks in a cooperative ensemble set-up.
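The k-Winners-Take-All mechanism described above can be illustrated with a minimal sketch: for each activation vector, only the k largest units stay active and the rest are zeroed, so the "winning" experts claim the input without any gating network. This is an assumed, simplified formulation (the function name and batch layout are hypothetical, not from the thesis):

```python
import numpy as np

def k_winners_take_all(activations, k):
    """Keep the k largest activations per row; zero out the rest.

    A minimal kWTA sketch, assuming a batch of activation vectors
    of shape (batch, units). Ties at the threshold may keep more
    than k units.
    """
    activations = np.asarray(activations, dtype=float)
    # Per-row threshold: the k-th largest value in that row.
    thresholds = np.partition(activations, -k, axis=1)[:, [-k]]
    # Zero every activation strictly below the per-row threshold.
    return np.where(activations >= thresholds, activations, 0.0)

# Example: with k=2, only the two largest entries per row survive.
out = k_winners_take_all([[0.1, 0.9, 0.4, 0.7]], k=2)
# out → [[0.0, 0.9, 0.0, 0.7]]
```

In an ensemble, applying this over the vector of expert outputs (one entry per expert) lets specialization emerge from competition alone, which is the role the thesis assigns to kWTA.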
A Neural Network Architecture Combining Gated Recurrent Unit (GRU) and Support Vector Machine (SVM) for Intrusion Detection in Network Traffic Data
Quality estimation for DASH clients by using deep recurrent neural networks
FutureWei Technologies. 16th International Conference on Network and Service Management (CNSM 2020), 2nd International Workshop on Analytics for Service and Application Management (AnServApp 2020), and 1st International Workshop on the Future Evolution of Internet Protocols (IPFuture 2020), 2–6 November 2020.
Dynamic Adaptive Streaming over HTTP (DASH) is a technology designed to deliver video to end-users in the most efficient way possible by allowing them to adapt their quality during streaming. In the DASH architecture, the original content is encoded into video streams of different qualities. As a protocol running over HTTP, caches play an important role in the DASH environment. Utilizing cache capacity in these systems is a challenging problem, since more than one encoded video file is generated for each video content. In this paper, we propose a caching approach for DASH systems that predicts the future qualities of DASH clients; the qualities to be cached are determined by this prediction. The learning model is designed using Recurrent Neural Networks (RNNs), in particular Long Short-Term Memory (LSTM), a special type of RNN whose default behavior is to remember information over long periods of time. We also utilize SDN technology to obtain some of the inputs for the learning algorithm. The simulation results show that predicting future qualities helps reduce client buffer underruns when cache storage is utilized. © 2020 IFIP. ACKNOWLEDGMENT: This work is funded by the Scientific and Technological Research Council of Turkey (TUBITAK) Electric, Electronic and Informatics Research Group (EEEAG) under grant 115E449.
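The LSTM mentioned in the abstract maintains a cell state that carries information across many time steps, which is what makes it suitable for predicting a client's future quality choices from its request history. The paper does not give its model's internals, so the following is only a generic sketch of a single LSTM step in NumPy (all parameter names and the gate layout are assumptions of this sketch):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step: forget, input, and output gates plus a
    candidate cell update.

    x: input vector (n_in,); h_prev, c_prev: previous hidden/cell
    states (n_hid,). W (4*n_hid, n_in), U (4*n_hid, n_hid), and
    b (4*n_hid,) stack the four gate parameter blocks in the
    assumed order [forget, input, candidate, output].
    """
    z = W @ x + U @ h_prev + b
    f, i, g, o = np.split(z, 4)
    f, i, o = sigmoid(f), sigmoid(i), sigmoid(o)
    c = f * c_prev + i * np.tanh(g)   # cell state: long-term memory
    h = o * np.tanh(c)                # hidden state: this step's output
    return h, c

# Toy usage: random parameters, one step over a 3-dim input, 5 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.standard_normal((4 * n_hid, n_in))
U = rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h, c = lstm_step(rng.standard_normal(n_in), np.zeros(n_hid),
                 np.zeros(n_hid), W, U, b)
```

In a quality-prediction setting like the paper's, the inputs at each step would be per-client observations (e.g. past quality selections and network statistics gathered via SDN), and the hidden state after the last step would feed a classifier over the available quality levels.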
