Interpretation of Deep Neural Networks
Since the emergence of Deep Neural Networks (DNNs), performance on sophisticated machine learning tasks such as speech recognition, natural language processing, and image classification has improved dramatically. However, due to the “black box” nature of DNN architectures, it is hard to explain how these models work: their internal reasoning is not transparent to the user, which prevents the user from verifying, interpreting, and understanding the logic of the system and how particular decisions are reached. In this project, we aim to investigate the interpretability of DNNs and develop new approaches that explain why these models work well in practice.
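As a concrete illustration of what such an explanation can look like, the sketch below computes a simple gradient-based saliency map, a standard attribution baseline; the model and input are placeholders, and this is not the specific approach developed in this project.

```python
import torch
import torchvision.models as models

# Minimal gradient-based saliency sketch (a standard attribution baseline,
# not this project's method). The model and input are placeholders.
model = models.resnet18(weights=None)       # any image classifier would do
model.eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder input image
logits = model(x)
pred = logits[0].argmax()                   # index of the predicted class
logits[0, pred].backward()                  # d(score)/d(input pixels)

# Saliency map: gradient magnitude per pixel, maximized over color channels.
saliency = x.grad.abs().max(dim=1)[0]       # shape: (1, 224, 224)
```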
Online Optimization
As application demands for online convex optimization grow, there is a rising need for methods that simultaneously cover a large class of convex functions and attain the lowest possible regret. Known online optimization methods usually perform well only in specific settings, and their performance depends strongly on the geometry of the decision space and the cost functions. In practice, however, such geometric information is often unavailable, which makes it difficult to choose the appropriate algorithm. To address this issue, adaptive methods have been proposed that focus on learning parameters such as the step size, the Lipschitz constant, and the strong convexity coefficient, or on specific parametric families such as quadratic regularizers. In this work, we generalize these methods and propose a framework that competes with the best algorithm in a family of expert algorithms. Our framework includes many well-known adaptive methods, including MetaGrad, MetaGrad+C, and Ader. We also introduce a second algorithm that is computationally cheaper than our first algorithm at the cost of at most a constant-factor increase in regret. Finally, as a representative application of the proposed algorithm, we study the problem of learning the best regularizer from a family of regularizers for Online Mirror Descent. Empirically, we support our theoretical findings on the problem of learning the best regularizer on the simplex and the ℓ2-ball in a multi-class learning problem.
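To convey the idea behind the framework, the following sketch combines a few Online Gradient Descent experts (differing only in their step size) with a Hedge-style meta-learner on synthetic quadratic losses; the losses, step sizes, and meta-learning rate are illustrative assumptions, not the actual algorithm or analysis of the paper.

```python
import numpy as np

# Sketch of the "compete with the best expert algorithm" idea:
# OGD experts with different step sizes on the l2-ball, combined by a
# Hedge-style meta-learner. All parameters below are illustrative.
rng = np.random.default_rng(0)
d, T = 5, 1000
etas = [0.01, 0.1, 1.0]                       # candidate step sizes (one expert each)
experts = [np.zeros(d) for _ in etas]         # each expert's current iterate
weights = np.ones(len(etas))                  # meta-learner weights over experts
meta_lr = np.sqrt(np.log(len(etas)) / T)

def project_l2_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

cum_meta, cum_experts = 0.0, np.zeros(len(etas))
for t in range(T):
    target = project_l2_ball(rng.normal(size=d))     # loss_t(x) = 0.5 * ||x - target||^2
    p = weights / weights.sum()
    x_play = sum(w * x for w, x in zip(p, experts))  # meta-prediction: weighted average
    cum_meta += 0.5 * np.linalg.norm(x_play - target) ** 2
    losses = np.array([0.5 * np.linalg.norm(x - target) ** 2 for x in experts])
    cum_experts += losses
    experts = [project_l2_ball(x - eta * (x - target))   # each expert's OGD step
               for x, eta in zip(experts, etas)]
    weights *= np.exp(-meta_lr * losses)                 # Hedge: downweight lossy experts

print("meta cumulative loss :", cum_meta)
print("best expert cum. loss:", cum_experts.min())
```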
- Adaptive Online Learning for Gradient-Based Optimizers,
S. Masoudian, A. Arabzadeh, M. Jafari Siavoshani, M. Jalali, A. Amouzad, CoRR, vol. abs/1906.00290.
Network Traffic Analysis Using Deep Learning
Network traffic classification has become increasingly important with the rapid growth of the Internet and online applications. There have been numerous studies on this topic, which have led to many different approaches. Most of these approaches classify network traffic using predefined features extracted by an expert. In contrast, in this project, we propose a deep learning based approach that integrates the feature extraction and classification phases into one system. Our proposed scheme, called “Deep Packet,” can handle both traffic characterization, in which the network traffic is categorized into major classes (e.g., FTP and P2P), and application identification, in which the goal is to identify end-user applications (e.g., BitTorrent and Skype). Contrary to most current methods, Deep Packet can identify encrypted traffic and also distinguishes between VPN and non-VPN network traffic.
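The sketch below conveys the general idea of feeding raw packet bytes into a 1D convolutional network that learns its own features and classifies the traffic; the packet length, layer sizes, and number of classes are illustrative assumptions rather than the exact Deep Packet architecture.

```python
import torch
import torch.nn as nn

# Sketch of end-to-end packet classification: raw (normalized) packet bytes
# go into a small 1D CNN. Layer sizes, packet length (1500 bytes), and the
# number of classes are illustrative assumptions, not the paper's architecture.
class PacketCNN(nn.Module):
    def __init__(self, num_classes: int = 10, packet_len: int = 1500):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, stride=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=3), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):                  # x: (batch, packet_len), bytes scaled to [0, 1]
        z = self.features(x.unsqueeze(1))  # add a channel dimension
        return self.classifier(z.squeeze(-1))

# Example usage with random data standing in for normalized packet bytes.
model = PacketCNN()
packets = torch.rand(8, 1500)              # 8 packets
logits = model(packets)                    # (8, 10) class scores
```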
- Deep Packet: A Novel Approach For Encrypted Traffic Classification Using Deep Learning,
M. Lotfollahi, R. Shirali, M. Jafari Siavoshani, M. Saberian, Soft Computing, Springer Berlin Heidelberg. (An earlier version on ArXiv: CoRR, vol. abs/1709.02656).
Storage, Communication, and Load Balancing Trade-off in Distributed Cache Networks
In this project, we consider load balancing in a network of caching servers that deliver content to end users. Randomized load balancing via the so-called power of two choices is a well-known approach in parallel and distributed systems. In this framework, we investigate the tension between storage resources, communication cost, and load balancing performance. To this end, we propose a randomized load balancing scheme that simultaneously takes the cache size limitation and proximity into account in the server redirection process.
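As a toy illustration of the power-of-two-choices effect in a caching setting, the following simulation routes each request to the less loaded of two randomly chosen servers that cache the requested content; the parameters and replication rule are assumptions and do not reproduce the scheme analyzed in the papers below.

```python
import random
from collections import Counter

# Toy power-of-two-choices simulation with cached content (illustrative only;
# the replication rule and parameters are assumptions, not the papers' scheme).
n_servers = 100
n_contents = 50
replication = 3          # each content is cached on 3 servers
n_requests = 10_000

random.seed(0)
# Place each content on a small random subset of servers (its cache set).
cache_set = {c: random.sample(range(n_servers), replication)
             for c in range(n_contents)}

load_one = Counter()     # baseline: route to one random caching server
load_two = Counter()     # power of two choices among the caching servers
for _ in range(n_requests):
    c = random.randrange(n_contents)
    servers = cache_set[c]
    load_one[random.choice(servers)] += 1
    a, b = random.sample(servers, 2)
    load_two[a if load_two[a] <= load_two[b] else b] += 1

print("max load, one choice :", max(load_one.values()))
print("max load, two choices:", max(load_two.values()))
```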
- Coded Load Balancing in Cache Networks,
M. Jafari Siavoshani, F. Parvaresh, A. Pourmiri, and S. P. Shariatpanahi, IEEE Transactions on Parallel and Distributed Systems, 2019.
(Related GitHub Repository: Coded-Load-Balancing)
- Storage, Communication, and Load Balancing Trade-off in Distributed Cache Networks,
M. Jafari Siavoshani, A. Pourmiri, and S. P. Shariatpanahi, IEEE Transactions on Parallel and Distributed Systems, vol. 29, iss. 4, pp. 943–957, 2018.
(Related GitHub Repository: Balls-and-Bins)
- On Communication Cost vs. Load Balancing in Content Delivery Networks,
M. Jafari Siavoshani, S. P. Shariatpanahi, H. Ghasemi, and A. Pourmiri, IEEE Symposium on Computers and Communications (ISCC), Greece, 2017.
- Proximity-Aware Balanced Allocations in Cache Networks,
A. Pourmiri, M. Jafari Siavoshani, and S. P. Shariatpanahi, IEEE International Parallel & Distributed Processing Symposium (IPDPS), USA, 2017.
Multi-terminal Secret Key Agreement
In this project, we study the problem of sharing a secret key among multiple parties from an information-theoretic point of view. More precisely, we consider a group of m trusted and authenticated nodes that aim to create a shared secret key K over a broadcast channel in the presence of an eavesdropper Eve. In addition, all of the trusted nodes can discuss over a cost-free, noiseless, unlimited-rate public channel which is also overheard by Eve. For this setup, we investigate the maximum achievable secret key generation rate and aim to propose efficient algorithms that achieve this rate.
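For background, in the classical two-terminal version of this problem (Maurer; Ahlswede and Csiszár), where two parties observe correlated sources $X$ and $Y$ while Eve observes $Z$, the secret key capacity $C_{SK}$ under unlimited public discussion is known to satisfy the bounds below; the multi-terminal setting studied in the papers listed here generalizes this formulation.

```latex
% Classical two-terminal bounds on the secret key capacity (background only;
% the papers below address the more general multi-terminal setting).
\max\bigl\{ I(X;Y) - I(X;Z),\; I(X;Y) - I(Y;Z) \bigr\}
\;\le\; C_{SK} \;\le\; \min\bigl\{ I(X;Y),\; I(X;Y \mid Z) \bigr\}
```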
- Multi-party Secret Key Agreement over State-Dependent Wireless Broadcast Channels,
M. Jafari Siavoshani, S. Mishra, C. Fragouli, and S. N. Diggavi
- Multi-terminal Secrecy in a Linear Non-coherent Packetized Network,
M. Jafari Siavoshani and C. Fragouli
- Group Secret Key Agreement over State-Dependent Wireless Broadcast Channels,
M. Jafari Siavoshani, S. Mishra, C. Fragouli, and S. N. Diggavi
- Group Secret Key Generation over Broadcast Erasure Channels,
M. Jafari Siavoshani, C. Fragouli, S. Diggavi, U. Pulleti, and K. Argyraki
Index Coding
In this work, we study the problem of linear index coding from a graph homomorphism point of view. We show that the decision version of the linear (scalar or vector) index coding problem is equivalent to a certain graph homomorphism problem. Using this equivalence, we recover some already known results and prove several new ones.
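To make the underlying notion concrete, the sketch below checks by brute force whether a homomorphism exists between two small directed graphs, i.e., a vertex map that sends every edge of G to an edge of H; it only illustrates the definition and is not the reduction used in the papers below.

```python
from itertools import product

# Brute-force check for a (directed) graph homomorphism G -> H: a vertex map f
# such that every edge (u, v) of G is sent to an edge (f(u), f(v)) of H.
# Illustrates the definition only; not the reduction from the papers.
def has_homomorphism(g_vertices, g_edges, h_vertices, h_edges):
    h_edges = set(h_edges)
    for image in product(h_vertices, repeat=len(g_vertices)):
        f = dict(zip(g_vertices, image))
        if all((f[u], f[v]) in h_edges for u, v in g_edges):
            return True
    return False

# Example: the directed 3-cycle maps homomorphically onto itself,
# but not onto a single directed edge.
c3 = ([0, 1, 2], [(0, 1), (1, 2), (2, 0)])
edge = ([0, 1], [(0, 1)])
print(has_homomorphism(*c3, *c3))    # True
print(has_homomorphism(*c3, *edge))  # False
```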
- On Linear Index Coding from Graph Homomorphism Perspective,
J. B. Ebrahimi and M. Jafari Siavoshani, Information Theory and Applications Workshop (ITA), pp. 220–229, 2015.
- Linear Index Coding via Graph Homomorphism,
J. B. Ebrahimi and M. Jafari Siavoshani, International Conference on Control, Decision and Information Technologies (CoDIT), pp. 158–163, 2014.
- On Index Coding and Graph Homomorphism,
J. B. Ebrahimi and M. Jafari Siavoshani, IEEE Information Theory Workshop (ITW), pp. 541–545, 2014.