Please use this identifier to cite or link to this item: https://idr.l4.nitk.ac.in/jspui/handle/123456789/17033
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Talawar, Basavaraj. | -
dc.contributor.author | Kumar, Anil. | -
dc.date.accessioned | 2022-01-29T12:57:23Z | -
dc.date.available | 2022-01-29T12:57:23Z | -
dc.date.issued | 2021 | -
dc.identifier.uri | http://idr.nitk.ac.in/jspui/handle/123456789/17033 | -
dc.description.abstract | As hundreds to thousands of Processing Elements (PEs) are integrated into Multiprocessor Systems-on-Chip (MPSoCs) and Chip Multiprocessor (CMP) platforms, a scalable and modular interconnection solution is required. The Network-on-Chip (NoC) is an effective solution for communication among the on-chip resources in MPSoCs and CMPs. The availability of fast and accurate modelling methodologies enables analysis, development, design space exploration through performance vs. cost tradeoff studies, and rapid testing of large NoC designs. Unfortunately, although much more accurate than analytical modelling, conventional software simulators are too slow to simulate large-scale NoCs with hundreds to thousands of nodes. In this thesis, Machine Learning (ML) approaches are employed to simulate NoCs and address this simulation-speed problem. A Machine Learning framework is proposed to predict performance, power and area for different NoC architectures. The framework provides chip designers with an efficient way to analyze NoC parameters. The framework is modelled using distinct ML regression algorithms to predict performance parameters of NoCs under different synthetic traffic patterns. Because of the lack of trace data from large-scale NoC-based systems, the use of synthetic workloads is practically the only feasible approach for emulating large-scale NoCs with thousands of nodes. The ML-based NoC simulation framework enables a chip designer to explore and analyze various 2D and 3D NoC architectures with configuration parameters such as virtual channels, buffer depth, injection rates and traffic pattern. In this thesis, four frameworks are presented which can be used to predict the design parameters of various NoC architectures. The first framework, the Learning-Based Framework (LBF-NoC), predicts the performance, power and area parameters of direct (mesh, torus, cmesh) and indirect (fat-tree, flat-fly) topologies. LBF-NoC was tested with various regression algorithms: Artificial Neural Networks with identity and ReLU activation functions; generalized linear regression algorithms, i.e., lasso, lasso-lars, larsCV, bayesian-ridge, linear, ridge and elastic-net; and Support Vector Regression (SVR) with linear, Radial Basis Function and polynomial kernels. Among these, SVR gave the least error and was therefore selected for building the framework. In the second framework, named the Multiprocessing Regression Framework (MRF-NoC), the existing framework was enhanced with a multiprocessing scheme to overcome the issue of simulating a NoC architecture `n' times for 2D Mesh and 3D Mesh. The third framework, the Ensemble Learning-Based Accelerator (ELBA-NoC), is designed for worst-case latency analysis and for predicting the design parameters of large-scale architectures using the random forest algorithm. It predicts results for five different NoC architectures, comprising both 2D (Mesh, Torus, Cmesh) and 3D (Mesh, Torus) architectures. Finally, the fourth framework, the Knowledgeable Network-on-Chip Accelerator (K-NoC), is presented to predict two types of NoC architectures, one with a fixed delay between the IPs and another with the accurate delay; it was built using the random forest algorithm. The results obtained from the frameworks have been compared with widely used software simulators such as BookSim 2.0 and Orion. The LBF-NoC framework gave an error rate of 6% to 8% for both direct and indirect topologies, and it provided a speedup of 1000× for direct topologies and 5000× for indirect topologies. By using MRF-NoC, all the NoC configurations considered can be simulated in a single run. ELBA-NoC was able to predict the design parameters of five different architectures with an error rate of 4% to 6% and a minimum speedup of 16000× compared to the cycle-accurate simulator. Finally, K-NoC was able to predict both NoC architectures considered, one with the fixed delay and another with the accurate delay, giving a speedup of 12000× and an error rate below 6% in both cases. | en_US
dc.language.iso | en | en_US
dc.publisher | National Institute of Technology Karnataka, Surathkal | en_US
dc.subject | Department of Computer Science & Engineering | en_US
dc.subject | Network-on-Chip | en_US
dc.subject | 2D NoC | en_US
dc.subject | 3D NoC | en_US
dc.subject | Simulation | en_US
dc.subject | Performance modelling | en_US
dc.subject | Machine Learning | en_US
dc.subject | Prediction | en_US
dc.subject | Regression | en_US
dc.subject | Support Vector Regression | en_US
dc.subject | Ensemble Learning | en_US
dc.subject | Random Forest | en_US
dc.subject | Booksim | en_US
dc.subject | Performance | en_US
dc.subject | Power | en_US
dc.subject | Area | en_US
dc.subject | Router | en_US
dc.subject | Traffic Pattern | en_US
dc.title | Machine Learning based Design Space Exploration of Networks-on-Chip | en_US
dc.type | Thesis | en_US
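
For readers who want a concrete sense of the kind of predictor the abstract describes, the following is a minimal, illustrative sketch in Python with scikit-learn: it trains an SVR model (the algorithm LBF-NoC adopts) and a random-forest model (the algorithm behind ELBA-NoC and K-NoC) to predict average packet latency from NoC configuration parameters. The CSV file name, feature columns, target name and hyperparameters are assumptions for illustration only and are not taken from the thesis.

```python
# Illustrative sketch of a regression-based NoC performance predictor.
# Assumed input: a table of cycle-accurate simulator runs (e.g. from BookSim),
# one row per simulated configuration. File name and columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("booksim_runs.csv")  # assumed dataset of simulator results
features = ["network_size", "virtual_channels", "buffer_depth",
            "injection_rate", "traffic_pattern_id"]
target = "avg_packet_latency"

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=0)

# SVR with an RBF kernel (the kernel family the abstract reports working best).
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
svr.fit(X_train, y_train)

# Random-forest regressor, as used in the ensemble-based frameworks.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)

for name, model in [("SVR", svr), ("RandomForest", rf)]:
    err = mean_absolute_percentage_error(y_test, model.predict(X_test))
    print(f"{name}: mean absolute percentage error = {err:.2%}")
```

Once such a model is trained, an entire grid of 2D/3D configurations can be evaluated in a single prediction call instead of launching the cycle-accurate simulator once per configuration, which is the spirit of the single-run exploration attributed to MRF-NoC in the abstract.
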
Appears in Collections: 1. Ph.D Theses

Files in This Item:
File | Description | Size | Format
Anil Kumar thesis.pdf |  | 1.93 MB | Adobe PDF

