2. Thesis and Dissertations
Permanent URI for this community: https://idr.l4.nitk.ac.in/handle/1/10
Item Ab Initio Studies of the Ground State Structure and Properties of Boron Carbides and Ruthenium Carbides (National Institute of Technology Karnataka, Surathkal, 2016) G, Harikrishnan; K. M, Ajith
This work investigates the ground state structure and properties of Boron Carbides (B12C3 and B13C2 stoichiometries) and Ruthenium Carbides (RuC, Ru2C and Ru3C stoichiometries), each belonging to a class of hard materials. An exhaustive crystal structure search using an evolutionary algorithm and density functional theory (DFT) is performed for each of these stoichiometries. The lowest-energy structures emerging from the structure search are further relaxed and their ground-state properties are computed using DFT. The work on the B12C3 stoichiometry provides the first independent confirmation, using structure search, that B11Cp(CBC) is the ground state structure of this stoichiometry. It is established that mechanically and dynamically stable structures with base-centered monoclinic symmetry can be at thermodynamic equilibrium at temperatures up to 660 K in B12C3, raising the possibility of identifying the monoclinic symmetry in experimental measurements. Experimentally identifiable signatures of monoclinic symmetry are demonstrated through the computed cumulative infrared spectra of some of the systems. The work on the B13C2 stoichiometry conclusively resolves the long-standing discrepancy between DFT calculations and experimental observations over the semiconducting nature of B13C2. The remarkable success of a newly identified 30-atom-cell structure in explaining many of the experimental data on B12C3 and B13C2 provides the first definitive evidence that structures with larger unit cells are associated with crystals of these stoichiometries even at the ground state.
The work on the Ruthenium Carbide stoichiometries gathers into a coherent perspective the widely varying structures proposed from experimental reports of synthesis, computational modeling and crystal structure search, and provides conclusive structural candidates to be pursued in experiments. The study of the pressure-induced variation of their stability and properties sets indicators and benchmarks for future experimental investigations. The estimation of the hardness of all the systems underlines their importance in many applications, with nearly superhard values for some of them.

Item Acoustic Emission Signal Based Investigations Involving Laboratory and Field Studies Related To Partial Discharges & Hot-Spots in Power Transformers (National Institute of Technology Karnataka, Surathkal, 2017) Shanker, Tangella Bhavani; Punekar, G. S.; Nagamani, H. N.
Power transformers are vital components of AC power systems. It is essential to monitor their condition periodically in order to ascertain their performance for continuous operation over an expected average life of 25-30 years. Defects in power transformers lead to deterioration of the insulation and eventual premature failure. This deterioration can be assessed by carrying out condition monitoring tests periodically, using either off-line or on-line techniques. The off-line test techniques follow IEEE Std. 62 (1995); these tests require an outage of the transformer, thereby interrupting the power supply, whereas on-line test techniques do not require any outage. Hence, on-line diagnostic techniques have gained importance. The literature shows the Acoustic Emission (AE) detection technique to be a promising on-line tool for condition monitoring and diagnosis of power transformers. The general guidelines for the application of the AE technique for this purpose are outlined in IEEE Std.
C57.127 (2007). A few typical case studies of AE signal measurements in power stations in India are reported, involving (i) two identical transformers and (ii) the same transformer on different occasions (years). Some case studies with AE signals involving the On-Load Tap Changer (OLTC) and the cooling system pump are also reported. These case studies also help in assessing the efficacy of integrating Dissolved Gas Analysis (DGA) data with the AE test results. Laboratory experimental work is carried out by simulating the most probable defects, namely Partial Discharge (PD) and hot-spots (leading to heat-waves), in order to capture AE signals in the range of 0-500 kHz. The classification and characterization of the defects based on the energy distribution of the AE signals over different frequency ranges is carried out using the Discrete Wavelet Transform (DWT), utilizing the MATLAB toolbox. The eight-level decomposition revealed that the dominant frequency ranges for the energy distribution of the AE signals due to PD and heat-wave are 125-250 kHz and 62.5-125 kHz, respectively. The AE signal data from the transformers (field tests) involving PD and hot-spots are also analyzed using DWT. The laboratory-based characterization of PD and heat-wave is validated through the analysis of the field data. The proposed method of identifying defects by AE signal analysis using DWT would complement the DGA of the transformer oil; moreover, since the AE-based technique can be adopted in real time, it could serve as a better substitute for DGA-based analysis. The Acoustic Emission Partial Discharge (AEPD) signal parameters, such as discharge magnitude and peak frequencies, are studied using the Fast Fourier Transform (FFT) to understand the behavior of AE signals at temperatures ranging from 30°C to 75°C. The results reported are intended to give an understanding of the behavior of AEPD signals over the entire working temperature range of a transformer.
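The subband energy distribution described above can be illustrated with a minimal, pure-Python Haar decomposition. This is only a sketch of the general idea: the thesis uses the MATLAB wavelet toolbox, and the Haar wavelet, the assumed 1 MHz sampling rate, and the resulting band edges here are illustrative assumptions, not details from the thesis.

```python
def haar_step(x):
    """One Haar analysis step: returns (approximation, detail) coefficients."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 ** 0.5 for i in range(len(x) // 2)]
    return approx, detail

def band_energies(signal, levels=8, fs=1_000_000):
    """Fraction of signal energy in each detail subband D1..Dn plus the
    final approximation; band edges assume sampling rate fs (an assumption)."""
    x, bands = list(signal), []
    hi = fs / 2
    for lvl in range(1, levels + 1):
        x, d = haar_step(x)
        bands.append((f"D{lvl} ({hi / 2:.1f}-{hi:.1f} Hz)", sum(c * c for c in d)))
        hi /= 2
    bands.append((f"A{levels} (0-{hi:.1f} Hz)", sum(c * c for c in x)))
    total = sum(e for _, e in bands) or 1.0
    return [(name, e / total) for name, e in bands]
```

With a 1 MHz sampling rate, a 150 kHz test tone (inside the 125-250 kHz band reported as dominant for PD) concentrates its energy in the D2 subband, mirroring how the eight-level decomposition separates PD from heat-wave signatures.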
At temperatures above 65°C, a reduction in AEPD magnitude and peak frequencies is observed. This behavior is probably being reported for the first time, and an attempt is made to explain it.

Item Acoustic Scene Classification Using Speech Features (National Institute of Technology Karnataka, Surathkal, 2020) Mulimani, Manjunath.; Koolagudi, Shashidhar G.
Currently, smart devices like smartphones, laptops, tablets, etc., need human intervention for the effective delivery of their services. They are capable of recognizing speech, music, images, characters and so on. To make smart systems behave as intelligent ones, we need to build into them a capacity to understand and respond to the surrounding situation, without human intervention. Enabling devices to sense the environment in which they are present through analysis of sound is the main objective of Acoustic Scene Classification. The initial step in analyzing the surroundings is recognition of the acoustic events present in day-to-day environments. Such acoustic events are broadly categorized into two types: monophonic and polyphonic. Monophonic acoustic events are non-overlapped events; in other words, at most one acoustic event is active at a given time. Polyphonic acoustic events are overlapped events; in other words, multiple acoustic events occur at the same time instance. In this work, we aim to develop systems for automatic recognition of monophonic and polyphonic acoustic events along with the corresponding acoustic scene. Applications of this research work include context-aware mobile devices, robots, intelligent monitoring systems, assistive technologies for hearing aids and so on.
Some of the important issues in this research area are: identifying acoustic-event-specific features for acoustic event characterization and recognition; optimization of the existing algorithms; developing robust mechanisms for acoustic event recognition in noisy environments; making state-of-the-art methods work on big data; and developing a joint model that recognizes acoustic events followed by the corresponding scenes. Some of the existing approaches have major limitations: the use of known traditional speech features, which are sensitive to noise; the use of features from two-dimensional Time-Frequency Representations (TFRs) for recognizing acoustic events, which demands high computational time; and the use of deep learning models, which require a substantially huge amount of training data. Many novel approaches are presented in this thesis for recognition of monophonic acoustic events, polyphonic acoustic events and scenes. Two main challenges associated with real-time Acoustic Event Classification (AEC) are addressed in this thesis. The first is the effective recognition of acoustic events in noisy environments, and the second is the use of the MapReduce programming model on a Hadoop distributed environment to reduce computational complexity. In this thesis, the features are extracted from spectrograms, which are robust compared to the traditional speech features. Further, an improved Convolutional Recurrent Neural Network (CRNN) and a Deep Neural Network-driven feature learning model are proposed for polyphonic Acoustic Event Detection (AED) in real-life recordings. Finally, binaural features are explored to train a Kervolutional Recurrent Neural Network (KRNN), which recognizes both the acoustic events and the respective scene of an audio signal.
Detailed experimental evaluation is carried out to compare the performance of each of the proposed approaches against baseline and state-of-the-art systems.

Item Adaptive Distance Relay for STATCOM Connected Transmission Lines - Development of DSP Based Relay Hardware, Relaying Schemes and HIL Testing Procedures (National Institute of Technology Karnataka, Surathkal, 2013) M.V., Sham; Panduranga Vittal, K.
Flexible AC Transmission System (FACTS) devices are used to enhance the transient stability limit and power transfer capacity of existing transmission lines. The Static Synchronous Compensator (STATCOM), a shunt-type FACTS device, is used to maintain the voltage at the point of common coupling on transmission lines. A STATCOM has a fast response of about 1-2 fundamental cycles, which matches the typical response time of the protection subsystem. Hence, its functional characteristics and associated control system introduce dynamic changes during fault conditions on a transmission line. It is important that distance relays perform correctly irrespective of such dynamic changes introduced during faults, as incorrect operation defeats the purpose of the STATCOM installation. The work presented in this thesis is aimed at a detailed study of the influence of a STATCOM on the performance of a distance relay under normal and abnormal operating conditions of the power system. The work also puts forth adaptive distance relaying schemes to mitigate the adverse impact of the STATCOM on the distance relay. Their performance is compared with that of the conventional standalone mho-type distance relay through simulations on a realistic study power system using the EMTDC/PSCAD package. Relay hardware to implement the adaptive relaying scheme has been developed using a TMS320F28335 digital signal processor and a simultaneous-sampling ADS8556 analog-to-digital converter.
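The core decision made by a mho-type distance element, mentioned above, can be sketched in a few lines: compute the apparent impedance from the voltage and current phasors and test whether it lies inside the mho circle. This is a textbook sketch of a self-polarized mho characteristic, not the thesis's adaptive scheme; the reach and line-angle settings are illustrative values.

```python
import cmath
import math

def inside_mho(v_phasor, i_phasor, reach_ohms=10.0, line_angle_deg=75.0):
    """True if the apparent impedance Z = V/I falls inside the mho circle.

    A self-polarized mho characteristic is a circle through the origin whose
    diameter is the reach impedance Zr = reach * exp(j*line_angle).  Z lies
    inside the circle when |Z - Zr/2| <= |Zr/2|.
    """
    z = v_phasor / i_phasor                                  # apparent impedance
    zr = cmath.rect(reach_ohms, math.radians(line_angle_deg))  # reach phasor
    return abs(z - zr / 2) <= abs(zr / 2)
```

A fault at half the reach along the line angle trips the element, while a typical load impedance (large magnitude, small angle) falls outside the circle; the adaptive schemes in the thesis modify this basic characteristic to account for STATCOM-induced changes.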
A real-time hardware-in-the-loop test bench has been developed, using a Doble F6150 power system simulator, to test the performance of the newly developed relaying schemes and relay hardware. The simulation results obtained from EMTDC/PSCAD are used as test signals for this purpose. The evaluation results have clearly demonstrated the efficacy of the adaptive relaying schemes in mitigating the adverse impact of the STATCOM on distance relay performance.

Item Adaptive Resource Management in SLA Aware Elastic Clouds (National Institute of Technology Karnataka, Surathkal, 2019) S, Anithakumari; Chandrasekaran, K.
In recent years, there has been increasing interest in solving the over-provisioning and under-provisioning of elastic cloud resources because of the Service Level Agreement (SLA) violation problem. Recent studies have reported that federated cloud services may serve as a better elastic cloud model than a single-provider model. A major problem with the federated cloud is interoperability between multiple cloud service providers. Therefore, in this thesis, a proactive SLA-aware adaptive resource management approach is proposed for elastic cloud services. The aim of this thesis is to develop a suitable SLA monitoring framework to predict SLA violations and adaptively allocate cloud resources to improve elasticity. It achieves mutual benefits for cloud consumers and service providers by calculating and reducing the penalty cost. Our framework has been implemented and validated on a private cloud using OpenNebula 4.0. The results show that the proposed proactive approach significantly reduces SLA violations compared to a reactive approach. As an additional contribution, the presented work addresses the interoperability issues of the federated cloud using an innovative SLA matching algorithm.
The simulation results of this work show that the said approach performs better than its counterparts.

Item Aerodynamic Performance of Low Aspect Ratio Turbine Blade in the Presence of Purge Flow (National Institute of Technology Karnataka, Surathkal, 2021) Babu, Sushanlal.; S, Anish.
In aero engines, purge flow is fed directly from the compressor, bypassing the combustion chamber, and is introduced into the disk space between blade rows to prevent hot-gas ingress. A higher quantity of purge gas fed through the disk space can provide additional thermal protection to the passage endwall and blade surfaces. Moreover, interaction of the purge air with the mainstream flow can alter the flow characteristics of the turbine blade passage. The objective of the present investigation is to understand the secondary vortices and their aerodynamic behavior within a low-aspect-ratio turbine blade passage in the presence of purge flow. An attempt is made to understand the influence of velocity ratios and purge ejection angles on these secondary vortices. The objective is broadened by investigating the unsteadiness generated by upstream wakes over the secondary vortex formations in the presence of purge flow. Further, the thesis aims to judge the feasibility of implementing endwall contouring to curb the additional losses generated by the purge flow. To accomplish these objectives, a combination of experimental measurements and computational simulations is executed on a common blade geometry. The commercial software ANSYS CFX, which solves the three-dimensional Reynolds-Averaged Navier-Stokes equations together with the Shear Stress Transport (SST) turbulence model, has been used to carry out the computational simulations. Along with the steady-state analysis, transient analysis has been conducted for certain selected computational domains in order to reveal the time-dependent nature of the flow variables.
The numerical results are validated with experimental measurements obtained at the blade exit region using a five-hole probe and a Scanivalve. The experimental analysis is conducted for the base case without purge (BC) and the base case with purge (BCp) configurations having flat endwalls. In the present analysis, it is observed that the mass-averaged total pressure losses increase with an increase in the velocity ratio. In an effort to reduce the losses, the purge ejection angle is reduced from 90° to 35° in steps of 15°. Significant loss reduction and improved endwall protection are observed at lower ejection angles. Numerical investigation of upstream disturbances/wakes explores the interaction effects of two additional vortices, viz. the cylinder vortex (Vc) and the purge vortex (Vp). Steady-state analysis shows an increase in the underturning at the blade exit due to the squeezing of the pressure-side leg of the horseshoe vortex (PSL) towards the pressure surface by the cylinder vortices (Vc). The unsteady analysis reveals the formation of filament-shaped wake structures which break into smaller vortical structures at the blade leading edge for the stagnation wake configuration (STW). On the contrary, in the midpassage wake configuration (MW), the obstruction created by the purge flow causes the upper portion of the cylinder vortices to bend forward, creating a shearing action along the spanwise direction. Investigation of contoured endwall geometries shows that endwall curvature either accelerates or decelerates the flow, whereby control over the endwall static pressure can be obtained. Of the three contoured endwalls investigated, the stagnation zones generated at the contour valleys resulted in additional loss generation for the first two profiles. The reduced valley depth and optimum hump height of the third configuration effectively redistributed the endwall static pressure.
Moreover, an increase in the static pressure distribution at the endwall near the pressure surface eliminated the pressure-side bubble formation. Computational results of URANS (Unsteady Reynolds-Averaged Navier-Stokes) simulations are obtained for analyzing the transient behaviour of the pressure-side bubble, with emphasis on its migration on the pressure surface and across the blade passage.

Item AI-Based Clinical Decision Support Systems Using Multimodal Healthcare Data (National Institute of Technology Karnataka, Surathkal, 2022) Mayya, Veena; S, Sowmya Kamath
Healthcare analytics is a branch of data science that examines underlying patterns in healthcare data in order to identify ways in which clinical care can be improved, in terms of patient care, cost optimization, and hospital management. Towards this end, Clinical Decision Support Systems (CDSS) have received extensive research attention over the years. CDSS are intended to influence clinical decision making during patient care. A CDSS can be defined as “a link between health observations and health-related knowledge that influences treatment choices by clinicians for improved healthcare delivery”. A CDSS is intended to aid physicians and other health care professionals with clinical decision-making tasks based on automated analysis of patient data and other sources of information. CDSS is an evolving technology with the potential for wide applicability to improve patient outcomes and healthcare resource utilization. Recent breakthroughs in healthcare analytics have seen an emerging trend in the application of artificial intelligence approaches to assist essential applications such as disease prediction, disease code assignment, disease phenotyping, and disease-related lesion segmentation. Despite the significant benefits offered by CDSSs, several issues need to be overcome to achieve their full potential.
There is substantial scope for improvement in patient data modelling methodologies and prediction models, particularly for unstructured clinical data. This thesis discusses several approaches for developing decision support systems for patient-centric predictive analytics on large multimodal healthcare data. Clinical data in the form of unstructured text, which is rich in patient-specific information, has largely remained unexplored and could potentially be used to facilitate effective CDSS development. Effective code assignment for patient clinical records in a hospital plays a significant role in standardizing medical records, mainly for streamlining clinical care delivery, billing, and managing insurance claims. The current practice is manual coding, usually carried out by trained medical coders, making the process subjective, error-prone, inexact, and time-consuming. To alleviate this cost-intensive process, intelligent coding systems built on patients’ unstructured electronic medical records (EMR) are critical. Towards this, various deep learning models have been proposed for improving diagnostic coding performance, making use of patient clinical reports and discharge summaries. The approach involves multi-channel convolution networks and label-attention transformer architectures for automatic assignment of diagnostic codes. The label-attention mechanism enables the direct extraction of textual evidence in medical documents that maps to the diagnostic codes. Medical imaging data like ultrasound, magnetic resonance imaging, computed tomography, positron emission tomography, X-ray, retinal photography, slit lamp microscopy, etc., play an important role in the early detection, diagnosis, and treatment of diseases. Presently, most imaging modalities are manually interpreted by expert clinicians for disease diagnosis.
With the exponential increase in the volume of chronic patients, this process of manual inspection and interpretation increases the cognitive and diagnostic burden on healthcare professionals. Recently, machine learning and deep learning techniques have been utilized for designing computer-based analysis systems for medical images. Ophthalmology, pathology, radiology, and oncology are a few fields where deep learning techniques have been successfully leveraged for interpreting imaging data. Ophthalmology was the first field to be revolutionized and is the most explored in health care. Towards this, various deep learning models have been proposed for improving the performance of ocular disease detection systems that make use of fundoscopy and slit-lamp microscopy imaging data. Patient data is recorded in multiple formats, including unstructured clinical notes, structured EHRs, and diagnostic images, resulting in multimodal data that together accounts for patients’ demographic information, past histories of illness and medical procedures performed, diseases diagnosed, etc. Most existing works limit their models to a single modality of data, like structured textual, unstructured textual, or imaging medical data, and very few works have utilized multimodal medical data. To address this, various deep learning models were designed that can learn disease representations from multimodal patient data for early disease prediction. Scalability is ensured by incorporating content-based learning models for automatically generating diagnosis reports of identified lung diseases, reducing radiologists’ cognitive burden.

Item Algorithms for Color Normalization and Segmentation of Liver Cancer Histopathology Images (National Institute of Technology Karnataka, Surathkal, 2021) Roy, Santanu.; Lal, Shyam.
With the advent of Computer Assisted Diagnosis (CAD), the accuracy of cancer detection from histopathology images has significantly increased.
However, color variation in CAD systems is inevitable due to variability of stain concentration and manual tissue sectioning. A small variation in color may lead to misclassification of cancer cells. Therefore, color normalization is the first step of CAD, in order to reduce the inter-variability of background color among a set of source images. In this thesis, first a novel color normalization method is proposed for Hematoxylin and Eosin (H and E) stained histopathology images. The conventional Reinhard algorithm is modified in our proposed method by incorporating fuzzy logic. Moreover, it is proved mathematically that our proposed method satisfies all three hypotheses of color normalization. Furthermore, several quality metrics are estimated locally for evaluating the performance of various color normalization methods. Experimental results reveal that our proposed method outperforms all other benchmark methods. The second step of CAD is nuclei segmentation, which is the most significant step since it makes the subsequent classification task computationally efficient and simple. However, automatic nuclei detection is fraught with problems due to the highly textured nuclei boundaries and the various sizes and shapes of nuclei present in histopathology images. In this thesis, a novel edge detection technique is proposed for segmenting the nuclei regions in liver cancer H and E stained histopathology images, based on the notion of computing the local standard deviation. Moreover, the edge-detected image is converted into a binary image by using local Otsu thresholding and is thereafter refined by an adaptive morphological filter. The experimental results indicate that the proposed segmentation method overcomes the limitations of existing unsupervised methods, and its performance is also comparable with deep neural models.
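The local-standard-deviation edge measure described above can be sketched on a plain list-of-lists grayscale image. This is a minimal illustration of the general idea only: the window size and the single global threshold are simplifications (the thesis uses local Otsu thresholding followed by adaptive morphological refinement).

```python
import statistics

def local_std_map(img, k=3):
    """Standard deviation of each k x k neighborhood; border pixels get 0.
    High local variability marks textured region boundaries such as nuclei edges."""
    h, w, r = len(img), len(img[0]), k // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = [img[y + dy][x + dx]
                   for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
            out[y][x] = statistics.pstdev(win)
    return out

def binarize(std_map, thresh):
    """Edge pixels are those whose local variability exceeds the threshold
    (a fixed global threshold here, standing in for local Otsu)."""
    return [[1 if v > thresh else 0 for v in row] for row in std_map]
```

On a synthetic image with a flat dark region meeting a flat bright region, only the pixels whose window straddles the boundary acquire a non-zero local standard deviation, so the binarized map traces the edge.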
To the best of our knowledge, our proposed method is the only unsupervised method which achieves a nuclei detection accuracy closest to 1 (0.9516). Furthermore, two more quality metrics are computed in order to measure the performance of nuclei segmentation methods quantitatively. The mean values of the quality metrics reveal that our proposed segmentation method outperforms other existing methods both qualitatively and quantitatively.

Item Algorithms for Super-Resolution and Restoration of Noiseless and Noisy Depth Images (National Institute of Technology Karnataka, Surathkal, 2019) Balure, Chandra Shaker; M, Ramesh Kini
Over the last decade, along with intensity images, depth images have been gaining popularity because of their demand in applications like robot navigation, augmented reality, 3DTV, etc. The distinctive characteristic of a depth image is that each pixel value represents the distance from the camera position, unlike an optical image where each pixel represents an intensity value. The prominent features of depth images are the edges and corners, but they lack texture, unlike optical images. Modern high-end depth cameras provide depth maps with higher spatial resolution and higher bit-width, but they are bulky and expensive. On the other hand, commercial low-end depth cameras provide lower spatial resolution and smaller bit-width, and are relatively inexpensive. Moreover, the depth images captured by such cameras are noisy and may have missing regions. To deal with problems like noise and missing regions, image processing methods like image denoising and image inpainting can be used. Super-resolution (SR) methods address the problem of lower spatial resolution by taking a low-resolution (LR) input image and producing a high-resolution (HR) image with minimal perturbation of image details.
In the literature, several super-resolution (SR) and depth reconstruction (DR) methods have been proposed to address the problems associated with these low-end depth cameras. We propose a few methods to address the above-mentioned issues related to super-resolution and restoration of depth images. Wavelets have been used for decades for image compression, denoising and enhancement because of their better localization in time (space) and frequency. In the proposed work, a wavelet-transform-based single depth image SR method has been proposed. It uses the discrete wavelet transform (DWT), the stationary wavelet transform (SWT), and the image gradient. The proposed method has an intermediate stage for obtaining the high-frequency content from the different subbands obtained through DWT, SWT and gradient operations on the input LR image, and estimates the SR image. For super-resolution by larger factors, i.e. ×4 or ×8 or higher, a guided approach has been used in the literature, which makes use of a corresponding HR guidance colour image, which is easy to capture. In this work, we propose an HR colour-image-guided depth image SR method that makes use of segment cues from the HR colour image. The cues are obtained by segmenting the HR colour image using popular segmentation methods such as the mean-shift (MS) algorithm or the simple linear iterative clustering (SLIC) algorithm. Like other guidance-image-based methods, it is assumed that the prominent edges in the depth image coincide with the edges in the HR guidance colour image. The median of a segment in the initial estimated depth image, corresponding to the segment in the guiding HR colour image, is computed. This median value replaces the depth values in that segment of the initial estimated depth image. After processing all the segments, we get a final SR output with better edge details and reduced noise.
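The segment-median replacement step described above can be sketched directly: given an initial HR depth estimate and a segment-label map derived from the guiding colour image, every depth value is replaced by the median of its segment. The segmentation itself (mean-shift or SLIC) is assumed to have been done already; this is a sketch of the replacement step only, using plain nested lists rather than image arrays.

```python
import statistics
from collections import defaultdict

def segment_median_fill(depth, labels):
    """depth, labels: equal-sized 2-D lists; labels[y][x] is the segment id
    of pixel (y, x) from the guiding HR colour image.  Returns a refined
    depth map in which each pixel holds its segment's median depth."""
    groups = defaultdict(list)
    for y, row in enumerate(labels):
        for x, seg in enumerate(row):
            groups[seg].append(depth[y][x])
    medians = {seg: statistics.median(vals) for seg, vals in groups.items()}
    return [[medians[labels[y][x]] for x in range(len(row))]
            for y, row in enumerate(depth)]
```

The median is a robust statistic, so isolated noisy depth values inside a segment are suppressed while the depth discontinuities between segments (which are assumed to follow the colour-image edges) are preserved.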
Bilateral filtering can be applied as post-processing to smooth the variations at the abutting segment regions. The initial estimate of the SR depth image is derived from the LR depth image using one of two approaches. The first uses bicubic interpolation to the required spatial resolution; the SR process that uses this is referred to as the LRBicSR method in this work. The other method maps the LR points onto the HR grid and then super-resolves; this is referred to as the LRSR method. Processing of sparse depth images involves two stages, DR and SR, in that order. This framework of DR followed by SR is called the DRSR method and is challenging. The sparse depth images used for processing may have sparseness ranging between 1% and 15% of the total pixels. Processing of very sparse depth images, of the order of 1%, is highly challenging, and such images have been reasonably reconstructed. The corresponding RGB images have been used for guiding the reconstruction process. Two approaches have been proposed to estimate the unknown depth values in the sparse depth input: the first is a plane-fitting approach (PFit) and the other is a median-filling approach (MFill). This work also shows that guidance-based methods are useful in overcoming the effect of noise in depth images and in inpainting the missing regions of the depth images. The literature contains SR methods for intensity images that use a set of training images to learn the HR-LR relationship. In this work, a learning-based method has been proposed in which the algorithm learns image details from HR and LR pairs of training images using a Gaussian mixture model (GMM). It has been observed from the conducted experiments that, for larger SR factors, the learned parameters do not help much in learning the finer details. So, a hierarchical approach has been proposed for such factors, and this approach tends to give better SR image quality.
The anisotropic total generalized variation (ATGV) method available in the literature is iterative, and the quality of the SR image obtained with it depends on the number of iterations used. A simple and less computationally intensive residual interpolation (RI) method has been used as a preprocessor for ATGV. The computational complexity of RI is comparable to that of classical bicubic interpolation. RI provides a better initial estimate to the ATGV. It has been observed that cascading RI as a preprocessor reduces the number of iterations and converges faster to achieve better SR image quality. For experimentation, we have used the freely available Middlebury depth dataset, which has depth images along with their corresponding registered colour images. Another dataset used is the KITTI dataset, which has depth images of outdoor scenes. Real-time depth images captured from a Kinect camera and a ToF camera have also been used in the experiments to show the robustness of the proposed methods. The LR image is generated from the ground truth (GT) image by blurring, downsampling and adding noise to it. Several performance metrics, e.g. peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and root mean square error (RMSE), have been used to evaluate the performance.

Item Analysis and Design of Fixed-Frequency Controlled LCL-T Type DC-DC Soft-Switching Power Converter for Renewable Energy Applications (National Institute of Technology Karnataka, Surathkal, 2021) G, Vijaya Bhaskar Reddy.; Harischandrappa, Nagendrappa.
Electrical power is one of the important requirements for the sustainable development of any nation. A widening gap exists between the power supply and the ever-increasing power demand. The available conventional energy sources are either insufficient or cannot sustain this demand for long, as they are depleting in nature.
Renewable energy sources (RESs) have been the most attractive alternative sources of energy for meeting the ever increasing power demand. Power generation from renewable energy sources depends on atmospheric conditions, and hence the power produced is highly fluctuating in nature. To convert this fluctuating power into usable constant power, a power conditioning system is essential. The DC-DC converter is one of the important components of the power conditioning system. This research aims to find a suitable DC-DC resonant power converter topology that can be used in solar power generation applications and to investigate its performance. Therefore, in this work, a literature survey on resonant converter topologies, power control methods, and analysis methods is presented. Fixed-frequency control makes the design of magnetic components and filters simple for effective filtering. Therefore, in this study, two fixed-frequency control schemes have been proposed. The first fixed-frequency control scheme is phase-shifted gating (PSG) control and the second is modified gating signal (MGS) control. The proposed PSG and MGS control schemes are experimentally validated, and the choice between schemes is made by comparing the performance of the converter. It is found that both gating schemes are effective in regulating the output voltage for variable input voltage and loading conditions. However, the efficiency of the converter is found to be higher with MGS due to the fact that only one switch loses ZVS, as compared to two with PSG, when operated at maximum input voltage. Also, the variation in pulse-width angle (δ) required to regulate the output voltage is smaller in MGS than in PSG. The complete behavior of the resonant converter at different intervals of operation can be predicted by analysing the circuit in steady state and transient state. Two steady-state analysis methods have been proposed in this work.
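One of these steady-state analyses, the fundamental harmonic approximation, replaces the inverter's square-wave output by its fundamental sine component and solves the tank as a phasor circuit. A minimal sketch (component values are illustrative, not the thesis's design) showing the LCL-T tank's well-known load-independent output current at the resonant frequency:

```python
import math

def lclt_output_current(vin, f, L, C, R):
    """Phasor solution of an LCL-T tank (series L, shunt C, series L, load R)
    driven by the fundamental component vin at frequency f."""
    w = 2 * math.pi * f
    zl = 1j * w * L                # both series inductors assumed equal
    zc = 1 / (1j * w * C)          # shunt capacitor impedance
    # Node equation at the capacitor node: (vin - va)/zl = va/zc + va/(zl + R)
    va = (vin / zl) / (1 / zc + 1 / (zl + R) + 1 / zl)
    return va / (zl + R)           # current delivered to the load

L, C = 100e-6, 100e-9
f0 = 1 / (2 * math.pi * math.sqrt(L * C))      # tank resonant frequency
i_light = lclt_output_current(100, f0, L, C, R=100)
i_heavy = lclt_output_current(100, f0, L, C, R=10)
# At f0 the load-current magnitude equals vin/(w0*L), independent of R,
# which is the current-source property that makes LCL-T tanks attractive.
```

This load-independence at resonance is exactly the kind of result an FHA phasor analysis predicts in closed form; the Fourier-series method refines it by summing higher harmonics.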
The first is the fundamental harmonic approximation (FHA) method, and the second is the Fourier series (FS) method. The proposed steady-state analysis methods are experimentally validated. The performance of the LCL-T converter designed using the FHA and FS analysis methods is compared. The Fourier series method gives more accurate results, since it considers n harmonic components of voltages and currents, whereas the fundamental harmonic approximation (FHA) method considers only the fundamental component. In order to understand the complete behavior of the converter under fluctuations in the input, load, and control parameters, small-signal modeling of the converter is essential. Therefore, the extended describing function (EDF) method available in the literature is used in this work for small-signal modeling of the converter. The EDF method makes it convenient to derive all small-signal transfer functions and improves accuracy, since it combines both time-domain and frequency-domain analyses.

Item Analysis and Design of GPGPU-based Secure Visual Secret Sharing (VSS) Schemes(National Institute of Technology Karnataka, Surathkal, 2022) M, Raviraja Holla; Pais, Alwyn R

Visual Secret Sharing (VSS) stands for sharing confidential image data across the participants as shares. As each share is not self-sufficient to compromise the secrecy, it is also called a shadow. Only the authorized participants can successfully recover the secret information with their shadows, based on the type of VSS scheme. Recent advancements in VSS schemes have reduced the computation cost of encryption and decryption of the images. Still, real-life applications cannot utilize these techniques on resource-constrained devices, as they are computationally intensive. Another well-identified problem with VSS modalities is that they result in distorted quality of the recovered image. Solutions to improve the contrast of the decrypted image demand additional extensive processing.
Therefore, an ideal remedy is to exploit advancements in hardware like the GPGPU to propose novel parallel VSS schemes. The existing VSS schemes reconstruct the original secret image as a halftone image with only 50% contrast. The Randomized Visual Secret Sharing (RVSS) scheme overcomes this disadvantage by achieving a contrast of 70% to 90% for noise-like and 70% to 80% for meaningful shares (Mhala et al. 2017). But RVSS is computationally expensive. As a remedy, we presented a GPGPU-based RVSS (GRVSS) technique that harnesses the high-performance computing power of the General-Purpose Computation on Graphics Processing Unit (GPGPU) for the data-parallel tasks in the RVSS. The presented GRVSS achieved a speedup range of 1.6× to 1.8× for four shares and 2.63× to 3× for eight shares, for different image sizes. To further enhance the visual quality of the retrieved image, we presented an effective secret image sharing scheme with super-resolution that leverages a deep learning super-resolution technique. The presented model outperformed the benchmark model in many objective parameters, with a recovered secret image contrast of 96.6% - 99.8% for noise-like and 64% - 80% for meaningful shares. We used the GPGPU to reduce the training time of the neural network model, achieving a speedup of 1.92× over the counterpart CPU super-resolution. Also, we presented an effective VSS model with super-resolution utilizing a Convolutional Neural Network (CNN), intending to increase both the contrast of the recovered image and the speedup. The objective quality assessment proved that the presented model produces a high-quality reconstructed image with a contrast of 97.3% - 99.7% for noise-like and 78.4% - 89.7% for meaningful shares, having a Structural Similarity Index (SSIM) of 89% - 99.8% for the noise-like shares and 71.6% - 90% for the meaningful shares. The presented technique achieved an average speedup of 800× in comparison with the sequential model.
The application of VSS schemes to medical images poses additional challenges due to their inherent poor quality. Also, medical imaging demands real-time VSS schemes to save the lives of patients. We extended these concepts to propose a Medical Image Secret Sharing (MISS) model that is well suited to Computer-Aided Diagnostic (CAD) tools. The presented fused feature extractor with a random forest classifier demonstrated that our VSS scheme reconstructs secret medical images suitable for CAD tools. The result analysis confirmed the high performance of the MISS, with a 99.3% contrast and a 98% SSIM for the reconstructed image. MISS achieved an average speedup of 800× in comparison with the sequential model. We used cluster sizes of 300, 500, and 1000 to obtain the MISS model's CAD performance measures, such as accuracy, sensitivity, specificity, precision, and F-measure, using the presented fused feature extractor and a random forest classifier. The achieved precisions of 99.45%, 99.83%, and 100% for cluster sizes of 300, 500, and 1000 prove the presented model's suitability for CAD systems.

Item Analysis and Design of Reliable and Secure Chaotic Communication Systems for Optical and Wireless Links(National Institute of Technology Karnataka, Surathkal, 2014) Abdulameer, LWAA Faisal; Sripati, U; Kulakarni, Muralidhar

There has been growing interest in the use of chaotic techniques for enabling secure communication in recent years. A number of researchers have focused their energies on developing communication strategies based on the discipline of chaotic mechanics. This need has been motivated by the emergence of a number of wireless services which require the channel to provide a very low Bit-Error-Rate (BER) and high bandwidth efficiency along with information security. Simultaneous provision of these three conflicting requirements is difficult to achieve with conventional communication strategies.
This has motivated researchers in the communication engineering community to explore new domains in their search for efficient and secure communication techniques. The work reported in this thesis has aimed at the study, design and validation (via analysis and simulation) of techniques derived from chaotic mechanics to enhance security and BER performance at the physical layer for wireless communication. Both RF and optical wireless system domains have been included in our study. Conventional techniques aiming to provide security enhancement at the physical layer have employed spreading sequences. The use of these techniques requires bandwidth expansion, and the amount of security is limited. Further, the security provided by these techniques comes with a penalty in BER performance and bandwidth efficiency. As a consequence of the rapidly increasing demand for wireless services and the limited licensed bandwidth, there is a strong need for bandwidth-efficient secure systems. In our work, we have designed and verified (by analysis and simulation) chaos-based systems with enhanced BER performance and bandwidth efficiency similar to that offered by conventional PN-sequence-based systems. We have also proposed techniques that are applicable to the emerging domain of Free Space Optical (FSO) communication, because this technology has the potential of providing fiber-like unlicensed bandwidth for high-speed short-distance communication links. We have started the discussion with a study of the issues involved in synchronization between master and slave chaotic systems. We have suggested the use of a Low Density Parity Check (LDPC) error correcting code in the system to reinforce the ability of the system to resist noise and to facilitate the synchronization between master and slave systems in the presence of AWGN. In addition, it is shown that synchronization can be achieved even when the spreading factor is decreased to low values.
We have proposed a dual chaotic encryption algorithm to solve the dynamical degradation problem. An important feature in the analysis of dynamical systems is system stability, which can be determined using the Lyapunov Exponent (LE). We have computed the LE for the single and dual chaotic maps. We have also investigated the BER for different types of dual and single chaotic maps by employing the Chaos Shift Keying (CSK) modulation scheme with a Multiple-Input Multiple-Output (MIMO) communication system under an AWGN channel. Simulation results indicate that the single tent map gives acceptable security and superior BER performance, whereas the dual tent map gives superior security but relatively poorer BER performance. Although chaotic sequences are more secure than PN sequences, they are inferior in terms of bandwidth efficiency and BER performance. In order to overcome this limitation, we have proposed the use of chaotic modulation schemes in MIMO channels. The BER performance of coherent and non-coherent chaotic modulation schemes combined with Alamouti schemes over the AWGN channel and the Rayleigh fading channel has been evaluated and compared. Continuing further in our efforts to propose superior communication strategies, we have proposed a concatenated scheme involving the combination of LDPC and MIMO schemes based on the chaotic technique. The security and BER performance of this Chaotic-LDPC scheme with two transmit antennas and two receive antennas under various channel models have been evaluated.
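The Lyapunov exponent mentioned above is the orbit average of ln|f'(x)| for a one-dimensional map; a positive value indicates chaos. A minimal sketch for the logistic map (an illustrative stand-in for the tent maps studied here; for r = 4 the analytic value is ln 2):

```python
import math

def lyapunov_logistic(r=4.0, x0=0.1234, n_transient=1000, n=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    as the orbit average of ln|f'(x)| = ln|r*(1 - 2*x)|."""
    x = x0
    for _ in range(n_transient):           # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

le = lyapunov_logistic()   # for r = 4 this converges to about ln 2 ≈ 0.693
```

A positive estimate confirms exponential divergence of nearby orbits, the property that makes chaotic carriers hard to predict.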
We have discussed the theory and carried out detailed analysis pertaining to the encoding/decoding of chaotic modulation schemes and the use of suitable LDPC codes and MIMO schemes for providing secure and reliable communication over the AWGN channel, the Rayleigh fading channel and the Gamma-Gamma fading channel. To improve security and reliability with enhanced throughput, we have proposed a Quadrature Chaos Shift Keying (QCSK) modulation scheme with high-rate STBC. The bandwidth efficiency of chaos-based communication schemes is inferior to that of traditional communication schemes. To address this problem, we have designed high-rate, full-diversity orthogonal STBCs for QCSK with 2 transmit antennas and 2 receive antennas. Simulation results indicate that these high-rate codes achieve better throughputs in the high-SNR region; the two designed codes achieve 25% and 50% improvements in information rate compared to the traditional Alamouti scheme for Differential Chaos Shift Keying (DCSK). To evaluate the performance of these techniques in a multi-user environment, we have analyzed and evaluated the anti-jamming performance of CSK in a MIMO channel. The BER performance for three common types of jamming, namely single-tone jamming, pulsed sinusoidal jamming and multi-tone jamming, under different levels of noise power over the AWGN channel has been derived and evaluated. We have also discussed the design and evaluated the performance of a communication system that combines a MIMO scheme with a chaotic sequence based Direct Sequence Code Division Multiple Access (DS-CDMA) scheme. In the last part of our work, we have considered the application of chaotic techniques in Free-Space Optical (FSO) communication systems. The design analysis, simulation and BER performance evaluation of different optical chaotic modulation schemes with a MIMO-FSO communication system are presented.
Simulations were carried out using available simulators from RSoft (OptSim version 5.2). The main aim of this work is to assess the feasibility of employing space-time coded chaotic communications over MIMO communication channels (both RF and optical). Our analyses and simulations show that it is feasible to develop reliable and secure communication systems based on chaotic modulation schemes combined with MIMO and channel codes. These systems can provide the benefits of information integrity, security and enhanced throughput. It is hoped that the use of tools from chaotic mechanics will enable communication engineers to devise strategies that will allow wide dissemination of wireless services to all of humankind.

Item Analysis and Design of Secure Visual Secret Sharing Schemes with Enhanced Contrast(National Institute of Technology Karnataka, Surathkal, 2021) Mhala, Nikhil Chandrakant.; Pais, Alwyn Roshan.

The Visual Secret Sharing (VSS) scheme is a cryptographic technique which divides a secret image into multiple shares. These shares are then transmitted over a network to the respective participants. To recover the secret image, all participants have to stack their shares together at the receiver end. Naor and Shamir (1994a) first proposed the basic VSS scheme for binary images using a threshold scheme. The scheme generated shares of increased size, and hence suffered from the problem of expanded shares. To overcome this problem, the Block-based Progressive Visual Secret Sharing (BPVSS) scheme was proposed by Hou et al. (2013a). BPVSS is an effective scheme suitable for both gray-scale and color images. Although the BPVSS scheme recovered the secret image with better quality, it still suffers from two problems: 1) the restored image obtained by joining all the shares together is always a binary image, and 2) the maximum contrast achievable by BPVSS is 50%.
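The 50% contrast ceiling can be seen in a minimal (2, 2) random-grid construction in the spirit of Naor–Shamir (a simplified illustration, not the BPVSS algorithm itself): share 1 is random, share 2 repeats it on white pixels and complements it on black ones, so stacking (pixel-wise OR) leaves black pixels fully black and white pixels black only half the time.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_shares(secret):
    """(2, 2) random-grid VSS for a binary secret (1 = black, 0 = white).
    Each share alone is uniformly random and reveals nothing."""
    share1 = rng.integers(0, 2, size=secret.shape)
    share2 = np.where(secret == 1, 1 - share1, share1)  # complement on black
    return share1, share2

secret = np.zeros((64, 64), dtype=int)
secret[16:48, 16:48] = 1                    # a black square on a white field
s1, s2 = make_shares(secret)
recovered = s1 | s2                         # stacking the two transparencies
# Black regions recover fully black; white regions are ~50% black,
# which is exactly the 50% contrast ceiling of the basic scheme.
```

The enhanced-contrast schemes in this thesis work by embedding extra recovery information in the shares rather than by changing this stacking rule.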
This thesis presents various mechanisms to improve the reconstruction quality and the contrast of a secret image transmitted using BPVSS. The first technique proposed in this thesis is Randomised Visual Secret Sharing (RVSS) (Mhala et al. 2018). RVSS is an encryption technique that utilises block-based progressive visual secret sharing and a Discrete Cosine Transform (DCT)-based reversible data embedding technique to recover a secret image. The recovery method is based on progressive visual secret sharing, which recovers the secret image block by block. The existing block-based schemes achieve a highest contrast level of 50% for noise-like and meaningful shares. The presented scheme achieves a contrast level of 70-90% for noise-like and 70-80% for meaningful shares. The enhancement of contrast is achieved by embedding additional information in the shares using the DCT-based reversible data embedding technique. Experimental results showed that the proposed scheme restores the secret image with better visual quality in terms of human-visual-system-based parameters. Although the RVSS scheme recovers secret images with better contrast, it still suffers from the problem of blocking artifacts. To further improve the reconstruction quality of RVSS, this thesis presents a novel Super-resolution based Visual Secret Sharing (SRVSS) technique. The SRVSS scheme uses the super-resolution concept along with a data hiding technique to improve the contrast of the secret images. The experimental results showed that the SRVSS scheme achieves a contrast of 70-80% for meaningful shares and 99% for noise-like shares. Also, the scheme recovers the secret image free from blocking artifacts. Nowadays, medical information is being shared over communication networks due to the ease of technology. A patient's medical information has to be securely communicated over a network for Computer Aided Diagnosis (CAD).
Most communication networks are prone to attacks from an intruder, thus compromising the security of patients' data. Therefore, there is a need to transmit medical images securely over a network. A visual secret sharing scheme can be used to transmit medical images over a network securely. This thesis has applied the super-resolution based VSS scheme to medical images to transmit them over a network. The experimental results showed that the scheme recovers medical images with better contrast: the presented system is able to reconstruct the secret image with a contrast of almost 85-90% and a similarity of almost 77%. Additionally, the performance of the presented system is evaluated using existing CAD systems. The images reconstructed using the presented super-resolution based VSS scheme achieve classification accuracy similar to that of the existing CAD system. Nowadays, underwater images are being used to identify various important resources like objects, minerals, and valuable metals. Due to the wide availability of the Internet, underwater images can be transmitted over a network. As underwater images contain important information, there is a need to transmit them securely over a network. The Visual Secret Sharing (VSS) scheme is a cryptographic technique which is used to transmit visual information over insecure networks. The Randomized VSS (RVSS) scheme recovers the Secret Image (SI) with a Structural Similarity Index (SSIM) of 60-80%. But RVSS is suitable for general images, whereas underwater images are more complex than general images. The work presented in this thesis to share underwater images over a network uses a super-resolution based VSS scheme. Additionally, it removes blocking artifacts from the reconstructed secret image using a Convolutional Neural Network (CNN)-based architecture. The presented CNN-based architecture uses a residue image as a cue to improve the visual quality of the SI.
The experimental results show that the presented VSS scheme can reconstruct the SI with almost 86-99% SSIM. Hence, it can be used to transmit complex images over insecure channels.

Item Analysis of Influence of Land Use Land Cover and Climate Changes on Streamflow of Netravati Basin, India(National Institute Of Technology Karnataka Surathkal, 2023) Jose, Dinu Maria; G S, Dwarakish

Massive Land Use/Land Cover (LULC) change is a result of human activities. These changes have, in turn, affected the stationarity of climate, i.e., climate change is beyond the past variability. Studies indicate the effect of LULC change and climate change on the hydrological regime and mark the necessity of its timely detection at watershed/basin scales for efficient water resource management. This study aims to analyse and predict the influence of climate change and LULC change on the streamflow of the Netravati basin, a tropical river basin on the south-west coast of India. For future climate data, researchers depend on general circulation model (GCM) outputs. However, significant biases exist in GCM outputs when considered at a regional scale. Hence, six bias correction (BC) methods were used to correct the biases of high-resolution daily maximum and minimum temperature simulations. A considerable reduction in bias was observed for all the BC methods employed except the Linear Scaling method. While there are several BC methods, few consider the frequency, intensity and distribution of rainfall. This study used an effective bias correction method which considers these characteristics of rainfall. This study also assessed and ranked the performance of 21 GCMs from the National Aeronautics and Space Administration (NASA) Earth Exchange Global Daily Downscaled Projections (NEX-GDDP) dataset and bias-corrected outputs of 13 Coupled Model Intercomparison Project, Phase 6 (CMIP6) GCMs in reproducing precipitation and temperature in the basin.
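Linear scaling, the simplest of the BC methods evaluated, shifts (or scales) model output by the mean bias over a calibration period; the additive form is the standard choice for temperature and the multiplicative form for precipitation. A minimal sketch with made-up numbers:

```python
import numpy as np

def linear_scaling_additive(obs_hist, model_hist, model_future):
    """Additive linear scaling: shift the model series so that its
    calibration-period mean matches the observed mean (used for temperature)."""
    offset = np.mean(obs_hist) - np.mean(model_hist)
    return np.asarray(model_future) + offset

def linear_scaling_multiplicative(obs_hist, model_hist, model_future):
    """Multiplicative form (used for precipitation, which must stay >= 0)."""
    factor = np.mean(obs_hist) / np.mean(model_hist)
    return np.asarray(model_future) * factor

obs = np.array([24.0, 26.0, 28.0])        # observed temperatures (illustrative)
mod_hist = np.array([22.0, 23.0, 24.0])   # model output over the same period
corrected = linear_scaling_additive(obs, mod_hist, mod_hist)
# After correction the calibration-period mean matches the observations.
```

Because only the mean is adjusted, linear scaling leaves the frequency, intensity and distribution of daily values untouched, which is why the distribution-aware BC method was preferred for rainfall in this study.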
Four multiple-criteria decision-making (MCDM) methods were used to identify the best GCMs for precipitation and temperature projections. For the CMIP6 dataset, BCC-CSM2-MR was found to be the best GCM for precipitation, while INM-CM5-0 and MPI-ESM1-2-HR were found to be the best for minimum and maximum temperature in the basin by the group ranking procedure. However, the best GCMs for precipitation and temperature projections of the NEX-GDDP dataset were found to be MIROC-ESM-CHEM and IPSL-CM5A-LR, respectively. Multi-Model Ensembles (MMEs) are used to improve the performance of GCM simulations. This study also evaluates the performance of MMEs of precipitation and temperature developed by six methods, including mean and Machine Learning (ML) techniques. The results of the study reveal that an LSTM model used for ensembling performs significantly better than the other models. In general, all ML approaches performed better than the mean ensemble approach. Analysis and mapping of LULC are essential to improve our understanding of human-nature interactions and their effects on land-use changes. The effects of topographic information and spectral indices on the accuracy of LULC classification were investigated in this study. Further, a comparison of the performance of Support Vector Machine (SVM) and Random Forest (RF) classifiers was carried out. The RF classifier outperformed SVM in terms of accuracy. Finally, the maps classified by the RF classifier using reflectance values, topographic factors and spectral indices, along with other driving factors, are used for making future projections of LULC in the Land Change Modeler (LCM) module of TerrSet software. The results reveal that the built-up area is expected to increase in the future. In contrast, a drop in forest and barren land is expected. The SWAT model is used to study the impacts of LULC and climate change on streamflow. The results indicate a reduction in annual streamflow by 2100 due to climate change.
In contrast, an increase in streamflow of 13.4% is expected by the year 2100 due to LULC change, when compared to the year 2020. The effect of climate change on streamflow is greater than that of LULC change. The change in streamflow reduces from the near to the far future.

Item An Analysis of Pricing Efficiency of Exchange Traded Funds (ETFS) in India(National Institute of Technology Karnataka, Surathkal, 2020) C, Buvanesh.; H, Rajesh Acharya.

An ETF is a marketable security, traded similarly to a common stock on the stock exchange, that tracks an index, a commodity, or a basket of assets. ETFs are index funds representing a basket of securities, including stocks, bonds, and other assets traded on the stock exchange. An ETF is designed to track a particular stock or bond index. Nifty BeES, based on the S&P CNX Nifty, was the first ETF launched in India, in December 2001, by Benchmark Mutual Fund. The current study focuses on the pricing efficiency of equity ETFs in India. The data period covered extends from the inception date of each ETF to 31st December 2018. Seventeen equity ETFs were examined in the study. The four major objectives of the study include the pricing efficiency of ETFs and their underlying benchmark indices, and the speed of adjustment of ETFs and underlying benchmark indices to their intrinsic values. Further, the study examines the persistence of premiums and discounts. The study also investigates the volatility and returns spillover between ETFs and their underlying benchmark indices. The current study employs the ARDL model to examine the long-run relationship between the ETF market price and the underlying index price, and between the ETF's market price and NAV. Also, the present study uses the ARMA estimator for assessing the speed of adjustment. Finally, the study employs ARMA-GARCH and ARMA-EGARCH models for the volatility spillover of ETFs and underlying benchmark indices.
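The premium/discount examined in the third objective is simply the percentage deviation of the ETF's market price from its NAV, with persistence measured by how often the sign repeats. A minimal sketch with hypothetical prices:

```python
import numpy as np

def premium_discount(price, nav):
    """Daily premium (+) or discount (-) of an ETF, in percent of NAV."""
    price, nav = np.asarray(price, float), np.asarray(nav, float)
    return (price - nav) / nav * 100.0

price = np.array([101.0, 99.5, 100.2, 98.8])   # market closing prices (illustrative)
nav = np.array([100.0, 100.0, 100.0, 100.0])   # end-of-day NAVs
prem = premium_discount(price, nav)
days_at_discount = int((prem < 0).sum())        # count of discount days
```

The thesis's finding that most ETFs trade at a discount corresponds to `days_at_discount` dominating the sample for most funds.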
Empirical results suggested that ETFs have a long-run relationship with underlying benchmark index prices, and that single and multiple structural breaks had an impact on the results compared to those without structural breaks. The results of the second objective showed that ETFs and underlying benchmark index prices did not reflect full information within 20 days. The results of the third objective showed that most ETFs trade at a discount rather than a premium, except for a few ETFs. The bounds test result also confirmed that all the ETFs had a long-run relationship between ETF price and NAV. The findings of the fourth objective show that volatility persistence existed in all the ETFs and their respective indices. The leverage term was negative and significant in most of the ETFs and their respective indices, which further confirmed the asymmetric volatility present in the data. In most cases, the spillover of returns was unidirectional, from index returns to ETF returns and not vice versa.

Item Analysis of Shoulder and Knee Joint Muscles using Developed CPM Machine and Finite Element Method(National Institute of Technology Karnataka, Surathkal, 2016) Sidramappa, Metan Shriniwas; Mohankumar, G. C.; Krishna, Prasad

Shoulder and knee joint pain, injury and discomfort are public health and economic issues worldwide. As per an Indian Orthopaedic Association survey, about 50% of patient visits to doctors' offices are because of common shoulder and knee injuries such as fractures, dislocations, sprains, and ligament tears. The shoulder and knee are the most complex, most used and most critical joints in the human body. The behaviour of the shoulder and knee joint muscles during different exercises is one of the major concerns of the orthopedic surgeon in analysing the exact healing and duration of an injury. Quantification of mechanical stresses and strains in the human joints and the musculoskeletal system is still a big concern for researchers.
The injury mechanisms and the analysis of post-operative progress are among the most critical studies for orthopedic surgeons, biomechanical engineers and researchers. In the present work, a classical 3D Finite Element Method (FEM) modelling technique has been used to investigate the stresses induced in the shoulder joint muscles during abduction arm movement and in the knee joint muscles during flexion leg movement for different ranges of motion. The 3D model provides valuable information for analysing complex biomechanical systems and characterizing the joint mechanics. A reverse modelling method was used to generate fast, accurate and detailed contours of the shoulder and knee models. Scans of the complicated shoulder and knee joint bones were made with a 3D scanner (ATOS III) to generate '.stl' files. Accurate and detailed 3D bone geometry of the shoulder and knee joint models was created in CATIA V5 software from the scanned '.stl' files. The higher-order geometrical features (curves and surfaces) were designed by filtering and aligning the cloud points, tessellating the polygonal model, and recognizing and defining the referential geometrical entities. According to quadratic dependency, a non-homogeneous bone constitutive law was implemented. The different muscles were then added to the shoulder and knee joint models in CATIA V5. The 3D models were then imported in '.igs' format into ANSYS Workbench for the stress analysis. A 3D FEM model was developed for the five important shoulder joint muscles, namely the deltoid, supraspinatus, subscapularis, teres minor and infraspinatus. The kinematics of the shoulder abduction arm movement was prescribed as an input to the finite element simulations, and the Von Mises stresses and equivalent elastic strains in the shoulder muscles were plotted. Individual and group muscle analyses were done to evaluate the Von Mises stresses and equivalent elastic strains of the shoulder muscles during the abduction arm movement.
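The von Mises (equivalent) stress reported by such FEM analyses condenses the six components of the stress tensor into one scalar via σ_vm = √(½[(σx−σy)² + (σy−σz)² + (σz−σx)²] + 3(τxy² + τyz² + τzx²)). A minimal sketch of the formula (the stress values below are illustrative only):

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Equivalent (von Mises) stress from the six stress components."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Uniaxial tension: the equivalent stress equals the applied stress,
# so a 4.2175 MPa uniaxial state reads back as 4.2175 MPa.
sigma_eq = von_mises(4.2175, 0.0, 0.0, 0.0, 0.0, 0.0)
```

For pure shear the formula gives √3 times the shear stress, which is why von Mises is the standard scalar for comparing multi-axial stress states across muscles.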
In the individual muscle analysis, the Von Mises stress induced in the deltoid muscle was the maximum (4.2175 MPa), and in the group muscle analysis it was 2.4127 MPa, compared to the other four rotator cuff muscles. In the individual muscle analysis, the equivalent elastic strain induced in the deltoid muscle was the maximum (3.5146 mm/mm), and in the group muscle analysis it was 2.0106 mm/mm, compared to the other four rotator cuff muscles. The percentage contribution of individual muscles to the abduction arm movement predicted by FEM analysis was maximum (46.85%) in the deltoid muscle. The results showed that the deltoid was the most stressed muscle in both the individual and group muscle analyses. The Surface Electromyography (SEMG) test was conducted on shoulder-prone subjects using the developed low-cost shoulder Continuous Passive Motion (CPM) machine. The percentage contribution of individual muscles to the abduction arm movement predicted by SEMG analysis was maximum (48.15%, 46.15% and 47.05%) in the deltoid muscle. The deltoid was the most contracted (stressed) muscle observed during the SEMG analysis amongst the five shoulder muscles. Both the FEM and SEMG methods showed that the deltoid muscle was the most sensitive amongst the five shoulder joint muscles during the abduction arm movement. FEM analysis was done to investigate the Von Mises stresses in two important knee joint muscles, the rectus femoris and biceps femoris, during the flexion leg movement. In this analysis, the Von Mises stress induced in the rectus femoris muscle was the maximum (1.5579 MPa). The results showed that the rectus femoris was more stressed than the biceps femoris during the flexion leg movement. The SEMG test was conducted on knee-prone subjects using the developed low-cost knee CPM machine.
The average percentage contraction (stress distribution) exhibited by SEMG analysis on the rectus femoris muscle was 70% of the total muscle contraction. The results from both the FEM and SEMG methods showed that the rectus femoris was the most stressed muscle during the flexion leg movement. The present work provides in-depth information to researchers and orthopedicians for a better understanding of the shoulder and knee joint mechanisms in human anatomy. It predicts the most stressed muscle in the shoulder joint during the abduction arm movement and in the knee joint during the flexion leg movement at different ranges of motion.

Item Analytical and Experimental Dynamic Analysis of A Four Wheeler Vehicle With Semi Active Suspension System(National Institute Of Technology Karnataka Surathkal, 2023) N P, Puneet; Kumar, Hemantha; K V, Gangadharan

Advances in the automobile industry in several engineering aspects have opened up never-ending challenges and scope. One such interesting challenge is to achieve better ride quality, which intends to provide more comfort to the passengers. Road profile randomness is not uniform around the globe; therefore, achieving good ride comfort has always been a task for researchers over the years. A key component responsible for ride quality is the suspension system of the vehicle, principally a combination of spring and damper. The nature and magnitude of energy dissipation by the damper provides suitable ride quality to the vehicle. Passive dampers provide a constant response to any kind of road disturbance, since the fluid properties cannot be altered by any external input. Hence, replacing the passive damping medium with a semi-active medium gives the suspension system an added advantage in providing greater ride comfort. Magneto-rheological (MR) fluid is one such smart fluid, known for its semi-active nature when the external magnetic field is varied.
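MR damper behaviour is often idealised with a Bingham-plastic model: a viscous term plus a field-dependent yield force. This is a standard textbook idealisation offered here for orientation, not the parametric (Kwok) model used later in this study, and all coefficients are illustrative:

```python
import math

def bingham_force(v, current, c0=900.0, alpha=350.0, f0=25.0):
    """Bingham-plastic idealisation of an MR damper:
    F = c0*v + fy(I)*sign(v) + f0, where the yield force fy grows with
    the coil current I (all coefficients hypothetical)."""
    fy = alpha * current                       # field-dependent yield force (N)
    if v == 0:
        return f0                              # only the offset acts at rest
    return c0 * v + fy * math.copysign(1.0, v) + f0

f_off = bingham_force(0.1, current=0.0)        # passive (no field) response
f_on = bingham_force(0.1, current=1.0)         # energised response is larger
```

The jump between `f_off` and `f_on` at the same piston velocity is the controllable force range that a semi-active suspension exploits.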
This research study deals with the synthesis of magnetorheological fluid and its application in the damper of a light motor vehicle. In the primary part of this study, a passive damper was extracted from the suspension system of a commercially available light motor vehicle. This passive damper was characterized in a dynamic testing machine (DTM) to understand its dynamic response to varying cyclic inputs. The damping force response of the passive damper was considered the benchmark for the development of an MR fluid damper specific to the test vehicle. A quarter car model was developed using MATLAB/Simulink, and the response from the passive damper characterization was employed in the damping element of the model. As a preliminary study of the MR fluid damper, a small-stroke MR damper was designed and developed. For this purpose, an MR fluid was prepared in-house and used as the damping medium in the MR damper. This prototype was then characterized using the dynamic testing machine under different amplitude, frequency and DC current inputs. A mathematical model was established that could relate the damping force to the current, which was then used in the quarter car simulation. Based on the above preliminary work, a full-scale prototype MR damper was then designed using an optimization technique under certain geometrical constraints. The designed MR damper piston was analyzed using Finite Element Method Magnetics (FEMM) to verify the magnetic flux developed in the fluid flow gap. The MR fluid used as the damper fluid was synthesized in-house using electrolytic iron particles (EIP) and paraffin oil. A rheological study of the synthesized MR fluid was conducted to analyze the shear stress and viscosity variation against the shear rate and current inputs. The developed MR damper was then characterized under various dynamic and DC current inputs to study the force versus displacement behaviour.
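The quarter car model referred to above is a standard two-degree-of-freedom system (sprung and unsprung masses coupled by the suspension, with a tire spring to the road). A minimal fixed-step sketch, with placeholder parameters rather than the thesis vehicle data, and a simple bump as road input:

```python
# Quarter-car (2-DOF) sketch with a passive spring-damper suspension.
# Parameter values are illustrative placeholders, not the test vehicle's.
def simulate(ms=250.0, mu=40.0, ks=16000.0, cs=1000.0, kt=160000.0,
             dt=1e-4, t_end=2.0):
    zs = zu = vs = vu = 0.0          # sprung/unsprung displacements and velocities
    peak = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        zr = 0.05 if 0.5 <= t <= 0.6 else 0.0      # 5 cm bump road input
        f_susp = ks * (zs - zu) + cs * (vs - vu)   # suspension force
        a_s = -f_susp / ms                         # sprung-mass acceleration
        a_u = (f_susp - kt * (zu - zr)) / mu       # unsprung-mass acceleration
        vs += a_s * dt; vu += a_u * dt
        zs += vs * dt; zu += vu * dt
        peak = max(peak, abs(zs))                  # track sprung-mass excursion
    return peak

peak = simulate()
```

A semi-active (MR) version would replace the constant `cs` with a state- or current-dependent damping force at each step.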
The hysteresis of the damper was mathematically represented using a parametric modeling technique called the Kwok model. The parameters of the model were determined for each condition using an optimization method. This model was then used in the quarter car simulation to analyze the behaviour of the suspension under off-state, constant current and skyhook control. The simulation was validated by using the suspension with the MR damper in a quarter car test rig, and the deviation in the results was analyzed. As an important part of this research work, the suspension with the developed MR damper was tested on-road using a test vehicle. The passive damper in the front suspension of the test vehicle was replaced with the MR damper, and the suspension was tested at two different vehicle speeds. The ride comfort under different conditions was also analyzed. As an extended part of the study, a control logic involving a single-sensor technique was developed. The performance of the developed control was tested using the quarter car setup, and a comparison of the responses for different current inputs was also presented.
Item Analytical Solutions for the Thermo-Elastic Analysis of FGM Plates Using Higher Order Refined Theories (National Institute of Technology Karnataka, Surathkal, 2017) D. M, Sangeetha; Swaminathan, K.
Analytical formulations and solutions are presented for the thermo-elastic analysis of Functionally Graded Material (FGM) plates based on a set of higher order refined shear deformation theories. The displacement components in these computational models are based on Taylor's series expansions, which incorporate a parabolic variation of transverse strains across the plate thickness. The displacement model with twelve degrees of freedom considers the effects of both transverse shear and normal strain/stress, while the model with nine degrees of freedom includes only the effect of transverse shear deformation.
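The Kwok model mentioned in the MR damper abstract above represents the damper force as a viscous term, a stiffness term, a hyperbolic-tangent hysteresis term and a force offset. A minimal sketch, with placeholder parameter values (the thesis fits these per amplitude/frequency/current condition):

```python
import math

# Kwok-model sketch of MR damper force. All parameter values are
# illustrative placeholders, not fitted values from the thesis.
def sgn(x):
    return (x > 0) - (x < 0)

def kwok_force(x, v, c=1200.0, k=500.0, alpha=300.0, beta=0.05,
               delta=1.0, f0=20.0):
    """x: displacement [m], v: velocity [m/s]; returns force [N]."""
    # viscous + stiffness + tanh hysteresis + force offset
    return c * v + k * x + alpha * math.tanh(beta * v + delta * sgn(x)) + f0

f_ext = kwok_force(0.01, 0.05)   # extending through positive displacement
f_ret = kwok_force(-0.01, 0.05)  # same velocity, negative displacement
```

The sign-of-displacement term inside the `tanh` is what opens the force-velocity loop, so the same velocity yields different forces on the two branches of the hysteresis cycle.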
Besides these, a higher order model and a first order model with five degrees of freedom, developed by other investigators and reported in the literature, are also used in the present investigation for evaluation purposes. A simply supported FGM plate subjected to thermal load is considered throughout as a test problem. The material properties are mathematically modeled using a power law function. The temperature is assumed to vary nonlinearly across the plate thickness, obeying the one-dimensional steady state heat conduction equation, while varying sinusoidally in-plane. Constant and linearly varying temperature profiles are also considered in the study. The equations of equilibrium are derived using the Principle of Minimum Potential Energy (PMPE), and closed form solutions are obtained using Navier's solution technique. First, numerical results obtained using the various displacement models are compared with the three-dimensional elasticity solutions available in the literature in order to establish the accuracy of the higher order models considered in the study. After establishing the accuracy of the solution method, benchmark results and comparisons of solutions are presented for Monel/Zirconia, Titanium-Alloy/Zirconia and Aluminium/Alumina FGM plates by varying the edge ratio, slenderness ratio and power law parameter. Numerical and graphical results are presented for the in-plane and transverse displacements and stresses for all the models, considering different temperature profiles.
Item Analytical Tools for Strength Prediction of Thermally Deteriorated HPC (National Institute of Technology Karnataka, Surathkal, 2014) Bhygayalaxmi, Kulkarni Kishor Sitaram; Yaragal, Subhash C; Narayan, K. S. Babu
“Analytical tools for strength prediction of thermally deteriorated HPC” is an experimental study on the development of analytical tools for strength prediction of High Performance Concrete (HPC) exposed to elevated temperatures.
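The power law grading used in the FGM abstract above interpolates each material property through the thickness between the metal face and the ceramic face. A minimal sketch, with illustrative elastic moduli standing in for the tabulated material data:

```python
# Power-law grading sketch for an FGM plate: metal-rich bottom face at
# z = -h/2, ceramic-rich top face at z = +h/2. Property values below
# (roughly Aluminium/Alumina Young's moduli) are illustrative only.
def fgm_property(z, h, p_metal, p_ceramic, n):
    """Effective property at thickness coordinate z for power-law index n."""
    vc = (z / h + 0.5) ** n                # ceramic volume fraction
    return (p_ceramic - p_metal) * vc + p_metal

h = 0.01                                   # plate thickness [m], placeholder
E_top = fgm_property(+h / 2, h, 70e9, 380e9, 2.0)   # pure ceramic face
E_bot = fgm_property(-h / 2, h, 70e9, 380e9, 2.0)   # pure metal face
E_mid = fgm_property(0.0, h, 70e9, 380e9, 2.0)      # graded mid-plane
```

Increasing the power law index n biases the through-thickness profile toward the metal, which is why the benchmark results above are reported as functions of this parameter.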
The prime objective is to study the behaviour of HPC at different exposure durations and temperatures. The work also focuses on the residual strength assessment of concrete exposed to elevated temperature by non-destructive testing. An exhaustive review of the literature has been done to understand the state of the art, to identify points needing further research, and then to design the experimental investigation. The first phase of the study deals with the properties of four types of HPC mixes, including unblended and blended mixes with partial replacement of cement by Fly Ash (FA) and Ground Granulated Blast Furnace Slag (GGBFS), over an exposure temperature range of 100°C-800°C and exposure durations of 1, 2 and 3 hours. Colour change and crack patterns have been observed. Porosity and density determinations, and Ultrasonic Pulse Velocity (UPV) measurements to assess the quality of the concrete, have been made. Residual compressive and splitting tensile strengths have been determined by destructive testing. The second phase explores the potential of the drilling resistance test on thermally deteriorated concrete as an NDT tool. The drilling time for a designated depth of drilling and the sound levels while drilling have been recorded. The determination of the residual compressive strength of plain and reinforced concrete exposed to elevated temperature has been carried out in the third phase of experiments, using core recovery tests to understand the behavioural differences. Very interesting conclusions have been drawn from the above investigation, highlighting the superiority of blended concrete's fire endurance properties.
The potential use of drilling time and sound levels as an NDT tool has been demonstrated, and nomographs that can serve as valid decision-making tools in failure forensics have been developed; the work also elaborates the care and caution necessary in conducting and interpreting core test results of fire-damaged structural elements.
Item ANN Modeling and Optimization of Power Output from Horizontal Axis Wind Turbine (National Institute of Technology Karnataka, Surathkal, 2019) Rashmi; A, Sathyabhama; P, Srinivasa Pai
Integration of wind energy with existing power sources has been restricted due to its intermittent and stochastic nature. Hence, there is a great need to develop accurate and reliable site-specific prediction models. Forecasting of wind speed, an important parameter affecting turbine power output, will help the wind energy industry in proper planning, scheduling and control. Artificial Neural Networks (ANN) have proved their capability in mapping such complex non-linear input-output relations. The main objective of the wind energy industry is to reduce cost and increase power generation by optimizing the controllable parameters affecting the turbine power output. Metaheuristic optimization algorithms, which are robust to dynamic changes, have proved successful in solving such complex real-world problems. This research work has been carried out in three phases, namely wind power prediction, wind power optimization and wind speed forecasting (WSF). The data for this research work have been collected from the Supervisory Control and Data Acquisition (SCADA) system of a 1.5 MW, pitch regulated, three bladed, horizontal axis wind turbine, located in a large wind farm in the central dry zone of Karnataka, India. In the present study, different conventional and ANN models have been used to predict the power output of the turbine.
ANN models have been developed based on batch learning and Online Sequential Extreme Learning Machine (OSELM) algorithms, considering carefully selected variables affecting power output, namely wind speed, wind direction, blade pitch angle, density and rotor speed. Maximizing the power output of the wind turbine by optimizing the only controllable parameter, namely the blade pitch angle, has been achieved using three different metaheuristic optimization algorithms. A hybrid ANN multistep WSF model, which is a combination of OSELM, Cuckoo Search (CS) and the Optimized Variational Mode Decomposition (OVMD) method, hence named OVMD-CS-OSELM, has been proposed in the present study. The performance of this hybrid model has then been compared with the benchmark models. From this study it has been found that models based on the Extreme Learning Machine (ELM) converge much faster, with better generalization performance, and generate a compact network structure compared to backpropagation learning. Out of the fifteen models based on batch learning, the fully optimized RBF model with ELM learning resulted in good performance, with a Root Mean Square Error (RMSE) value of 1.73%. The detailed study of the OSELM algorithm showed an RMSE value of 1.96%, which is slightly higher than that of the fully optimized RBF model. However, for the present application, due to the online nature of the wind data, the OSELM algorithm is highly preferable. The CS optimization algorithm is found to be suitable for optimizing the blade pitch angle of the turbine, and accordingly the power output, due to its fast convergence and the highest mean relative PG value of 17.329%.
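The ELM learning mentioned above trains a single-hidden-layer network without backpropagation: hidden weights are drawn at random, and only the output weights are solved in one least-squares step. A minimal sketch on toy data (the input names and dataset are illustrative, not the SCADA turbine data):

```python
import numpy as np

# Extreme Learning Machine sketch: random hidden layer, output weights
# solved in closed form via the Moore-Penrose pseudoinverse.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 5))   # 5 toy inputs (e.g. speed, pitch, ...)
y = np.sin(X).sum(axis=1)               # toy smooth target

L = 50                                   # number of hidden neurons
W = rng.normal(size=(5, L))              # random input weights (never trained)
b = rng.normal(size=L)                   # random hidden biases
H = np.tanh(X @ W + b)                   # hidden-layer output matrix

beta = np.linalg.pinv(H) @ y             # one-shot least-squares output weights
y_hat = H @ beta
rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
```

OSELM extends this by updating `beta` recursively as data chunks arrive, which is what makes it suited to the online wind data noted above.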
In comparison with the benchmark models, the proposed WSF model showed clear benefits of OSELM over ELM, OVMD over Empirical Mode Decomposition, and CS over the Partial Autocorrelation Function for modeling, data pre-processing and input feature selection, with percentage improvements in Mean Absolute Percentage Error (MAPE) of 3.35%, 48.19% and 12.05% respectively for 1-step-ahead forecasting. The proposed model has been validated using a standard database from a meteorological station located in Portugal, thereby establishing its use in WSF. This research work thus proposes efficient ANN-based models for wind power prediction, optimization and WSF, which are useful for proper planning, integration and scheduling in the wind energy industry, thereby making wind a more competitive and promising renewable energy source.
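The percentage improvements in MAPE reported above compare one forecaster's error against another's. A minimal sketch of both computations, with made-up wind speed values standing in for the real series:

```python
# MAPE and percentage improvement of one forecaster over another,
# as used when comparing WSF models. All values are illustrative.
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (actual values nonzero)."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

def improvement(mape_ref, mape_new):
    """Positive when the new model has lower MAPE than the reference."""
    return 100.0 * (mape_ref - mape_new) / mape_ref

actual  = [5.0, 6.2, 7.1, 6.8]   # observed wind speeds [m/s], placeholders
model_a = [5.4, 6.0, 7.5, 6.5]   # reference forecaster
model_b = [5.2, 6.1, 7.3, 6.9]   # proposed forecaster

imp = improvement(mape(actual, model_a), mape(actual, model_b))
```

Here `model_b` halves the pointwise errors of `model_a`, so its MAPE is lower and the improvement comes out positive.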