Volume 6 Number 3 December 2016

1

Malicious Node Identification Scheme for MANET using Rough Set Theory
S. Sathish, M. Saranya

Abstract- A mobile ad hoc network is a collection of wireless nodes that can be set up dynamically anywhere and anytime without any pre-existing network infrastructure. This nature raises challenging issues for improving network performance; one of them is identifying misbehaving nodes. Misbehaving nodes play a vital role in degrading network performance and may cause data loss. In some situations, one or more nodes in the network become selfish or malicious and tend to cripple the capacity of the network. The aim of this work is to detect malicious nodes using rough set theory. With the help of a route cache table, malicious nodes are identified based on their transmission history. Every node in the network maintains a cache table and the transmission history of its neighbor nodes. The transmission history of a node is derived from calculated transmission metrics such as packet delivery ratio, throughput, end-to-end delay, number of dropped packets and error rate. The source and destination nodes are run in a simulation environment at different speeds, and the transmission-history values recorded at these speeds are used to construct an information table. From this table, rules are derived to decide whether a node is good or bad. After classification, rough set theory is applied to identify the malicious nodes, and packets on a path containing a bad node are sent along an alternate shortest path. Our experimental results reveal that the rough set based approach improves network capacity metrics such as packet delivery ratio and throughput while decreasing end-to-end delay.
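As a hedged illustration of the rough-set step described above, the sketch below builds a toy information table of discretized transmission metrics and computes the lower and upper approximations of the "bad" decision class. The attribute names, thresholds and rows are illustrative assumptions, not taken from the paper.

```python
# Illustrative rough-set approximations over a toy node information table.
# Condition attributes (discretized packet delivery ratio, drop count) and
# decisions are made up for the example.
from collections import defaultdict

table = [
    (("pdr_high", "drops_low"), "good"),
    (("pdr_high", "drops_low"), "good"),
    (("pdr_low", "drops_high"), "bad"),
    (("pdr_low", "drops_high"), "bad"),
    (("pdr_low", "drops_low"), "good"),
    (("pdr_low", "drops_low"), "bad"),   # inconsistent with the row above
]

def approximations(table, decision):
    """Lower/upper approximation of the set of rows with `decision`."""
    classes = defaultdict(list)          # indiscernibility classes
    for i, (cond, _) in enumerate(table):
        classes[cond].append(i)
    target = {i for i, (_, d) in enumerate(table) if d == decision}
    lower, upper = set(), set()
    for members in classes.values():
        m = set(members)
        if m <= target:
            lower |= m                   # certainly `decision`
        if m & target:
            upper |= m                   # possibly `decision` (boundary)
    return lower, upper

lower, upper = approximations(table, "bad")
```

Rows in `lower` are certainly malicious; rows in `upper` but not in `lower` form the boundary, where the condition attributes alone cannot decide.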

2

Medical Image Denoising Based on ICM PCNN
S. Shajun Nisha, G. L. Latha Rani

Abstract- Image denoising is a crucial task in image processing and computer vision. During image acquisition, various intrinsic and extrinsic stimuli produce superfluous signals that lead to a noisy image. In medical images especially, image quality can never be compromised, as it is a life-sustaining issue in diagnosis, so it is critical to minimize the occurrence of noise. The DWT (Discrete Wavelet Transform) and the ICM PCNN (Intersecting Cortical Model Pulse Coupled Neural Network) model are employed to suppress noise in medical images. DWT decomposes the input image into detailed and approximate coefficients at three levels, yielding good localization of the given image. The proposed combination of ICM PCNN with the DWT model classifies the image variance and detail variance without disturbing the original image data. Filters are introduced to eliminate the noise that corrupted the input image: the Wiener Filter, Adaptive Bilateral Filter and Boundary Discriminative Noise Detection (BDND) are used to remove the speckle noise and salt-and-pepper noise present in CT scan and Ultrasound images. Results are assessed by estimating Peak Signal to Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), Coefficient of Correlation (CoC) and Edge Preserving Index (EPI) for medical images corrupted with noise.
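To make the wavelet side of the pipeline concrete, here is a minimal sketch of one level of a 1-D Haar DWT with soft-thresholding of the detail coefficients, a simplified stand-in for the paper's three-level 2-D decomposition plus filtering; the ICM PCNN stage is omitted and the signal values are made up.

```python
# One-level Haar decomposition, detail thresholding, and reconstruction.
# Small detail coefficients (mostly noise) are shrunk to zero.

def haar_1d(signal):
    """Split a 1-D signal into approximate and detail coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def soft_threshold(coeffs, t):
    """Shrink detail coefficients toward zero; small ones vanish."""
    return [max(abs(c) - t, 0.0) * (1 if c >= 0 else -1) for c in coeffs]

def inverse_haar_1d(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

noisy = [10.0, 10.4, 10.1, 9.9, 20.0, 19.8, 20.3, 20.1]
a, d = haar_1d(noisy)
denoised = inverse_haar_1d(a, soft_threshold(d, 0.3))
```

A 2-D image version applies the same split along rows and then columns, and repeating it on the approximate band gives the three levels the abstract describes.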

3

Implementation of the Association Rule Mining Algorithm for the Recommender System of E-Business
D. Santhi Jeslet, M. Lilly Florence

Abstract - Through the dramatic development of computers and the internet, business has moved online. This has brought customers and companies very close without their physical presence, and the internet has become part of individuals' day-to-day lives. The web therefore plays a vital role in running an effective e-business. To run a successful e-business, it is necessary to maintain the website, and to maintain the website it is necessary to identify the frequently visited and associated pages. These can be found by applying the association rule mining algorithms of data mining. In this work, the Apriori association rule mining algorithm is applied to web usage data. The web usage data is collected from the server log and then preprocessed so that it is suitable for the algorithm. Through the association rules generated by the algorithm, one can identify the frequently visited and associated pages. These rules help the website administrator restructure or redesign the website, which in turn increases the number of visitors/customers and retains existing customers.
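The frequent-page step can be sketched as a tiny Apriori pass over preprocessed sessions. The page names and the support threshold below are illustrative; a real run would use sessions parsed from the server log.

```python
# Minimal Apriori: grow candidate itemsets level by level, keeping only
# those whose support (fraction of sessions containing them) is high enough.
from itertools import combinations

sessions = [
    {"home", "products", "cart"},
    {"home", "products"},
    {"home", "about"},
    {"home", "products", "cart"},
]

def frequent_itemsets(sessions, min_support):
    """Return {itemset: support} for itemsets meeting min_support."""
    n = len(sessions)
    items = sorted({i for s in sessions for i in s})
    frequent = {}
    k, candidates = 1, [frozenset([i]) for i in items]
    while candidates:
        survivors = []
        for c in candidates:
            support = sum(1 for s in sessions if c <= s) / n
            if support >= min_support:
                frequent[c] = support
                survivors.append(c)
        k += 1
        # join step: combine surviving itemsets into (k)-item candidates
        candidates = list({a | b for a, b in combinations(survivors, 2)
                           if len(a | b) == k})
    return frequent

freq = frequent_itemsets(sessions, min_support=0.5)
```

From these frequent itemsets, rules such as "visitors of `products` also visit `cart`" are generated by checking rule confidence, which is what guides the restructuring recommendations.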

4

Forecasting Stock Trend Using Technical Indicators with R
J. Sharmila Vaiz, M. Ramaswami

Abstract- Technical analysis is a well-proven method of monitoring the price action of free markets with broad participation in order to gain insight into the future price trend. This paper studies the effectiveness of several technical indicators in share trading for assessing a company's financial performance over a period of time. The major indicators in technical analysis include Moving Averages, Moving Average Convergence/Divergence, Average Directional Index, Trend Detection Index, Aroon Indicator, Vertical Horizontal Filter, Relative Strength Index, Stochastic, Stochastic Momentum Index, Williams %R, Commodity Channel Index, Chande’s Momentum Oscillator, Bollinger Bands, Average True Range, Donchian Channel, Chaikin Money Flow, On-Balance Volume and Money Flow Index. This study will help investors gain knowledge about the usage of these technical indicators so as to increase their proportion of profitable trades and improve investment returns.
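Two of the listed indicators, the simple moving average and the Relative Strength Index, are easy to sketch; the prices below are made up, and the paper computes such indicators over real share data in R (e.g. via its technical-analysis packages).

```python
# Simple moving average and a basic RSI over a toy price series.

def sma(prices, window):
    """Simple moving average; None until a full window is available."""
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def rsi(prices, period=14):
    """Relative Strength Index over the last `period` price changes."""
    gains = [max(b - a, 0) for a, b in zip(prices, prices[1:])]
    losses = [max(a - b, 0) for a, b in zip(prices, prices[1:])]
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0          # pure uptrend saturates the index
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)

prices = [100, 102, 101, 103, 105, 104, 106]
trend = sma(prices, 3)
momentum = rsi(prices, period=3)
```

A crossover of a short and a long SMA, or an RSI leaving the 30/70 band, are the kinds of signals whose profitability the study evaluates.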

5

Weighted Rough Classification for Imbalanced Gene Expression Data
E. N. Sathishkumar, K. Thangavel, P. S. Raja

Abstract— Classification is an important task in data mining and its applications. In this paper, a novel method, Weighted Rough Classification (WRC) using a Neural Network, is proposed to handle inconsistent and uncertain datasets. The method builds on the framework of rough set theory, in which a rough set is defined as a pair of lower and upper approximations whose difference is the boundary. In the traditional rough set, all samples have equal weight, without considering the distribution of samples. Here, we apply Class-equal Sample Weighting (CSW) to build a weighted information table: samples belonging to the majority class receive smaller weights while samples in the minority class receive larger weights. Inference decision making is then performed on the weighted information system; since a rough set based classification is proposed, the weighted values are discretized. The boundary values of the weighted information system are treated as uncertain values, and inference decision making for them is based on a similarity measure: each boundary element is compared with the centroid of each class's lower approximation, and its decision value is updated according to the closest centroid. Experimental analysis shows that the novel Weighted Rough Classification algorithm is effective and suitable for classification, and that the WRC based technique outperforms existing techniques.
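A minimal sketch of the class-equal weighting idea, under the assumption that "class equal" means each class contributes the same total weight; the exact formula and the toy labels are illustrative, not taken from the paper.

```python
# Class-equal Sample Weighting (CSW) sketch: weight each sample by
# n / (k * count(its class)), so minority samples weigh more and every
# class sums to the same total weight.
from collections import Counter

labels = ["healthy"] * 8 + ["tumor"] * 2   # imbalanced toy labels

def class_equal_weights(labels):
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return [n / (k * counts[y]) for y in labels]

w = class_equal_weights(labels)
```

With 8 majority and 2 minority samples, each majority sample gets weight 0.625 and each minority sample 2.5, so both classes contribute a total weight of 5 to the weighted information table.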

6

Performance Analysis of Haar Wavelet Transform and Huffman Coding Compression Techniques for Human Object
S. Gomathi, T. Santhanam

Abstract- Image compression is one of the most important steps in image transmission and storage. The need for an efficient image compression technique is ever increasing, because raw images require large amounts of disk space, which is a big disadvantage during transmission and storage. To utilize disk space and transmission bandwidth efficiently, images need to be compressed. For this purpose there are basically two types of methods, namely lossy and lossless image compression techniques. In this work the Huffman algorithm is analyzed in detail and compared with another common compression technique, the Haar wavelet transform. The results are analyzed to compare the compression efficiency of the two methods using parameters such as compression ratio, elapsed time and size. The results show that the compression time of the lossy wavelet method is less than that of the Huffman method, and that the compression ratio of the wavelet is greater than that of Huffman coding.
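The lossless branch of the comparison can be sketched with a small Huffman coder; the symbol string and the 8-bits-per-symbol baseline are illustrative assumptions.

```python
# Build a Huffman prefix-code table with a heap of (frequency, tiebreak,
# partial code table) entries, then measure the compressed bit length.
import heapq
from collections import Counter

def huffman_codes(data):
    """Return {symbol: bitstring} for the symbols in `data`."""
    freq = Counter(data)
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

data = "AAAABBBCCD"
codes = huffman_codes(data)
bits = sum(len(codes[s]) for s in data)
ratio = (len(data) * 8) / bits   # vs. 8 bits per uncompressed symbol
```

Frequent symbols receive shorter codes (here `A` gets 1 bit, `D` gets 3), which is exactly the property the elapsed-time and compression-ratio comparison measures against the wavelet method.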

7

Data Analytics Framework and Methodology for WhatsApp Chats
Transliteration of Thanglish and Short WhatsApp Messages

P. Sudhandradevi, V. Bhuvaneswari

Abstract- Data analytics has emerged as an important domain in the digital space due to the explosion of tremendous volumes of data from sources such as social media, sensors and business organizations. Social media contributes varied data formats in various representations. WhatsApp has attracted a large volume of users because of its easy chat conversations. Chat conversations in WhatsApp support multiple languages, and users have made their own short conventions for representing communications. In the current scenario, WhatsApp is used for small-scale business, so understanding the context of this chat text is important for identifying insights. WhatsApp data is left unnoticed as there exists no standard for representing it as conventional machine-understandable text. The objective of this paper is to design a methodology and framework for transliteration of WhatsApp short chat messages and Thanglish (combined English and Tamil) text. A MapReduce framework is proposed for the transliteration process. The dataset is acquired from a known WhatsApp group, a corpus is created from the framework, and the experimental results were found to be interesting. The frequent terms are visualized using a word cloud, and around 635 WhatsApp texts were replaced by English words.
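The replacement step can be sketched as a dictionary lookup over chat tokens. The lexicon entries below are assumed examples of short forms and Thanglish words, not the paper's actual corpus, and the real pipeline distributes this lookup via MapReduce.

```python
# Dictionary-based transliteration: known tokens are replaced with their
# English equivalents, unknown tokens pass through unchanged.
import re

lexicon = {                      # short form / Thanglish -> English (assumed)
    "gm": "good morning",
    "tq": "thank you",
    "vanakkam": "greetings",
    "epdi": "how",
}

def transliterate(chat_line):
    """Lowercase, split into word/non-word runs, and map known tokens."""
    tokens = re.findall(r"\w+|\W+", chat_line.lower())
    return "".join(lexicon.get(t, t) for t in tokens)

result = transliterate("GM epdi irukinga, tq!")
```

Tokens missing from the lexicon (here `irukinga`) survive untouched, which is how unmapped Thanglish words would be collected for extending the corpus.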

8

Improving the Generalization Performance of the Back Propagation Neural Network using Projection based Learning Algorithm
K. Jayanthi, P. Suresh, K. Velusamy

Abstract- A Projection Based Learning algorithm that optimizes the connection weights of a Back Propagation Neural Network is proposed in this paper. The proposed model, referred to as PBL-BPNN, is employed to predict the future closing price of a stock from real-time stock market data. Stock price prediction is a demanding research problem in the financial sector that has received a significant amount of attention in machine learning, since deciding whether to buy or sell at the right time improves the client's profit. Training and testing are applied to real-time stock market data. Experimental results show that the proposed Projection Based Learning-Back Propagation Neural Network algorithm produces better results than the standard Back Propagation Neural Network in terms of statistical accuracy and trading efficiency of the stock price.
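One common reading of projection-based learning, sketched here as an assumption rather than the paper's exact procedure, is to solve for a layer's weights directly as the least-squares projection of the targets onto that layer's outputs instead of iterating gradient descent. The two-unit normal-equation solve below keeps the example dependency-free; the hidden-layer outputs and targets are made up.

```python
# Solve (H^T H) w = H^T y for a 2-unit output layer by hand.
# H rows are hidden-layer outputs per sample; y are target closing prices.

H = [[1.0, 0.2], [1.0, 0.5], [1.0, 0.9], [1.0, 1.3]]
y = [1.4, 2.0, 2.8, 3.6]

def project_output_weights(H, y):
    """Closed-form least squares for the 2-column case via Cramer's rule."""
    a = sum(h[0] * h[0] for h in H)
    b = sum(h[0] * h[1] for h in H)
    d = sum(h[1] * h[1] for h in H)
    p = sum(h[0] * t for h, t in zip(H, y))
    q = sum(h[1] * t for h, t in zip(H, y))
    det = a * d - b * b
    return [(d * p - b * q) / det, (a * q - b * p) / det]

w = project_output_weights(H, y)
pred = [h[0] * w[0] + h[1] * w[1] for h in H]
```

Because the toy targets are exactly linear in the hidden outputs, the projection recovers the weights (1, 2) in one step, which is the speed advantage such a step contributes when combined with backpropagation for the remaining layers.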