Volume 6 Number 4 March 2017
1. Mining Big Data - An Analysis of SMS Text Data Using R
Abstract- Communication is said to be resilient when it can be used effectively for calamity management. In this era of global warming, natural disasters are common around the world, and helplines are created so that victims can communicate with Disaster Management Systems in an emergency. Huge numbers of people trapped and affected by a catastrophe try to reach automated or computerized SMS helplines during the emergency for rescue, medical or fire assistance, food supply and so on. The victims generate a humongous and discernible stream of SMS messages, leaving behind a digital map of Big Data. Harnessing this huge, unstructured data yields interesting patterns and valuable information for contemporary decision making and predictive analysis. This paper proposes a framework for receiving, analysing and visualizing SMS text data and for discovering patterns with clustering algorithms. It also discusses how the uncovered hidden value becomes serviceable information for current decision making and predictive study to reduce risks in Disaster Management and Mitigation.
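To give a concrete flavour of such a pipeline, the R sketch below cleans a few invented emergency SMS messages with the tm package and groups them with k-means clustering; the messages, the tf-idf weighting and the choice of two clusters are assumptions made for illustration and do not reproduce the authors' framework.

```r
# A minimal sketch, not the authors' framework: clean a few invented emergency
# SMS messages with the tm package and group them by need with k-means.
library(tm)

set.seed(1)
sms <- c("trapped in collapsed building need rescue",
         "medical emergency need ambulance now",
         "no food or water for two days",
         "fire spreading near the school send help")

corpus <- VCorpus(VectorSource(sms))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeWords, stopwords("english"))

dtm <- DocumentTermMatrix(corpus)                      # term counts per message
m   <- as.matrix(weightTfIdf(dtm))                     # tf-idf weighting
km  <- kmeans(m, centers = 2, nstart = 10)             # assumed k = 2 clusters
km$cluster                                             # cluster label per SMS
```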
2. Text Preprocessing on Extracted Text from Audio/Video using R
Abstract- E-learning is an evolving education approach that enhances learning experiences by integrating multimedia and network technologies. As an integrated part of e-learning, the sources of e-learning content are text, audio, video, images and animations. Most instructional content is in the form of lecture videos; effective use of these videos, however, remains a challenging task. To overcome this issue, the video content has to be analyzed effectively, and for this purpose the audio part of the video can be converted to text. The textual form of the content enables content developers to analyze it with reduced time and space, and the accuracy of content analysis can be higher for text than for other forms. Therefore the text is first converted from audio and video using existing tools and techniques such as Google Docs, Braina, Apple Dictation, Dragon NaturallySpeaking, etc. As the converted text contains errors, it should be preprocessed before it is used. Hence this paper uses the tool R to preprocess the text obtained from audio and video through Google Docs. This preprocessing paves the way to reduce the error rate: after preprocessing, the precision accuracy of the converted text improves by 0.94% and the recall accuracy by 0.91%, so the false rate is reduced. Hence preprocessing reduces the error rate by around 1% on the text converted from audio and video.
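For illustration, the R sketch below shows one plausible cleaning pipeline built on the tm package; the sample sentence and the exact sequence of transformations are assumptions, not the preprocessing steps reported in the paper.

```r
# A minimal sketch, assuming a tm-based pipeline; the sample sentence is
# invented and the steps are illustrative, not the paper's exact procedure.
library(tm)

raw <- "in this  lecture we discuss the 3 basic   types of machine learning,"

corpus <- VCorpus(VectorSource(raw))
corpus <- tm_map(corpus, content_transformer(tolower))      # case folding
corpus <- tm_map(corpus, removeNumbers)                     # stray digits from recognition
corpus <- tm_map(corpus, removePunctuation)                 # punctuation noise
corpus <- tm_map(corpus, removeWords, stopwords("english")) # common function words
corpus <- tm_map(corpus, stripWhitespace)                   # collapse repeated spaces

content(corpus[[1]])    # cleaned text, ready for content analysis
```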
3. Steganalysis on Images using SVM with Selected Hybrid Features of t-test Feature Selection Algorithm
Abstract- Steganography techniques are classified into many categories based on the embedding method used. In spatial-domain steganography, image pixel values are converted into binary values and some of the bits are changed to hide the secret data. This work attempts to detect stego images created by the Wavelet Obtained Weights (WOW) algorithm through steganalysis based on a statistical attack. It relies on classifying selected hybrid image feature sets with a Support Vector Machine and a t-test feature selection algorithm (SVM-HT), combining Chen features, Subtractive Pixel Adjacency Matrix (SPAM) features and Cartesian-calibrated Pev (CCPev) features. The first 1000 principal features are used for training and testing. The results produced by the proposed approach demonstrate a low probability of error.
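The R sketch below illustrates the general idea of t-test feature ranking followed by SVM classification on synthetic data using the e1071 package; the toy feature matrix, the number of retained features and the linear kernel are assumptions and do not reproduce SVM-HT or the Chen/SPAM/CCPev feature extraction.

```r
# An illustrative sketch on synthetic data, not the SVM-HT implementation:
# rank features with a two-sample t-test (cover vs. stego) and classify the
# top-ranked ones with an SVM from the e1071 package.
library(e1071)

set.seed(1)
n <- 200; p <- 50                                    # toy set: 200 images, 50 features
X <- matrix(rnorm(n * p), n, p)
y <- factor(rep(c("cover", "stego"), each = n / 2))
X[y == "stego", 1:5] <- X[y == "stego", 1:5] + 0.8   # make a few features informative

# smaller p-value = more discriminative feature
pvals <- apply(X, 2, function(f) t.test(f[y == "cover"], f[y == "stego"])$p.value)
keep  <- order(pvals)[1:10]                          # keep the 10 best-ranked features

model <- svm(X[, keep], y, kernel = "linear", cross = 5)
model$tot.accuracy                                   # 5-fold cross-validated accuracy
```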
4. A Survival Study on Software Failure Prediction Using Adaptive Dimensional Gene Model
Abstract- Software reliability is the probability that a software system functions under its operating conditions over a period of time. Work on software reliability concentrates on the prediction of residual faults and the estimation of failure intensity, reliability and cost. Software failures arise from faults introduced during the software development process, and failures of a software system impose large costs in time and resources for recovery. Because a software system failure is predictable, considerable attention has been given to software system failure prediction; however, the problem of estimating the probability of failure remains unsolved. This research work performs fast prediction of software failures through a Genetic-based Bayesian model to improve the true positive rate.
5. A Model for Prediction of Crop Yield
Abstract- Data mining is an emerging research field in crop yield analysis. Yield prediction is a very important issue in agriculture: every farmer wants to know how much yield to expect. In the past, yield prediction was performed by relying on a farmer's experience with a particular field and crop. Predicting yield from the available data is a problem that remains to be solved, and data mining techniques are a good choice for this purpose. Different data mining techniques have been used and evaluated in agriculture for estimating the coming year's crop production. This research proposes and implements a system to predict crop yield from previous data by applying association rule mining on agricultural data, and it focuses on creating a prediction model that may be used for future prediction of crop yield. The paper presents a brief analysis of crop yield prediction using a data mining technique based on association rules for a selected region, a district of Tamil Nadu in India. The experimental results show that the proposed work efficiently predicts crop yield.
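A minimal R sketch of association rule mining for yield prediction with the arules package follows; the field names and records are hypothetical and are not the district data analysed in the paper.

```r
# A minimal sketch with hypothetical field names, not the district dataset:
# mine rules whose consequent is the yield class using the arules package.
library(arules)

crops <- data.frame(
  soil     = c("red", "black", "red", "alluvial", "black", "red"),
  rainfall = c("low", "high", "low", "medium", "high", "medium"),
  season   = c("kharif", "rabi", "kharif", "kharif", "kharif", "rabi"),
  yield    = c("low", "high", "low", "medium", "high", "medium"),
  stringsAsFactors = TRUE
)

trans <- as(crops, "transactions")                   # one transaction per record
rules <- apriori(trans,
                 parameter  = list(supp = 0.2, conf = 0.8),
                 appearance = list(rhs = grep("^yield=", itemLabels(trans), value = TRUE),
                                   default = "lhs"))
inspect(sort(rules, by = "confidence"))              # rules predicting the yield class
```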
6. A Survival Study for Software Test Suite Generation using Derived Genetic Algorithm
Abstract- Software has evolved as an innovative solution for several applications. Unit test suites are mainly used to increase software quality through techniques such as search-based software testing, which generates unit test suites automatically for object-oriented code. Many testing tools, for unit testing and integration testing, have been redesigned to check the correctness of software results and to produce test suites with high coverage. However, performing such specific tests is impractical due to high execution time and limited coverage capability. In this work, Search-based Test Suite Generation using a Derived Genetic Algorithm (STSG-DGA) is designed to increase coverage and to reduce redundancy in test suite generation. Initially, a population of randomly produced candidate solutions is used by the search operators. Then, parent selection is carried out based on a fitness function. After that, reproduction is performed by crossover and mutation operations with given probabilities. Finally, the fitness of the population increases in the DGA and the process is repeated until an optimal solution is found. This research work helps to reduce execution time and computational complexity with minimum redundancy for test suite generation.
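To make the genetic search concrete, the R sketch below evolves a test-suite selection with the GA package, rewarding branch coverage and penalising suite size; the random coverage matrix and the fitness weighting are invented for illustration and do not implement the derived genetic algorithm itself.

```r
# An illustrative sketch, not the STSG-DGA algorithm: encode a test suite as a
# bit string over candidate tests and evolve it with the GA package, rewarding
# branch coverage and penalising suite size. The coverage matrix is invented.
library(GA)

set.seed(42)
n_tests <- 20; n_branches <- 30
covers  <- matrix(runif(n_tests * n_branches) < 0.2, n_tests, n_branches)

fitness <- function(bits) {
  chosen <- which(bits == 1)
  if (length(chosen) == 0) return(0)
  coverage <- mean(colSums(covers[chosen, , drop = FALSE]) > 0)  # fraction of branches hit
  coverage - 0.01 * length(chosen)                               # penalty for redundant tests
}

result <- ga(type = "binary", fitness = fitness, nBits = n_tests,
             popSize = 40, pcrossover = 0.8, pmutation = 0.1,
             maxiter = 100, monitor = FALSE)
which(result@solution[1, ] == 1)    # indices of the selected test cases
```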
7. Weight Optimization of Multilayer Perceptron Neural Network using Hybrid PSO for Improved Brain Computer Interface Data Classification
Abstract- A Brain-Computer Interface (BCI), also known as a Brain-Machine Interface (BMI), is a communication system that lets users interact with electronic devices by means of control signals acquired from electroencephalographic (EEG) activity, without engaging peripheral nerves and muscles. The preliminary motivation for BCI research was to develop assistive devices for people with locked-in disabilities. Nowadays, researchers are exploring BCI as a novel anthropomorphic interaction channel for daily applications such as robotics, virtual reality and games. This paper investigates the effect of weight optimization using Hybrid Particle Swarm Optimization (PSO) on a Multilayer Perceptron Neural Network classifier that uses features selected by Principal Component Analysis and Hybrid PSO.
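The R sketch below shows a plain (non-hybrid) PSO variant of this idea: the weights of a one-hidden-layer perceptron are tuned with pso::psoptim on PCA-reduced synthetic data; the network size, the search bounds and the data are assumptions, not the paper's Hybrid PSO classifier.

```r
# A rough sketch with plain PSO (pso::psoptim), not the paper's Hybrid PSO:
# tune the weights of a one-hidden-layer perceptron on PCA-reduced synthetic
# data by minimising the misclassification rate.
library(pso)

set.seed(7)
X <- matrix(rnorm(100 * 6), 100, 6)                 # placeholder "EEG feature" data
y <- ifelse(X[, 1] + X[, 2] > 0, 1, 0)              # toy binary class labels
Z <- prcomp(X, scale. = TRUE)$x[, 1:3]              # keep 3 principal components

hidden <- 4
n_w    <- (ncol(Z) + 1) * hidden + (hidden + 1)     # weights and biases to optimize

mlp_error <- function(w) {
  W1 <- matrix(w[1:((ncol(Z) + 1) * hidden)], ncol(Z) + 1, hidden)
  w2 <- w[-(1:((ncol(Z) + 1) * hidden))]
  H  <- tanh(cbind(1, Z) %*% W1)                    # hidden-layer activations
  p  <- plogis(cbind(1, H) %*% w2)                  # output probability
  mean((p > 0.5) != y)                              # training misclassification rate
}

fit <- psoptim(rep(NA, n_w), mlp_error, lower = -3, upper = 3,
               control = list(maxit = 200, s = 30))
fit$value                                           # error of the best particle
```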
8. Improved Flooding with QoS Metrics in MANET Grid
Abstract- Mobile Ad Hoc Networks (MANETs) exemplify a complex distributed network characterized by the lack of any infrastructure. While the lack of infrastructure offers many significant advantages over infrastructure-based networks, these networks also have constraints that conventional networks do not. For example, connection establishment is costly in terms of time and resources, and the network is heavily affected by connection-request flooding. The proposed approach presents a way to reduce flooding in MANETs, taking node speed into account, by applying the Grid Fisheye State Routing (GFSR) protocol. Under selected Quality of Service (QoS) metrics, the protocol is compared and analyzed against Grid FSR in the NS3 simulator.
9. A Review on Security Challenges and Issues of Big Data
Abstract- Lack of security is a major problem all over the world in every sector. The term big data arises from the terrific growth in data usage in our day-to-day life. Among these data, highly sensitive data such as Payment Card Information (PCI), Personally Identifiable Information (PII) and Protected Health Information (PHI) must be handled with particular care. Routine steps such as prevention, detection, remediation and investigation should be followed in order to obtain a safe and secure big data environment. In this paper, we provide an extensive survey of major and minor security issues and challenges of big data, while highlighting the specific concerns in big data security. We compare the traditional Security Information and Event Management (SIEM) system with current SIEM systems for big data, discuss the tools and techniques available for securing big data, and provide a survey of existing solutions, identified research gaps and suggested future research areas.
10. An Automatic Satellite Image Enhancement using Advanced Genetic Algorithm
Abstract- Satellite images often lack clarity for a large number of reasons, so the necessary information that the images carry as input can be unclear. Image enhancement is a technique that can recover information hidden in a satellite image; it is used to improve the quality of the image and extract useful information. General issues with satellite images include gray-scale/colour enhancement, noise, artifacts, distortion, large image size, resolution, weak colour information, high-frequency content and many more. In this paper, sample satellite images are taken before enhancement, and enhancement is performed without producing unnatural or unclear images, while retaining good natural contrast. The principle of preserving the natural basic colours is applied, and the originality of the natural image is retained by properly identifying the non-matching colours and discarding them.