Volume 1 Number 2 September 2011
1. Enhancement of Dynamic Encoded Multispectral Images for Effective Visualization

Abstract: This paper proposes a novel method that dynamically encodes multispectral images and then enhances them. The compression exploits the unique spectral characteristics of each image, and the encoding algorithm's performance depends on image content. An image can be categorized into semantic classes or datatype classes: semantic classes are regions such as clouds, mountains, and rivers, while datatype classes are smooth regions and textured regions. When an application targets a particular region of interest, the relevant semantic classes are extracted and compressed. The proposed work concentrates on datatype classes. Principal Component Analysis is first applied in the spectral domain and a wavelet transform in the spatial domain. The transformed image is then segmented into smooth and textured regions, and the compression technique is chosen dynamically per region: SPIHT for smooth regions, since it is well suited to images dominated by smooth content, and a wavelet-based method for textured regions, thereby combining the advantages of both algorithms. A variety of enhancement techniques are applied to the compressed image and the results are compared using the WQM and CEF metrics.
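As a rough illustration of the pre-compression stages described above, the following is a minimal sketch, assuming spectral PCA, a spatial wavelet transform, and variance-based smooth/texture segmentation; the block size, thresholds, and wavelet choice are assumptions, and the SPIHT and wavelet coders themselves are not sketched.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def decorrelate_and_segment(cube, n_components=3, block=16, var_thresh=50.0):
    """cube: (H, W, bands) multispectral image as a float array."""
    h, w, bands = cube.shape
    # PCA in the spectral domain: each pixel's band vector is one sample.
    pcs = PCA(n_components=n_components).fit_transform(
        cube.reshape(-1, bands)).reshape(h, w, n_components)

    wavelet_coeffs, smooth, textured = [], [], []
    for c in range(n_components):
        band = pcs[:, :, c]
        # Wavelet transform in the spatial domain of each principal band.
        wavelet_coeffs.append(pywt.wavedec2(band, 'db2', level=2))
        # Variance-based segmentation: low-variance blocks are "smooth"
        # (to be routed to SPIHT), the rest "textured" (to be routed to
        # the wavelet coder).
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                tile = band[i:i + block, j:j + block]
                (smooth if tile.var() < var_thresh else textured).append((c, i, j))
    return wavelet_coeffs, smooth, textured
```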
2. Integrating E-Governance Systems Using MDM Framework

Abstract: E-Governance is an electronic, efficient way of delivering services and information to citizens. Accurate, up-to-date data exchange helps a government remain agile and competitive. However, the emerging needs of cross-functional data exchange have made data management a complex process, and new-generation strategic initiatives such as Master Data Management (MDM) help establish standards, policies, frameworks, and safeguards for an organization. MDM is used to manage complex data management scenarios and to develop and protect data as an enterprise asset. It also streamlines data sharing across systems and provides everyone in the system with a single, consistent view of critical data, using both technology and data governance techniques. In this research work, a redundant E-Governance database is corrected through data profiling and cleansing and compared with the existing redundant database. The corrected data is stored in an MDM central repository by defining data rules and standards and applying a best-of-breed approach.
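To make the profiling and cleansing step concrete, here is a minimal sketch using pandas; the citizen table and its column names ('name', 'dob', 'address') are hypothetical, and real MDM matching rules would be far richer.

```python
import pandas as pd

def profile_and_cleanse(df: pd.DataFrame) -> pd.DataFrame:
    # Profiling: report missing values and the duplicate-row count.
    print(df.isna().sum())
    print("duplicate rows:", df.duplicated().sum())
    # Cleansing: normalize key fields, then keep one "golden record"
    # per citizen for the MDM central repository.
    df = df.copy()
    df["name"] = df["name"].str.strip().str.title()
    df["address"] = df["address"].str.strip().str.upper()
    df["dob"] = pd.to_datetime(df["dob"], errors="coerce")
    return df.drop_duplicates(subset=["name", "dob"], keep="first")
```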
3. An Approach: Modality Reduction and Face-Sketch Recognition

Abstract: Recognizing a face sketch against a face photo database is a challenging task, because the face photos in the training set and the face sketches in the testing set belong to different modalities: the difference between two face photos of different people can be smaller than the difference between a photo and a sketch of the same person. In this paper, to reduce the modality gap between face photos and face sketches, we first map both into a new domain using a level-3 2D discrete Haar wavelet transform followed by image negation. We then extract features from the transformed images using Principal Component Analysis (PCA) and apply SVM and K-NN classifiers for classification. The proposed method is experimentally verified to be robust for faces captured under good lighting conditions and in a frontal pose. The experiment was conducted with 100 male and female face photos as the training set and 100 male and female face sketches as the testing set, collected from the CUHK cropped training and testing photos and sketches.
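A minimal sketch of this pipeline follows, assuming equally sized grayscale inputs; the PCA dimensionality, SVM kernel, and K value are assumptions, not the paper's settings.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def transform(img):
    """Level-3 Haar approximation of a 2D grayscale image, then negation."""
    approx = pywt.wavedec2(img.astype(float), 'haar', level=3)[0]
    return (approx.max() - approx).ravel()   # negative of the subband

def classify(train_photos, train_ids, test_sketches, n_components=50):
    X_train = np.array([transform(p) for p in train_photos])
    X_test = np.array([transform(s) for s in test_sketches])
    pca = PCA(n_components=n_components).fit(X_train)   # feature extraction
    Xtr, Xte = pca.transform(X_train), pca.transform(X_test)
    svm = SVC(kernel='linear').fit(Xtr, train_ids)
    knn = KNeighborsClassifier(n_neighbors=1).fit(Xtr, train_ids)
    return svm.predict(Xte), knn.predict(Xte)
```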
4. Multiple Service Providers Sharing Spectrum Dynamically via Cognitive Radio in Heterogeneous Wireless Networks

Abstract: Current utilization of the radio spectrum is quite inefficient; if the spectrum available at present were used properly, there would be no shortage. It is therefore anticipated that more flexible spectrum use and spectrum sharing between radio systems will be key enablers for the successful implementation of future systems. Cognitive radio is regarded as the most intelligent and promising technique for solving the spectrum-sharing problem. In this paper, we consider sharing a licensed service provider's spectrum among the users of multiple service providers. We formulate the problem as bandwidth sharing, in which each service provider's users consume a portion of the spectrum and each primary user may itself assign spectrum among secondary users, based on information reported by the secondary users, without degrading its own performance. Centralized service-provider systems otherwise suffer from low utility, defined in terms of spectrum efficiency, blocking rate, and free spectrum.
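One plausible reading of the bandwidth-sharing formulation is sketched below: a primary user divides its unused spectrum among secondary users in proportion to their reported demand, never touching its own share. The proportional rule is an assumption for illustration, not the paper's allocation policy.

```python
def allocate_spectrum(total_bw, primary_demand, secondary_demands):
    """Proportionally share a primary user's free bandwidth (e.g., MHz)."""
    free_bw = max(total_bw - primary_demand, 0.0)  # primary never degraded
    requested = sum(secondary_demands)
    if requested == 0:
        return [0.0] * len(secondary_demands)
    # Each secondary user gets at most what it asked for.
    return [min(d, free_bw * d / requested) for d in secondary_demands]

print(allocate_spectrum(20.0, 8.0, [4.0, 6.0, 2.0]))  # -> [4.0, 6.0, 2.0]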
5. Optimization in Wireless Networks with Jamming Characteristics

Abstract: In wireless 802.11 networks, multiple-path source routing allows a source node to distribute its total traffic among the available paths; however, jamming effects have not been considered in that setting. Recent work has presented a jamming-mitigation scheme, an anti-jamming reinforcement system for 802.11 networks, that assesses physical-layer functions such as rate adaptation and power control. Rate-adaptation algorithms can significantly degrade network performance under jamming. Appropriately tuning the carrier-sensing threshold allows a transmitter to send packets even while jammed, enabling the receiver to capture the desired signal. Efficient schedules for redundant transmission must be investigated to perform well in the presence of a jammer. In this paper, we propose an Efficient Time and Transmission Schedule Scheme for wireless 802.11 networks under jamming that guarantees low waiting time and low staleness of data. The schedules are optimal even when the jamming signal has energy limitations. Each packet is encoded with a Reed-Solomon error-correcting code, which allows the schedule to minimize the clients' waiting time and the staleness of the received data. The jammers considered are restricted in the length of their jamming pulses and in the intervals between subsequent pulses.
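As a rough illustration of the per-packet coding step, here is a minimal sketch using the third-party Python package reedsolo (an assumption; the paper names no implementation, and the decode return type varies across reedsolo versions). With nsym parity bytes, up to nsym/2 corrupted bytes per packet, such as those hit by a short jamming pulse, can be corrected without retransmission.

```python
from reedsolo import RSCodec

rsc = RSCodec(32)                # 32 parity bytes -> up to 16 byte errors fixed
packet = bytes(range(200))       # illustrative payload
encoded = rsc.encode(packet)

corrupted = bytearray(encoded)
corrupted[10:20] = b"\x00" * 10  # simulate a short jamming burst
decoded = rsc.decode(bytes(corrupted))[0]  # recent reedsolo returns a tuple
assert bytes(decoded) == packet
```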
6. Confidential Numeric Data Protection in Privacy Preserving Data Mining

Abstract: Data mining extracts hidden knowledge from large data repositories. This knowledge is essential and useful for solving complicated problems and tackling difficult situations in a simple way. In many circumstances, however, the extracted knowledge can be misused for a variety of purposes, which raises the concern of performing data mining tasks in a secure manner. Privacy preserving data mining is a novel research area within data mining that concentrates on the side effects generated during the mining process. To perform data mining tasks, the data often must be shared among people for various reasons. To maintain privacy, the original confidential data is first modified, and only the modified data is shared. In the literature, many protection techniques have been proposed for modifying confidential data items. In this research work, we propose two new techniques, named Bit++ and Bit--, for protecting confidential numeric attributes. The performance of the proposed techniques is compared with the existing additive noise and microaggregation techniques.
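The abstract does not specify the Bit++ and Bit-- operators, so they are not reproduced here; as a point of reference, below is a minimal sketch of the additive-noise baseline the paper compares against, with the noise fraction chosen purely for illustration.

```python
import numpy as np

def additive_noise(values, noise_fraction=0.1, seed=0):
    """Perturb confidential numeric values with scaled Gaussian noise."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    noise = rng.normal(0.0, noise_fraction * values.std(), size=values.shape)
    return values + noise

salaries = np.array([30000., 45000., 52000., 61000., 75000.])
print(additive_noise(salaries))   # perturbed copy, safer to share
```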
7. Learning Software Component Model for Online Tutoring

Abstract: Web services are interface elements that allow applications to render functional services to requesting clients using open standard protocols. Interactive learning is a tutorial scheme that integrates social networking and urban computing into course design and delivery, and many interactive learning services are delivered online. To make online tutoring more effective, previous work used web services and application programs such as instant messaging, adapted to the environments in which students reside. The downside is that maintaining the service request queues online is difficult, and the service and data storage processes are inefficient. To overcome these issues, the present work proposes a learning software component model (LSCM) framework that builds a component model based on communication services. In addition, the proposed software component is modeled with learning object (LO) aspects to integrate related sub-hierarchical components, and training schedulers are identified efficiently based on the LSCM. The proposed framework is evaluated experimentally to show its improvement over the previous web-services-based online tutoring scheme in terms of delivery reports, maintenance of tutoring sessions, and reliability.
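A speculative sketch of what an LO hierarchy with sub-hierarchical components could look like follows; the class and field names are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningObject:
    """Reusable unit of tutoring content with sub-hierarchical children."""
    title: str
    content_uri: str
    children: List["LearningObject"] = field(default_factory=list)

    def schedule(self, depth=0):
        # Walk the hierarchy to produce an ordered training schedule.
        yield depth, self.title
        for child in self.children:
            yield from child.schedule(depth + 1)

course = LearningObject("Databases", "uri:db", [
    LearningObject("SQL Basics", "uri:db/sql"),
    LearningObject("Normalization", "uri:db/norm"),
])
for depth, title in course.schedule():
    print("  " * depth + title)
```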
8. An Analysis On The Impact Of High Fluoride Levels In Potable Water On Human Health Using Classification Data Mining Technique

Abstract: Data mining is the process of extracting information from large data sets using algorithms and techniques drawn from statistics, machine learning, and database management systems. Traditional data analysis methods often involve manual work and interpretation of data, which is slow, expensive, and highly subjective. Data mining, popularly called knowledge discovery in large data sets, enables firms and organizations to make calculated decisions by assembling, accumulating, analyzing, and accessing corporate data, using a variety of tools such as query and reporting tools, analytical processing tools, and decision support systems. This article explores data mining techniques in health care. In particular, it discusses the application of data mining in areas of Krishnagiri District, Tamil Nadu State, India, where people are severely affected by drinking groundwater containing high levels of fluoride. The paper identifies the risk factors associated with high fluoride content in water using classification algorithms and finds hidden patterns that support meaningful decision making for this real-world socio-economic health hazard.
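As a toy illustration of the classification step, the sketch below trains a decision tree; the feature names and values are hypothetical stand-ins for the survey attributes the paper mines, not its actual data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical columns: fluoride level (mg/L), age, years of exposure.
X = np.array([[0.5, 12, 5], [3.2, 40, 30], [1.9, 25, 20],
              [0.7, 30, 10], [4.1, 50, 45], [2.8, 35, 25]])
y = np.array([0, 1, 1, 0, 1, 1])   # 1 = fluorosis symptoms observed

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[2.5, 28, 15]]))   # risk prediction for a new case
```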
9. A Study on Text Search and Filtering in Anti-Money Laundering Systems

Abstract: Management of unstructured data is viewed as one of the major unsolved problems. Nearly eighty percent of enterprise data resides in unstructured formats such as text files, email, customer profile information, external information, video, and audio, across fields including search, prediction, business intelligence, the financial services industry (FSI), and the semantic web, which aims to convert the web of unstructured data into a "web of data that can be processed directly and indirectly by machines." The underlying problem is that the tools and techniques that have proved so successful for structured data have yet to transform unstructured FSI data into actionable information that resolves the increasing complexity of transactional database information. With newly launched FSI regulations, the issue has drawn close attention from governments, financial institutions, and research scholars, and a few fundamental and important questions confront us in resolving its ever-increasing complexities. In the FSI, moreover, Suspicious Activity Reports (SARs) raise many issues, such as large-value reporting, delivery, and analysis processes, with traditional investigations consuming large volumes of man-hours.
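To make the text search and filtering idea concrete, the following is a minimal sketch of keyword-based filtering over free-text transaction narratives; the watch-list terms and hit threshold are assumptions, not the paper's rules.

```python
import re

WATCH_TERMS = [r"\bcash\b", r"\bwire transfer\b", r"\boffshore\b",
               r"\bstructur(?:ed|ing)\b", r"\bshell compan(?:y|ies)\b"]
PATTERNS = [re.compile(t, re.IGNORECASE) for t in WATCH_TERMS]

def flag_for_review(narrative: str, min_hits: int = 2) -> bool:
    """Flag a narrative for SAR review when watch-list terms co-occur."""
    hits = sum(bool(p.search(narrative)) for p in PATTERNS)
    return hits >= min_hits

print(flag_for_review(
    "Customer made repeated cash deposits then an offshore wire transfer."))
```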
10. Fault Detection in Textile Web Materials using Machine Vision Technique

Abstract: Quality control is an important aspect of modern industrial manufacturing. In textile production, automatic fabric inspection is important for maintaining fabric quality. At present, fabric defect inspection is carried out by human visual inspection or by imported machines, but both are time-consuming and costly. The detection of fabric defects is one of the most intriguing problems in computer vision, and it is particularly challenging due to the large number of fabric defect classes, which are characterized by vagueness and ambiguity. In this work, videos of the rolled knitted fabric are captured, the captured video is converted into individual frames, and key frames are extracted for processing. The extracted frames are then processed to detect defects, and the defects are classified using various machine vision techniques. This paper presents a detailed report on the classification of various types of fabric faults.
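A minimal sketch of the frame-extraction and defect stages using OpenCV follows; the difference threshold and the intensity-based defect heuristic are assumptions, and the paper's actual classifiers are not sketched.

```python
import cv2
import numpy as np

def key_frames(video_path, diff_thresh=30.0):
    """Keep a frame only when it differs enough from the last key frame."""
    cap, prev, keys = cv2.VideoCapture(video_path), None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is None or np.abs(gray.astype(float) - prev).mean() > diff_thresh:
            keys.append(gray)
            prev = gray.astype(float)
    cap.release()
    return keys

def defect_mask(gray):
    # Crude heuristic: pixels far from the fabric's median intensity.
    med = np.median(gray)
    return (np.abs(gray.astype(float) - med) > 3 * gray.std()).astype(np.uint8)
```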
11. Efficient Current Mode Sense Amplifier for Low Power SRAM

Abstract: Sense amplifiers are among the most vital circuits in the periphery of CMOS memories; their performance influences both memory access time and overall memory power dissipation. The existing current-mode sense amplifier, coupled with a simplified read-cycle-only memory system, can quickly amplify a small differential signal on the bit-lines (BLs) and data-lines (DLs) to the full CMOS logic level without requiring a large input voltage swing. It uses a hierarchical two-level sensing scheme, which helps reduce both the power consumption and the sensing delay imposed by the bit-lines and data-lines in high-density SRAM designs. This type of current-mode sense amplifier improves on the sensing speed and reliability of previously published designs while reducing power consumption to a considerable extent. To improve its performance further, this research proposes an efficient current-mode sense amplifier that uses a clamped bit-line sense amplifier.
12. TRUS Image Classification for Prostate Cancer using Computational Intelligence

Abstract: In medical image analysis, image classification is a key task in diagnosing a patient's disease, and the classification results assist doctors in treating the patient according to the severity of the disease. TRUS imaging is one of the most important medical imaging technologies for scanning the prostate and detecting abnormalities in the images. It is hard to extract the region of interest from an original TRUS prostate image, since the images suffer from low intensity contrast and inherent speckle noise. An M3-filter is applied to remove the speckle noise, and the region of interest is then extracted using DBSCAN clustering with morphological operators. Twenty-two features are extracted from the Gray Level Co-occurrence Matrix (GLCM) constructed from the extracted ROI, and the QR-ACO feature selection algorithm is applied to select the optimal features from the constructed feature set. This paper proposes a Complex-valued Support Vector Machine (C-SVM) for the classification of TRUS prostate cancer images, and also compares the proposed approach with a standard SVM for prostate cancer classification based on the underlying texture contained within the region of interest. Receiver Operating Characteristic (ROC) analysis is used to evaluate the performance of the classifiers. Experimental results demonstrate that the proposed approach gives the best performance compared to SVM.
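A minimal sketch of the GLCM feature-construction step using scikit-image follows, assuming the ROI is an already speckle-filtered 8-bit grayscale patch; only a few of the twenty-two features are shown (older scikit-image releases spell the functions greycomatrix/greycoprops).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi):
    """roi: 2D uint8 array (ROI-cropped, despeckled TRUS patch)."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Average each texture property over the two angles.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```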