Design Artifacts, Design Principles, Problems, Goals and Importance

Ved Prakash Agnihotri

Designing a human-computer interaction interface is an important and complex task, but it can be simplified by decomposing the task into subcomponents and maintaining the relationships among those subcomponents. Task decomposition is a structured approach, applicable in both Software Engineering and Human Computer Interaction (HCI), depending on the specific processes and design artifacts. Using artifacts, applications can be analysed and designed by making hand-drawn sketches that provide a high-level logical design based on user requirements, usage scenarios and essential use cases. Hand-drawn sketches follow certain strategies, i.e., planning, sequential workflow and level of detail. In this research paper I present design artifacts, goals, principles, guidelines and the problems currently faced by the human-computer interaction design community, and conclude with the observations assessed in a case study.

The Effect of Unstable Models on Robotics

S. Minhaj Ali, Sana Iqbal and Roohi Ali

Biologists agree that pervasive theory is an interesting new topic in the field of networking, and system administrators concur. In fact, although such a claim at first glance seems perverse, it is derived from known results, and few information theorists would disagree with the emulation of red-black trees. Moke, our new application for SCSI disks, is the solution to all of these challenges.

Literature Review on Information and Communication Technology in Education

Pallavi M. Dessai¹ and R.V. Kulkarni²

Education is one of the main keys to economic development and improvements in human welfare. ICT stands for information and communication technology. Globalization and technological change, processes that have accelerated over the past fifteen years, have created a new global economy “powered by technology, fuelled by information and driven by knowledge.” Information and communication technologies (ICTs), which include radio and television as well as newer digital technologies such as computers and the Internet, have been touted as potentially powerful enabling tools for educational change and reform. The main purpose of ICT in education is the implementation of ICT equipment and tools in the teaching-learning process as a medium and a methodology.

A Review of Peer-to-Peer Networking on the Internet

C.R. Rachana

Peer-to-Peer is a model of communication where every node in the network acts alike, in contrast to the client-server model, where one node provides services and other nodes use them. Peer-to-peer computing takes advantage of existing desktop computing power and network connectivity, allowing economical clients to leverage their collective power to benefit the entire enterprise. Peer-to-peer computing has been envisaged to solve computing scenarios that require spatial distribution of computation, spatial distribution of content, real-time collaboration, scalability or fault tolerance at reduced cost. All these factors have influenced the emergence of stronger computing-capable peer-to-peer systems. Peer-to-peer (P2P) systems enable computers to share information and other resources with their networked peers in large-scale distributed computing environments. The resulting overlay networks are inherently decentralized, self-organizing and self-coordinating. Well-designed P2P systems should be adaptive to peer arrivals and departures, resilient to failures, tolerant of network performance variations, and scalable to huge numbers of peers (tens of thousands to millions). As P2P research matures, new challenges emerge in supporting complex and heterogeneous decentralized environments for sharing and managing data, resources and knowledge with highly dynamic and unpredictable usage patterns. Peer-to-peer computing has been successful in attracting peers due to its rich content, fast response times and trustworthy environment. The enormous range of applications available on the Internet is further strengthened by the application of peer-to-peer computing. This paper reviews the background, challenges and future of P2P networking.

The Effect of Web 2.0 on the Teaching and Learning Processes in a Developing Country’s Universities

B. Okike

Teaching and learning in all aspects of education have undergone a dramatic change all over the world due to the advent of the Internet. Before the Internet came into existence, teaching and learning had always been carried out within classroom environments; with the Internet, they may take place outside the classroom. The Internet has made e-learning a reality, because students may receive their lectures through the Internet irrespective of geographic location. Ideally, distance learning programmes are meant for people who are engaged in full-time jobs and may not easily leave them for full-time programmes. Oftentimes, distance learning programmes are online in nature, hence they are usually referred to as e-learning. E-learning is one-directional in nature and is usually teacher-centred: the learners are passively involved in the learning process, and as such cannot make contributions to the learning process in which they are directly involved. The learning process in Web 2.0, by contrast, is participatory, since both the teacher and the learner may engage in dialogue through a web application; this makes the learning process an active one. This study will examine the effect of Web 2.0 on the teaching and learning processes in the universities of a developing country such as Nigeria, with emphasis on the University of Abuja. In the study, questionnaires are to be randomly distributed to teachers and students of the University of Abuja, and the data collected will be analyzed in order to find the effect of Web 2.0 on the teaching and learning processes in the university.

Automatic Bone Marrow White Blood Cell Classification Using Morphological Granulometric Features of Nucleus

Shraddha Shivhare and Rajesh Shrivastava

The differential counting of white blood cells provides invaluable information to doctors for the diagnosis and treatment of many diseases. Manual counting of white blood cells is tiresome, time-consuming and error-prone; due to the tedious nature of this process, an automatic system is preferable. In this automatic process, segmentation and classification of white blood cells are the most important stages. An automatic segmentation technique for microscopic bone marrow white blood cell images is proposed in this paper. The technique segments each cell image into three regions, i.e., nucleus, cytoplasm and background. We investigate whether information about the nucleus alone is adequate to classify white blood cells. This is important because segmentation of the nucleus is much easier than segmentation of the entire cell, especially in bone marrow, where the white blood cell density is very high. Even though the boundaries between cell classes are not well defined and there are classification variations among experts, we achieve a promising classification performance using fivefold cross-validation in which Bayes classifiers and artificial neural networks are applied as classifiers. The classification performance is evaluated by two measures: traditional and class-wise classification rates. We compare our results with other classifiers and previously proposed nucleus-based features. The results show that features from the nucleus alone can be utilized to achieve a classification rate of 77% on the test sets. Moreover, the classification performance is better in the class-wise sense when the a priori information is suppressed in both classifiers.
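
As an illustration of the evaluation protocol described above, the following Python sketch runs fivefold cross-validation and reports both the traditional (overall) and class-wise classification rates. The data and the nearest-centroid classifier are invented placeholders, not the paper's features or classifiers.

    import numpy as np

    rng = np.random.default_rng(0)
    # Placeholder "nucleus feature" vectors for four invented cell classes.
    X = rng.normal(size=(100, 6)) + np.repeat(np.arange(4), 25)[:, None]
    y = np.repeat(np.arange(4), 25)

    def nearest_centroid(train_X, train_y, test_X):
        # Classify each test vector by the closest class centroid.
        cents = np.stack([train_X[train_y == c].mean(axis=0) for c in range(4)])
        d = ((test_X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    folds = np.array_split(rng.permutation(len(X)), 5)   # fivefold CV
    overall, classwise = [], []
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        pred = nearest_centroid(X[train], y[train], X[test])
        overall.append((pred == y[test]).mean())         # traditional rate
        present = [c for c in range(4) if (y[test] == c).any()]
        classwise.append(np.mean([(pred[y[test] == c] == c).mean()
                                  for c in present]))    # class-wise rate
    print(f"overall={np.mean(overall):.2f} class-wise={np.mean(classwise):.2f}")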

An Overview of Data Mining

H.B. Basanth Kumar 

Organizations worldwide generate huge amounts of data, most of which is unorganized. This unorganized data requires some processing to generate meaningful and useful information; in order to organize huge amounts of data, we implement database management systems such as SQL Server. Structured Query Language (SQL) is a query language used to retrieve and manipulate data stored in relational database management systems. However, SQL alone is not always adequate to meet end users' requirements for sophisticated information from an unorganized data bank. This paper describes the concepts of data mining, its process, its techniques and some of its applications.

A Review of Architecture for Providing Knowledge as a Service (KaaS), A New Paradigm on Academic Cloud

Bhawana Mathur

Knowledge plays a vital role in every stage of learning and research. An academic cloud providing education services to students, teachers and researchers can further be equipped to provide knowledge as a service to its users for knowledge enhancement and problem solutions. This is feasible through the development and embedding of a knowledge database and the extraction of relevant knowledge from the academic cloud system. This paper proposes an architecture that uses the new KaaS paradigm in a cloud environment. The architecture takes care of the insertion of data, the conversion of data into knowledge, knowledge extraction and the provision of various services. It also incorporates the payment policies required for the knowledge services used. In this paper, we will review what a cloud computing infrastructure can provide in the educational arena, especially in universities, where the use of computers is more intensive, and what can be done to increase the benefits of common applications for students and teachers.

A Fault Tolerant Power Constraint AODV Protocol for MANET

K. Vanaja¹ and R. Umarani²

The rapid development of wireless technology and the enormous availability of mobile devices have raised people's expectations of communicating with each other without any interruption. A Mobile Ad hoc Network is a collection of mobile devices which communicate with each other without any infrastructure. Such networks suffer from frequent changes in topology because of mobility and scalability. The main objective of this work is to resolve this issue by proposing an enhanced, reliable, fault-tolerant routing protocol for MANET based on the reactive routing protocol AODV, called Fault Tolerant Power Constraint Ad hoc On-demand Distance Vector routing (FTPC-AODV). The proposed FTPC-AODV adapts to topology changes caused by mobility-induced link breaks by building backup paths between the source and destination, with battery power as a constraint. If the primary path fails, it automatically switches to the backup path and improves the data transfer rate. The protocol is implemented in the Network Simulator (NS-2) and the simulation results are analyzed using quantitative metrics. The derived results show that the performance of the ad hoc network is significantly improved in terms of packet delivery ratio, throughput, reduced packet loss and delay.
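
The paper's mechanism is evaluated in NS-2; purely to illustrate the idea of falling back to a backup path under a battery-power constraint, here is a small Python sketch in which the topology, node energies and threshold are all invented.

    # Invented node battery levels and paths; MIN_POWER is an assumed threshold.
    battery = {"A": 0.9, "B": 0.2, "C": 0.7, "D": 0.8, "E": 0.6}
    MIN_POWER = 0.3

    def path_ok(path, broken_links):
        # A path is usable if no link is broken and every intermediate
        # relay still satisfies the battery-power constraint.
        links = set(zip(path, path[1:]))
        return (not links & broken_links
                and all(battery[n] >= MIN_POWER for n in path[1:-1]))

    primary = ["A", "B", "E"]            # rejected: relay B is low on power
    backup  = ["A", "C", "D", "E"]

    def choose_route(broken_links=frozenset()):
        for path in (primary, backup):
            if path_ok(path, broken_links):
                return path
        return None                      # both failed: rediscover routes

    print(choose_route())                           # backup path via C, D
    print(choose_route(broken_links={("C", "D")}))  # None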

Virtual Machines No Longer Considered Harmful

S. Minhaj Ali, Roohi Ali and Sana Iqbal

Many scholars would agree that, had it not been for multicast applications, the investigation of RAID might never have occurred. In fact, few physicists would disagree with the visualization of model checking, which embodies the extensive principles of theory [24]. In order to address this issue, we examine how Moore's Law can be applied to the deployment of simulated annealing.

An Overview of Character Recognition Systems

K.B. Geetha

Character recognition is the processing by machine of text-based input patterns to produce meaningful output. It lies at the core of the discipline of pattern recognition, where the aim is to recognize a sequence of characters taken from an alphabet. Advancements in pattern recognition have accelerated recently due to many emerging applications which are not only challenging but also computationally demanding, such as character recognition, document classification, computer vision, data mining, shape recognition and biometric authentication. Optical Character Recognition (OCR), a part of character recognition, is becoming an integral part of document scanners and is used in many applications such as postal processing, script recognition and banking security. Research in this area has been ongoing for over half a century and the outcomes have been astounding, with successful recognition rates of 99% for printed characters and significant improvements in performance for handwritten cursive characters, where recognition rates have exceeded 90%. Nowadays, many organizations depend on OCR systems to eliminate human interaction for better performance and efficiency; because of this, it is necessary to know about the character recognition system. This paper helps beginners by presenting an overview of the character recognition system and its functional components.

FPGA-Based Multi-Focus Image Fusion Techniques

M.A. Mohamed¹ and B.M. El-Den²

Image fusion is a process which combines data from two or more source images of the same scene to generate a single image containing more precise details of the scene than any of the source images. Averaging, principal component analysis, various types of pyramid transform, the discrete cosine transform, the discrete wavelet transform, spatial frequency and artificial neural networks are among the most common approaches. In this paper multi-focus images are used as a case study. The paper addresses these issues in image fusion: fusing two images by the different techniques presented in this research; quality assessment of the fused images; comparison of the techniques to determine the best approach; and implementation of the best technique using a Field Programmable Gate Array (FPGA). First a brief review of these techniques is presented, and then each fusion method is performed on various images. In addition, the experimental results are quantitatively evaluated by calculating root mean square error, entropy, mutual information, standard deviation and peak signal-to-noise ratio for the fused images, and a comparison is made between the methods. The best techniques are then implemented on an FPGA.
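
As a small illustration of the quantitative evaluation mentioned above, the sketch below computes three of the listed measures (root mean square error, peak signal-to-noise ratio and entropy) for a fused image against a reference; the images here are random placeholders.

    import numpy as np

    def rmse(ref, fused):
        # Root mean square error between reference and fused images.
        return np.sqrt(np.mean((ref.astype(float) - fused.astype(float)) ** 2))

    def psnr(ref, fused, peak=255.0):
        # Peak signal-to-noise ratio in dB.
        e = rmse(ref, fused)
        return float("inf") if e == 0 else 20 * np.log10(peak / e)

    def entropy(img):
        # Shannon entropy of an 8-bit image's grey-level histogram.
        p = np.bincount(img.ravel(), minlength=256) / img.size
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    rng = np.random.default_rng(1)
    ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    fused = np.clip(ref.astype(int) + rng.integers(-5, 6, (64, 64)), 0, 255)
    fused = fused.astype(np.uint8)
    print(f"RMSE={rmse(ref, fused):.2f}  PSNR={psnr(ref, fused):.1f} dB  "
          f"entropy={entropy(fused):.2f} bits")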

Challenges and Benefits of Cloud Computing

Azeem Haider¹ and Ashwani Kumar²

Today organisations perform most of their work through computers and the Internet. Online applications multiply the business of an organisation through automation. The cloud provides a solution for the nonstop working of systems and applications around the world and around the clock. Organisations choose the cloud because it is very helpful for fast deployment of applications and secure access, without spending much more on the computing environment. No doubt there are challenges associated with this new buzz. Customers can choose the model according to their requirements and their budget.

The Effects of Social Networking on Nigerian Universities Research Works (A Case Study of University of Abuja)

Okike Benjamin

A social network is a structure made up of individuals or organizations, referred to as nodes, connected by one or more specific types of interdependency. Such interdependencies may include friendships, kinships, sexual relationships, belief relationships, knowledge, etc. Knowledge is a discovery or invention made by an individual or group; when this knowledge is propagated so that others may learn it, it may lead to scientific knowledge, which is the subject of this study. It is strongly believed that social networking sites are made possible by the existence of Information and Communication Technology (ICT). The study investigates the effects of social networking on scientific knowledge from research works in Nigerian universities, with the University of Abuja as the case study. The case study was chosen because only the University of Abuja has a fairly equal representation of students from every part of Nigeria. The study administered questionnaires to collect data from students and staff who are involved in social networking as well as research work; these data are analyzed to arrive at a conclusion.

Pattern Generation of Digital Images Using Two Dimensional Cellular Automata, Nine Neighborhood Model

Fasel Qadir*, M.A. Peer and K.A. Khan

Creating an algorithmic approach for generating patterns of digital images is an important and difficult task. Researchers face many challenges in developing tiling algorithms, such as providing a simple and applicable algorithm to describe complex patterns. This paper uses cellular automata with a nine-neighbourhood model to generate patterns of digital images. The proposed approach leads to an accurate and scalable algorithm for generating patterns of digital images. The results of the implemented algorithms demonstrate our approach with a variety of patterns.
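
The abstract does not give the transition rules, so the sketch below only shows the general shape of a two-dimensional cellular automaton over the nine-cell (Moore) neighbourhood; the totalistic rule used here is invented for illustration.

    import numpy as np

    def step(grid):
        # Sum of the 3x3 nine-cell neighbourhood around every cell
        # (including the cell itself), with wrap-around boundaries.
        total = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        # Invented totalistic rule standing in for the paper's rules:
        # a cell is alive next step when the neighbourhood sum is 3 or 4.
        return ((total == 3) | (total == 4)).astype(np.uint8)

    grid = np.zeros((32, 32), dtype=np.uint8)
    grid[16, 14:19] = 1                  # small seed row
    for _ in range(8):                   # evolve a pattern from the seed
        grid = step(grid)
    print(grid.sum(), "live cells after 8 steps")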

A Practical Approach to Implement Education Technologies in New Universities

Mohammed Kafaji

The integration of educational technologies to support the operational models of modern universities plays an increasing and crucial role in developing university visions and educational strategy. This is particularly true for newly established universities and educational organizations. In this paper, the author presents a scenario-based conceptual framework that relates technology to classroom learning while maintaining alignment with the organizational strategy. The approach is used to discuss the practical aspects of implementing and managing educational technologies. As a case study, a modern private not-for-profit university in Saudi Arabia is used to illustrate the applicability of this approach. In order to evaluate the stage at which the university is operating, the Technology Acceptance Model (TAM) was used. Using these models in a coordinated manner helped the university to evaluate its current state and 'visualize' future options. This, in turn, helped to consolidate the views of key stakeholders and facilitate effective decision making. The whole approach was useful for maintaining two-way alignment with the formal university strategy, with particular emphasis on the operational and logistical perspectives rather than the financial one.

Paper Presentation on Computer Networks

D.G. Krishnamohan

The primary purpose of a computer network is to share resources. A computer network is referred to as client/server if (at least) one of the computers is used to "serve" other computers, referred to as "clients". Besides the computers, other types of devices can be part of the network. In the early days of networking, there was one central server that contained the data, and all the clients accessed this data through a network interface card. Later on, client-server architecture came into existence, where the burden still rests on the server machine. To avoid these disadvantages, distributed computing was introduced, which reduces the burden on the server by providing work-sharing capabilities. This paper describes how the concept of distributed computing came into existence based on the advantages and disadvantages of earlier networking concepts. The concept of distributed computing holds that once data is available within the server(s), it should be accessible and processable from any kind of client device, such as a computer, mobile phone, PDA, etc.

The Enterprise Ontology for Modeling E-Business

Ashish Misra, Anand Kumar Dixit and Manish Jain

In this paper we introduce the Enterprise Ontology for the e-business model, a set of carefully defined concepts that are widely used for describing enterprises in general and can help companies understand, communicate, measure, imitate and learn more about the different aspects of e-business in their firm. The Enterprise Ontology model highlights the e-business issues and elements that firms have to think about in order to operate successfully in the age of the Internet. The Enterprise Ontology contains four main pillars of a business model: Product Innovation, Infrastructure Management, Customer Relationship, and Financials.

Importance of Information Retrieval

C.S. Naga Manjula Rani

Information retrieval (IR) stands today at a crossroads. With the enormous increase in recent years in the number of text databases available online, and the consequent need for better techniques to access this information, there has been a strong resurgence of interest in IR research. Originally an outgrowth of librarianship, it has expanded into fields such as office automation, genome databases, fingerprint identification, medical image management, knowledge discovery in databases, and multimedia management. This paper deals with the importance of IR and the classical models of IR.
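
One of the classical IR models the paper refers to is the vector-space model; the following sketch scores a toy corpus against a query using tf-idf weights and cosine similarity. The corpus is invented for illustration.

    import math
    from collections import Counter

    docs = ["information retrieval of text", "image retrieval for medicine",
            "office automation and text management"]      # toy corpus
    corpus = [d.split() for d in docs]

    def tfidf(tokens):
        # tf-idf weight per term: term frequency times the log inverse
        # document frequency over the corpus.
        vec = {}
        for term, tf in Counter(tokens).items():
            df = sum(term in d for d in corpus)
            if df:
                vec[term] = tf * math.log(len(corpus) / df)
        return vec

    def cosine(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(w * w for w in a.values()))
        nb = math.sqrt(sum(w * w for w in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = tfidf("text retrieval".split())
    for d, toks in zip(docs, corpus):
        print(f"{cosine(q, tfidf(toks)):.2f}  {d}")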

Semantic Based Smarter Natural Language Interface for Database

Keshav Niranjan

This paper presents a semantics-based search paradigm to be embedded in Natural Language Interface (NLI) systems. Classical Information Retrieval (IR) models were based on lexical mapping and approximation-based searches, which suffered from obvious weaknesses: 1. The queries used predefined lexical mappings or approximations and would skip any direct or indirect references via semantic alternatives; homonymous lexemes can carry many meanings, leading to ambiguous queries and failed processes, or to ambiguous results when the user issues a hypernym query, and no intelligent mechanism is present in the NLI to interpret the query. 2. In a written query, each lexeme gives only the individual meaning of a word, but lexemes are related to each other and produce a collocated meaning for the entire sentence; the classical IR model does not consider this aspect of IR. To overcome these inadequacies of the classical IR model, the NLI has to be made smarter with adequate semantic capabilities. We therefore provide inferential capability to the existing NLI by adding a knowledge base to the system. This knowledge base consists of facts, concepts, synonymy, homonymy, hypernymy, discourse and contextual information, and helps in generating appropriate and accurate results.
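
As a toy illustration of how such a knowledge base could be used, the sketch below expands a query's lexemes with synonyms and hypernyms before matching; the mini knowledge base is entirely invented.

    # Invented mini knowledge base standing in for the paper's store of
    # facts, synonymy and hypernymy.
    KB = {
        "car":    {"syn": {"automobile", "auto"}, "hyper": {"vehicle"}},
        "salary": {"syn": {"wage", "pay"},        "hyper": {"income"}},
    }

    def expand_query(tokens):
        # Add semantic alternatives for each token so references made
        # via synonyms or hypernyms are no longer skipped.
        expanded = set()
        for t in tokens:
            expanded.add(t)
            entry = KB.get(t, {})
            expanded |= entry.get("syn", set()) | entry.get("hyper", set())
        return expanded

    # "employee salary" now also matches records that say "wage" or "income".
    print(expand_query(["employee", "salary"]))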

Packet Loss Detection Using Constant Packet Rearranging

Ankur Lal, Sipi Dubey and Bharat Pesswani

Most standard implementations of TCP give poor performance when packets are rearranged. In this paper, the loss of packets in TCP is detected using two distinct methods: CPR (Constant Packet Re-arranging) and WCPR (Without Constant Packet Re-arranging). Constant packet rearranging does not depend or rely on duplicate acknowledgements to detect packet loss; instead, a timer is used to track how long a packet has been in transit.
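
A minimal sketch of the timer-based loss detection just described: each packet gets its own send timestamp, and loss is declared by timeout rather than by counting duplicate acknowledgements. The timeout value and bookkeeping are assumptions.

    import time

    TIMEOUT = 0.5      # assumed retransmission timeout, in seconds
    sent = {}          # sequence number -> time the packet was transmitted

    def on_send(seq):
        sent[seq] = time.monotonic()      # start a per-packet timer

    def on_ack(seq):
        sent.pop(seq, None)               # acknowledged: cancel its timer

    def lost_packets():
        # Declared lost once the timer expires, with no reliance on
        # duplicate-acknowledgement counting.
        now = time.monotonic()
        return [s for s, t in sent.items() if now - t > TIMEOUT]

    on_send(1); on_send(2)
    on_ack(1)
    time.sleep(0.6)
    print(lost_packets())                 # [2]: unacknowledged past timeout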

A Method for Palmprint Verification Using an Array of Mean Values of the Pixels in the Grids of the ROI

Srinivas Rao Kanusu¹ and Ratnakumari Challa²

An approach for extracting texture patterns as features for palmprint verification is proposed in this paper. The features used to classify the texture pattern of the palmprints are calculated as an array of mean values of the pixels from the gridded ROI. The palmprint image is processed through the various stages of the system to generate the feature vector, which is used to classify the texture for palmprint verification.
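
The feature described above is easy to sketch: divide the ROI into a grid, take the mean grey value of each cell, and concatenate the means into one vector. The 8x8 grid size and the matching threshold below are assumptions, not the paper's parameters.

    import numpy as np

    def grid_mean_features(roi, rows=8, cols=8):
        # Mean pixel value of every grid cell, flattened to one vector.
        h, w = roi.shape
        h -= h % rows
        w -= w % cols                                # trim to divisible size
        cells = roi[:h, :w].reshape(rows, h // rows, cols, w // cols)
        return cells.mean(axis=(1, 3)).ravel()

    rng = np.random.default_rng(2)
    roi = rng.integers(0, 256, (128, 128)).astype(float)  # placeholder ROI
    f1 = grid_mean_features(roi)
    f2 = grid_mean_features(roi + 3)       # slightly brighter version
    # Verification: accept when the feature vectors are close enough.
    print(np.linalg.norm(f1 - f2) < 25.0)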

Decision Support System (DSS) Heart Component for MIS Development

Md. Sadique Shaikh

Designers have often ignored the human and organizational element and concentrated on the technical implementation of the hardware/software mix (Hutchinson, 2000). Every new wave of technology brings its own expectations and surrounding hype, and the field of decision support is no exception: on one hand, the need for reliable decision-making clues remains permanent; on the other hand, the substantial supply of decision support tools does not seem to play in exact tune with that need.

Role of Metadata in Data Warehousing for Effective Business Support

Shiv Kumar Gupta¹ and Ritu Vijay²

Today large organizations are served by different types of data processing and information systems. It is important to create an integrated repository of what these systems contain and do, in order to use them collectively and effectively. The repository contains metadata of the source systems, the data warehouse and also the business data. Metadata, usually called data about data, is an important part of it, and is supposed to be a helping hand to all co-workers in an organization who work directly or indirectly with the data warehouse. The main purpose of this paper is to determine the role of metadata through feedback from business end users.

Human System Integration (HSI): A Wise Solution of Human-Computer Coordination for Mutual Business Decision Making

Md. Sadique Shaikh

Human System Integration (HSI) creates an effective communication medium between a human (the business organization) and a computer (IT and advanced electronic communication). HSI is basically any MIS, AI, BIS or DSS with a data pool such as super servers and data marts/warehouses. Why is it needed? Because software that is difficult to understand and use forces mistakes into the decision-making process of a business run with the support of IT. The Human-Computer Interface (HCI), often called the Graphical User Interface (GUI), is the related concept, but HSI is slightly different: an HSI is an HCI (or GUI) that is fully understandable to end users, helping them make decisions and work mutually and simultaneously with the computer, which is rarely achieved with a poor HCI. Such human-computer coordination is possible only where the HSI is completely accurate. Thus, to modify or generate an effective and useful HSI, interface design must focus on three major concerns for effective integration between the human (i.e., the organization) and the system (i.e., the computer): 1) the design of interfaces between software components; 2) the design of interfaces between software and other non-human producers and consumers of information (i.e., other external entities); and 3) the design of the interface between a human (i.e., the users or business organization) and the computer. HCI/GUI design gives equal emphasis to all three aspects, but when developing an effective HSI to support organizational decision making, the major concern is the third. This paper addresses this emerging need for effective decision making with some models and discussion.

Modified Fuzzy C-Means Algorithm and its Application

D. Suganthi

Accurate and effective algorithms for segmenting images are very useful in many fields, especially for medical images. In this paper we introduce a novel method that focuses on segmenting brain MR images, which is important for neural diseases. Because of the many noise sources embedded in the acquisition procedure, such as eddy currents, susceptibility artifacts, rigid-body motion and intensity inhomogeneity, segmenting brain MR images is a difficult task. In this algorithm, we overcome the inhomogeneity problem by modifying the objective function to compensate for the immediate-neighbourhood effect using a Gaussian smoothing method, decreasing the influence of the inhomogeneity and increasing segmentation accuracy. With simulated images and clinical MRI data, the experiments show that the proposed algorithm is effective.
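
For context, a plain fuzzy c-means iteration is sketched below on toy one-dimensional "intensities"; the paper's contribution, the Gaussian-smoothed neighbourhood term added to this objective, is deliberately not reproduced here.

    import numpy as np

    def fcm(X, c=3, m=2.0, iters=50):
        # Standard fuzzy c-means: alternate membership and centre updates.
        rng = np.random.default_rng(0)
        U = rng.random((c, len(X)))
        U /= U.sum(axis=0)                    # columns are fuzzy memberships
        for _ in range(iters):
            Um = U ** m
            V = (Um @ X) / Um.sum(axis=1, keepdims=True)   # cluster centres
            d = np.abs(X.T - V) + 1e-9                     # c x N distances
            U = d ** (-2 / (m - 1))
            U /= U.sum(axis=0)
        return U, V

    X = np.concatenate([np.full(50, 40.0), np.full(50, 120.0),
                        np.full(50, 200.0)])[:, None]      # toy intensities
    U, V = fcm(X)
    print(np.sort(V.ravel()))                # centres near 40, 120 and 200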

Deconstructing Web Browsers

Syed Minhaj Ali¹, Roohi Ali² and Sana Iqbal³

Many security experts would agree that, had it not been for the analysis of local-area networks, the investigation of the Turing machine might never have occurred. Given the current status of permutable modalities, analysts daringly desire the natural unification of DHCP and E-commerce. Our focus in our research is not on whether forward-error correction and DHCP are entirely incompatible, but rather on presenting a novel heuristic for the analysis of multi-processors (Opah).

A Site Rank-Based Swarming Ordering Approach

Maya Ram Atal*, Roohi Ali, Ram Kumar and Rajendra Kumar Malviya

Search engines play a major and essential role in discovering information nowadays. Due to limitations of network bandwidth and hardware, search engines cannot obtain the entire information of the web and have to download the most essential pages first. In this paper, we propose a swarming ordering strategy based on SiteRank and compare it with several other swarming ordering strategies. All four strategies optimize naive swarming to some extent. At the beginning of the swarming process, all the strategies crawl pages with high PageRank. When 48% of the pages have been downloaded, the cumulative PageRank is over 58% even for the worst strategy. In the later phase of swarming, the cumulative PageRank varies slowly and finally converges. The objective of these strategies is to download the most essential pages early during the crawl. Experimental results indicate that the SiteRank-based strategy works efficiently in discovering essential pages under the PageRank evaluation of page quality.
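
The ordering idea can be pictured as a priority queue over the crawl frontier keyed by a site-level score; the scores below are invented stand-ins for SiteRank values.

    import heapq

    site_rank = {"a.example": 0.6, "b.example": 0.3, "c.example": 0.1}

    def crawl_order(frontier):
        # Pop URLs from the highest-scored site first (score negated for
        # a max-heap); pages on important sites are downloaded early.
        heap = [(-site_rank[u.split("/")[0]], u) for u in frontier]
        heapq.heapify(heap)
        while heap:
            yield heapq.heappop(heap)[1]

    frontier = ["b.example/p1", "a.example/p1", "c.example/p1", "a.example/p2"]
    print(list(crawl_order(frontier)))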

A Survey of Current Trends in Cloud Computing

M. Lawanya Shri

Cloud computing is a virtualized, Internet-based technology that has become an increasingly important trend by offering shared resources, including infrastructure, software, applications and business processes, to the market to match elastic demand and supply. In today's competitive environment, the service dynamism, elasticity, choice and flexibility offered by this scalable technology are so attractive that cloud computing is steadily becoming an integral part of the enterprise computing environment. This paper presents a survey of the current state of cloud computing, including a discussion of its evolution, the characteristics of the cloud and the technologies currently adopted in cloud computing. The paper also presents a comparative study of cloud computing platforms (Amazon and Google) and their challenges.

HSRP (Hot Standby Routing Protocol) Reliability Issues Over the Internet Service Provider's Network

Abhishek Kumar Singh and Abhay Kothari

With the appearance and expansion of Internet subscribers all over the world, ISPs' services are becoming more popular. The rapid increase in connection demand and highly trafficked networks are the main reasons behind the need for a scalable, reliable network. To offer better solutions, a new theoretical and practical approach should be considered that can deliver such a reliable network.

A Comparative Study of Mobile Wireless Networks

Shiv Kumar Gupta¹, Ramesh C. Poonia¹ and Ritu Vijay²

The coming generation is going to be the generation of mobile communication technology. As far as communication is concerned, at present we are fully equipped with 2G and have started using 3G, but the time is not far off when we will deal with 4G and beyond. In this paper we present a brief overview and the limitations of existing systems and show how the next generation of mobile communication technology is going to start a new evolution. We call this G Next revolution 4G, and we discuss its overview, its features, and its future vision and scope. This paper will help new scholars to enhance their knowledge of the existing and next-generation mobile communication systems.

Broadcasting Routing Protocols in VANET

Poonam Dhamal

Vehicular Ad Hoc Networks (VANETs) are a subclass of mobile ad hoc networks which provide a distinguished approach to Intelligent Transport Systems (ITS). A survey of routing protocols in VANET is important and necessary for smart ITS. This paper discusses the advantages, disadvantages and applications of various routing protocols for vehicular ad hoc networks, explores the motivation behind their design, and traces their evolution. The paper discusses the five main types of protocols for VANET: topology-based, position-based, geocast, broadcast and cluster-based. It also discusses the types of broadcast protocols, such as multi-hop and reliable broadcast protocols.

An Approach to Improve the Quality of Document Clustering by a Word-Set-Based Document Clustering Algorithm

Sandeep Sharma, Ruchi Dave and Naveen Hemrajani

This paper presents a technique to improve the quality of document clustering based on the word-set concept. The proposed technique, WDC (word-set-based document clustering), obtains clusterings of comparable quality significantly more efficiently than state-of-the-art text clustering algorithms. The proposed WDC algorithm utilizes the semantic relationships between words to create concepts, and gives more accurate clustering results than the other methods.

Enhanced Cache Grid Partitioning Technique for K-NN Queries

Shatadal Patro¹ and Asha Ambhaikar²*

Mobile database applications on wireless equipment, e.g., PDAs, laptops and cell phones, are growing rapidly. In such environments, clients, servers and objects may change their locations. A very applicable class of query is the continuous k-NN query, which continuously returns the k objects nearest to the current location of the requester. Given the limitations of mobile environments, it is strongly recommended to minimize the number of connections and the volume of data transmitted from the servers, and caching seems very profitable in such situations. In this paper, an enhanced cache grid partitioning technique for continuous k-NN queries in mobile databases is proposed: square grid partitioning divides the complete search space into grid areas on which a piecemeal ordering of the query targets can be imposed. Simulation results show that the proposed cache grid partitioning scheme provides a considerable improvement in response time, number of connections and volume of data transferred from the DB server.
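
The partitioning step can be sketched briefly: map every point to a square grid cell, and answer a continuous k-NN query from the cache attached to the requester's cell, contacting the server only on a miss. The cell size and data are invented.

    import heapq

    CELL = 10.0                            # assumed square-grid cell size
    cache = {}                             # grid cell -> cached candidates

    def cell_of(p):
        # The cell index gives the search space its piecemeal ordering.
        return (int(p[0] // CELL), int(p[1] // CELL))

    def knn(query, server_objects, k=2):
        c = cell_of(query)
        if c not in cache:                 # miss: one connection to server
            cache[c] = list(server_objects)
        d = lambda o: (o[0] - query[0]) ** 2 + (o[1] - query[1]) ** 2
        return heapq.nsmallest(k, cache[c], key=d)

    objs = [(1, 2), (8, 3), (14, 7), (25, 1)]
    print(knn((2, 2), objs))               # [(1, 2), (8, 3)]
    print(knn((3, 4), objs))               # same cell: served from cache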

Automatic Fault Detection in JC Bamford (JCB) Machines in a Construction Industry by the Application of a Neural Network System

Suzen S. Kallungal

Automatic fault detection is intended mainly for applications in the automotive industry. A fault detection system based on multivariate data analysis is needed to increase data reliability and to monitor and control test equipment. The detection scheme has to process different measurements at a time and check them for consistency. An important requirement for the fault detection scheme is that it should be able to automatically adapt itself to new data with a level of accuracy that may not always be achieved manually. The project related to this paper worked on real-time parameters read from high-power automotives, especially the JCBs used in the construction industry. Various parameters, including temperatures, pressures, oil levels and the states of the valves, are monitored and sent to a server. Results showed that automatic fault detection through a neural network system is useful, as it saves time and cost and detects faults accurately.

Nanotechnology: Aspects, Risk Analysis and Implications

Subul Aijaz¹, Mayuri Pandey¹ and Sana Iqbal²

Many words have been written about the dangers of advanced nanotechnology. Most of the threatening scenarios involve tiny manufacturing systems that run amok, or are used to create destructive products. A manufacturing infrastructure built around a centrally controlled, relatively large, self-contained manufacturing system would avoid these problems. A controlled nanofactory would pose no inherent danger, and it could be deployed and used widely. Cheap, clean, convenient, on-site manufacturing would be possible without the risks associated with uncontrolled nanotech fabrication or excessive regulation. Control of the products could be administered by a central authority; intellectual property rights could be respected. In addition, restricted design software could allow unrestricted innovation while limiting the capabilities of the final products. The proposed solution appears to preserve the benefits of advanced nanotechnology while minimizing the most serious risks.

Land Use/Land Cover and Geomorphological Characteristics Study Using Remote Sensing and GIS: A Model Study from Arunachal Pradesh

S.S. Asadi¹*, B.V.T. Vasantha Rao², M.V. Raju³ and P. Neela Rani⁴

The demand for natural resources is increasing day by day due to the increasing population, rapid urbanization, industrial growth and agricultural utilization. Groundwater levels are decreasing over the years due to all of the above activities, to decreasing annual rainfall caused by climatic change, and to increasing runoff due to urbanization and deforestation. Hence, it is necessary to augment land and water resources for future demands. Keeping this in view, we have carried out a model study of socio-economic conditions and the mapping of land use/land cover and geomorphological characteristics. The study area is situated in the East Siang district of Arunachal Pradesh, falling in SOI toposheets no. 83I/13,14, 82L/5,10,11,14,16, 82P/2,3,4,7,8,11,12 and 83M/1,5,9. The present study was carried out to delineate land use/land cover and geomorphological characteristics from IRS-1D PAN and LISS-III geocoded data at 1:50,000 scale. A Geographical Information System was used to prepare a database of the above layers, to analyse their relationships and to prepare integrated maps. The study area has a complex geomorphology. The study demonstrates the utility of remote sensing data in creating socio-economic condition data and identifying land use/land cover and geomorphology classes even in a complex terrain like the study area. The results, in the form of integrated maps, could be properly analysed using the advantages of a technology like GIS, with a methodology that includes the analysis of many resources and their interpretation. In the final maps, different land use/land cover and geomorphology classes were identified in the study area to meet future demand and ensure proper utilization of resources.

Risk and Remedies of E-Governance Systems

Abhishek Roy and Sunil Karforma

With the advancement of Information and Communication Technology (ICT), information has become the most easily accessible yet very valuable commodity. Since the successful implementation of various electronic mechanisms such as E-Governance, E-Commerce, E-Learning, E-Health, M-Governance, M-Insurance, etc. depends entirely on the security and authenticity of information, it is very susceptible to interception and alteration by hackers. In this paper the authors make a thorough study of the various risk factors of information security and their probable remedies using various cryptographic algorithms, so that the above-mentioned e-mechanisms can be implemented with the utmost Privacy, Integrity, Non-Repudiation and Authentication (PINA).
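
As one minimal, standard-library illustration of the integrity and authentication aspects of PINA, the sketch below attaches an HMAC tag to a message; non-repudiation would additionally require asymmetric signatures, which are not shown, and the message itself is invented.

    import hmac, hashlib, secrets

    key = secrets.token_bytes(32)     # shared secret (citizen and portal)
    msg = b"e-governance record: tax filing #1234"
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

    def verify(key, msg, tag):
        # Integrity + authentication: any alteration changes the tag.
        good = hmac.new(key, msg, hashlib.sha256).hexdigest()
        return hmac.compare_digest(good, tag)

    print(verify(key, msg, tag))            # True: untampered
    print(verify(key, msg + b"!", tag))     # False: alteration is caught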

Application of Artificial Neural Network in the Ratio Prediction of Axis Bank

Roli Pradhan

The prediction of corporate bankruptcies is an important and widely studied topic, since it can have a significant impact on bank lending decisions and profitability. This work makes two contributions. First, we review the topic of bankruptcy prediction, with emphasis on the different models. Second, inspired by the traditional credit risk models, we propose novel indicators for the neural network system. The paper then endeavors to predict, using a tailored back-propagation neural network (BPNN), the financial ratios expressing the position of a firm, in order to gauge bankruptcy and assess credit risk. It first estimates the financial ratios of a firm from 2001 to 2008 to train the BPNN and uses the 2009 and 2010 values for validation. Finally, it draws predictions for the period 2011-2015 and emphasizes the growing role of BPNN-based prediction models in the banking sector, with a case study of AXIS Bank. We conclude with practical suggestions on how best to integrate the models and research into policy-making decisions.
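
The train/validate/forecast split described above can be sketched with a tiny one-hidden-layer back-propagation network; the ratio series here is invented, and the architecture and learning rate are assumptions, not the paper's configuration.

    import numpy as np

    years = np.arange(2001, 2011, dtype=float)
    ratio = np.array([1.2, 1.3, 1.3, 1.5, 1.6, 1.8, 1.9, 2.1, 2.2, 2.4])
    x, t = (years - 2001) / 14.0, ratio / 3.0     # scale to roughly [0, 1]
    train, val = slice(0, 8), slice(8, 10)        # 2001-08 train, 2009-10 validate

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(0, 1, (8, 1)), np.zeros((8, 1))   # hidden layer
    W2, b2 = rng.normal(0, 1, (1, 8)), np.zeros((1, 1))   # output layer

    def forward(x):
        h = np.tanh(W1 @ x[None, :] + b1)
        return (W2 @ h + b2).ravel(), h

    for _ in range(5000):                         # plain back-propagation
        y, h = forward(x[train])
        err = (y - t[train])[None, :]             # output error, 1 x 8
        gW2, gb2 = err @ h.T / 8, err.mean(axis=1, keepdims=True)
        dh = (W2.T @ err) * (1 - h ** 2)          # back-propagate to hidden
        gW1, gb1 = dh @ x[train, None] / 8, dh.mean(axis=1, keepdims=True)
        for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
            p -= 0.1 * g
    print("validation error:", np.abs(forward(x[val])[0] - t[val]).mean() * 3)
    future = (np.arange(2011, 2016) - 2001) / 14.0
    print("2011-2015 forecast:", np.round(forward(future)[0] * 3, 2))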

Energy Efficient Cluster Based Key Management Technique for Wireless Sensor Networks

T. Lalitha¹* and R. Umarani²

Wireless Sensor Networks (WSNs) are vulnerable to node capture attacks, in which an attacker can capture one or more sensor nodes and reveal all stored security information, enabling him to compromise part of the WSN communications. Due to the large number of sensor nodes and the lack of information about the deployment and hardware capabilities of sensor nodes, key management in wireless sensor networks has become a complex task. Limited memory resources and energy constraints are further issues for key management in WSNs. Hence an efficient key management scheme is necessary, one that reduces the impact of node capture attacks and consumes less energy. In this paper, we develop a cluster-based technique for key management in wireless sensor networks. Initially, clusters are formed in the network and the cluster heads are selected based on energy cost, coverage and processing capacity. The sink assigns a cluster key to every cluster and an EBS key set to every cluster head; the EBS key set contains the pairwise keys for intra-cluster and inter-cluster communication. During data transmission towards the sink, the data passes through two phases of encryption, ensuring security in the network. Simulation results show that our proposed technique efficiently increases the packet delivery ratio with reduced energy consumption.
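
The head-selection step can be illustrated with a weighted score over the three criteria named above; the node attributes and weights are invented, since the abstract does not give the exact scoring function.

    # Invented node attributes (all normalized to [0, 1]) and weights.
    nodes = {
        "n1": {"energy": 0.9, "coverage": 0.5, "cpu": 0.6},
        "n2": {"energy": 0.6, "coverage": 0.9, "cpu": 0.8},
        "n3": {"energy": 0.4, "coverage": 0.7, "cpu": 0.9},
    }
    W = {"energy": 0.5, "coverage": 0.3, "cpu": 0.2}   # assumed weights

    def score(attrs):
        # Weighted sum of energy, coverage and processing capacity.
        return sum(W[k] * attrs[k] for k in W)

    head = max(nodes, key=lambda n: score(nodes[n]))
    print("elected cluster head:", head, round(score(nodes[head]), 2))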

Design Architecture Class Diagram for A Comprehensive Testing Tool

Sarita Singh Bhadauria¹*, Abhay Kothari² and Lalji Prasad³

Object orientation, involving class and object concepts and their properties, plays an important role in constructing any object-oriented system. In this research work, a comprehensive class diagram is provided that may help in designing a comprehensive software testing tool. A requirement specification for such a tool is established by studying the feature sets offered by existing software testing tools, along with their limitations. The requirement set thus developed is capable of overcoming the limitations of the limited feature sets of existing tools, and contributes to the design of a comprehensive architecture class diagram for a software testing tool that includes most of the features required of one (most of the testing techniques come from procedural and object-oriented programming system development). In addition, because different user interfaces are provided by different tools, an effort has been made to use them in the present system being designed.

Data Mining for Banking and Finance

M.B. Hammawa*

Currently, huge electronic data repositories are being maintained by banks and other financial institutions. Valuable bits of information are embedded in these repositories, but their huge size makes it impossible for a human analyst to come up with interesting information (or patterns) that will help in the decision-making process. A number of commercial enterprises have been quick to recognize the value of this concept; as a consequence, the software market for data mining itself is expected to exceed 10 billion USD. This note is intended for bankers who would like to become aware of the possible applications of data mining to enhance the performance of some of their core business processes. The author discusses broad areas of application, such as risk management, portfolio management, trading, customer profiling and customer care, where data mining techniques can be used in banks and other financial institutions to enhance their business performance.

Ontology Based Knowledge Grid in Semantic Web to Discover Knowledge in Distributed Environment

Muqeem Ahmed*, S.Z. Hussain and S.A.M. Rizvi

In spite of much current research and investigation, the development of advanced information technology is not the key issue. Different information technologies are available nowadays, but the major issue is how to get more advantage and utilization out of these technologies for academic purposes in a distributed environment where faculty and students communicate through software rather than with individuals. The Knowledge-Based Grid was introduced for publishing, managing, sharing and utilizing large amounts of knowledge-base resources on the semantic web in a distributed environment. Knowledge discovery from the heterogeneous information sources available in a knowledge grid environment is a major and challenging research and development issue. This paper concerns all aspects of the knowledge discovery and sharing process, and integrates grid data resources via an ontology server for educational institutes and universities in a distributed environment to address these issues and challenges.

Emulating Rasterization and Von Neumann Machines With Retina

Bijan Rouhi¹*, Peiman Ghasemi² and Amin Ghorbani³

In recent years, much research has been devoted to the deployment of cache coherence; however, few have developed the construction of e-business [26], [26]. After years of confusing research into DHTs, we validate the synthesis of multicast solutions, which embodies the appropriate principles of compact complexity theory. We propose new concurrent epistemologies, which we call Retina. Although it at first glance seems unexpected, it is buffeted by related work in the field.

Applying Data Mining Research Methodologies on Information Systems

M.B. Hammawa* and G. Sampson

In this paper we consider several frameworks for data mining. These frameworks are based on different approaches, including the inductive databases approach, reductionist statistical approaches, the data compression approach, the constructive induction approach and some others. We consider the advantages and limitations of these frameworks. We present a view of data mining research as a continuous and never-ending development process of an adaptive DM system towards the efficient utilization of available DM techniques for solving a current problem impacted by a dynamically changing environment. We discuss one of the traditional information systems frameworks and, drawing an analogy to this framework, we consider a data mining system as a special kind of adaptive information system. We adapt the information systems development framework to the context of data mining systems development.

Reconstruction of a Binary Search Tree from Its Preorder Tree Traversal with a Unique Non-Recursive Approach

Manoj C. Lohani, Upendra S. Aswal and Ramesh S. Rawat

This paper presents a new approach to the reconstruction of a binary search tree using its preorder tree traversal only. Many approaches have been given that use a combination of two tree traversals, but in this paper we do not use any other traversal to reconstruct the binary search tree. Our work shows the implementation of this algorithm in the C language. Our algorithm is found to be very simple and faster than other non-recursive algorithms due to its unique implementation, which significantly reduces the time and space complexities.
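
The paper's own C implementation is not reproduced in the abstract; the sketch below shows the standard non-recursive, stack-based way to rebuild a BST from its preorder sequence alone, which illustrates why no second traversal is needed.

    class Node:
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def bst_from_preorder(pre):
        # Keys smaller than the stack top become left children; otherwise
        # pop to find the parent and attach the key as a right child.
        if not pre:
            return None
        root = Node(pre[0])
        stack = [root]
        for key in pre[1:]:
            node = Node(key)
            if key < stack[-1].key:
                stack[-1].left = node
            else:
                parent = stack[-1]
                while stack and key > stack[-1].key:
                    parent = stack.pop()
                parent.right = node
            stack.append(node)
        return root

    def inorder(n):
        return inorder(n.left) + [n.key] + inorder(n.right) if n else []

    print(inorder(bst_from_preorder([8, 5, 1, 7, 10, 12])))  # sorted output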

Reliability Through Simulation: Goals and Limitations

S.M.K. Quadri and Aasia Quyoum

Software reliability is an important component of software quality. A number of software reliability models have been proposed since the 1970s, but there is no single model that can be used in all situations. To reduce risk, it is better to experiment with a model of the system rather than with the system itself. Simulation offers an attractive alternative to analytical models, as it describes a system in terms of its artifacts, events, interrelationships and interactions in such a way that one may perform experiments on the model rather than on the system itself. Simulation strives to achieve its goals, but it does have certain limitations. This research paper focuses on the goals and limitations of using simulation in software reliability.

A New Approach for Palmprint Matching Using Statistical Parameters

Kanusu Srinivas Rao¹ and Ratnakumari Challa²

Intelligent e-voting data has been shown to offer a lot of benefits to e-voting, especially in the areas of security and recounting. After the election and balloting processes, valuable knowledge can still be extracted from this data. This work provides a framework model as a roadmap for developers to follow in the future development of such a system. The Perl-based sample tested showed optimum performance and hence proves the viability of the methodology.

Analysis of Framework on Evaluation of Qualitative Models of Software Development System

Ashish Rastogi

As we know, the software market is growing very fast, and the main purpose of most software producers is to produce software of very high quality. Software quality is a multi-dimensional concept which is easily distinguishable and measurable, although the quality of software depends on the functional and non-functional requirements of the user. To determine this concept more exactly, qualitative models have been presented in which different aspects of the matter are investigated. However, the existence of different models using different terminology has made the comprehension of this concept a little hard. In this research paper we introduce the models and their analytical comparison, and determine software quality and its qualitative characteristics more clearly.
