M. A. H. Akhand, Tanzima Sultana, M. I. R. Shuvo and Al-Mahmud
Vehicle Routing Problem (VRP) is a real-life constraint satisfaction problem of finding minimal travel distances for vehicles to serve customers. Capacitated VRP (CVRP) is the simplest form of VRP, considering only the vehicle capacity constraint. Constructive and clustering approaches are the two popular ways of solving CVRP. A constructive approach creates routes and attempts to minimize their cost at the same time; Clarke and Wright's Savings algorithm is a popular constructive method based on the savings heuristic. A clustering-based method, on the other hand, first assigns nodes to vehicle-wise clusters and then generates a route for each vehicle; the Sweep algorithm and its variants and the Fisher and Jaikumar algorithm are popular clustering methods. Route generation is a traveling salesman problem (TSP), and any TSP optimization method is useful for this purpose. In this study, popular constructive and clustering methods are studied and implemented, and their outcomes are compared on a suite of benchmark CVRPs. For route optimization, Genetic Algorithm (GA), Ant Colony Optimization (ACO) and Velocity Tentative Particle Swarm Optimization (VTPSO), three popular nature-inspired optimization techniques for solving TSP, are employed. Experimental results reveal that parallel Savings is better than series Savings among constructive methods, while Sweep Reference Point using every stop (SRE) is the best among clustering-based techniques.
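The parallel Savings heuristic mentioned above can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: node 0 is assumed to be the depot, and only one merge orientation is checked (a full implementation also considers reversed route ends).

```python
# Minimal sketch of Clarke and Wright's parallel Savings heuristic.
# Node 0 is the depot; dist is a symmetric distance matrix and
# demand[i] is the demand of customer i.
def parallel_savings(dist, demand, capacity):
    n = len(dist)
    # Start with one route per customer: depot -> i -> depot.
    routes = {i: [i] for i in range(1, n)}
    route_of = {i: i for i in range(1, n)}
    load = {i: demand[i] for i in range(1, n)}

    # Savings s(i, j) = d(0, i) + d(0, j) - d(i, j), sorted descending.
    savings = sorted(
        ((dist[0][i] + dist[0][j] - dist[i][j], i, j)
         for i in range(1, n) for j in range(i + 1, n)),
        reverse=True)

    for s, i, j in savings:
        ri, rj = route_of[i], route_of[j]
        if ri == rj:
            continue
        # Merge only if i ends one route, j starts the other, and the
        # combined load fits the vehicle. (Simplified: a complete
        # implementation also tries the reversed orientation.)
        if routes[ri][-1] == i and routes[rj][0] == j \
                and load[ri] + load[rj] <= capacity:
            routes[ri].extend(routes[rj])
            load[ri] += load[rj]
            for node in routes[rj]:
                route_of[node] = ri
            del routes[rj], load[rj]
    return list(routes.values())
```

Given a small instance, the routes with the largest savings are merged first until the capacity constraint blocks further merges.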
Kamalpreet Kaur* and O.P. Gupta
Maturity checking has become mandatory for the food industry as well as for farmers to ensure that fruits and vegetables are not diseased and are ripe. However, manual inspection leads to human error, and unripe fruits and vegetables may decrease production. This study therefore proposes a tomato classification system that determines the maturity stage of tomatoes through machine learning, training different algorithms: Decision Tree, Logistic Regression, Gradient Boosting, Random Forest, Support Vector Machine, K-NN and XGBoost. The system consists of image collection, feature extraction and training of the classifiers on 80% of the total data; the remaining 20% is used for testing. The results show that classifier performance depends on the size and kind of features extracted from the data set. Results are reported in the form of learning curves, confusion matrices and accuracy scores. Out of the seven classifiers, Random Forest performs best with 92.49% accuracy due to its high capability of handling large data sets, while Support Vector Machine shows the least accuracy due to its inability to train on a large data set.
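The pipeline described above (features, 80/20 split, classification) can be illustrated with a small self-contained sketch. The colour features below are synthetic placeholders, not the paper's tomato data set, and the classifier is a plain k-NN, one of the seven algorithms compared:

```python
# Illustrative sketch of the pipeline: simple colour features, an
# 80%/20% train/test split, and a k-NN classifier. The feature values
# are synthetic stand-ins for the paper's extracted tomato features.
import math
import random

def knn_predict(train, query, k=3):
    """Majority vote among the k nearest training samples."""
    neighbours = sorted(train, key=lambda s: math.dist(s[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

random.seed(0)
# Synthetic (mean_red, mean_green) features: ripe tomatoes are redder.
data = ([((random.uniform(0.7, 1.0), random.uniform(0.0, 0.3)), "ripe")
         for _ in range(40)] +
        [((random.uniform(0.0, 0.3), random.uniform(0.7, 1.0)), "unripe")
         for _ in range(40)])
random.shuffle(data)
split = int(0.8 * len(data))          # 80% train / 20% test
train, test = data[:split], data[split:]
accuracy = sum(knn_predict(train, x) == y for x, y in test) / len(test)
```

On these well-separated synthetic clusters the hold-out accuracy is perfect; on real image features the classifiers diverge, as the paper's comparison shows.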
Chetan R. Dudhagara and Hasamukh B. Patel
In the recent era of modern technology, there are many problems in the storage, retrieval and transmission of data. Data compression is necessary due to the rapid growth of digital media and the subsequent need to reduce storage size and transmit data effectively and efficiently over networks; it also reduces transmission traffic on the internet. Data compression tries to reduce the number of bits required to store data digitally, and various data and image compression algorithms are widely used to reduce the original data to a smaller number of bits. Lossless data and image compression is a special class of data compression: such an algorithm reduces the number of bits by identifying and eliminating statistical redundancy in the input data. Run Length Encoding is a very simple and effective method of this kind. It provides good lossless compression of input data and is useful for data that contain many consecutive runs of the same value. This paper presents an implementation of Run Length Encoding for data compression.
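The idea of Run Length Encoding can be shown in a few lines. This is a minimal sketch, not the paper's implementation: each run of identical values becomes a (value, count) pair, and decoding expands the pairs back, so the transform is lossless.

```python
# Minimal sketch of Run Length Encoding (RLE): replace each run of
# identical values with a (value, count) pair.
def rle_encode(data):
    encoded = []
    for value in data:
        if encoded and encoded[-1][0] == value:
            encoded[-1][1] += 1       # extend the current run
        else:
            encoded.append([value, 1])  # start a new run
    return [(v, c) for v, c in encoded]

def rle_decode(pairs):
    """Expand (value, count) pairs back to the original sequence."""
    return [v for v, c in pairs for _ in range(c)]
```

For example, the sequence A A A A B B B C C encodes to three pairs, which is why RLE works best on inputs with many consecutive runs of the same value.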
Chetan R. Dudhagara and Mayur M. Patel
In recent years the use of digital media has increased widely everywhere. With this increased use comes a huge problem of storage, manipulation and transmission of data over the internet. Digital media such as images, audio and video require large memory space, so it is necessary to compress digital data so that it requires less memory and less bandwidth for transmission over the network. Image compression techniques are used to compress data to reduce the storage requirement, and they play an important role in transferring data such as images over the network. Two methods are applied in this paper to the Barbara image: the compression study is performed using the Set Partitioning In Hierarchical Trees (SPIHT) and Embedded Zerotree Wavelet (EZW) compression techniques. Several parameters are used to compare these techniques: Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR) and Compression Ratio (CR) at different levels of decomposition.
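The comparison metrics named above are standard and easy to state concretely. The following is an illustrative sketch of MSE, PSNR (for 8-bit images, peak value 255) and compression ratio; the tiny sample arrays in the test are placeholders, not the Barbara image:

```python
# Standard image-quality and compression metrics used to compare
# SPIHT and EZW: MSE, PSNR and Compression Ratio.
import math

def mse(original, reconstructed):
    """Mean Square Error between two equal-length pixel sequences."""
    return sum((a - b) ** 2
               for a, b in zip(original, reconstructed)) / len(original)

def psnr(original, reconstructed, peak=255):
    """Peak Signal to Noise Ratio in dB (infinite for identical inputs)."""
    error = mse(original, reconstructed)
    return float("inf") if error == 0 else 10 * math.log10(peak ** 2 / error)

def compression_ratio(original_bits, compressed_bits):
    """Ratio of the original size to the compressed size."""
    return original_bits / compressed_bits
```

A lower MSE (and hence higher PSNR) at the same compression ratio indicates the better-performing codec at that decomposition level.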
Sachin Lalar, Shashi Bhushan and Surender
Wireless Sensor Networks (WSNs) are developing very fast among wireless networks. A wireless sensor network is characterized by limited memory, small node size and limited battery power, and these characteristics make WSNs vulnerable to different types of attacks. One such attack is the clone node attack, in which an attacker captures nodes from the network, steals their information and replicates them in the network. From the clone nodes, the attacker can easily launch different types of attacks on the network. To detect clone nodes, different methods have been implemented, each with its own advantages and limitations. In this paper, we explain the different methods of detecting clone nodes in static wireless sensor networks and compare their performance based on communication cost and memory.