Friday, 6 January 2017

Security Threats to Cloud and Corresponding Solutions

Vol. 3  Issue 2
Year: 2016
Issue:Feb-Apr 
Title:Security Threats to Cloud and Corresponding Solutions
Author Name:D. Raghavaraju
Synopsis:
Cloud computing systems are among the most complex computing systems in existence today. Current Cloud applications make extensive use of distributed systems with varying levels of connectivity and usage. With the recent focus on large-scale adoption of Cloud computing, identity management in Cloud-based systems is a fundamental issue for the sustainability of any Cloud-based service. This area has received considerable attention from the research community as well as the IT industry, and researchers have applied diverse algorithms and methodologies; still, cloud computing security is in its early stages. Several IT companies are concentrating on cloud security and cloud data security. This paper gives an overview of security threats and their corresponding solutions.

Improving the Proof of Retrievability in Cloud Computing

Vol. 3  Issue 2
Year: 2016
Issue:Feb-Apr 
Title:Improving the Proof of Retrievability in Cloud Computing
Author Name:Darubhaigari Ali Ahammed
Synopsis:
Cloud computing provides resource sharing and the handling of applications over the internet without local or personal devices. Data integrity is one of the major challenges in cloud computing. The Outsourced Proof of Retrievability (OPoR) system focuses on the Cloud Storage Server (CSS) to prevent routing attacks and malicious server operations. Under public verifiability, security monitoring is handled by a Cloud Audit Server (CAS) to reduce the overhead on clients. However, the CAS itself may perform malicious operations, so the secure operation of both the CSS and the CAS needs to be strengthened. This paper strengthens the Proof of Retrievability (PoR) model and its dynamic data integrity verification over distrusted, outsourced cloud storage. It hardens the CAS and CSS operations by using third-party entities that generate a unique temporary key for each update or modification of a user's file. Such an OTP key is normally generated on the server side; here it is instead generated by a third-party Key-Generator entity. Reset attacks on the CAS and the cloud storage server are countered by this unique temporary key and by deleting the local host replica after verifying the uploaded file's proof tags sent by the CAS and CSS, while memory and processing costs are reduced using elliptic curve cryptography. Thus the proposed system, the Improving the Proof of Retrievability (IPoR) model, strengthens the retrievability guarantee for file upload and update operations in cloud computing.
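
The synopsis credits elliptic curve cryptography with the reduction in memory and processing cost. As a rough illustration (not the paper's actual proof-tag construction), the following Python sketch uses the cryptography library to sign a file block with a 256-bit EC key and verify it, the way an auditor might check a block against its tag; the block contents are placeholders.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
# pip install cryptography

# A 256-bit elliptic-curve key gives security comparable to a much larger
# RSA key, which is where the memory/processing saving comes from.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

file_block = b"one block of the outsourced file"   # placeholder content
tag = private_key.sign(file_block, ec.ECDSA(hashes.SHA256()))

# An auditor holding the public key can check the block against its tag;
# verify() raises InvalidSignature if the block was tampered with.
public_key.verify(tag, file_block, ec.ECDSA(hashes.SHA256()))
print("block verified against its tag")
```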

Providing Security to The User Data in Cloud

Vol. 3  Issue 2
Year: 2016
Issue:Feb-Apr 
Title:Providing Security to The User Data in Cloud
Author Name:J.Nagaraja and M. Purushotham
Synopsis:
Cloud computing infrastructure is widely used for storing clients' data in a shared environment. Clients can store and retrieve their data, and whenever a client needs access, the cloud provides the required data efficiently. However, some important data is sensitive, and a data owner will not store it in the cloud unless its privacy and confidentiality are guaranteed. Confidentiality should be maintained during both query processing and retrieval. To provide confidentiality and efficiency in query processing, the authors use the RC4 algorithm to secure users' data. RC4 produces a pseudo-random keystream that is used to generate the ciphertext (by XORing it with the plaintext). As with any stream cipher, encryption combines the keystream with the plaintext using bit-wise exclusive-or, and decryption is performed in the same way.
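
The keystream-XOR mechanism described above is easy to sketch. A minimal pure-Python RC4 implementation (key and message are placeholders; the paper provides no code), showing that the same operation both encrypts and decrypts:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): initialise the permutation S from the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): XOR the keystream with the data.
    i = j = 0
    out = bytearray()
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = rc4(b"secret-key", b"user data stored in the cloud")
plaintext = rc4(b"secret-key", ciphertext)   # the same operation decrypts
assert plaintext == b"user data stored in the cloud"
```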

Performance Evaluation of Task Scheduling Algorithms for Cloud Computing

Vol. 3  Issue 2
Year: 2016
Issue:Feb-Apr 
Title:Performance Evaluation of Task Scheduling Algorithms for Cloud Computing
Author Name:Amtoj Kaur and Kanwalvir Singh Dhindsa
Synopsis:
Cloud Computing deals with varied virtualized resources, and task scheduling plays a crucial role in enhancing its performance. The task scheduling problem is to distribute tasks within the system in a way that optimizes overall performance: minimizing the makespan and waiting time, maximizing throughput, and so on. The paper presents a comparison between FCFS, priority-based, and Round Robin scheduling algorithms. The priority-based and Round Robin scheduling algorithms showed better results than FCFS under certain parameters.
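
For intuition about the makespan metric being compared, here is a hedged toy model in Python; the task lengths, VM speeds, and dispatch rules are illustrative assumptions, not the paper's simulation setup:

```python
# Hypothetical task lengths (million instructions) and VM speeds (MIPS).
tasks = [400, 250, 900, 120, 600, 300]
vm_speeds = [100, 200]

def fcfs(tasks, vm_speeds):
    """FCFS: each task, in arrival order, goes to the VM that frees up first."""
    finish = [0.0] * len(vm_speeds)
    for length in tasks:
        vm = finish.index(min(finish))
        finish[vm] += length / vm_speeds[vm]
    return max(finish)               # makespan = latest VM finish time

def round_robin(tasks, vm_speeds):
    """Round Robin: tasks are assigned to VMs cyclically."""
    finish = [0.0] * len(vm_speeds)
    for k, length in enumerate(tasks):
        vm = k % len(vm_speeds)
        finish[vm] += length / vm_speeds[vm]
    return max(finish)

print("FCFS makespan:", fcfs(tasks, vm_speeds))
print("RR makespan:  ", round_robin(tasks, vm_speeds))
```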

Performance Evaluation for CRUD Operations in NoSQL Databases

Vol. 3  Issue 2
Year: 2016
Issue:Feb-Apr 
Title:Performance Evaluation for CRUD Operations in NoSQL Databases
Author Name:Amandeep Kaur and Kanwalvir Singh Dhindsa
Synopsis:
With the Web growing rapidly and the rise of user-generated-content websites such as Facebook and Twitter, there is a need for fast databases that can handle huge amounts of data. For this purpose, new database management systems, collectively called NoSQL, are being developed. There are many NoSQL database types with different performance characteristics, so evaluating performance is important. Three major NoSQL databases, MongoDB, Cassandra, and Couchbase, have been considered. For the performance analysis, different workloads were designed, and the evaluation was done on the basis of read and update operations. This evaluation enables users to choose the most appropriate NoSQL database according to their particular mechanisms and application needs.
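
A minimal hand-rolled sketch of such a read/update workload against one of the three stores, MongoDB via pymongo (the record count, document shape, and 50/50 read/update mix are assumptions; the paper's workload definitions may differ):

```python
import time
from pymongo import MongoClient  # pip install pymongo

client = MongoClient("mongodb://localhost:27017")   # assumes a local MongoDB
col = client["bench"]["usertable"]

# Load phase: insert synthetic records.
col.drop()
col.insert_many([{"_id": i, "field0": "x" * 100} for i in range(10_000)])

# Run phase: time a 50/50 mix of read and update operations.
start = time.perf_counter()
for i in range(1_000):
    if i % 2 == 0:
        col.find_one({"_id": i})                                      # read
    else:
        col.update_one({"_id": i}, {"$set": {"field0": "y" * 100}})   # update
elapsed = time.perf_counter() - start
print(f"ops/sec: {1_000 / elapsed:.0f}")
```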

A Survey on Energy Aware Job Scheduling Algorithms in Cloud Environment

Vol. 3  Issue 1
Year: 2016
Issue:Nov-Jan
Title:A Survey on Energy Aware Job Scheduling Algorithms in Cloud Environment
Author Name:Shaik Naseera and P. Jyotheeswai 
Synopsis:
Nowadays cloud computing receives a great deal of attention from the research community. Cloud computing is a platform that supports the sharing of resources, communication, and storage capacity over the internet. The primary benefit of moving to the Cloud is application scalability. It provides virtualized resources and is built on the foundations of grid and distributed computing. Cloud computing is also an environmentally friendly framework: it benefits from efficient utilization of resources and optimal scheduling algorithms. The growth of internet-based applications demands algorithms that cope with the escalation in energy consumption and reduce operational cost and the emission of CO2 gases. In this paper, the authors present a review of energy-aware job scheduling algorithms existing in the literature. This paper helps readers understand the functionality and parameter focus of the various energy-aware scheduling algorithms available in the literature.

Enhanced E-tree for Mining High Dimensional Data

Vol. 3  Issue 1
Year: 2016
Issue:Nov-Jan
Title:Enhanced E-tree for Mining High Dimensional Data
Author Name:S. Salam, M. Roja and T. V. Rao
Synopsis:
Data stream classification is one of the critical tasks in data mining. When data streams arrive at a pace of gigabytes per second, spam detection, web monitoring, and storage must all be handled; this is a difficult operation at which existing systems fall short. By implementing two algorithms, the E-tree (Ensemble-tree) algorithm and the greedy algorithm, the authors avoid these existing issues. The Ensemble tree (E-tree) copes with large volumes of stream data and with concept drift. The E-tree classifies and groups the data stream, stores the data effectively, and supports accurate web monitoring and spam detection. To control web traffic, the authors implemented the greedy algorithm.

An Effective Feature Selection Technique for Mining High Dimensional Data on Bigdata

Vol. 3  Issue 1
Year: 2016
Issue:Nov-Jan
Title:An Effective Feature Selection Technique for Mining High Dimensional Data on Bigdata
Author Name:K. Bhaskar Naik and S.P Sindhuja
Synopsis:
In recent years, many research innovations have come to the fore in the area of big data analytics. Advanced analysis of big data streams is bound to become a key area of data mining research as the number of applications requiring such processing increases. Big data sets are now collected in many fields, e.g., finance, business, medical systems, the internet, and other scientific research. Data sets rapidly increase in size as they are often generated as incoming streams. Feature selection has been used to lighten the processing load of inducing a data mining model, but mining high-dimensional data is a tough task due to its exponential growth in size. This paper compares two algorithms, Particle Swarm Optimization and the FAST algorithm, in the feature selection process. The FAST algorithm is used to remove irrelevant and redundant data while streaming high-dimensional data, which further increases analytical accuracy for a reasonable processing time.
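
The FAST algorithm proper builds on symmetric uncertainty and minimum-spanning-tree clustering; as a generic stand-in for the filter-style relevance step, here is a short scikit-learn sketch on synthetic high-dimensional data (dataset shape and k are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic high-dimensional data: 1000 samples, 500 features, few informative.
X, y = make_classification(n_samples=1000, n_features=500,
                           n_informative=10, random_state=0)

# Filter-style selection: score each feature's relevance to the class label,
# then keep the top k -- the relevance-filtering idea FAST builds on.
selector = SelectKBest(mutual_info_classif, k=10)
X_reduced = selector.fit_transform(X, y)
print(X_reduced.shape)   # (1000, 10)
```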

A Methodology for WebLog Data Analysis using Hadoop MapReduce and PIG

Vol. 3  Issue 1
Year: 2016
Issue:Nov-Jan
Title:A Methodology for WebLog Data Analysis using Hadoop MapReduce and PIG
Author Name:Durga Prasad P S, T. Vivekanandan and A.Srinivasan
Synopsis:
In recent times, the world is facing severe problems related to data storage and processing. In particular, the size of weblog data is increasing exponentially, into petabytes and zettabytes. Weblog data records users' actions on the web, which makes it prominent and vital for solving problems and improving business in all its aspects. Traditional data management systems are not adequate for handling data of such size, and the MapReduce programming approach was introduced to deal with large-scale data processing. In this paper, the authors propose a large-scale data processing system for analysing web log data through MapReduce programming on the Hadoop framework using Pig scripts. The experimental results show that the processing time for classifying the different status codes in the web log data is better than with traditional techniques.
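
The paper's pipeline uses Pig on Hadoop; the underlying MapReduce step, counting HTTP status codes, can be sketched in Python as below. The map and reduce functions are run locally here for illustration (on a cluster they would run as Hadoop tasks), and the combined-log-format assumption that the status code is the ninth whitespace-separated field is ours:

```python
from collections import Counter

def mapper(lines):
    """Map: emit (status_code, 1) per request line."""
    for line in lines:
        parts = line.split()
        if len(parts) > 8 and parts[8].isdigit():
            yield parts[8], 1

def reducer(pairs):
    """Reduce: sum the counts per status code."""
    counts = Counter()
    for status, one in pairs:
        counts[status] += one
    return counts

log = [
    '127.0.0.1 - - [10/Oct/2016:13:55:36 +0530] "GET /index.html HTTP/1.1" 200 2326',
    '127.0.0.1 - - [10/Oct/2016:13:55:39 +0530] "GET /missing HTTP/1.1" 404 209',
]
for status, n in sorted(reducer(mapper(log)).items()):
    print(status, n)
```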

Clustering Of Summarizing Multi-Documents (Large Data) By Using MapReduce Framework

Vol. 3  Issue 1
Year: 2016
Issue:Nov-Jan
Title:Clustering Of Summarizing Multi-Documents (Large Data) By Using MapReduce Framework
Author Name:K.Thirumalesh and Srinivasulu Asadi 
Synopsis:
Multi-document summarization differs from single-document summarization: issues of compression, speed, redundancy, and passage selection are critical to producing useful summaries. A collection of different documents is given to a variety of summarization methods, each using a different strategy to extract the most important sentences from the original documents. The LDA (Latent Dirichlet Allocation) topic modeling technique is used to divide documents topic-wise for summarizing large text collections over the MapReduce framework. Compression ratio, retention ratio, ROUGE, and Pyramid score are the summarization parameters used to measure performance. Semantic similarity and clustering methods are used to generate summaries of large text collections from multiple documents efficiently. Summarizing multiple documents is time-consuming, and summaries are a basic tool for understanding them. The presented method is compared with a MapReduce-based k-means clustering algorithm applied to four multi-document summarization methods. Support for multilingual text summarization is provided over the MapReduce framework, so summaries can be generated from text document collections in different languages.
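
A minimal sketch of the topic-wise grouping step using LDA via the gensim library (the toy documents and the two-topic setting are assumptions; the MapReduce-scale pipeline and the summarisation step itself are not reproduced):

```python
from gensim import corpora, models  # pip install gensim

docs = [
    "cloud storage provides scalable data services",
    "mapreduce processes large text collections in parallel",
    "cloud mapreduce frameworks scale text processing",
]
tokenised = [d.split() for d in docs]

dictionary = corpora.Dictionary(tokenised)
corpus = [dictionary.doc2bow(t) for t in tokenised]

# Fit a 2-topic LDA model and report the dominant topic per document --
# the grouping that precedes per-topic summarisation.
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
for i, bow in enumerate(corpus):
    print(i, max(lda.get_document_topics(bow), key=lambda p: p[1]))
```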

A Survey on Big Data Analytics

Vol. 2  Issue 4
Year: 2015
Issue:Aug-Oct 
Title:A Survey on Big Data Analytics
Author Name:Ravi Kumar, and Bharti Nagpal 
Synopsis:
We live in the age of big data, where all data is linked to some data source, and it is difficult to quantify the amount of data in existence. Big data refers to volumes of data in the range of petabytes and beyond, exceeding the capacity of online storage and processing systems; data creation will soon cross the zettabyte-per-year range. The data mainly comes from Twitter tweets, Facebook comments, and so on, and is normally in the form of images, videos, and various documents in unstructured form. For analysing such large amounts of data, the Hadoop platform is used, which is fast and cost-effective. In this paper, the authors mainly focus on a literature review of big data analytics and its challenges.

Genetic Algorithm Using MapReduce-A Critical Review

Vol. 2  Issue 4
Year: 2015
Issue:Aug-Oct 
Title:Genetic Algorithm Using MapReduce-A Critical Review
Author Name:Palak Sachar, and Vikas Khullar 
Synopsis:
Nowadays, achieving an optimized solution for hard problems is a big challenge, and scientists are putting their best efforts into finding algorithms that optimize such problems to a great extent. The Genetic Algorithm is one stepping stone in this challenge: an evolutionary algorithm inspired by Darwin's theory of evolution. Using it with MapReduce makes it efficient and user-friendly. Users can build more scalable applications with MapReduce, since it provides a better abstraction for the genetic algorithm in less time. MapReduce plays a vital role in parallelizing projects on the Hadoop platform, and the platform may vary from Hadoop to cloud, which affects performance significantly. Parallelizing a genetic algorithm is convenient with MapReduce. The major objective of the study is to understand the behavior of the Genetic Algorithm under the Hadoop MapReduce paradigm. Various applications show different trends influenced by this platform, and the literature review strongly depicts the advantages of the Hadoop MapReduce platform over other platforms. Moreover, the paper sets out the differences between various parallelization paradigms to support implementation decisions in future work.
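
A serial Python sketch of the map/reduce decomposition of one GA generation, assuming a toy OneMax fitness function: the "map" phase evaluates individuals independently (the part MapReduce parallelises), and the "reduce" phase performs selection, crossover, and mutation. Population sizes and rates are illustrative:

```python
import random

def fitness(individual):
    # Toy objective (OneMax): count of 1-bits, standing in for a real problem.
    return sum(individual)

def map_phase(population):
    # "Map": evaluate each individual independently -- the parallelisable step.
    return [(ind, fitness(ind)) for ind in population]

def reduce_phase(scored, size):
    # "Reduce": selection + crossover + mutation builds the next generation.
    scored.sort(key=lambda p: p[1], reverse=True)
    parents = [ind for ind, _ in scored[: size // 2]]
    nxt = []
    while len(nxt) < size:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                 # mutation
            i = random.randrange(len(child))
            child[i] ^= 1
        nxt.append(child)
    return nxt

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(10):
    population = reduce_phase(map_phase(population), 30)
print("best fitness:", max(map(fitness, population)))
```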

GridSim Installation and Implementation Process

Vol. 2  Issue 4
Year: 2015
Issue:Aug-Oct 
Title:GridSim Installation and Implementation Process
Author Name:Neeraj Rathore
Synopsis:
GridSim is a well-known Java-based grid simulator with a clear focus on the Grid environment. The simulator is based on entities: Grid users, brokers (bargaining on behalf of users), and resources, each of which can have customized characteristics. In this paper, the author discusses how to create Grid Resources, Users, Gridlets, and Entities in GridSim to start a simulation, as well as how to submit Gridlets (jobs/tasks) to Grid Resources and retrieve them. The author has also introduced some enhancements to GridSim. The Machine Entity (ME) is treated as a dumb entity object in GridSim 4.0 and cannot participate in any decision-making activities; the author proposes that the ME should be active and participate in load balancing at its own level. To implement the load balancing model, the author developed an application that uses the simulated Grid environment, i.e., GridSim. It was implemented in the Java programming language over GridSim 5-2_2, so that the application runs completely on the GridSim package.

An Enhanced Framework to Design Elastic and Reliable Content Based Publish/Subscribe System

Vol. 2  Issue 4
Year: 2015
Issue:Aug-Oct 
Title:An Enhanced Framework to Design Elastic and Reliable Content Based Publish/Subscribe System
Author Name:Modiboina Suresh, P.R. Rajesh and Lalitha
Synopsis:
Publish/subscribe systems implemented as a service on cloud computing infrastructure provide elasticity and simplicity in composing distributed applications. Appropriate service provisioning in a distributed computing infrastructure is an exacting task, and dynamic changes in the rate of live content arrival under large-scale subscription present a challenge to existing publish/subscribe systems. This paper proposes ESCC (Elastic and Scalable Content-based Cloud Pub/Sub System), a framework for designing an elastic and reliable content-based publish/subscribe system that uses a single-hop lookup overlay to reduce latency in a cloud computing environment. ESCC dynamically adjusts the scale of the servers depending on churn in the workload, and achieves a high throughput rate across various workloads.
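
ESCC's single-hop overlay and elastic server scaling are beyond a short sketch, but the content-based matching at the heart of any such system can be illustrated in a few lines of Python (subscriptions and events are illustrative):

```python
# Content-based matching: each subscription is a set of attribute predicates,
# and an event is delivered to every subscriber whose predicates all hold.
subscriptions = {
    "alice": {"topic": lambda v: v == "stocks", "price": lambda v: v > 100},
    "bob":   {"topic": lambda v: v == "stocks"},
}

def publish(event):
    for subscriber, predicates in subscriptions.items():
        if all(attr in event and pred(event[attr])
               for attr, pred in predicates.items()):
            print(f"deliver to {subscriber}: {event}")

publish({"topic": "stocks", "price": 120})   # matches alice and bob
publish({"topic": "stocks", "price": 80})    # matches bob only
```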

Energy Efficient Resource Scheduling Framework for Cloud Computing

Vol. 2  Issue 4
Year: 2015
Issue:Aug-Oct 
Title:Energy Efficient Resource Scheduling Framework for Cloud Computing
Author Name:Kamalpreet Kaur and Kanwalvir Singh Dhindsa
Synopsis:
Cloud computing has an intricate connection to grid computing. The Cloud is a large collection of easily usable and available virtualized resources. Resource scheduling is a way of determining the schedule on which activities are performed, and it is a complicated task in a cloud environment because of the heterogeneity of the computing resources. The most important objective of the cloud scheduler is to schedule the resources successfully and economically. There are two existing classes of techniques for resource scheduling, power-aware and non-power-aware, where the power-aware technique minimizes power consumption compared with the non-power-aware technique. The proposed technique is designed to overcome the limitations of the existing techniques, and gives better results by reducing total execution time, power consumption, and the number of SLA violations compared with the existing techniques.

Prediction of Slow Learners in Higher Educational Institutions using Random Forest Classification Algorithm

Vol. 2  Issue 3
Year: 2015
Issue:May-Jul
Title:Prediction of Slow Learners in Higher Educational Institutions using Random Forest Classification Algorithm
Author Name:B. Rakesh,K. Malli Priya and J. Harshini
Synopsis:
Educational data mining is a field with a lot of scope for research; it helps educational institutions analyse the learning capability of their students, and gives them scope to modify the curriculum and change teaching methodologies based on that capability. This paper concentrates on the learning capability of students in higher educational institutions. A dataset of 300 records was collected with various socio-economic and graduate-attribute factors, and various classification algorithms were run on the dataset using Weka, an open-source tool. The random forest classification algorithm was found to perform best on this dataset, and was used to build a user interface for predicting the future state of a student.
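
The paper works in Weka; an equivalent sketch with scikit-learn's random forest is below. The dataset here is a synthetic stand-in for the 300-record dataset of socio-economic and graduate-attribute factors, with an invented labelling rule:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in for the 300-record dataset; attribute values
# and the labelling rule are illustrative only.
rng = np.random.default_rng(0)
X = rng.random((300, 8))                       # 8 numeric attributes
y = (X[:, 0] + X[:, 3] < 0.8).astype(int)      # 1 = predicted slow learner

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("10-fold accuracy:", cross_val_score(clf, X, y, cv=10).mean())
```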

Study of Various Keywords Searching in Large Databases

Vol. 2  Issue 3
Year: 2015
Issue:May-Jul
Title:Study of Various Keywords Searching in Large Databases
Author Name:A. Arulmurugan,R. Nandini,P. Jayasri, E. Rahini and B. Priyanka
Synopsis:
Keyword search in relational databases is a method of high relevance in the current world. Extracting data from a large database is very important, as it reduces manual and time-consuming work. What is needed is extraction from large database sets using keywords relevant to the user, which is interactive and user-friendly: keyword search enables users to get information without knowing any database schema or a complex query language like SQL (Structured Query Language). Using keywords against a relational database, data extraction becomes easier, and the user does not need to know a query language to search the database. Keyword search systems such as BANKS, DBXPLORE, DISCOVER, and DEINIX are studied and explained in detail as groundwork; future work will select the best of these based on the analysis. The main objectives of this study of keyword searching in large databases are to reduce memory space, to make information retrieval efficient, and to reduce the time users spend retrieving the required data.

Survey on Data Security between User and Cloud Storage

Vol. 2  Issue 3
Year: 2015
Issue:May-Jul
Title:Survey on Data Security between User and Cloud Storage
Author Name:R. Kanimozhi,J. Jayashree,S. Deivanai,G. Muthumariyammal and R. Kowsalya
Synopsis:
Mobile devices are widely deployed around the world, and many people use them to download and upload media such as videos and pictures. Traditional security approaches have been proposed to secure the data exchange between users and the cloud, and information hiding techniques have recently emerged in many application areas. In a digital image watermarking system, a watermark is embedded in an object, which may be an image, audio, or video; image objects are used in this paper. Nowadays, however, files are uploaded to and maintained on cloud servers, where malicious users can access watermarked images, remove the watermarks, and use the images without any copyright from the data owner. To overcome this problem, this paper introduces a new image-chunk method: the image is split into four sections which are uploaded to four different sections of the cloud server, so that attackers cannot access the complete image or remove the watermarks from the file. The authors also propose a file compression technique to optimize memory size and quality for storage space on the server, which reduces bandwidth usage for file uploads and downloads.
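
The image-chunk step itself is straightforward to sketch with the Pillow library; the file names are placeholders, and the upload of each quadrant to a separate cloud storage section is only indicated in comments:

```python
from PIL import Image  # pip install Pillow

def chunk_image(path):
    """Split an image into four quadrants, as in the proposed image-chunk step."""
    img = Image.open(path)
    w, h = img.size
    boxes = [(0, 0, w // 2, h // 2), (w // 2, 0, w, h // 2),
             (0, h // 2, w // 2, h), (w // 2, h // 2, w, h)]
    return [img.crop(box) for box in boxes]

# Each quadrant would be uploaded to a different cloud storage section;
# here the four parts are simply written to disk.
for i, part in enumerate(chunk_image("watermarked.png")):
    part.save(f"chunk_{i}.png")
```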

Clustering based Cost Optimized Resource Scheduling Technique in Cloud Computing

Vol. 2  Issue 3
Year: 2015
Issue:May-Jul
Title:Clustering based Cost Optimized Resource Scheduling Technique in Cloud Computing
Author Name:Kamalpreet Kaur and Kanwalvir Singh Dhindsa
Synopsis:
Cloud Computing has revolutionized the Information and Communication Technology (ICT) industry by enabling on-demand provisioning of elastic computing resources on a pay-as-you-go basis. Resource scheduling is a way of determining the schedule on which activities should be performed, and it is a complicated task in a Cloud environment because of the heterogeneity of the computing resources. Allocating the best resource to a Cloud job is tedious, and finding the best resource-job pair according to the Cloud consumer's application requirements is an optimization problem. The main goal of the Cloud scheduler is to schedule resources effectively and efficiently. Dispersion, heterogeneity, and uncertainty of resources bring challenges to resource allocation that traditional allocation policies cannot satisfy in Cloud circumstances. In this research paper, a clustering-based, cost-optimized resource scheduling technique is proposed: workloads are classified with the k-means clustering algorithm by assigning weights to different quality attributes. Experimental results gathered in a Cloud environment clearly demonstrate that the proposed technique performs better on cost than the existing resource scheduling technique.
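
A minimal sketch of the weighted k-means classification step using scikit-learn (the workload attribute values, the weights, and the two-cluster setting are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical workloads described by quality attributes, e.g.
# [cpu demand, memory demand, deadline urgency]; values are illustrative.
workloads = np.array([
    [0.9, 0.2, 0.8],
    [0.8, 0.3, 0.9],
    [0.1, 0.9, 0.2],
    [0.2, 0.8, 0.1],
])
weights = np.array([0.5, 0.3, 0.2])   # importance of each quality attribute

# Weighting the attributes before clustering biases k-means toward the
# attributes the scheduler cares about most.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(workloads * weights)
print(labels)   # cluster id per workload
```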

Comparison of Controllers in Software-Defined Networking

Vol. 2  Issue 3
Year: 2015
Issue:May-Jul
Title:Comparison of Controllers in Software-Defined Networking
Author Name:Furqan Jameel and Ibrahim Khan
Synopsis:
Emerging trends such as cloud computing and big data have altered the requirements of the future internet, for which low latency, extraordinary bandwidth, and dynamic management are very significant. To adapt to these new needs, Software-Defined Networking (SDN) has been considered one of the most favorable solutions. In the SDN approach, centralized entities called "controllers" manage and control the network via well-defined APIs (Application Program Interfaces). The forwarding layer holds a set of clear, definite rules: traffic passing through the switches is compared against these rules and a match-action method is applied. However, with the ever-growing demand for traffic, the need for more sophisticated, secure, and high-performance controllers has increased. In this paper, the authors therefore present a performance (latency and throughput) and security evaluation of some of the most well-known controllers: Maestro, Floodlight, NOX, OpenMul, Beacon, and OpenIRIS. The survey shows that the OpenIRIS controller has the lowest latency and the OpenMul controller the highest throughput, while security-wise OpenIRIS is the least vulnerable controller.

A Scalable and Cost-Effective Data Anonymization over Big Data using MapReduce on Cloud

Vol. 2  Issue 2
Year: 2015
Issue:Feb-Apr
Title:A Scalable and Cost-Effective Data Anonymization over Big Data using MapReduce on Cloud
Author Name:Shalin Elizabeth. S and S.Sarju
Synopsis:
In big data applications, data privacy is one of the most important issues in processing large-scale privacy-sensitive data sets, which requires computation resources provisioned by public cloud services. Big data refers to the commercial aggregation, mining, and analysis of very large, complex, and unstructured datasets; due to the size involved, discovering knowledge or extracting patterns from big data within a bounded time is a complicated task. The cloud and the advances in big data mining and analytics have expanded the scope of information available to businesses, government, and individuals. Internet users also share private data such as health records and financial transaction records for mining or analysis, for which data anonymization is used to hide identities or sensitive intelligence. This paper investigates the problem of big data anonymization for privacy preservation from the perspectives of scalability and cost-effectiveness. Anonymizing large-scale data within a short span of time is challenging; to address this, an Enhanced Top-Down Specialization (ETDS) approach is developed as an enhancement of the Two-Phase Top-Down Specialization (TPTDS) approach. Accordingly, a scalable and cost-effective privacy-preserving framework is developed to provide a holistic conceptual foundation for privacy preservation over big data, enabling users to realize the full potential of the cloud's high scalability, elasticity, and cost-effectiveness. Multidimensional anonymization over the MapReduce framework further increases the efficiency of the big data processing system.
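
A toy illustration of the top-down specialisation idea, not the paper's ETDS/TPTDS algorithm: start from the most general view of the quasi-identifiers and refine only while every record still shares its generalised values with at least k-1 others. The records, the ladder of generalisation levels, and k are all illustrative:

```python
from collections import Counter

# Toy records of (age, zip) quasi-identifiers; values are illustrative.
records = [(23, "12345"), (27, "12345"), (24, "12367"), (29, "12890")]
K = 2

def generalise(record, age_width, zip_digits):
    """Coarsen age into a range and mask trailing zip digits."""
    age, zipcode = record
    low = (age // age_width) * age_width
    return (f"{low}-{low + age_width - 1}",
            zipcode[:zip_digits] + "*" * (5 - zip_digits))

def is_k_anonymous(view, k):
    """Every generalised record must be shared by at least k records."""
    return min(Counter(view).values()) >= k

# Top-down: start fully general, then refine while k-anonymity still holds.
best = [generalise(r, 50, 0) for r in records]
for age_width, zip_digits in [(25, 2), (10, 3), (5, 5)]:
    view = [generalise(r, age_width, zip_digits) for r in records]
    if not is_k_anonymous(view, K):
        break
    best = view
print(best)
```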

Auditing the Shared Data in Cloud through Privacy Preserving Mechanisms

Vol. 2  Issue 2
Year: 2015
Issue:Feb-Apr
Title:Auditing the Shared Data in Cloud through Privacy Preserving Mechanisms
Author Name:V.Ramya Sai, V.Lokanadham Naidu and A. Srinivasulu
Synopsis:
With cloud storage services, it is routine that data is not only stored in the cloud but also shared among a group of users. However, public auditing of such shared data while preserving identity privacy remains a challenge. This paper proposes a privacy-preserving public auditing technique for shared data called 'Oruta', which supports auditing, data privacy, and identity privacy. The scheme is built on ring signatures, which are used to compute the verification information needed to audit the integrity of shared data; with this auditing, the identity of the signer on each block is kept unconditionally private from a Third Party Auditor (TPA). The paper also aims to implement traceability, i.e., in special situations the group manager or the original user can learn, through verification of metadata, who has been editing the data. In this way privacy is preserved while the group manager or original user can trace the signer, so the original user retains complete control over the data and over the signers who access it.

Scalable Video Transcoding with Hadoop MapReduce in OpenStack Juno Platform

Vol. 2  Issue 2
Year: 2015
Issue:Feb-Apr
Title:Scalable Video Transcoding with Hadoop MapReduce in OpenStack Juno Platform
Author Name:D. Kesavaraja and A.Shenbagavalli
Synopsis:
Cloud computing and big data are changing today's on-demand video services. This paper describes how to increase the speed of video transcoding in an OpenStack private cloud environment using Hadoop MapReduce. OpenStack Juno is used to build the private cloud infrastructure-as-a-service, with map code executing on the node where the video to be transcoded resides, which significantly reduces data-transfer overhead. This practice, called "video locality", is one of the key advantages of Hadoop MapReduce. The scheme describes the deep relationship between the Hadoop MapReduce algorithm and video transcoding in the experiment. In the MapReduce video transcoding experiment on OpenStack Juno, outstanding performance was observed when running on virtual machines in the private cloud, evaluated against the physical server on the chosen metrics: time complexity and a quality check using PSNR (Peak Signal-to-Noise Ratio).
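
The PSNR quality check is standard and easy to reproduce; a minimal computation in Python with NumPy (the frames below are synthetic placeholders, not decoded video):

```python
import numpy as np

def psnr(original, transcoded, max_val=255.0):
    """Peak Signal-to-Noise Ratio between two frames (higher is better)."""
    mse = np.mean((original.astype(np.float64) - transcoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

# Illustrative frames; in practice these would be decoded video frames.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
noisy = np.clip(frame.astype(int) + rng.integers(-5, 6, frame.shape), 0, 255)
print(f"PSNR: {psnr(frame, noisy):.2f} dB")
```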

Encroachment of Cloud Education for the Present Educational Institutions

Vol. 2  Issue 2
Year: 2015
Issue:Feb-Apr
Title:Encroachment of Cloud Education for the Present Educational Institutions
Author Name:D.R.Robert Joan
Synopsis:
In this article the author explains how cloud education can provide affordable, high-value education services for contemporary students, teachers, parents, and administrators. The benefits of cloud education for students and faculty in educational institutions are also discussed, since cloud education is essential to the information-society institution. Technology will be integrated into every aspect of institutions, and cloud education will change classrooms, playing fields, gyms, and school trips. Whether offsite or onsite, the school, teachers, students, and support staff will all be connected. In the cloud education system, all classrooms will be paperless and the world will become the classroom. E-learning will change the teaching and learning process in educational settings: students can learn from anywhere and teachers can teach from anywhere. The cloud can also encourage independent learning; teachers could adopt a flipped-classroom approach, and students can take ownership of their own learning. Teachers can put resources online for students to use, whether videos, documents, audio podcasts, or interactive images. All of these resources can be accessed from a student's computer, smartphone, or tablet with an internet connection via Wi-Fi, 3G, or 4G.

Data Scheduling and Mapreducing in Big Data

Vol. 2  Issue 2
Year: 2015
Issue:Feb-Apr
Title:Data Scheduling and Mapreducing in Big Data
Author Name:E.Ravi Kondal and B.Mounika 
Synopsis:
The volume of data in use is growing drastically day by day, so it is not easy to maintain the data. In Big Data, huge amounts of structured, semi-structured, and unstructured data produced daily by sources all over the world are stored. MapReduce, a programming model, is used to process such large data sets, with a MapReduce program collecting data as per the request. To process large volumes of data, proper scheduling is needed to achieve greater performance, and task scheduling plays a major role in the Big Data cloud: it applies a set of rules to solve users' problems and provide quality of service, improving resource utilization and turnaround time. Capacity Scheduling and Delay Scheduling are used to improve Big Data performance. This paper presents an overview of the MapReduce technique for shuffling and reducing data, and of Capacity Scheduling and Delay Scheduling for improving the reliability of the data.
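
A highly simplified sketch of the delay-scheduling idea mentioned above, in Python: a node is first offered a task whose input data it holds, and a task is launched non-locally only after being passed over a bounded number of times. Task and node names and the delay bound are illustrative:

```python
# Prefer data-local tasks; launch remotely only after MAX_DELAY skips.
MAX_DELAY = 2
tasks = [{"id": 1, "data_on": {"nodeA"}, "skips": 0},
         {"id": 2, "data_on": {"nodeB"}, "skips": 0}]

def assign(node):
    for task in tasks:                       # first pass: a data-local task
        if node in task["data_on"]:
            tasks.remove(task)
            return task["id"], "local"
    for task in tasks:                       # no local work on this node
        if task["skips"] >= MAX_DELAY:       # waited long enough: run remote
            tasks.remove(task)
            return task["id"], "remote"
        task["skips"] += 1                   # otherwise keep waiting
    return None

print(assign("nodeB"))   # -> (2, 'local')
print(assign("nodeC"))   # task 1 is skipped, waiting for a local slot
print(assign("nodeC"))
print(assign("nodeC"))   # -> (1, 'remote') once MAX_DELAY is exceeded
```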

Cloud Video Server (CVS): Secure Video Storage on Cloud using Heterogeneous Slaves

Vol. 2  Issue 1
Year: 2015
Issue:Aug-Oct
Title:Cloud Video Server (CVS): Secure Video Storage on Cloud using Heterogeneous Slaves
Author Name:D. Kesavaraja and A.Shenbagavalli 
Synopsis:
In today's fast-growing world, everyone needs secure storage for important information such as personal videos, so service providers need an excellent cloud video server with high security; yet hackers can break many kinds of security mechanism. In this paper, a novel tolerant security mechanism is proposed. The server stores all new information in its storage, and the Cloud Video Server acts as the controller of all the slaves. The system has two protocols, a Promise Protocol and a Review Protocol, for monitoring the slaves' activity, and uses two standard algorithms, SHA-3 (Secure Hash Algorithm 3) and the Advanced Encryption Standard (AES), for its security tests. Performance is verified with metrics such as security level, time complexity, and quality of storage, and the performance of this system is superior to that of other schemes.
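
The two standard algorithms named in the synopsis are easy to sketch in Python: SHA-3 via hashlib for an integrity digest, and AES via the cryptography library for confidentiality. The use of CTR mode and the placeholder payload are our assumptions, not the paper's exact construction:

```python
import os
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
# pip install cryptography

video = b"...raw video bytes..."   # placeholder payload

# Integrity digest with SHA-3, as in the paper's security test.
digest = hashlib.sha3_256(video).hexdigest()

# Confidentiality with AES (CTR mode chosen for this sketch).
key, nonce = os.urandom(32), os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
ciphertext = enc.update(video) + enc.finalize()

dec = Cipher(algorithms.AES(key), modes.CTR(nonce)).decryptor()
assert dec.update(ciphertext) + dec.finalize() == video
print("sha3-256:", digest)
```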

Multi Layer Encryption using Access Control in Public Clouds

Vol. 2  Issue 1
Year: 2015
Issue:Aug-Oct
Title:Multi Layer Encryption using Access Control in Public Clouds
Author Name:K.Nethaji Sundar Sukumar and L.Venkateswara Reddy
Synopsis:
Nowadays, privacy preservation is a challenging issue in public clouds. Fine-grained access control is enforced on confidential data hosted in public cloud storage. The single-layer encryption (SLE) approach encrypts the data in the public cloud using an encryption algorithm, while the two-layer encryption (TLE) approach encrypts the data before it is uploaded to cloud storage. Both approaches suffer from computational cost and communication overhead between the data owner and the public cloud. These problems are reduced by a new multi-layer encryption approach, in which multiple keys are provided to end users for accessing the data from public clouds. For this, the authors use an algorithm known as Attribute-Based Group Key Management (AB-GKM).

Research Issues in Enterprise Cloud Computing

Vol. 2  Issue 1
Year: 2015
Issue:Aug-Oct
Title:Research Issues in Enterprise Cloud Computing
Author Name:S.Anandamurugan 
Synopsis:
Every technology has value in it. Technically speaking, beyond theory and research, every technology must be implemented in real time for humankind and solve real problems. For every industry, profit is the major goal, or a part of the whole business strategy itself; thus every technology must be industry-oriented and supported, rather than remaining a research topic.[1] The industry-oriented form of cloud computing is called enterprise cloud computing. The term refers to business-related operations and the use of cloud computing technology for all such work. The close alignment of the business-oriented corporate IT sector and cloud computing has given a new dimension to the term enterprise cloud computing. Today, cloud computing has changed the whole work culture in the corporate sector: things have advanced very quickly and become more reliable, and the traditional, simple ways of handling problems and their solutions are gone. Business-related operations and solutions are well handled and managed using advanced remote cloud infrastructure and its unique capabilities. The objective of this paper is to give an overview of research issues in cloud computing.