Tuesday, 9 August 2016

A Hybrid Cloud Approach for Secure Authorized Deduplication

                 A Hybrid Cloud Approach for Secure Authorized  Deduplication


ABSTRACT:


            Data deduplication is one of the most important data compression techniques for eliminating duplicate copies of repeating data, and it has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing. To better protect data security, this paper makes the first attempt to formally address the problem of authorized data deduplication.

            Unlike traditional deduplication systems, the differential privileges of users are further considered in the duplicate check, in addition to the data itself. We also present several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that our scheme is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement a prototype of our proposed authorized duplicate check scheme and conduct testbed experiments using our prototype. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.
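As a rough illustration of the convergent encryption idea mentioned in the abstract, the Java sketch below derives the encryption key from a hash of the file content, so that identical files always yield identical ciphertexts and duplicate-check tags. The class name, the fixed IV, and the choice of SHA-256/AES are illustrative assumptions only, not the exact construction used in the paper.

import java.security.MessageDigest;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch of convergent encryption: the key is derived from the
// file content itself, so identical plaintexts give identical ciphertexts
// and can be deduplicated by the storage server.
public class ConvergentEncryption {

    // Convergent key K = H(F): any user holding the same file derives the same key.
    public static byte[] deriveKey(byte[] fileContent) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(fileContent);
    }

    // Deterministic AES encryption with a fixed IV (for illustration only),
    // so that equal plaintexts map to equal ciphertexts.
    public static byte[] encrypt(byte[] fileContent) throws Exception {
        byte[] key = deriveKey(fileContent);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(key, 0, 16, "AES"),
                new IvParameterSpec(new byte[16]));
        return cipher.doFinal(fileContent);
    }

    // Duplicate-check tag T(F) = H(ciphertext): the server compares tags,
    // never the plaintext.
    public static byte[] tag(byte[] ciphertext) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(ciphertext);
    }
}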



EXISTING SYSTEM:


            In data deduplication systems, the private cloud is involved as a proxy to allow data owners/users to securely perform duplicate checks with differential privileges. Such an architecture is practical and has attracted much attention from researchers. The data owners only outsource their data storage by utilizing the public cloud, while the data operation is managed in the private cloud.

DISADVANTAGES OF EXISTING SYSTEM:


  • Traditional encryption, while providing data confidentiality, is incompatible with data deduplication.
  • Identical data copies of different users will lead to different ciphertexts, making deduplication impossible.


PROPOSED SYSTEM:


             In this paper, we enhance our system in security. Specifically, we present an advanced scheme to support stronger security by encrypting the file with differential privilege keys. In this way, users without the corresponding privileges cannot perform the duplicate check. Furthermore, such unauthorized users cannot decrypt the ciphertext even if they collude with the S-CSP. Security analysis demonstrates that our system is secure in terms of the definitions specified in the proposed security model.
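One way to picture a privilege-bound duplicate check is to compute the check token as a keyed MAC of the file tag under a privilege-specific key, so a user who never received that key cannot produce a matching token. The sketch below is a minimal illustration under that assumption; the helper names and HMAC construction are hypothetical, not the paper's exact scheme.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

// Illustrative sketch: a duplicate-check token bound to a privilege.
// privilegeKey would be issued by the private cloud for privilege p; a user
// without that key cannot produce a token that matches tokens stored for p.
public class AuthorizedToken {

    public static byte[] fileTag(byte[] fileContent) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(fileContent);
    }

    // Token_p(F) = HMAC(k_p, H(F)) -- one token per (file, privilege) pair.
    public static byte[] duplicateCheckToken(byte[] privilegeKey, byte[] fileContent)
            throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(privilegeKey, "HmacSHA256"));
        return mac.doFinal(fileTag(fileContent));
    }
}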

ADVANTAGES OF PROPOSED SYSTEM:


  • The user is only allowed to perform the duplicate check for files marked with the corresponding privileges.
  • We present an advanced scheme to support stronger security by encrypting the file with differential privilege keys.
  • Reduces the storage size of the tags used for integrity checking, thereby enhancing the security of deduplication and protecting data confidentiality.


SYSTEM ARCHITECTURE:


MODULES:-


  • Cloud Service Provider
  • Data Users Module
  • Private Cloud Module
  • Secure Deduplication System

MODULES DESCRIPTION:-


Cloud Service Provider

              In this module, we develop the Cloud Service Provider module. This is an entity that provides a data storage service in the public cloud. The S-CSP provides the data outsourcing service and stores data on behalf of the users. To reduce the storage cost, the S-CSP eliminates the storage of redundant data via deduplication and keeps only unique data. In this paper, we assume that the S-CSP is always online and has abundant storage capacity and computation power.
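The deduplication logic at the S-CSP can be pictured as a tag-indexed store: a ciphertext is kept only the first time its tag is seen, and later uploads of the same data only add an ownership reference. The class below is a minimal in-memory sketch of that behaviour; names and the reference-counting detail are illustrative assumptions.

import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of deduplicated storage at the S-CSP: ciphertexts are
// indexed by their tag; a duplicate upload only adds an ownership reference.
public class DedupStore {
    private final Map<String, byte[]> blobsByTag = new HashMap<>();
    private final Map<String, Integer> referenceCount = new HashMap<>();

    // Returns true if the data was already stored (deduplicated upload).
    public synchronized boolean put(String tagHex, byte[] ciphertext) {
        boolean duplicate = blobsByTag.containsKey(tagHex);
        if (!duplicate) {
            blobsByTag.put(tagHex, ciphertext);
        }
        referenceCount.merge(tagHex, 1, Integer::sum);
        return duplicate;
    }

    public synchronized byte[] get(String tagHex) {
        return blobsByTag.get(tagHex);
    }
}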

Data Users Module

           A user is an entity that wants to outsource data storage to the S-CSP and access the data later. In a storage system supporting deduplication, the user only uploads unique data and does not upload any duplicate data, which may be owned by the same user or by different users, in order to save upload bandwidth. In the authorized deduplication system, each user is issued a set of privileges during the setup of the system. Each file is protected with the convergent encryption key and privilege keys to realize authorized deduplication with differential privileges.

 Private Cloud Module

           Compared with the traditional deduplication architecture in cloud computing, this is a new entity introduced to facilitate users' secure usage of the cloud service. Specifically, since the computing resources at the data user/owner side are restricted and the public cloud is not fully trusted in practice, the private cloud provides the data user/owner with an execution environment and infrastructure working as an interface between the user and the public cloud. The private keys for the privileges are managed by the private cloud, which answers the file token requests from the users. The interface offered by the private cloud allows users to submit files and queries to be securely stored and computed, respectively.
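A minimal sketch of the private cloud acting as a token service is shown below: it holds the privilege keys, checks the requesting user's privileges, and only then computes the duplicate-check token. It reuses the hypothetical AuthorizedToken helper from the earlier sketch; all names and the in-memory privilege map are assumptions for illustration.

import java.util.Map;
import java.util.Set;

// Illustrative sketch of the private cloud as a token service: it holds the
// privilege keys and only answers token requests for privileges the user owns.
public class PrivateCloudTokenService {
    private final Map<String, Set<String>> privilegesByUser; // userId -> privileges
    private final Map<String, byte[]> keyByPrivilege;        // privilege -> secret key

    public PrivateCloudTokenService(Map<String, Set<String>> privilegesByUser,
                                    Map<String, byte[]> keyByPrivilege) {
        this.privilegesByUser = privilegesByUser;
        this.keyByPrivilege = keyByPrivilege;
    }

    public byte[] requestToken(String userId, String privilege, byte[] fileContent)
            throws Exception {
        Set<String> owned = privilegesByUser.get(userId);
        if (owned == null || !owned.contains(privilege)) {
            throw new SecurityException("user lacks privilege " + privilege);
        }
        return AuthorizedToken.duplicateCheckToken(keyByPrivilege.get(privilege), fileContent);
    }
}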

Secure Deduplication System

             We consider several types of privacy that need to be protected, namely: i) unforgeability of the duplicate-check token. There are two types of adversaries, an external adversary and an internal adversary. As shown below, the external adversary can be viewed as an internal adversary without any privilege. If a user has privilege p, it is required that the adversary cannot forge and output a valid duplicate token with any other privilege p′ on any file F, where p does not match p′. Furthermore, it is also required that, if the adversary does not request a token with its own privilege from the private cloud server, it cannot forge and output a valid duplicate token with p on any F that has been queried.

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:


  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA Colour.
  • Mouse : Logitech.
  • RAM : 512 MB.


SOFTWARE REQUIREMENTS:


  • Operating system : Windows XP/7.
  • Coding Language : JAVA/J2EE
  • IDE : Netbeans 7.4
  • Database : MYSQL


REFERENCE:

Jin Li, Yan Kit Li, Xiaofeng Chen, Patrick P. C. Lee, and Wenjing Lou, “A Hybrid Cloud Approach for Secure Authorized Deduplication”, IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 26, NO. 5, MAY 2015.

Monday, 1 August 2016

IEEE PROJECTS 2016 - 2017

                                                                IEEE PROJECTS 2016 - 2017





1 Crore Projects is a leading guide and provider of IEEE projects and real-time project work.

It has provided guidance to thousands of students and helped them benefit from training across all technologies.


Project Domain list 2016

1. IEEE based on data mining and knowledge engineering,
2. IEEE based on mobile computing,
3. IEEE based on networking,
4. IEEE based on Image processing,
5. IEEE based on Multimedia,
6. IEEE based on Network security,
7. IEEE based on parallel and distributed systems

ECE IEEE Projects 2016

1. Matlab project
2. Ns2 project
3. Embedded project
4. Robotics project
5. IOT Projects

Eligibility

Final Year students of
1. BSc (C.S)
2. BCA/B.E(C.S)
3. B.Tech IT
4. BE (C.S)
5. MSc (C.S)
6. MSc (IT)
7. MCA
8. MS (IT)
9. ME(ALL)
10. BE(ECE)(EEE)(E&I)

TECHNOLOGY USED AND FOR TRAINING IN

1. DOT NET
2. C sharp
3. ASP
4. VB
5. SQL SERVER
6. JAVA
7. J2EE
8. STRINGS
9. ORACLE
10. VB dotNET
11. EMBEDDED
12. MAT LAB
13. LAB VIEW
14. Multi Sim

CONTACT US:-

1 CRORE PROJECTS

Door No: 66 ,Ground Floor,
No. 172, Raahat Plaza, (Shopping Mall) ,Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com 
Website: 1croreprojects.com
Phone : +91 97518 00789 / +91 7708150152

Tuesday, 26 July 2016

A Scalable and Reliable Matching Service for Content-Based Publish/Subscribe Systems

A Scalable and Reliable Matching Service for Content-Based Publish/Subscribe Systems www.1croreprojects.com

ABSTRACT:

                       Characterized by the increasing arrival rate of live content, emergency applications pose a great challenge: how to disseminate large-scale live content to interested users in a scalable and reliable manner. The publish/subscribe (pub/sub) model is widely used for data dissemination because of its capacity for seamlessly expanding the system to a massive size. However, most event matching services of existing pub/sub systems either lead to low matching throughput when matching a large number of skewed subscriptions, or interrupt dissemination when a large number of servers fail. Cloud computing provides great opportunities to meet the requirements of complex computing and reliable communication.
                  In this paper, we propose SREM, a scalable and reliable event matching service for content-based pub/sub systems in the cloud computing environment. To achieve low routing latency and reliable links among servers, we propose a distributed overlay, SkipCloud, to organize the servers of SREM. Through a hybrid space partitioning technique, HPartition, large-scale skewed subscriptions are mapped into multiple subspaces, which ensures high matching throughput and provides multiple candidate servers for each event.
                  Moreover, a series of dynamics maintenance mechanisms are extensively studied. To evaluate the performance of SREM, 64 servers are deployed and millions of live content items are tested in a CloudStack testbed. Under various parameter settings, the experimental results demonstrate that the traffic overhead of routing events in SkipCloud is at least 60 percent smaller than in the Chord overlay, and that the matching rate in SREM is at least 3.7 times and at most 40.4 times higher than that of the single-dimensional partitioning technique of BlueDove. Besides, SREM enables the event loss rate to drop back to 0 within tens of seconds even if a large number of servers fail simultaneously.

EXISTING SYSTEM:

  • In traditional data dissemination applications, live content is generated by publishers at a low speed, which allows many pub/sub systems to adopt multi-hop routing techniques to disseminate events.
  • A large body of broker-based pub/sub systems forward events and subscriptions by organizing nodes into diverse distributed overlays, such as tree-based, cluster-based, and DHT-based designs.

DISADVANTAGES OF EXISTING SYSTEM:

  • The system cannot scale to support the large amount of live content.
  • The multi-hop routing techniques in these broker-based systems lead to a low matching throughput, which is inadequate for the current high arrival rate of live content.
  • Most of them are inappropriate for matching live content with high data dimensionality due to the limitation of their subscription space partitioning techniques, which brings either low matching throughput or high memory overhead.

PROPOSED SYSTEM:

  • Specifically, we mainly focus on two problems: one is how to organize servers in the cloud computing environment to achieve scalable and reliable routing. The other is how to manage subscriptions and events to achieve parallel matching among these servers.
  • We propose a distributed overlay protocol, called SkipCloud, to organize servers in the cloud computing environment. SkipCloud enables subscriptions and events to be forwarded among brokers in a scalable and reliable manner. Also it is easy to implement and maintain.
  • To achieve scalable and reliable event matching among multiple servers, we propose a hybrid multidimensional space partitioning technique called HPartition. It allows similar subscriptions to be gathered onto the same server and provides multiple candidate matching servers for each event. Moreover, it adaptively alleviates hot spots and keeps the workload balanced among all servers (a simplified grid-partitioning sketch follows this list).
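To make the space-partitioning idea concrete, the sketch below shows a much simplified, single-level grid partition: each dimension of the attribute space is cut into cells, a subscription (a range per dimension) is replicated to every server owning a cell it overlaps, and an event (a point) is routed to the server owning its cell. This is only an illustration of the general technique; the real HPartition in SREM is a hybrid, adaptive scheme, and all names here are assumptions.

import java.util.ArrayList;
import java.util.List;

// Simplified illustration of grid-based space partitioning: the attribute
// space [0,1)^d is cut into cells, and each cell id is mapped to a server.
public class GridPartition {
    private final int cellsPerDimension;
    private final int numServers;

    public GridPartition(int cellsPerDimension, int numServers) {
        this.cellsPerDimension = cellsPerDimension;
        this.numServers = numServers;
    }

    // An event (a concrete point) falls into exactly one cell.
    public int serverForEvent(double[] attributes) {
        int cellId = 0;
        for (double v : attributes) {
            int cell = Math.min((int) (v * cellsPerDimension), cellsPerDimension - 1);
            cellId = cellId * cellsPerDimension + cell;
        }
        return Math.floorMod(cellId, numServers);
    }

    // A subscription (a range per dimension) may overlap several cells and is
    // therefore replicated to every server owning one of those cells.
    public List<Integer> serversForSubscription(double[] low, double[] high) {
        List<Integer> cells = new ArrayList<>();
        cells.add(0);
        for (int d = 0; d < low.length; d++) {
            int from = Math.min((int) (low[d] * cellsPerDimension), cellsPerDimension - 1);
            int to = Math.min((int) (high[d] * cellsPerDimension), cellsPerDimension - 1);
            List<Integer> next = new ArrayList<>();
            for (int prefix : cells) {
                for (int c = from; c <= to; c++) {
                    next.add(prefix * cellsPerDimension + c);
                }
            }
            cells = next;
        }
        List<Integer> servers = new ArrayList<>();
        for (int cellId : cells) {
            int s = Math.floorMod(cellId, numServers);
            if (!servers.contains(s)) servers.add(s);
        }
        return servers;
    }
}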

ADVANTAGES OF PROPOSED SYSTEM:

  • We propose a scalable and reliable matching service for content-based pub/sub service in cloud computing environments, called SREM.
  • We propose a hybrid multidimensional space partitioning technique, called HPartition.
  • To alleviate the hot spots whose subscriptions fall into a narrow space, we propose a subscription set partitioning technique (SSPartition).
  • Through the hybrid multi-dimensional space partitioning technique, SREM achieves scalable and balanced clustering of high-dimensional skewed subscriptions.

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA Colour.
  • Mouse : Logitech.
  • RAM : 512 MB.
SOFTWARE REQUIREMENTS:
  • Operating system : Windows XP/7.
  • Coding Language : JAVA/J2EE
  • IDE : Netbeans 7.4
  • Database : MYSQL
REFERENCE:

Xingkong Ma, Student Member, IEEE, Yijie Wang, Member, IEEE, and Xiaoqiang Pei, “A Scalable and Reliable Matching Service for Content-Based Publish/Subscribe Systems” IEEE TRANSACTIONS ON CLOUD COMPUTING, VOL. 3, NO. 1, JANUARY-MARCH 2015.

Friday, 17 June 2016

A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Computing

A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Computing http://1croreprojects.com


ABSTRACT:

As an effective and efficient way to provide computing resources and services to customers on demand, cloud computing has become more and more popular. From cloud service providers’ perspective, profit is one of the most important considerations, and it is mainly determined by the configuration of a cloud service platform under given market demand. However, a single long-term renting scheme is usually adopted to configure a cloud platform, which cannot guarantee the service quality and leads to serious resource waste. In this paper, a double resource renting scheme is first designed, in which short-term renting and long-term renting are combined to address the existing issues. This double renting scheme can effectively guarantee the quality of service of all requests and greatly reduce the resource waste. Secondly, a service system is considered as an M/M/m+D queuing model and the performance indicators that affect the profit of our double renting scheme are analyzed, e.g., the average charge, the ratio of requests that need temporary servers, and so forth. Thirdly, a profit maximization problem is formulated for the double renting scheme and the optimized configuration of a cloud platform is obtained by solving the profit maximization problem. Finally, a series of calculations are conducted to compare the profit of our proposed scheme with that of the single renting scheme. The results show that our scheme can not only guarantee the service quality of all requests, but also obtain more profit than the latter.





EXISTING SYSTEM:

  • In general, a service provider rents a certain number of servers from the infrastructure providers and builds different multi-server systems for different application domains. Each multi-server system executes a special type of service requests and applications. Hence, the renting cost is proportional to the number of servers in a multi-server system. The power consumption of a multi-server system is linearly proportional to the number of servers and the server utilization, and to the square of execution speed. The revenue of a service provider is related to the amount of service and the quality of service. To summarize, the profit of a service provider is mainly determined by the configuration of its service platform.
  • To configure a cloud service platform, a service provider usually adopts a single renting scheme. That is to say, the servers in the service system are all long-term rented. Because of the limited number of servers, some of the incoming service requests cannot be processed immediately, so they are first inserted into a queue until they can be handled by an available server.

DISADVANTAGES OF EXISTING SYSTEM:

  • The waiting time of the service requests is too long.
  • There is a sharp increase in the renting cost or the electricity cost. Such increased cost may counterbalance the gain from penalty reduction. In conclusion, the single renting scheme is not a good scheme for service providers.


PROPOSED SYSTEM:

  • In this paper, we propose a novel renting scheme for service providers, which not only can satisfy quality-of-service requirements but also can obtain more profit. A novel double renting scheme is proposed for service providers: it combines long-term renting with short-term renting, which can not only satisfy quality-of-service requirements under the varying system workload but also greatly reduce the resource waste (a minimal dispatch sketch follows this list).
  • The multi-server system adopted in our paper is modeled as an M/M/m+D queuing model and the performance indicators are analyzed, such as the average service charge, the ratio of requests that need short-term servers, and so forth.
  • The optimal configuration problem of service providers for profit maximization is formulated, and two kinds of optimal solutions, i.e., the ideal solutions and the actual solutions, are obtained respectively.
  • A series of comparisons are given to verify the performance of our scheme. The results show that the proposed Double-Quality-Guaranteed (DQG) renting scheme can achieve more profit than the compared Single-Quality-Unguaranteed (SQU) renting scheme on the premise of guaranteeing the service quality completely.
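The core dispatch rule of the double renting idea can be pictured as follows: a request is served on the long-term-rented servers if it can start within the waiting deadline D, otherwise a temporary (short-term rented) server is engaged for it so that no request waits longer than D. The class below is only a minimal sketch of that rule under simplifying assumptions (single queue, deterministic service times as inputs); names and the dispatch policy details are illustrative, not the paper's exact model.

// Illustrative sketch of the double-renting dispatch rule: serve a request on
// the long-term servers if it can start within the deadline D, otherwise rent
// a temporary server for it so no request waits longer than D.
public class DoubleRentingDispatcher {
    private final double deadlineD;              // maximum tolerated waiting time
    private final double[] longTermBusyUntil;    // next free time of each long-term server
    private int temporaryServersRented = 0;

    public DoubleRentingDispatcher(int longTermServers, double deadlineD) {
        this.longTermBusyUntil = new double[longTermServers];
        this.deadlineD = deadlineD;
    }

    /** Returns "LONG_TERM" or "TEMPORARY" for a request arriving at the given time. */
    public String dispatch(double arrivalTime, double serviceTime) {
        int best = 0;
        for (int i = 1; i < longTermBusyUntil.length; i++) {
            if (longTermBusyUntil[i] < longTermBusyUntil[best]) best = i;
        }
        double start = Math.max(arrivalTime, longTermBusyUntil[best]);
        if (start - arrivalTime <= deadlineD) {
            longTermBusyUntil[best] = start + serviceTime;   // waits at most D
            return "LONG_TERM";
        }
        temporaryServersRented++;                            // short-term rental
        return "TEMPORARY";
    }

    public int temporaryServersRented() {
        return temporaryServersRented;
    }
}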


ADVANTAGES OF PROPOSED SYSTEM:

  • Since requests whose waiting time reaches D are all assigned to temporary servers, it is apparent that all service requests can meet their deadlines and are charged based on the workload according to the SLA. Hence, the revenue of the service provider increases.
  • Increase in the quality of service requests and maximize the profit of service providers.
  • This scheme combines short-term renting with long-term renting, which can reduce the resource waste greatly and adapt to the dynamical demand of computing capacity.


SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:
  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA Colour.
  • Mouse : Logitech.
  • RAM : 512 MB.


SOFTWARE REQUIREMENTS:


  • Operating system : Windows XP/7.
  • Coding Language : JAVA/J2EE
  • IDE : Netbeans 7.4
  • Database : MYSQL


REFERENCE:
Jing Mei, Kenli Li, Member, IEEE, Aijia Ouyang and Keqin Li, Fellow, IEEE, “A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Computing”, IEEE TRANSACTIONS ON COMPUTERS, 2015

Friday, 6 May 2016

Key-Aggregate Searchable Encryption (KASE) for Group Data Sharing via Cloud Storage

Key-Aggregate Searchable Encryption (KASE) for Group Data Sharing via Cloud Storage http://1croreprojects.com/


ABSTRACT:

            The capability of selectively sharing encrypted data with different users via public cloud storage may greatly ease security concerns over inadvertent data leaks in the cloud. A key challenge to designing such encryption schemes lies in the efficient management of encryption keys. The desired flexibility of sharing any group of selected documents with any group of users demands different encryption keys to be used for different documents. However, this also implies the necessity of securely distributing to users a large number of keys for both encryption and search, and those users will have to securely store the received keys, and submit an equally large number of keyword trapdoors to the cloud in order to perform search over the shared data. The implied need for secure communication, storage, and complexity clearly renders the approach impractical. In this paper, we address this practical problem, which is largely neglected in the literature, by proposing the novel concept of key aggregate searchable encryption (KASE) and instantiating the concept through a concrete KASE scheme, in which a data owner only needs to distribute a single key to a user for sharing a large number of documents, and the user only needs to submit a single trapdoor to the cloud for querying the shared documents. The security analysis and performance evaluation both confirm that our proposed schemes are provably secure and practically efficient.

EXISTING SYSTEM:

  • There is a rich literature on searchable encryption, including SSE schemes and PEKS schemes. In contrast to those existing work, in the context of cloud storage, keyword search under the multi-tenancy setting is a more common scenario. In such a scenario, the data owner would like to share a document with a group of authorized users, and each user who has the access right can provide a trapdoor to perform the keyword search over the shared document, namely, the “multi-user searchable encryption” (MUSE) scenario.
  •  Some recent works focus on such a MUSE scenario, although they all adopt a single key combined with access control to achieve the goal.
  •  MUSE schemes are constructed by sharing the document’s searchable encryption key with all users who can access it, and broadcast encryption is used to achieve coarse-grained access control.
  •  Attribute-based encryption (ABE) is applied to achieve fine-grained access-control-aware keyword search. As a result, in MUSE, the main problem is how to control which users can access which documents, whereas how to reduce the number of shared keys and trapdoors is not considered.


                                     

DISADVANTAGES OF EXISTING SYSTEM:

  • Unexpected privilege escalation will expose all the shared data.
  • It is not efficient.
  • Shared data will not be secure.


PROPOSED SYSTEM:

  • In this paper, we address this challenge by proposing the novel concept of key-aggregate searchable encryption (KASE), and instantiating the concept through a concrete KASE scheme.
  • The proposed KASE scheme applies to any cloud storage that supports the searchable group data sharing functionality, which means any user may selectively share a group of selected files with a group of selected users, while allowing the latter to perform keyword search over the former.
  • To support searchable group data sharing the main requirements for efficient key management are twofold. First, a data owner only needs to distribute a single aggregate key (instead of a group of keys) to a user for sharing any number of files. Second, the user only needs to submit a single aggregate trapdoor (instead of a group of trapdoors) to the cloud for performing keyword search over any number of shared files.
  • We first define a general framework of key-aggregate searchable encryption (KASE) composed of seven polynomial algorithms for security parameter setup, key generation, encryption, key extraction, trapdoor generation, trapdoor adjustment, and trapdoor testing (sketched as an interface after this list). We then describe both functional and security requirements for designing a valid KASE scheme.
  • We then instantiate the KASE framework by designing a concrete KASE scheme. After providing detailed constructions for the seven algorithms, we analyze the efficiency of the scheme, and establish its security through detailed analysis.
  • We discuss various practical issues in building an actual group data sharing system based on the proposed KASE scheme, and evaluate its performance. The evaluation confirms that our system can meet the performance requirements of practical applications.
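The seven-algorithm framework described above can be summarized as the Java interface skeleton below. All parameter and return types here are placeholders chosen for readability; an actual KASE construction operates over bilinear-pairing groups, so this is only a structural sketch of the framework, not its cryptographic content.

// Skeleton of the seven KASE algorithms listed above; all types are
// illustrative placeholders -- a real construction works over bilinear groups.
public interface KaseScheme {

    /** Holder for an owner's public/secret key pair (illustrative). */
    final class OwnerKeys {
        public final byte[] publicKey;
        public final byte[] secretKey;
        public OwnerKeys(byte[] publicKey, byte[] secretKey) {
            this.publicKey = publicKey;
            this.secretKey = secretKey;
        }
    }

    byte[] setup(int securityParameter);                              // public system parameters
    OwnerKeys keygen(byte[] params);                                  // key generation for a data owner
    byte[] encrypt(byte[] publicKey, int fileIndex, String keyword);  // keyword ciphertext for one document
    byte[] extract(byte[] secretKey, int[] sharedFileIndices);        // one aggregate key for the whole shared set
    byte[] trapdoor(byte[] aggregateKey, String keyword);             // one aggregate trapdoor per query
    byte[] adjust(byte[] params, int fileIndex,
                  int[] sharedFileIndices, byte[] aggregateTrapdoor); // cloud derives a per-file trapdoor
    boolean test(byte[] adjustedTrapdoor, int fileIndex,
                 byte[] keywordCiphertext);                           // does the keyword match this document?
}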


ADVANTAGES OF PROPOSED SYSTEM:
  •  It is more secure.
  • Decryption key should be sent via a secure channel and kept secret.
  • It is an efficient public-key encryption scheme which supports flexible delegation.
  • To the best of our knowledge, the KASE scheme proposed in this paper is the first known scheme that can satisfy these requirements.


SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:


  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA Colour.
  • Mouse : Logitech.
  • RAM : 512 MB.


SOFTWARE REQUIREMENTS:


  • Operating system : Windows XP/7.
  • Coding Language : JAVA/J2EE
  •  IDE : Netbeans 7.4
  • Database : MYSQL


REFERENCE:

         Baojiang Cui, Zheli Liu, and Lingyu Wang, “Key-Aggregate Searchable Encryption (KASE) for Group Data Sharing via Cloud Storage”, IEEE TRANSACTIONS ON COMPUTERS, 2015.


Friday, 22 April 2016

Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage

         Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage 1croreprojects.com


ABSTRACT:

             To protect outsourced data in cloud storage against corruptions, adding fault tolerance to cloud storage together with data integrity checking and failure reparation becomes critical. Recently, regenerating codes have gained popularity due to their lower repair bandwidth while providing fault tolerance. Existing remote checking methods for regenerating-coded data only provide private auditing, requiring data owners to always stay online and handle auditing as well as repairing, which is sometimes impractical. In this paper, we propose a public auditing scheme for regenerating-code-based cloud storage. To solve the regeneration problem of failed authenticators in the absence of data owners, we introduce a proxy, which is privileged to regenerate the authenticators, into the traditional public auditing system model. Moreover, we design a novel publicly verifiable authenticator, which is generated by a couple of keys and can be regenerated using partial keys. Thus, our scheme can completely release data owners from the online burden. In addition, we randomize the encode coefficients with a pseudorandom function to preserve data privacy. Extensive security analysis shows that our scheme is provably secure under the random oracle model, and experimental evaluation indicates that our scheme is highly efficient and can be feasibly integrated into regenerating-code-based cloud storage.



EXISTING SYSTEM: 

  • Many mechanisms dealing with the integrity of outsourced data without a local copy have been proposed under different system and security models up to now. The most significant work among these studies are the PDP (provable data possession) model and POR (proof of retrievability) model, which were originally proposed for the single-server scenario by Ateniese et al. and Juels and Kaliski, respectively.
  • Considering that files are usually striped and redundantly stored across multiple servers or multiple clouds, later works explore integrity verification schemes suitable for such multi-server or multi-cloud settings with different redundancy schemes, such as replication, erasure codes and, more recently, regenerating codes.
  • Chen et al. and Chen and Lee separately and independently extended the single-server CPOR scheme to the regenerating-code scenario; a data integrity protection (DIP) scheme was designed and implemented for FMSR-based cloud storage, and the scheme is adapted to the thin-cloud setting.


DISADVANTAGES OF EXISTING SYSTEM:


  • They are designed for private auditing: only the data owner is allowed to verify the integrity and repair the faulty servers.
  • Considering the large size of the outsourced data and the user’s constrained resource capability, the tasks of auditing and reparation in the cloud can be formidable and expensive for the users.
  • The existing auditing schemes imply the problem that users need to always stay online, which may impede their adoption in practice, especially for long-term archival storage.


PROPOSED SYSTEM:


  • In this paper, we focus on the integrity verification problem in regenerating-code-based cloud storage, especially with the functional repair strategy. To fully ensure the data integrity and save the users’ computation resources as well as online burden, we propose a public auditing scheme for the regenerating-code-based cloud storage, in which the integrity checking and regeneration (of failed data blocks and authenticators) are implemented by a third-party auditor and a semi-trusted proxy separately on behalf of the data owner.
  • Instead of directly adapting the existing public auditing scheme to the multi-server setting, we design a novel authenticator, which is more appropriate for regenerating codes. Besides, we “encrypt” the coefficients to protect data privacy against the auditor, which is more lightweight than applying the proof blinding technique and data blinding method (a minimal coefficient-masking sketch follows this list).
  • We design a novel homomorphic authenticator based on BLS signature, which can be generated by a couple of secret keys and verified publicly.
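One lightweight way to hide coefficients from the auditor, in the spirit described above, is to blind each coefficient with a pseudorandom function keyed by the data owner, so only key holders can recover the true values. The sketch below uses HMAC as the PRF and simplifies the field arithmetic to addition modulo a fixed modulus; the modulus, names, and encoding are illustrative assumptions rather than the paper's exact construction.

import java.math.BigInteger;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch: encode coefficients are blinded with a PRF so the
// auditor never sees the true coefficients; arithmetic is simplified to
// addition modulo a fixed modulus for illustration.
public class CoefficientMasking {
    private static final BigInteger P = new BigInteger(
            "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEFFFFFFFFFFFFFFFF", 16); // illustrative 192-bit modulus

    // PRF_k(blockIndex || coefIndex), reduced into [0, P).
    public static BigInteger prf(byte[] key, int blockIndex, int coefIndex) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        mac.update(java.nio.ByteBuffer.allocate(8).putInt(blockIndex).putInt(coefIndex).array());
        return new BigInteger(1, mac.doFinal()).mod(P);
    }

    // Masked coefficient c' = c + PRF_k(i, j) mod P; only key holders can unmask.
    public static BigInteger mask(byte[] key, BigInteger coefficient,
                                  int blockIndex, int coefIndex) throws Exception {
        return coefficient.add(prf(key, blockIndex, coefIndex)).mod(P);
    }

    public static BigInteger unmask(byte[] key, BigInteger masked,
                                    int blockIndex, int coefIndex) throws Exception {
        return masked.subtract(prf(key, blockIndex, coefIndex)).mod(P);
    }
}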


ADVANTAGES OF PROPOSED SYSTEM: 

  • Utilizing the linear subspace of the regenerating codes, the authenticators can be computed efficiently. Besides, the scheme can be adapted for data owners equipped with low-end computing devices (e.g., tablet PCs), in which case they only need to sign the native blocks.
  • To the best of our knowledge, our scheme is the first to allow privacy-preserving public auditing for regenerating code- based cloud storage. The coefficients are masked by a PRF (Pseudorandom Function) during the Setup phase to avoid leakage of the original data. This method is lightweight and does not introduce any computational overhead to the cloud servers or TPA.
  • Our scheme completely releases data owners from online burden for the regeneration of blocks and authenticators at faulty servers and it provides the privilege to a proxy for the reparation.
  • Optimization measures are taken to improve the flexibility and efficiency of our auditing scheme; thus, the storage overhead of servers, the computational overhead of the data owner and communication overhead during the audit phase can be effectively reduced.
  • Our scheme is provably secure under the random oracle model against adversaries.


SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:


  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA Colour.
  • Mouse : Logitech.
  • RAM : 512 MB.


SOFTWARE REQUIREMENTS:


  • Operating system : Windows XP/7.
  • Coding Language : JAVA/J2EE
  • IDE : Netbeans 7.4 
  • Database : MYSQL


REFERENCE:

            Jian Liu, Kun Huang, Hong Rong, Huimei Wang, and Ming Xian, “Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage”, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 10, NO. 7, JULY 2015.