Tuesday, 9 August 2016

A Hybrid Cloud Approach for Secure Authorized Deduplication



ABSTRACT:


            Data deduplication is one of the most important data compression techniques for eliminating duplicate copies of repeating data, and it has been widely used in cloud storage to reduce storage space and save bandwidth. To protect the confidentiality of sensitive data while supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing. To better protect data security, this paper makes the first attempt to formally address the problem of authorized data deduplication.

            Unlike traditional deduplication systems, the differential privileges of users are further considered in the duplicate check, in addition to the data itself. We also present several new deduplication constructions supporting authorized duplicate check in a hybrid cloud architecture. Security analysis demonstrates that our scheme is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement a prototype of our proposed authorized duplicate check scheme and conduct testbed experiments using our prototype. We show that our proposed authorized duplicate check scheme incurs minimal overhead compared to normal operations.
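The convergent encryption idea above can be sketched in a few lines of Java: the key is derived from the file content itself, so two users holding the same file independently produce the same ciphertext, which the storage server can then deduplicate. This is a minimal illustration, not the paper's full construction; the class and method names are ours, and a deterministic AES mode stands in for the scheme's convergent cipher:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class ConvergentEncryption {
    // Derive the encryption key from the file content itself: K = H(F).
    static byte[] convergentKey(byte[] content) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(content);
        return Arrays.copyOf(digest, 16); // truncate to a 128-bit AES key
    }

    // Deterministic encryption: the same plaintext always yields the same
    // ciphertext, which is what makes deduplication over ciphertexts possible.
    static byte[] encrypt(byte[] content) throws Exception {
        SecretKeySpec key = new SecretKeySpec(convergentKey(content), "AES");
        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        return cipher.doFinal(content);
    }

    public static void main(String[] args) throws Exception {
        byte[] fileA = "quarterly report".getBytes(StandardCharsets.UTF_8);
        byte[] fileB = "quarterly report".getBytes(StandardCharsets.UTF_8); // another user, same file
        byte[] fileC = "different file".getBytes(StandardCharsets.UTF_8);
        // Identical plaintexts from different users produce identical ciphertexts...
        System.out.println(Arrays.equals(encrypt(fileA), encrypt(fileB)));
        // ...while distinct plaintexts do not.
        System.out.println(Arrays.equals(encrypt(fileA), encrypt(fileC)));
    }
}
```

Note that this determinism is also why plain convergent encryption alone does not give authorized deduplication; the paper adds privilege keys on top of it.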



EXISTING SYSTEM:


            In existing data deduplication systems, the private cloud is involved as a proxy that allows data owners/users to securely perform duplicate checks with differential privileges. Such an architecture is practical and has attracted much attention from researchers. The data owners only outsource their data storage by utilizing the public cloud, while the data operations are managed in the private cloud.

DISADVANTAGES OF EXISTING SYSTEM:


  • Traditional encryption, while providing data confidentiality, is incompatible with data deduplication.
  • Identical data copies of different users will lead to different ciphertexts, making deduplication impossible.


PROPOSED SYSTEM:


             In this paper, we enhance the security of our system. Specifically, we present an advanced scheme to support stronger security by encrypting the file with differential privilege keys. In this way, users without the corresponding privileges cannot perform the duplicate check. Furthermore, such unauthorized users cannot decrypt the ciphertext even if they collude with the S-CSP. Security analysis demonstrates that our system is secure in terms of the definitions specified in the proposed security model.

ADVANTAGES OF PROPOSED SYSTEM:


  • The user is only allowed to perform the duplicate check for files marked with the corresponding privileges.
  • We present an advanced scheme to support stronger security by encrypting the file with differential privilege keys.
  • Reduces the storage size of the tags used for integrity checking, which enhances the security of deduplication and protects data confidentiality.


SYSTEM ARCHITECTURE:


MODULES:-


  • Cloud Service Provider
  • Data Users Module
  • Private Cloud Module
  • Secure Deduplication System

MODULES DESCRIPTION:-


Cloud Service Provider

              In this module, we develop the Cloud Service Provider module. This is an entity that provides a data storage service in the public cloud. The S-CSP provides the data outsourcing service and stores data on behalf of the users. To reduce the storage cost, the S-CSP eliminates the storage of redundant data via deduplication and keeps only unique data. In this paper, we assume that the S-CSP is always online and has abundant storage capacity and computation power.
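The deduplication step the S-CSP performs can be illustrated with a simple content-addressed store: each uploaded block is fingerprinted by its hash, and a second copy with the same fingerprint is never stored. This is a minimal sketch under our own naming, not the paper's implementation:

```java
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class DedupStore {
    // Content fingerprint -> stored bytes; only one copy per fingerprint is kept.
    private final Map<String, byte[]> blocks = new HashMap<>();

    static String fingerprint(byte[] data) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder sb = new StringBuilder();
        for (byte b : d) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    /** Stores data only if no copy with the same fingerprint exists;
     *  returns true when a duplicate was detected and nothing was stored. */
    public boolean upload(byte[] data) throws Exception {
        String tag = fingerprint(data);
        if (blocks.containsKey(tag)) return true; // duplicate: skip storage
        blocks.put(tag, data.clone());
        return false;
    }

    public int uniqueCount() { return blocks.size(); }
}
```

Uploading the same block twice leaves exactly one stored copy, which is the storage saving deduplication aims for.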

Data Users Module

           A user is an entity that wants to outsource data storage to the S-CSP and access the data later. In a storage system supporting deduplication, the user uploads only unique data and does not upload any duplicate data (which may be owned by the same user or by different users), saving upload bandwidth. In the authorized deduplication system, each user is issued a set of privileges during system setup. Each file is protected with the convergent encryption key and privilege keys to realize authorized deduplication with differential privileges.

 Private Cloud Module

           Compared with the traditional deduplication architecture in cloud computing, this is a new entity introduced to facilitate the user's secure usage of the cloud service. Specifically, since the computing resources at the data user/owner side are restricted and the public cloud is not fully trusted in practice, the private cloud is able to provide the data user/owner with an execution environment and infrastructure, working as an interface between the user and the public cloud. The private keys for the privileges are managed by the private cloud, which answers the file token requests from the users. The interface offered by the private cloud allows users to submit files and queries to be securely stored and computed, respectively.

Secure Deduplication System

            We consider several types of privacy we need to protect; in particular, unforgeability of the duplicate-check token. There are two types of adversaries: external adversaries and internal adversaries. An external adversary can be viewed as an internal adversary without any privilege. If a user has privilege p, the scheme requires that the adversary cannot forge and output a valid duplicate token with any other privilege p′ on any file F, where p does not match p′. Furthermore, it also requires that if the adversary does not request a token with its own privilege from the private cloud server, it cannot forge and output a valid duplicate token with p on any F that has been queried.
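One way to realize such a privilege-bound duplicate-check token, sketched here as an assumption rather than the paper's exact construction, is to compute an HMAC of the file fingerprint under the privilege key: a user holding a different privilege key cannot produce a matching token for the same file.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;
import java.util.Base64;

public class DuplicateCheckToken {
    // Token = HMAC(k_p, H(F)): binds the file fingerprint H(F) to the
    // privilege key k_p, so tokens under mismatched privileges never verify.
    static String token(byte[] privilegeKey, byte[] file) throws Exception {
        byte[] fileHash = MessageDigest.getInstance("SHA-256").digest(file);
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(privilegeKey, "HmacSHA256"));
        return Base64.getEncoder().encodeToString(mac.doFinal(fileHash));
    }
}
```

Since the privilege keys live in the private cloud, a user must request tokens from it, which is exactly the control point the unforgeability definition above reasons about.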

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:


  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA Colour.
  • Mouse : Logitech.
  • RAM : 512 MB.


SOFTWARE REQUIREMENTS:


  • Operating system : Windows XP/7.
  • Coding Language : Java/J2EE
  • IDE : NetBeans 7.4
  • Database : MySQL


REFERENCE:

Jin Li, Yan Kit Li, Xiaofeng Chen, Patrick P. C. Lee, and Wenjing Lou, “A Hybrid Cloud Approach for Secure Authorized Deduplication”, IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 5, May 2015.

Monday, 1 August 2016

IEEE PROJECTS 2016 - 2017






1 Crore Projects is a leading provider of guidance for IEEE projects and real-time project work.

It has provided guidance to thousands of students and helped them benefit from training across all technologies.


Project Domain list 2016

1. IEEE projects based on data mining and knowledge engineering
2. IEEE projects based on mobile computing
3. IEEE projects based on networking
4. IEEE projects based on image processing
5. IEEE projects based on multimedia
6. IEEE projects based on network security
7. IEEE projects based on parallel and distributed systems

ECE IEEE Projects 2016

1. MATLAB projects
2. NS2 projects
3. Embedded projects
4. Robotics projects
5. IoT projects

Eligibility

Final Year students of
1. BSc (C.S)
2. BCA/B.E(C.S)
3. B.Tech IT
4. BE (C.S)
5. MSc (C.S)
6. MSc (IT)
7. MCA
8. MS (IT)
9. ME(ALL)
10. BE(ECE)(EEE)(E&I)

TECHNOLOGY USED AND FOR TRAINING IN

1. .NET
2. C#
3. ASP
4. VB
5. SQL SERVER
6. JAVA
7. J2EE
8. STRINGS
9. ORACLE
10. VB.NET
11. EMBEDDED
12. MATLAB
13. LabVIEW
14. Multisim

CONTACT US:-

1 CRORE PROJECTS

Door No: 66, Ground Floor,
No. 172, Raahat Plaza (Shopping Mall), Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email: 1croreprojects@gmail.com
Website: 1croreprojects.com
Phone: +91 97518 00789 / +91 7708150152

Tuesday, 26 July 2016

A Scalable and Reliable Matching Service for Content-Based Publish/Subscribe Systems


ABSTRACT:

                       Characterized by the increasing arrival rate of live content, emergency applications pose a great challenge: how to disseminate large-scale live content to interested users in a scalable and reliable manner. The publish/subscribe (pub/sub) model is widely used for data dissemination because of its capacity to seamlessly expand the system to a massive size. However, most event matching services of existing pub/sub systems either lead to low matching throughput when matching a large number of skewed subscriptions, or interrupt dissemination when a large number of servers fail. Cloud computing provides great opportunities to meet the requirements of complex computing and reliable communication.
                  In this paper, we propose SREM, a scalable and reliable event matching service for content-based pub/sub systems in cloud computing environment. To achieve low routing latency and reliable links among servers, we propose a distributed overlay SkipCloud to organize servers of SREM. Through a hybrid space partitioning technique HPartition, large-scale skewed subscriptions are mapped into multiple subspaces, which ensures high matching throughput and provides multiple candidate servers for each event.
                  Moreover, a series of dynamics maintenance mechanisms are extensively studied. To evaluate the performance of SREM, 64 servers are deployed and millions of live content items are tested in a CloudStack testbed. Under various parameter settings, the experimental results demonstrate that the traffic overhead of routing events in SkipCloud is at least 60 percent smaller than in the Chord overlay, and the matching rate in SREM is at least 3.7 times and at most 40.4 times higher than with the single-dimensional partitioning technique of BlueDove. Besides, SREM enables the event loss rate to drop back to 0 within tens of seconds even if a large number of servers fail simultaneously.

EXISTING SYSTEM:

  • In traditional data dissemination applications, live content is generated by publishers at a low speed, which leads many pub/sub systems to adopt multi-hop routing techniques to disseminate events.
  • A large body of broker-based pub/subs forward events and subscriptions through organizing nodes into diverse distributed overlays, such as tree based design, cluster-based design and DHT-based design.

DISADVANTAGES OF EXISTING SYSTEM:

  • The system cannot scale to support the large amount of live content.
  • The multi-hop routing techniques in these broker-based systems lead to a low matching throughput, which is inadequate for the current high arrival rate of live content.
  • Most of them are inappropriate for matching live content with high data dimensionality due to the limitations of their subscription space partitioning techniques, which bring either low matching throughput or high memory overhead.

PROPOSED SYSTEM:

  • Specifically, we mainly focus on two problems: one is how to organize servers in the cloud computing environment to achieve scalable and reliable routing. The other is how to manage subscriptions and events to achieve parallel matching among these servers.
  • We propose a distributed overlay protocol, called SkipCloud, to organize servers in the cloud computing environment. SkipCloud enables subscriptions and events to be forwarded among brokers in a scalable and reliable manner. Also it is easy to implement and maintain.
  • To achieve scalable and reliable event matching among multiple servers, we propose a hybrid multidimensional space partitioning technique called HPartition. It allows similar subscriptions to be divided into the same server and provides multiple candidate matching servers for each event. Moreover, it adaptively alleviates hot spots and keeps workload balance among all servers.
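The space-partitioning idea behind HPartition can be illustrated with a much-simplified sketch: divide a normalized two-dimensional attribute space into a grid, store each range subscription in every cell it overlaps, and match an incoming event only against the subscriptions of its own cell. The real HPartition is a hybrid scheme with hot-spot alleviation across many dimensions; this toy grid (all names are ours) only shows the clustering principle:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GridPartition {
    static final int CELLS = 4; // 4x4 grid over a 2D attribute space normalized to [0,1)

    // cell id -> subscriptions (rectangles {x1,x2,y1,y2}) overlapping that cell
    final Map<Integer, List<double[]>> index = new HashMap<>();

    static int cellId(int cx, int cy) { return cx * CELLS + cy; }

    static int coord(double v) { return Math.min((int) (v * CELLS), CELLS - 1); }

    /** Store a range subscription in every grid cell its rectangle overlaps. */
    void subscribe(double x1, double x2, double y1, double y2) {
        for (int cx = coord(x1); cx <= coord(x2); cx++)
            for (int cy = coord(y1); cy <= coord(y2); cy++)
                index.computeIfAbsent(cellId(cx, cy), k -> new ArrayList<>())
                     .add(new double[]{x1, x2, y1, y2});
    }

    /** An event (a point) is matched only against subscriptions in its own cell. */
    int match(double x, double y) {
        int hits = 0;
        for (double[] s : index.getOrDefault(cellId(coord(x), coord(y)), List.of()))
            if (x >= s[0] && x <= s[1] && y >= s[2] && y <= s[3]) hits++;
        return hits;
    }
}
```

Because matching inspects only one cell rather than all subscriptions, different cells can be assigned to different servers and matched in parallel, which is the source of the throughput gain.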

ADVANTAGES OF PROPOSED SYSTEM:

  • We propose a scalable and reliable matching service for content-based pub/sub service in cloud computing environments, called SREM.
  • We propose a hybrid multidimensional space partitioning technique, called HPartition.
  • To alleviate the hot spots whose subscriptions fall into a narrow space, we propose a subscription set partitioning (SSPartition) technique.
  • Through a hybrid multidimensional space partitioning technique, SREM achieves scalable and balanced clustering of high-dimensional skewed subscriptions.

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA Colour.
  • Mouse : Logitech.
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : Java/J2EE
  • IDE : NetBeans 7.4
  • Database : MySQL
REFERENCE:

Xingkong Ma, Student Member, IEEE, Yijie Wang, Member, IEEE, and Xiaoqiang Pei, “A Scalable and Reliable Matching Service for Content-Based Publish/Subscribe Systems” IEEE TRANSACTIONS ON CLOUD COMPUTING, VOL. 3, NO. 1, JANUARY-MARCH 2015.

Friday, 17 June 2016

A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Computing



ABSTRACT:

As an effective and efficient way to provide computing resources and services to customers on demand, cloud computing has become more and more popular. From cloud service providers’ perspective, profit is one of the most important considerations, and it is mainly determined by the configuration of a cloud service platform under given market demand. However, a single long-term renting scheme is usually adopted to configure a cloud platform, which cannot guarantee the service quality and leads to serious resource waste. In this paper, a double resource renting scheme is first designed, in which short-term renting and long-term renting are combined to address the existing issues. This double renting scheme can effectively guarantee the quality of service of all requests and greatly reduce the resource waste. Secondly, a service system is considered as an M/M/m+D queuing model, and the performance indicators that affect the profit of our double renting scheme are analyzed, e.g., the average charge, the ratio of requests that need temporary servers, and so forth. Thirdly, a profit maximization problem is formulated for the double renting scheme, and the optimized configuration of a cloud platform is obtained by solving the profit maximization problem. Finally, a series of calculations are conducted to compare the profit of our proposed scheme with that of the single renting scheme. The results show that our scheme can not only guarantee the service quality of all requests, but also obtain more profit than the latter.
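The role of the M/M/m part of the queuing analysis can be illustrated with the classical Erlang C formula, which gives the probability that an arriving request finds all m long-term servers busy, i.e., roughly the fraction of requests that would overflow to temporary servers. This is a simplified stand-in for the paper's M/M/m+D analysis (which additionally models the deadline D); the class and method names are ours:

```java
public class MMmQueue {
    /** Erlang C: probability that an arriving request finds all m servers busy.
     *  lambda = arrival rate, mu = per-server service rate, m = number of
     *  servers; the system is stable only when lambda < m * mu. */
    static double waitProbability(double lambda, double mu, int m) {
        double a = lambda / mu;          // offered load
        double rho = a / m;              // server utilization, must be < 1
        double sum = 0, term = 1;        // term tracks a^k / k!
        for (int k = 0; k < m; k++) { sum += term; term *= a / (k + 1); }
        double last = term / (1 - rho);  // a^m / (m! * (1 - rho))
        return last / (sum + last);
    }

    public static void main(String[] args) {
        // More long-term servers -> fewer requests overflow to short-term servers.
        System.out.printf("m=4: %.3f%n", waitProbability(3.0, 1.0, 4));
        System.out.printf("m=8: %.3f%n", waitProbability(3.0, 1.0, 8));
    }
}
```

The profit optimization in the paper essentially balances the gain from reducing this overflow against the extra cost of renting more long-term servers.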





EXISTING SYSTEM:

  • In general, a service provider rents a certain number of servers from the infrastructure providers and builds different multi-server systems for different application domains. Each multi server system is to execute a special type of service requests and applications. Hence, the renting cost is proportional to the number of servers in a multi server system. The power consumption of a multi server system is linearly proportional to the number of servers and the server utilization, and to the square of execution speed. The revenue of a service provider is related to the amount of service and the quality of service. To summarize, the profit of a service provider is mainly determined by the configuration of its service platform.
  • To configure a cloud service platform, a service provider usually adopts a single renting scheme. That is to say, the servers in the service system are all long-term rented. Because of the limited number of servers, some of the incoming service requests cannot be processed immediately, so they are first inserted into a queue until they can be handled by an available server.

DISADVANTAGES OF EXISTING SYSTEM:

  • The waiting time of the service requests is too long.
  • There is a sharp increase in the renting cost or the electricity cost. Such increased cost may outweigh the gain from penalty reduction. In conclusion, the single renting scheme is not a good scheme for service providers.


PROPOSED SYSTEM:

  • In this paper, we propose a novel renting scheme for service providers, which not only can satisfy quality-of-service requirements, but also can obtain more profit. A novel double renting scheme is proposed for service providers. It combines long-term renting with short-term renting, which can not only satisfy quality-of-service requirements under the varying system workload, but also greatly reduce the resource waste.
  • A multi-server system adopted in our paper is modeled as an M/M/m+D queuing model, and performance indicators such as the average service charge and the ratio of requests that need short-term servers are analyzed.
  • The optimal configuration problem of service providers for profit maximization is formulated and two kinds of optimal solutions, i.e., the ideal solutions and the actual solutions, are obtained respectively.
  • A series of comparisons are given to verify the performance of our scheme. The results show that the proposed Double-Quality-Guaranteed (DQG) renting scheme can achieve more profit than the compared Single-Quality-Unguaranteed (SQU) renting scheme on the premise of guaranteeing the service quality completely.


ADVANTAGES OF PROPOSED SYSTEM:

  • Since the requests with waiting time D are all assigned to temporary servers, it is apparent that all service requests can meet their deadlines and are charged based on the workload according to the SLA. Hence, the revenue of the service provider increases.
  • Increase in the quality of service requests and maximize the profit of service providers.
  • This scheme combines short-term renting with long-term renting, which can reduce the resource waste greatly and adapt to the dynamical demand of computing capacity.


SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:
  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA Colour.
  • Mouse : Logitech.
  • RAM : 512 MB.


SOFTWARE REQUIREMENTS:


  • Operating system : Windows XP/7.
  • Coding Language : Java/J2EE
  • IDE : NetBeans 7.4
  • Database : MySQL


REFERENCE:

Jing Mei, Kenli Li, Member, IEEE, Aijia Ouyang, and Keqin Li, Fellow, IEEE, “A Profit Maximization Scheme with Guaranteed Quality of Service in Cloud Computing”, IEEE TRANSACTIONS ON COMPUTERS, 2015.

Friday, 6 May 2016

Key-Aggregate Searchable Encryption (KASE) for Group Data Sharing via Cloud Storage



ABSTRACT:

            The capability of selectively sharing encrypted data with different users via public cloud storage may greatly ease security concerns over inadvertent data leaks in the cloud. A key challenge to designing such encryption schemes lies in the efficient management of encryption keys. The desired flexibility of sharing any group of selected documents with any group of users demands different encryption keys to be used for different documents. However, this also implies the necessity of securely distributing to users a large number of keys for both encryption and search, and those users will have to securely store the received keys, and submit an equally large number of keyword trapdoors to the cloud in order to perform search over the shared data. The implied need for secure communication, storage, and complexity clearly renders the approach impractical. In this paper, we address this practical problem, which is largely neglected in the literature, by proposing the novel concept of key aggregate searchable encryption (KASE) and instantiating the concept through a concrete KASE scheme, in which a data owner only needs to distribute a single key to a user for sharing a large number of documents, and the user only needs to submit a single trapdoor to the cloud for querying the shared documents. The security analysis and performance evaluation both confirm that our proposed schemes are provably secure and practically efficient.

EXISTING SYSTEM:

  • There is a rich literature on searchable encryption, including SSE schemes and PEKS schemes. In contrast to that existing work, in the context of cloud storage, keyword search under the multi-tenancy setting is a more common scenario. In such a scenario, the data owner would like to share a document with a group of authorized users, and each user who has the access right can provide a trapdoor to perform the keyword search over the shared document, namely, the “multi-user searchable encryption” (MUSE) scenario.
  • Some recent work focuses on such a MUSE scenario, although it all adopts single-key combined with access control to achieve the goal.
  • In some schemes, MUSE is constructed by sharing the document’s searchable encryption key with all users who can access it, and broadcast encryption is used to achieve coarse-grained access control.
  • In others, attribute-based encryption (ABE) is applied to achieve fine-grained access-control-aware keyword search. As a result, in MUSE, the main problem is how to control which users can access which documents, whereas how to reduce the number of shared keys and trapdoors is not considered.


                                     

DISADVANTAGES OF EXISTING SYSTEM:

  • Unexpected privilege escalation will expose all the shared data.
  • It is not efficient.
  • Shared data will not be secure.


PROPOSED SYSTEM:

  • In this paper, we address this challenge by proposing the novel concept of key-aggregate searchable encryption (KASE), and instantiating the concept through a concrete KASE scheme.
  • The proposed KASE scheme applies to any cloud storage that supports the searchable group data sharing functionality, which means any user may selectively share a group of selected files with a group of selected users, while allowing the latter to perform keyword search over the former.
  • To support searchable group data sharing the main requirements for efficient key management are twofold. First, a data owner only needs to distribute a single aggregate key (instead of a group of keys) to a user for sharing any number of files. Second, the user only needs to submit a single aggregate trapdoor (instead of a group of trapdoors) to the cloud for performing keyword search over any number of shared files.
  • We first define a general framework of key-aggregate searchable encryption (KASE) composed of seven polynomial algorithms for security parameter setup, key generation, encryption, key extraction, trapdoor generation, trapdoor adjustment, and trapdoor testing. We then describe both functional and security requirements for designing a valid KASE scheme.
  • We then instantiate the KASE framework by designing a concrete KASE scheme. After providing detailed constructions for the seven algorithms, we analyze the efficiency of the scheme, and establish its security through detailed analysis.
  • We discuss various practical issues in building an actual group data sharing system based on the proposed KASE scheme, and evaluate its performance. The evaluation confirms our system can meet the performance requirements of practical applications.
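The seven-algorithm framework described above can be written down as a plain Java interface. This is only a structural sketch: the parameter types and method names are ours, and the paper's concrete scheme instantiates these algorithms with bilinear pairings, which are omitted here.

```java
import java.util.Set;

/** Structural outline of the seven KASE algorithms; all types are
 *  placeholder Strings standing in for group elements and keys. */
public interface KaseScheme {
    String setup(int securityParameter);                                   // 1. system setup
    String keygen(String params);                                          // 2. owner key generation
    String encrypt(String params, String masterKey, int docId, String kw); // 3. per-document keyword encryption
    String extract(String params, String masterKey, Set<Integer> docIds);  // 4. ONE aggregate key for a doc set
    String trapdoor(String params, String aggregateKey, String kw);        // 5. ONE trapdoor from the aggregate key
    String adjust(String params, String trapdoor, int docId);              // 6. adapt the trapdoor to one document
    boolean test(String params, String adjustedTrapdoor, String cipher);   // 7. keyword match test on the cloud
}
```

The point of the framework is visible in the signatures: `extract` returns a single key for an arbitrary document set, and `trapdoor` takes that single key, so neither the number of distributed keys nor the number of submitted trapdoors grows with the number of shared documents.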


ADVANTAGES OF PROPOSED SYSTEM:
  • It is more secure.
  • The decryption key should be sent via a secure channel and kept secret.
  • It is an efficient public-key encryption scheme that supports flexible delegation.
  • To the best of our knowledge, the KASE scheme proposed in this paper is the first known scheme that can satisfy these requirements.


SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:


  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA Colour.
  • Mouse : Logitech.
  • RAM : 512 MB.


SOFTWARE REQUIREMENTS:


  • Operating system : Windows XP/7.
  • Coding Language : Java/J2EE
  • IDE : NetBeans 7.4
  • Database : MySQL


REFERENCE:

         Baojiang Cui, Zheli Liu, and Lingyu Wang, “Key-Aggregate Searchable Encryption (KASE) for Group Data Sharing via Cloud Storage”, IEEE TRANSACTIONS ON COMPUTERS, 2015.


Friday, 22 April 2016

Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage



ABSTRACT:

             To protect outsourced data in cloud storage against corruptions, adding fault tolerance to cloud storage together with data integrity checking and failure reparation becomes critical. Recently, regenerating codes have gained popularity due to their lower repair bandwidth while providing fault tolerance. Existing remote checking methods for regenerating-coded data only provide private auditing, requiring data owners to always stay online and handle auditing as well as repairing, which is sometimes impractical. In this paper, we propose a public auditing scheme for regenerating-code-based cloud storage. To solve the regeneration problem of failed authenticators in the absence of data owners, we introduce a proxy, which is privileged to regenerate the authenticators, into the traditional public auditing system model. Moreover, we design a novel publicly verifiable authenticator, which is generated by a couple of keys and can be regenerated using partial keys. Thus, our scheme can completely release data owners from the online burden. In addition, we randomize the encoding coefficients with a pseudorandom function to preserve data privacy. Extensive security analysis shows that our scheme is provably secure under the random oracle model, and experimental evaluation indicates that our scheme is highly efficient and can be feasibly integrated into regenerating-code-based cloud storage.



EXISTING SYSTEM: 

  • Many mechanisms dealing with the integrity of outsourced data without a local copy have been proposed under different system and security models up to now. The most significant works among these studies are the PDP (provable data possession) model and the POR (proof of retrievability) model, which were originally proposed for the single-server scenario by Ateniese et al. and by Juels and Kaliski, respectively.
  • Considering that files are usually striped and redundantly stored across multiple servers or multiple clouds, later work explores integrity verification schemes suitable for such multi-server or multi-cloud settings with different redundancy schemes, such as replication, erasure codes, and, more recently, regenerating codes.
  • Chen et al. and Chen and Lee separately and independently extended the single-server CPOR scheme to the regenerating-code scenario; they designed and implemented a data integrity protection (DIP) scheme for FMSR-based cloud storage, and the scheme is adapted to the thin-cloud setting.


DISADVANTAGES OF EXISTING SYSTEM:


  • They are designed for private audit: only the data owner is allowed to verify the integrity and repair the faulty servers.
  • Given the large size of the outsourced data and the user's constrained resource capability, the tasks of auditing and reparation in the cloud can be formidable and expensive for users.
  • Existing auditing schemes require users to always stay online, which may impede adoption in practice, especially for long-term archival storage.


PROPOSED SYSTEM:


  • In this paper, we focus on the integrity verification problem in regenerating-code-based cloud storage, especially with the functional repair strategy. To fully ensure data integrity and save users' computation resources as well as online burden, we propose a public auditing scheme for regenerating-code-based cloud storage, in which the integrity checking and the regeneration of failed data blocks and authenticators are carried out by a third-party auditor and a semi-trusted proxy, respectively, on behalf of the data owner.
  • Instead of directly adapting an existing public auditing scheme to the multi-server setting, we design a novel authenticator that is more appropriate for regenerating codes. Besides, we "encrypt" the coefficients to protect data privacy against the auditor, which is more lightweight than applying proof-blinding or data-blinding techniques.
  • We design a novel homomorphic authenticator based on the BLS signature, which can be generated by a pair of secret keys and verified publicly.
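As an illustrative sketch of the coefficient-"encryption" idea above (not the paper's actual construction), an HMAC-based PRF can additively blind each encoding coefficient in a toy prime field; the field size, PRF choice, and function names here are all assumptions:

```python
import hashlib
import hmac

P = 2**31 - 1  # toy prime field; the real scheme works in a large field

def prf(key: bytes, index: int) -> int:
    """HMAC-SHA256 as the pseudorandom function, reduced into the field."""
    tag = hmac.new(key, index.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(tag, "big") % P

def mask_coefficients(key, coeffs):
    # Additively blind each encoding coefficient so the auditor never
    # sees the raw coefficients (and hence cannot reconstruct the data).
    return [(c + prf(key, i)) % P for i, c in enumerate(coeffs)]

def unmask_coefficients(key, masked):
    return [(c - prf(key, i)) % P for i, c in enumerate(masked)]
```

Only the holder of the PRF key can recover the original coefficients, which is what makes this lighter than blinding every proof.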


ADVANTAGES OF PROPOSED SYSTEM: 

  • Utilizing the linear subspace of the regenerating codes, the authenticators can be computed efficiently. Besides, the scheme can be adapted for data owners equipped with low-end computation devices (e.g., tablet PCs), on which they only need to sign the native blocks.
  • To the best of our knowledge, our scheme is the first to allow privacy-preserving public auditing for regenerating-code-based cloud storage. The coefficients are masked by a PRF (pseudorandom function) during the Setup phase to avoid leakage of the original data. This method is lightweight and does not introduce any computational overhead to the cloud servers or the TPA.
  • Our scheme completely releases data owners from the online burden of regenerating blocks and authenticators at faulty servers, and it grants a proxy the privilege to perform the reparation.
  • Optimization measures are taken to improve the flexibility and efficiency of our auditing scheme; thus, the storage overhead of servers, the computational overhead of the data owner, and the communication overhead during the audit phase can be effectively reduced.
  • Our scheme is provably secure in the random oracle model.


SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:


  • System : Pentium IV 2.4 GHz
  • Hard Disk : 40 GB
  • Floppy Drive : 1.44 MB
  • Monitor : 15" VGA Colour
  • Mouse : Logitech
  • RAM : 512 MB


SOFTWARE REQUIREMENTS:


  • Operating System : Windows XP/7
  • Coding Language : Java/J2EE
  • IDE : NetBeans 7.4
  • Database : MySQL


REFERENCE:

            Jian Liu, Kun Huang, Hong Rong, Huimei Wang, and Ming Xian, “Privacy-Preserving Public Auditing for Regenerating-Code-Based Cloud Storage”, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 10, NO. 7, JULY 2015.

Thursday, 21 April 2016

Identity-Based Encryption with Outsourced Revocation in Cloud Computing

                Identity-Based Encryption with Outsourced Revocation in Cloud Computing


ABSTRACT:

              Identity-Based Encryption (IBE), which simplifies public key and certificate management in a Public Key Infrastructure (PKI), is an important alternative to public key encryption. However, one of the main efficiency drawbacks of IBE is the overhead computation at the Private Key Generator (PKG) during user revocation. Efficient revocation has been well studied in the traditional PKI setting, but the cumbersome management of certificates is precisely the burden that IBE strives to alleviate. In this paper, aiming to tackle the critical issue of identity revocation, we introduce outsourcing computation into IBE for the first time and propose a revocable IBE scheme in the server-aided setting. Our scheme offloads most of the key-generation-related operations during the key-issuing and key-update processes to a Key Update Cloud Service Provider, leaving only a constant number of simple operations for the PKG and users to perform locally. This goal is achieved by utilizing a novel collusion-resistant technique: we employ a hybrid private key for each user, in which an AND gate connects and binds the identity component and the time component. Furthermore, we propose another construction that is provably secure under the recently formalized Refereed Delegation of Computation model. Finally, we provide extensive experimental results to demonstrate the efficiency of our proposed construction.

EXISTING SYSTEM: 

  • Identity-Based Encryption (IBE) is an interesting alternative to public key encryption, proposed to simplify key management in a certificate-based Public Key Infrastructure (PKI) by using human-intelligible identities (e.g., name, email address, or IP address) as public keys.
  • Boneh and Franklin suggested that users renew their private keys periodically and that senders use the receivers' identities concatenated with the current time period.
  • Hanaoka et al. proposed a way for users to periodically renew their private keys without interacting with the PKG.
  • Lin et al. proposed a space-efficient revocable IBE mechanism from non-monotonic Attribute-Based Encryption (ABE), but their construction requires a number of bilinear pairing operations proportional to the number of revoked users for a single decryption.


DISADVANTAGES OF EXISTING SYSTEM:


  • The Boneh and Franklin mechanism would result in an overhead load at the PKG. In other words, all users, regardless of whether their keys have been revoked, have to contact the PKG periodically to prove their identities and update their private keys. It requires the PKG to be online and a secure channel to be maintained for all transactions, which becomes a bottleneck for the IBE system as the number of users grows.
  • Boneh and Franklin's suggestion, while viable, is impractical at scale.
  • In the Hanaoka et al. system, however, each user is assumed to possess a tamper-resistant hardware device.
  • If an identity is revoked, the mediator is instructed to stop helping the user. This is impractical since no user is able to decrypt alone; every decryption requires communication with the mediator.


PROPOSED SYSTEM:


  • In this paper, we introduce outsourcing computation into IBE revocation and, to the best of our knowledge, formalize the security definition of outsourced revocable IBE for the first time. We propose a scheme to offload all the key-generation-related operations during key-issuing and key-update, leaving only a constant number of simple operations for the PKG and eligible users to perform locally.
  • In our scheme, as in that suggestion, we realize revocation by updating the private keys of the unrevoked users. But unlike that work, which trivially concatenates the time period with the identity for key generation/update and requires re-issuing the whole private key for unrevoked users, we propose a novel collusion-resistant key-issuing technique: we employ a hybrid private key for each user, in which an AND gate connects and binds two sub-components, namely the identity component and the time component.
  • At first, a user obtains the identity component and a default time component (i.e., for the current time period) from the PKG as his/her private key during key-issuing. Afterwards, in order to maintain decryptability, unrevoked users need to periodically request a key-update for the time component from a newly introduced entity named the Key Update Cloud Service Provider (KU-CSP).
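The AND-gate binding of identity and time components can be illustrated with a toy hash-based sketch (the actual construction is pairing-based; everything below, including the key-derivation rule and function names, is a simplified assumption):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    """Toy key-derivation hash."""
    d = hashlib.sha256()
    for p in parts:
        d.update(p)
    return d.digest()

def issue_identity_component(master_key: bytes, identity: str) -> bytes:
    # PKG issues the identity component once, at key-issuing time.
    return h(master_key, b"id", identity.encode())

def issue_time_component(update_key, identity, period, revoked):
    # KU-CSP issues a fresh time component each period, but only for
    # unrevoked users; revoked users silently lose decryptability.
    if identity in revoked:
        return None
    return h(update_key, b"time", identity.encode(), period.to_bytes(4, "big"))

def decryption_key(id_component, time_component):
    # The "AND gate": a usable key exists only when BOTH components are present.
    if id_component is None or time_component is None:
        return None
    return h(id_component, time_component)
```

The point of the sketch is the workflow: the PKG issues once, the KU-CSP updates per period, and a revoked user simply stops receiving valid time components.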


ADVANTAGES OF PROPOSED SYSTEM:


  • Compared with the previous work, our scheme does not re-issue the whole private key; it only updates a lightweight component of it at a specialized entity, the KU-CSP.
  • With the aid of the KU-CSP, users need not contact the PKG during key-update; in other words, the PKG is allowed to be offline after sending the revocation list to the KU-CSP.
  • No secure channel or user authentication is required during key-update between the user and the KU-CSP.
  • Furthermore, we consider realizing revocable IBE with a semi-honest KU-CSP. To achieve this goal, we present a security-enhanced construction under the recently formalized Refereed Delegation of Computation (RDoC) model.
  • Finally, we provide extensive experimental results to demonstrate the efficiency of our proposed construction.


SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:


  • System : Pentium IV 2.4 GHz
  • Hard Disk : 40 GB
  • Floppy Drive : 1.44 MB
  • Monitor : 15" VGA Colour
  • Mouse : Logitech
  • RAM : 512 MB


SOFTWARE REQUIREMENTS:


  • Operating System : Windows XP/7
  • Coding Language : Java/J2EE
  • IDE : NetBeans 7.4
  • Database : MySQL


REFERENCE:

               Jin Li, Jingwei Li, Xiaofeng Chen, Chunfu Jia, and Wenjing Lou, Senior Member, IEEE, “Identity-Based Encryption with Outsourced Revocation in Cloud Computing”, IEEE TRANSACTIONS ON COMPUTERS, VOL. 64, NO. 2, FEBRUARY 2015.

Wednesday, 20 April 2016

Enabling Fine-grained Multi-keyword Search Supporting Classified Sub-dictionaries over Encrypted Cloud Data

                           Enabling Fine-grained Multi-keyword Search Supporting Classified Sub-dictionaries over Encrypted Cloud Data


ABSTRACT:

                     Using cloud computing, individuals can store their data on remote servers and allow data access to public users through the cloud servers. As the outsourced data are likely to contain sensitive private information, they are typically encrypted before being uploaded to the cloud. This, however, significantly limits the usability of the outsourced data due to the difficulty of searching over encrypted data. In this paper, we address this issue by developing fine-grained multi-keyword search schemes over encrypted cloud data. Our original contributions are three-fold. First, we introduce relevance scores and preference factors on keywords, which enable precise keyword search and a personalized user experience. Second, we develop a practical and very efficient multi-keyword search scheme. The proposed scheme can support complicated logic search, i.e., mixed "AND", "OR", and "NO" operations on keywords. Third, we further employ the classified sub-dictionaries technique to achieve better efficiency in index building, trapdoor generation, and query. Lastly, we analyze the security of the proposed schemes in terms of the confidentiality of documents, the privacy protection of index and trapdoor, and the unlinkability of trapdoors. Through extensive experiments using a real-world dataset, we validate the performance of the proposed schemes. Both the security analysis and the experimental results demonstrate that the proposed schemes achieve the same security level as the existing ones and better performance in terms of functionality, query complexity, and efficiency.



EXISTING SYSTEM:

  • Searchable encryption has recently been developed as a fundamental approach to enable searching over encrypted cloud data.
  • Wang et al. propose a ranked keyword search scheme which considers the relevance scores of keywords.
  • Sun et al. propose a multi-keyword text search scheme which considers the relevance scores of keywords and utilizes a multidimensional tree technique to achieve efficient search queries.
  • Yu et al. propose a multi-keyword top-k retrieval scheme which uses fully homomorphic encryption to encrypt the index/trapdoor and guarantees high security.
  • Cao et al. propose a multi-keyword ranked search (MRSE) scheme, which applies coordinate matching as the keyword matching rule, i.e., it returns data with the most matching keywords.


DISADVANTAGES OF EXISTING SYSTEM:


  • Because order-preserving encryption (OPE) is used to achieve the ranking property, the existing scheme cannot achieve unlinkability of trapdoors.
  • Although many search functionalities have been developed in the previous literature towards precise and efficient searchable encryption, it is still difficult for searchable encryption to achieve the same user experience as plaintext search, like Google search.
  • Most existing proposals can only support search with a single logic operation, rather than a mixture of multiple logic operations on keywords.


PROPOSED SYSTEM:


  • In this work, we address this issue by developing two Fine-grained Multi-keyword Search (FMS) schemes over encrypted cloud data.
  • We introduce relevance scores and preference factors of keywords for searchable encryption. The relevance scores of keywords enable more precise returned results, while the preference factors represent the importance of keywords in the search keyword set specified by search users and correspondingly enable personalized search catering to specific user preferences. This further improves the search functionality and user experience.
  • We realize the "AND", "OR" and "NO" operations in multi-keyword search for searchable encryption. Compared with existing schemes, the proposed scheme achieves more comprehensive functionality and lower query complexity.
  • We employ the classified sub-dictionaries technique to enhance the efficiency of the above two schemes. Extensive experiments demonstrate that the enhanced schemes achieve better efficiency in index building, trapdoor generation, and query compared with the original schemes.
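The mixed "AND"/"OR"/"NO" matching and preference-weighted scoring can be sketched over plaintext keyword sets (the actual FMS schemes operate on encrypted indexes and trapdoors; the function names and weighting rule below are illustrative assumptions):

```python
def matches(doc_keywords, must, any_of, none_of):
    """Mixed-logic match: all of `must` (AND), at least one of
    `any_of` (OR; skipped when empty), and none of `none_of` (NO)."""
    if not must <= doc_keywords:
        return False
    if any_of and not (any_of & doc_keywords):
        return False
    return not (none_of & doc_keywords)

def score(doc_scores, preferences):
    # Relevance score of each keyword, weighted by the user's preference
    # factor, so results can be ranked per user.
    return sum(doc_scores.get(k, 0.0) * w for k, w in preferences.items())
```

Ranking the matched documents by `score` is what allows the most relevant results to be returned first.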


ADVANTAGES OF PROPOSED SYSTEM:


  • Better search results for multi-keyword queries, ranked by the cloud server according to specified ranking criteria.
  • Reduced communication cost.
  • Lower query complexity.
  • Better efficiency in index building.


SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 


  • System : Pentium IV 2.4 GHz
  • Hard Disk : 40 GB
  • Floppy Drive : 1.44 MB
  • Monitor : 15" VGA Colour
  • Mouse : Logitech
  • RAM : 512 MB


SOFTWARE REQUIREMENTS:


  • Operating System : Windows XP/7
  • Coding Language : Java/J2EE
  • IDE : NetBeans 7.4
  • Database : MySQL


REFERENCE:

           Hongwei Li, Member, IEEE, Yi Yang, Student Member, IEEE, Tom H. Luan, Member, IEEE, Xiaohui Liang, Student Member, IEEE, Liang Zhou, Member, IEEE, and Xuemin (Sherman) Shen, Fellow, IEEE, “Enabling Fine-grained Multi-keyword Search Supporting Classified Sub-dictionaries over Encrypted Cloud Data”, IEEE Transactions on Dependable and Secure Computing, 2015.

Tuesday, 19 April 2016

Enabling Cloud Storage Auditing With Key-Exposure Resistance

                 Enabling Cloud Storage Auditing With Key-Exposure Resistance


ABSTRACT:

                Cloud storage auditing is viewed as an important service to verify the integrity of data in the public cloud. Current auditing protocols are all based on the assumption that the client's secret key for auditing is absolutely secure. However, this assumption may not always hold, due to a possibly weak sense of security and/or low security settings at the client. If such a secret key for auditing is exposed, most current auditing protocols would inevitably become unable to work. In this paper, we focus on this new aspect of cloud storage auditing. We investigate how to reduce the damage of the client's key exposure in cloud storage auditing, and give the first practical solution for this new problem setting. We formalize the definition and the security model of an auditing protocol with key-exposure resilience and propose such a protocol. In our design, we employ a binary tree structure and the pre-order traversal technique to update the secret keys of the client. We also develop a novel authenticator construction to support forward security and the property of blockless verifiability. The security proof and the performance analysis show that our proposed protocol is secure and efficient.


EXISTING SYSTEM:


  • These protocols focus on several different aspects of auditing, and achieving high bandwidth and computation efficiency is one of the essential concerns. For that purpose, the Homomorphic Linear Authenticator (HLA) technique, which supports blockless verification, is explored to reduce the computation and communication overheads of auditing protocols; it allows the auditor to verify the integrity of data in the cloud without retrieving the whole data.
  • The privacy protection of data is also an important aspect of cloud storage auditing. In order to reduce the computational burden of the client, a third-party auditor (TPA) is introduced to help the client periodically check the integrity of the data in the cloud. However, it is possible for the TPA to obtain the client's data after executing the auditing protocol multiple times.
  • Wang et al. have proposed an auditing protocol supporting fully dynamic data operations including modification, insertion, and deletion.
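The blockless-verification idea behind HLAs can be illustrated with a toy integer variant (real schemes use BLS signatures or similar; the modulus, hash, and protocol shape below are simplifying assumptions): the server returns two short aggregates instead of the challenged blocks, and the verifier checks them against the secret key.

```python
import hashlib

P = (1 << 127) - 1  # toy prime modulus standing in for a group order

def h_idx(i: int) -> int:
    """Per-block-index hash value H(i)."""
    return int.from_bytes(hashlib.sha256(i.to_bytes(8, "big")).digest(), "big") % P

def sign_blocks(alpha, blocks):
    # One short authenticator per block: sigma_i = alpha*m_i + H(i) mod P.
    return [(alpha * m + h_idx(i)) % P for i, m in enumerate(blocks)]

def prove(blocks, sigmas, challenge):
    # The server aggregates only the challenged positions; the full
    # blocks are never transmitted (blockless verification).
    mu = sum(nu * blocks[i] for i, nu in challenge) % P
    sigma = sum(nu * sigmas[i] for i, nu in challenge) % P
    return mu, sigma

def verify(alpha, challenge, mu, sigma):
    # Homomorphy: sigma must equal alpha*mu + sum(nu_i * H(i)) mod P.
    return sigma == (alpha * mu + sum(nu * h_idx(i) for i, nu in challenge)) % P
```

Note that this toy keeps `alpha` secret, so it only illustrates private auditing; public verifiability is exactly what the BLS-based variant adds.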


DISADVANTAGES OF EXISTING SYSTEM:


  • Although much research on cloud storage auditing has been done in recent years, a critical security problem, the key-exposure problem for cloud storage auditing, has remained unexplored in previous research. While all existing protocols focus on the faults or dishonesty of the cloud, they have overlooked the possibly weak sense of security and/or low security settings at the client.
  • Unfortunately, previous auditing protocols did not consider how to deal with the exposure of the client's secret auditing key, and any such exposure would render most existing auditing protocols unable to work correctly.


PROPOSED SYSTEM:


  • In this paper, we focus on how to reduce the damage of the client's key exposure in cloud storage auditing. Our goal is to design a cloud storage auditing protocol with built-in key-exposure resilience. Doing this efficiently under this new problem setting brings in many new challenges. First of all, applying the traditional solution of key revocation to cloud storage auditing is not practical: whenever the client's secret key for auditing is exposed, the client would need to produce a new pair of public and secret keys and regenerate the authenticators for all of the client's data previously stored in the cloud.
  • Our goal is to design a practical auditing protocol with key-exposure resilience, in which the operational complexities of key size, computation overhead, and communication overhead are at most sub-linear in T, the total number of time periods. To achieve this, we use a binary tree structure to appoint time periods and associate periods with tree nodes via the pre-order traversal technique. The secret key in each time period is organized as a stack, and in each time period the secret key is updated by a forward-secure technique.
  • The auditing protocol achieves key-exposure resilience while satisfying our efficiency requirements. As we show later, in our protocol the client can still audit the integrity of the cloud data in an aggregated manner, i.e., without retrieving the entire data from the cloud.
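The stack-driven pre-order association of time periods with tree nodes can be sketched as follows (encoding nodes as bit strings is an assumption for illustration; the key material the protocol attaches to each node is omitted):

```python
def preorder_periods(depth: int) -> list:
    """Associate time periods with the nodes of a complete binary tree
    in pre-order, using an explicit stack; a node is its root-to-node
    path written as a string of '0' (left) and '1' (right) bits."""
    order, stack = [], [""]  # "" denotes the root
    while stack:
        node = stack.pop()
        order.append(node)  # period len(order)-1 maps to this node
        if len(node) < depth:
            stack.append(node + "1")  # push right first so the left
            stack.append(node + "0")  # child is visited next (pre-order)
    return order
```

A tree of depth d thus yields 2^(d+1) − 1 periods, while each period's key stack only needs material along one root-to-node path, which is where the sub-linear costs come from.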


ADVANTAGES OF PROPOSED SYSTEM:


  • We initiate the first study on achieving key-exposure resilience in storage auditing protocols and propose a new concept: the auditing protocol with key-exposure resilience. In such a protocol, any dishonest behavior, such as deleting or modifying some client data stored in the cloud in previous time periods, can be detected, even if the cloud obtains the client's current secret key for cloud storage auditing.
  • This important issue was not addressed by previous auditing protocol designs. We further formalize the definition and the security model of the auditing protocol with key-exposure resilience for secure cloud storage.
  • We design and realize the first practical auditing protocol with built-in key-exposure resilience for cloud storage. To achieve our goal, we employ the binary tree structure, seen in a few previous works on different cryptographic designs, to update the secret keys of the client. This binary tree structure can be considered a variant of the tree structure used in the HIBE scheme. In addition, the pre-order traversal technique is used to associate each node of the binary tree with a time period. In our detailed protocol, a stack structure is used to realize the pre-order traversal of the binary tree. We also design a novel authenticator supporting forward security and the property of blockless verifiability.
  • We prove the security of our protocol in the formalized security model and justify its performance via concrete asymptotic analysis. The proposed protocol adds only reasonable overhead to achieve key-exposure resilience. We also show that our design can be extended to support the TPA, lazy update, and multiple sectors.


SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:


  • System : Pentium IV 2.4 GHz
  • Hard Disk : 40 GB
  • Floppy Drive : 1.44 MB
  • Monitor : 15" VGA Colour
  • Mouse : Logitech
  • RAM : 512 MB


SOFTWARE REQUIREMENTS:


  • Operating System : Windows XP/7
  • Coding Language : Java/J2EE
  • IDE : NetBeans 7.4
  • Database : MySQL


REFERENCE:

              Jia Yu, Kui Ren, Senior Member, IEEE, Cong Wang, Member, IEEE, and Vijay Varadharajan, Senior Member, IEEE, “Enabling Cloud Storage Auditing With Key-Exposure Resistance”, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 10, NO. 6, JUNE 2015.

Monday, 18 April 2016

Control Cloud Data Access Privilege and Anonymity With Fully Anonymous Attribute-Based Encryption

                  Control Cloud Data Access Privilege and Anonymity With Fully Anonymous Attribute-Based Encryption

ABSTRACT:

              Cloud computing is a revolutionary computing paradigm which enables flexible, on-demand, and low-cost usage of computing resources, but the data are outsourced to cloud servers and various privacy concerns emerge from this. Various schemes based on attribute-based encryption have been proposed to secure cloud storage. However, most work focuses on the privacy of data contents and on access control, while less attention is paid to privilege control and identity privacy. In this paper, we present a semi-anonymous privilege control scheme, AnonyControl, to address not only data privacy but also user identity privacy in existing access control schemes. AnonyControl decentralizes the central authority to limit identity leakage and thus achieves semi-anonymity. Besides, it also generalizes file access control to privilege control, by which privileges of all operations on the cloud data can be managed in a fine-grained manner. Subsequently, we present AnonyControl-F, which fully prevents identity leakage and achieves full anonymity. Our security analysis shows that both AnonyControl and AnonyControl-F are secure under the decisional bilinear Diffie-Hellman assumption, and our performance evaluation exhibits the feasibility of our schemes.



EXISTING SYSTEM:

  • Various techniques have been proposed to protect the privacy of data contents via access control. Identity-based encryption (IBE) was first introduced by Shamir, in which the sender of a message can specify an identity such that only a receiver with a matching identity can decrypt it.
  • A few years later, Fuzzy Identity-Based Encryption was proposed, which is also known as Attribute-Based Encryption (ABE).
  • The works by Lewko et al. and Muller et al. are the most similar to ours in that they also tried to decentralize the central authority in CP-ABE into multiple authorities.
  • Lewko et al. use an LSSS matrix as the access structure, but their scheme only converts AND and OR gates to the LSSS matrix, which limits their encryption policy to Boolean formulas, while we inherit the flexibility of an access tree with threshold gates.
  • Muller et al. also support only Disjunctive Normal Form (DNF) in their encryption policy.


DISADVANTAGES OF EXISTING SYSTEM:


  • A user's identity is authenticated based on his or her personal information for the purpose of access control (or privilege control in this paper), which exposes that information.
  • Preferably, no authority or server alone should know any client's personal information.
  • The users in the same system must have their private keys re-issued in order to gain access to re-encrypted files, and this process causes considerable implementation problems.


PROPOSED SYSTEM: 


  • While much effort is paid to data confidentiality, less is paid to protecting users' identity privacy during these interactive protocols. Users' identities, which are described by their attributes, are generally disclosed to key issuers, and the issuers issue private keys according to those attributes.
  • We propose AnonyControl and AnonyControl-F, which allow cloud servers to control users' access privileges without knowing their identity information. In this setting, each authority knows only a part of any user's attributes, which is not enough to figure out the user's identity. The scheme proposed by Chase et al. considered the basic threshold-based KP-ABE; many attribute-based encryption schemes with multiple authorities have been proposed since.
  • In our system, there are four types of entities: N Attribute Authorities (denoted as A), the Cloud Server, Data Owners, and Data Consumers. A user can be a Data Owner and a Data Consumer simultaneously.
  • Authorities are assumed to have powerful computation abilities, and they are supervised by government offices because some attributes partially contain users' personally identifiable information. The whole attribute set is divided into N disjoint sets, one controlled by each authority; therefore each authority is aware of only part of the attributes.
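The division of the attribute universe into N disjoint sets can be sketched with a deterministic hash-based assignment (the assignment rule is an illustrative assumption; in practice the partition could equally be fixed administratively):

```python
import hashlib

def authority_for(attribute: str, n_authorities: int) -> int:
    # Deterministically map every attribute to exactly one authority,
    # so the attribute universe splits into N pairwise-disjoint sets.
    digest = hashlib.sha256(attribute.encode()).digest()
    return int.from_bytes(digest, "big") % n_authorities

def partition(attributes, n_authorities):
    sets = [set() for _ in range(n_authorities)]
    for a in attributes:
        sets[authority_for(a, n_authorities)].add(a)
    return sets
```

Because the sets are disjoint, no single authority sees a user's full attribute list, which is the basis of the semi-anonymity claim.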


ADVANTAGES OF PROPOSED SYSTEM:


  • The proposed schemes are able to protect a user's privacy against each single authority. Partial information is disclosed in AnonyControl, and no information is disclosed in AnonyControl-F.
  • The proposed schemes are tolerant against authority compromise: compromising up to (N − 2) authorities does not bring the whole system down.
  • We provide detailed analysis of security and performance to show the feasibility of AnonyControl and AnonyControl-F.
  • We are the first to implement a real toolkit of multi-authority-based encryption schemes, AnonyControl and AnonyControl-F.


SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:


  • System : Pentium IV 2.4 GHz
  • Hard Disk : 40 GB
  • Floppy Drive : 1.44 MB
  • Monitor : 15" VGA Colour
  • Mouse : Logitech
  • RAM : 512 MB


SOFTWARE REQUIREMENTS:


  • Operating System : Windows XP/7
  • Coding Language : Java/J2EE
  • IDE : NetBeans 7.4
  • Database : MySQL


REFERENCE:

              Taeho Jung, Xiang-Yang Li, Senior Member, IEEE, Zhiguo Wan, and Meng Wan, Member, IEEE, “Control Cloud Data Access Privilege and Anonymity With Fully Anonymous Attribute-Based Encryption”, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 10, NO. 1, JANUARY 2015.

Saturday, 16 April 2016

CloudArmor: Supporting Reputation-based Trust Management for Cloud Services

                   CloudArmor: Supporting Reputation-based Trust Management for Cloud Services


ABSTRACT:

               Trust management is one of the most challenging issues for the adoption and growth of cloud computing. The highly dynamic, distributed, and non-transparent nature of cloud services introduces several challenging issues such as privacy, security, and availability. Preserving consumers' privacy is not an easy task due to the sensitive information involved in the interactions between consumers and the trust management service. Protecting cloud services against their malicious users (e.g., users who give misleading feedback to disadvantage a particular cloud service) is a difficult problem. Guaranteeing the availability of the trust management service is another significant challenge because of the dynamic nature of cloud environments. In this article, we describe the design and implementation of CloudArmor, a reputation-based trust management framework that provides a set of functionalities to deliver Trust as a Service (TaaS), which includes i) a novel protocol to prove the credibility of trust feedbacks and preserve users' privacy, ii) an adaptive and robust credibility model for measuring the credibility of trust feedbacks to protect cloud services from malicious users and to compare the trustworthiness of cloud services, and iii) an availability model to manage the availability of the decentralized implementation of the trust management service. The feasibility and benefits of our approach have been validated by a prototype and experimental studies using a collection of real-world trust feedbacks on cloud services.



EXISTING SYSTEM:

              According to researchers at Berkeley, trust and security rank among the top 10 obstacles for the adoption of cloud computing; indeed, Service-Level Agreements (SLAs) alone are insufficient to establish trust. Consumers' feedback is a good source for assessing the overall trustworthiness of cloud services, and several researchers have recognized the significance of trust management and proposed solutions to assess and manage trust based on feedback collected from participants.

DISADVANTAGES OF EXISTING SYSTEM:


  • Guaranteeing the availability of the trust management service (TMS) is a difficult problem due to the unpredictable number of users and the highly dynamic nature of the cloud environment.
  • A self-promoting attack might have been performed on cloud service s_y, meaning s_x should have been selected instead.
  • Attackers can disadvantage a cloud service by giving multiple misleading trust feedbacks (i.e., collusion attacks).
  • Attackers can trick users into trusting untrustworthy cloud services by creating several accounts and giving misleading trust feedbacks (i.e., Sybil attacks).


PROPOSED SYSTEM: 


  • Cloud service users’ feedback is a good source to assess the overall trustworthiness of cloud services. In this paper, we have presented novel techniques that help in detecting reputation based attacks and allowing users to effectively identify trustworthy cloud services. 
  • We introduce a credibility model that not only identifies misleading trust feed backs from collusion attacks but also detects Sybil attacks no matter these attacks take place in a long or short period of time (i.e., strategic or occasional attacks respectively). 
  • We also develop an availability model that maintains the trust management service at a desired level. We also develop an availability model that maintains the trust management service at a desired level.
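A minimal sketch of credibility-weighted reputation aggregation (the paper's credibility model is far richer; the (rating, weight) representation here is an assumption):

```python
def reputation(feedbacks):
    """Credibility-weighted mean of trust feedbacks.
    Each entry is (rating in [0, 1], credibility weight in [0, 1])."""
    total_weight = sum(w for _, w in feedbacks)
    if total_weight == 0:
        return 0.0  # no credible feedback yet
    return sum(r * w for r, w in feedbacks) / total_weight
```

Feedback judged low-credibility (e.g., suspected collusion or Sybil accounts) then contributes little to a service's aggregate trust score.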


ADVANTAGES OF PROPOSED SYSTEM:


  • The TrustCloud framework supports accountability and trust in cloud computing; in particular, TrustCloud consists of five layers, including a workflow layer.
  • A multi-faceted Trust Management (TM) system architecture for cloud computing helps cloud service users identify trustworthy cloud service providers.


SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:


  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15" VGA colour.
  • Mouse : Logitech.
  • RAM : 512 MB.


SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : JAVA/J2EE
  • IDE : NetBeans 7.4
  • Database : MySQL


REFERENCE:

             Talal H. Noor, Quan Z. Sheng, Lina Yao, Schahram Dustdar, and Anne H. H. Ngu, "CloudArmor: Supporting Reputation-Based Trust Management for Cloud Services," IEEE Transactions on Parallel and Distributed Systems, 2015.

ADDRESS:
              1crore projects,
              No. 214/215, 2nd Floor, Raahat Plaza,
              Vadapalani, Chennai, Tamil Nadu, India - 600 026.

Friday, 15 April 2016

Circuit Ciphertext-policy Attribute-based Hybrid Encryption with Verifiable Delegation in Cloud Computing

                        Circuit Ciphertext-policy Attribute-based Hybrid Encryption with Verifiable Delegation in Cloud Computing

ABSTRACT:

                   In the cloud, to achieve access control and keep data confidential, data owners can adopt attribute-based encryption to encrypt the stored data. Users with limited computing power are, however, more likely to delegate most of the decryption task to the cloud servers to reduce the computing cost. As a result, attribute-based encryption with delegation has emerged. Still, caveats and open questions remain in the previous relevant works. For instance, during the delegation, the cloud servers could tamper with or replace the delegated ciphertext and respond with a forged computing result with malicious intent. They may also cheat eligible users by responding that they are ineligible, for the purpose of cost saving. Furthermore, the access policies used during encryption may not be flexible enough. Since a policy for general circuits achieves the strongest form of access control, our work considers a construction for circuit ciphertext-policy attribute-based hybrid encryption with verifiable delegation. In such a system, combined with verifiable computation and an encrypt-then-MAC mechanism, data confidentiality, fine-grained access control, and the correctness of the delegated computing results are guaranteed at the same time. Besides, our scheme achieves security against chosen-plaintext attacks under the k-multilinear Decisional Diffie-Hellman assumption. Moreover, an extensive simulation campaign confirms the feasibility and efficiency of the proposed solution.


EXISTING SYSTEM:

                  Cloud servers can be used to store and process large volumes of data according to users' demands. As applications move to cloud computing platforms, ciphertext-policy attribute-based encryption (CP-ABE) and verifiable delegation (VD) are used to ensure data confidentiality and the verifiability of delegation on dishonest cloud servers. With the increasing volume of medical images and medical records, healthcare organizations put a large amount of data in the cloud to reduce data storage costs and support medical cooperation. There are two complementary forms of attribute-based encryption: key-policy attribute-based encryption (KP-ABE) and ciphertext-policy attribute-based encryption (CP-ABE).

DISADVANTAGES OF EXISTING SYSTEM:

  • The cloud server might tamper with or replace the data owner's original ciphertext for malicious attacks, and then return a false transformed ciphertext.
  • The cloud server might cheat an authorized user for cost saving: although the server cannot return a correct transformed ciphertext to an unauthorized user, it could convince an authorized user that he/she is not eligible.

PROPOSED SYSTEM:

             We first present a circuit ciphertext-policy attribute-based hybrid encryption scheme with verifiable delegation. General circuits are used to express the strongest form of access control policy, and the proposed scheme is proven secure under the k-multilinear Decisional Diffie-Hellman assumption. We also implement our scheme over the integers. During the delegated computation, a user can validate whether the cloud server has returned a correct transformed ciphertext, so that the ciphertext can be decrypted immediately and correctly.
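The validation step can be pictured with a greatly simplified sketch (a bare SHA-256 commitment with hypothetical values; the paper's construction builds verifiable computation into circuit CP-ABE rather than using a plain hash): the user checks the cloud server's response against a commitment before accepting the delegated result.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

// Greatly simplified illustration of the verification idea, not the paper's
// actual scheme: the user holds a hash commitment to the correct value and
// rejects any server response that does not match it.
public class DelegationCheck {
    public static byte[] commit(byte[] value) throws Exception {
        return MessageDigest.getInstance("SHA-256").digest(value);
    }

    public static boolean verify(byte[] response, byte[] commitment) throws Exception {
        return Arrays.equals(commit(response), commitment);
    }

    public static void main(String[] args) throws Exception {
        byte[] correct = "transformed ciphertext".getBytes(StandardCharsets.UTF_8);
        byte[] commitment = commit(correct);
        byte[] forged = "tampered response".getBytes(StandardCharsets.UTF_8);
        System.out.println(verify(correct, commitment)); // honest server: true
        System.out.println(verify(forged, commitment));  // cheating server: false
    }
}
```

The point of the check is that a tampered or replaced transformed ciphertext is caught before the user wastes effort decrypting it, which is exactly what the verifiable-delegation property guarantees.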

ADVANTAGES OF PROPOSED SYSTEM:
  • The generic KEM/DEM construction for hybrid encryption can encrypt messages of arbitrary length.
  • The correctness of the original ciphertext is guaranteed by using a commitment.
  • We give an anti-collusion circuit CP-ABE construction, since CP-ABE is conceptually closer to traditional access control methods.
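The KEM/DEM idea can be sketched with standard JDK primitives (a hedged illustration: RSA-OAEP stands in for the paper's attribute-based key encapsulation, and a bare HMAC stands in for its encrypt-then-MAC mechanism): a fresh symmetric key encrypts a message of arbitrary length, the KEM encapsulates that key, and the MAC tag lets the receiver detect tampering before decrypting.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import java.util.Arrays;

// Sketch of the generic KEM/DEM pattern with encrypt-then-MAC. RSA-OAEP is
// only a stand-in for the attribute-based encapsulation used in the paper.
public class HybridSketch {
    public static boolean demo() throws Exception {
        byte[] message = "a message of arbitrary length".getBytes(StandardCharsets.UTF_8);

        // KEM key pair (RSA here purely for illustration).
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        // DEM keys: one for encryption, one for the MAC (encrypt-then-MAC).
        SecretKey encKey = KeyGenerator.getInstance("AES").generateKey();
        SecretKey macKey = KeyGenerator.getInstance("HmacSHA256").generateKey();

        // DEM: AES-CBC under a fresh IV encrypts the actual message.
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        Cipher aes = Cipher.getInstance("AES/CBC/PKCS5Padding");
        aes.init(Cipher.ENCRYPT_MODE, encKey, new IvParameterSpec(iv));
        byte[] ct = aes.doFinal(message);

        // Encrypt-then-MAC: the tag covers IV || ciphertext, so tampering is
        // detected before decryption is even attempted.
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(macKey);
        hmac.update(iv);
        byte[] tag = hmac.doFinal(ct);

        // KEM: encapsulate the DEM key under the recipient's public key.
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, kp.getPublic());
        byte[] wrappedKey = rsa.doFinal(encKey.getEncoded());

        // Receiver: recover the DEM key, verify the tag, then decrypt.
        rsa.init(Cipher.DECRYPT_MODE, kp.getPrivate());
        SecretKey recoveredKey = new SecretKeySpec(rsa.doFinal(wrappedKey), "AES");

        Mac check = Mac.getInstance("HmacSHA256");
        check.init(macKey);
        check.update(iv);
        if (!Arrays.equals(check.doFinal(ct), tag)) return false;

        aes.init(Cipher.DECRYPT_MODE, recoveredKey, new IvParameterSpec(iv));
        return Arrays.equals(aes.doFinal(ct), message);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // true: tag verifies and the plaintext round-trips
    }
}
```

In the paper the encapsulation step is what enforces the circuit access policy: only users whose attributes satisfy the circuit can recover the symmetric key, while the MAC-style check is what makes the delegated computation verifiable.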

SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:
  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15" VGA colour.
  • Mouse : Logitech.
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:
  • Operating system : Windows XP/7.
  • Coding Language : JAVA/J2EE
  • IDE : NetBeans 7.4
  • Database : MySQL

REFERENCE:

Jie Xu, Qiaoyan Wen, Wenmin Li, and Zhengping Jin, "Circuit Ciphertext-policy Attribute-based Hybrid Encryption with Verifiable Delegation in Cloud Computing," IEEE Transactions on Parallel and Distributed Systems, 2015.
