Cloud Computing

Cloud computing is a colloquial expression used to describe a variety of different types of computing concepts that involve a large number of computers connected through a real-time communication network (typically the Internet). Cloud computing is a jargon term without a commonly accepted, unambiguous scientific or technical definition. In science, cloud computing is a synonym for distributed computing over a network and means the ability to run a program on many connected computers at the same time. The phrase is also, and more commonly, used to refer to network-based services that appear to be provided by real server hardware but are in fact served up by virtual hardware, simulated by software running on one or more real machines. Such virtual servers do not physically exist and can therefore be moved around and scaled up (or down) on the fly without affecting the end user, arguably rather like a cloud.

The popularity of the term can be attributed to its use in marketing to sell hosted services, in the sense of application service provisioning, that run client-server software at a remote location.

Advantages

Cloud computing relies on sharing of resources to achieve coherence and economies of scale similar to a utility (like the electricity grid) over a network. At the foundation of cloud computing is the broader concept of converged infrastructure and shared services.

The cloud also focuses on maximizing the effectiveness of the shared resources. Cloud resources are usually not only shared by multiple users but also dynamically re-allocated per demand. This can work well for allocating resources to users in different time zones. For example, a cloud facility that serves European users during European business hours with a specific application (e.g. email) may reallocate the same resources to serve North American users during North America's business hours with another application (e.g. a web server). This approach maximizes the use of computing power and thus reduces environmental damage as well, since less power, air conditioning, rack space, and so on are required for the same functions.

The term "moving to cloud" also refers to an organization moving away from a traditional CAPEX model (buy the dedicated hardware and depreciate it over a period of time) to the OPEX model (use a shared cloud infrastructure and pay as you use it).

Proponents claim that cloud computing allows companies to avoid upfront infrastructure costs and focus on projects that differentiate their businesses instead of on infrastructure. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and enables IT to more rapidly adjust resources to meet fluctuating and unpredictable business demand.

Hosted services

In marketing, cloud computing is mostly used to sell hosted services, in the sense of application service provisioning, that run client-server software at a remote location. Such services are given popular acronyms such as 'SaaS' (Software as a Service), 'PaaS' (Platform as a Service), 'IaaS' (Infrastructure as a Service), 'HaaS' (Hardware as a Service) and 'EaaS' (Everything as a Service). End users access cloud-based applications through a web browser or a lightweight desktop or mobile app, while the business software and the user's data are stored on servers at a remote location.

History

The 1950s

The underlying concept of cloud computing dates back to the 1950s, when large-scale mainframe computers became available in academia and corporations, accessible via thin clients/terminal computers, often referred to as "dumb terminals" because they were used for communication but had no internal processing capacity. To make more efficient use of costly mainframes, a practice evolved that allowed multiple users to share both physical access to the computer from multiple terminals and the CPU time itself. This eliminated periods of inactivity on the mainframe and allowed for a greater return on the investment. The practice of sharing CPU time on a mainframe became known in the industry as time-sharing.

The 1960s-1990s

John McCarthy opined in the 1960s that "computation may someday be organized as a public utility." Almost all the modern-day characteristics of cloud computing (elastic provision, provided as a utility, online, illusion of infinite supply), the comparison to the electricity industry, and the use of public, private, government, and community forms were thoroughly explored in Douglas Parkhill's 1966 book, The Challenge of the Computer Utility. Other scholars have shown that cloud computing's roots go all the way back to the 1950s, when scientist Herb Grosch (the author of Grosch's law) postulated that the entire world would operate on dumb terminals powered by about 15 large data centers. Because these powerful computers were expensive, many corporations and other entities could avail themselves of computing capability only through time sharing, and several organizations, such as GE's GEISCO, IBM subsidiary The Service Bureau Corporation (SBC, founded in 1957), Tymshare (founded in 1966), National CSS (founded in 1967 and bought by Dun & Bradstreet in 1979), Dial Data (bought by Tymshare in 1968), and Bolt, Beranek and Newman (BBN), marketed time sharing as a commercial venture.

The 1990s

In the 1990s, telecommunications companies, who previously offered primarily dedicated point-to-point data circuits, began offering virtual private network (VPN) services with comparable quality of service but at a lower cost. By switching traffic as they saw fit to balance server use, they could use overall network bandwidth more effectively. They began to use the cloud symbol to denote the demarcation point between what the provider was responsible for and what users were responsible for. Cloud computing extends this boundary to cover servers as well as the network infrastructure.

As computers became more prevalent, scientists and technologists explored ways to make large-scale computing power available to more users through time sharing, experimenting with algorithms to optimize the use of the infrastructure, platform and applications, with prioritized CPU access and efficiency for the end users.

Since 2000

After the dot-com bubble, Amazon played a key role in the development of cloud computing by modernizing its data centers, which, like most computer networks, were using as little as 10% of their capacity at any one time, just to leave room for occasional spikes. Having found that the new cloud architecture resulted in significant internal efficiency improvements, whereby small, fast-moving "two-pizza teams" (teams small enough to be fed with two pizzas) could add new features faster and more easily, Amazon initiated a new product development effort to provide cloud computing to external customers, and launched Amazon Web Services (AWS) on a utility computing basis in 2006.

In early 2008, Eucalyptus became the first open-source, AWS API-compatible platform for deploying private clouds. In early 2008, OpenNebula, enhanced in the RESERVOIR European Commission-funded project, became the first open-source software for deploying private and hybrid clouds, and for the federation of clouds. In the same year, efforts were focused on providing quality-of-service guarantees (as required by real-time interactive applications) to cloud-based infrastructures, in the framework of the IRMOS European Commission-funded project, resulting in a real-time cloud environment. By mid-2008, Gartner saw an opportunity for cloud computing "to shape the relationship among consumers of IT services, those who use IT services and those who sell them" and observed that "organizations are switching from company-owned hardware and software assets to per-use service-based models" so that the "projected shift to computing ... will result in dramatic growth in IT products in some areas and significant reductions in other areas."

On March 1, 2011, IBM announced the IBM SmartCloud framework to support Smarter Planet. Among the various components of the Smarter Computing foundation, cloud computing is a critical piece.


Growth and popularity

The development of the Internet from being document-centric via semantic data towards more and more services was described as the "Dynamic Web". This contribution focused in particular on the need for better metadata able to describe not only implementation details but also conceptual details of model-based applications.

The present availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, has led to growth in cloud computing. Financially, cloud vendors are experiencing growth rates of 90% per annum.

Origin of the term

The origin of the term cloud computing is unclear. The expression "cloud" is commonly used in science to describe a large agglomeration of objects that visually appears as a cloud from a distance, and it describes any set of things whose details are not inspected further in a given context. For example:

Meteorology: a weather cloud is an agglomeration;

Mathematics: a large number of points in a coordinate system is seen as a point cloud;

Astronomy: stars that appear crowded together in the sky are known as nebulae (Latin for mist or cloud), e.g. the Milky Way;

Physics: the indeterminate positions of electrons around an atomic nucleus appear like a cloud to a distant observer.
By analogy to this usage, the word cloud was adopted as a metaphor for the Internet, and a standardized cloud-like shape was used to denote a network on telephony schematics and later to depict the Internet in computer network diagrams. The cloud symbol was used to represent the Internet as early as 1994, with servers shown connected to, but external to, the cloud symbol.

References to cloud computing in its modern sense can be found as early as 1996, with the earliest known mention appearing in a Compaq internal document.

Urban legends claim that usage of the expression is directly derived from the practice of using drawings of stylized clouds to denote networks in diagrams of computing and communications systems or that it derived from a marketing term.

The term became popular after Amazon.com introduced the Elastic Compute Cloud in 2006.


Similar systems and concepts

Cloud computing is the result of the evolution and adoption of existing technologies and paradigms. The goal of cloud computing is to allow users to benefit from all of these technologies without the need for deep knowledge about or expertise with each one of them. The cloud aims to cut costs and to help users focus on their core business instead of being impeded by IT obstacles.

The main enabling technology for cloud computing is virtualization. Virtualization abstracts the physical infrastructure, which is the most rigid component, and makes it available as a soft component that is easy to use and manage. By doing so, virtualization provides the agility required to speed up IT operations, and reduces cost by increasing infrastructure utilization. On the other hand, autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process and reduces the possibility of human errors.

Users face difficult business problems every day. Cloud computing adopts concepts from Service-oriented Architecture (SOA) that can help the user break these problems into services that can be integrated to provide a solution. Cloud computing provides all of its resources as services, and makes use of the well-established standards and best practices gained in the domain of SOA to allow global and easy access to cloud services in a standardized way.

Cloud computing also leverages concepts from utility computing in order to provide metrics for the services used. Such metrics are at the core of the public cloud pay-per-use models. In addition, measured services are an essential part of the feedback loop in autonomic computing, allowing services to scale on-demand and to perform automatic failure recovery.

Cloud computing is a kind of grid computing; it has evolved by addressing the QoS (quality of service) and reliability problems. Cloud computing provides the tools and technologies to build data- and compute-intensive parallel applications at much more affordable prices than traditional parallel computing techniques.

Cloud computing shares characteristics with:

Client-server model - Client-server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).

Grid computing - "A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks."

Mainframe computer - Powerful computers used mainly by large organizations for critical applications, typically bulk data processing such as census data, industry and consumer statistics, police and secret intelligence services, enterprise resource planning, and financial transaction processing.

Utility computing - The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."

Peer-to-peer - A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client-server model).

Cloud gaming - Also known as on-demand gaming, this is a way of delivering games to computers. Gaming data is stored in the provider's server, so that gaming is independent of the client computers used to play the game.


Characteristics

Cloud computing exhibits the following key characteristics:

Agility improves with users' ability to re-provision technological infrastructure resources.

Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs.
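As a minimal sketch of what REST-based access looks like in practice, the snippet below manages a virtual machine through plain HTTP verbs using Python's requests library. The endpoint, resource path and token are invented placeholders, not any real provider's API.

    import requests

    BASE_URL = "https://cloud.example.com/api/v1"        # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <api-token>"}    # placeholder credential

    # GET retrieves the current state of a resource.
    vm = requests.get(f"{BASE_URL}/servers/vm-42", headers=HEADERS).json()
    print(vm["status"])

    # POST asks the service to perform an action on the resource.
    requests.post(f"{BASE_URL}/servers/vm-42/actions",
                  json={"action": "reboot"}, headers=HEADERS)

    # DELETE releases the resource when it is no longer needed.
    requests.delete(f"{BASE_URL}/servers/vm-42", headers=HEADERS)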

Cost: cloud providers claim that computing costs are reduced. A public-cloud delivery model converts capital expenditure to operational expenditure. This purportedly lowers barriers to entry, as infrastructure is typically provided by a third party and need not be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained, with usage-based billing options, and fewer in-house IT skills are required for implementation. The e-FISCAL project's state-of-the-art repository contains several articles looking into cost aspects in more detail, most of them concluding that cost savings depend on the type of activities supported and the type of infrastructure available in-house.

Device and location independence enable users to access systems using a web browser regardless of their location or what device they use (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.

Virtualization technology allows sharing of servers and storage devices and increased utilization. Applications can be easily migrated from one physical server to another.

    • Multitenancy enables sharing of resources and costs across a large pool of users, thus allowing for:
    • centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.);
    • peak-load capacity increases (users need not engineer for highest possible load levels);
    • utilisation and efficiency improvements for systems that are often only 10-20% utilised.

    Reliability improves with the use of multiple redundant sites, which makes well-designed cloud computing suitable for business continuity and disaster recovery.

    Scalability and elasticity via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real-time, without users having to engineer for peak loads.
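    The control loop behind such elasticity can be pictured as follows. This is a simplified sketch with invented thresholds, not any provider's actual autoscaler.

        # Simplified autoscaling loop: scale out under load, scale in when idle.
        # Thresholds and instance limits are illustrative assumptions.
        def autoscale(instances, avg_cpu_percent, min_instances=2, max_instances=20):
            """Return the new instance count for the observed average CPU load."""
            if avg_cpu_percent > 75 and instances < max_instances:
                return instances + 1      # provision one more VM ("scale out")
            if avg_cpu_percent < 25 and instances > min_instances:
                return instances - 1      # release an idle VM ("scale in")
            return instances

        count = 2
        for load in [80, 85, 90, 60, 20, 15]:   # simulated CPU samples
            count = autoscale(count, load)
            print(f"load={load}% -> {count} instances")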

    Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.

    Security can improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than in other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford to tackle. However, the complexity of security is greatly increased when data is distributed over a wider area or over a greater number of devices, as well as in multi-tenant systems shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users' desire to retain control over the infrastructure and avoid losing control of information security.

    Maintenance of cloud computing applications is easier, because they do not need to be installed on each user's computer and can be accessed from different places.

    The National Institute of Standards and Technology's definition of cloud computing identifies "five essential characteristics":

    On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.

    Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).

    Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. ...

    Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.

    Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.


    National Institute of Standards and Technology
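    To make the "measured service" characteristic concrete, the toy meter below records every resource draw and bills it at a per-unit rate; the resource names and rates are invented for illustration.

        # Toy usage meter: every resource draw is recorded, then billed per unit.
        RATES = {"cpu_hours": 0.05, "storage_gb_hours": 0.0001, "bandwidth_gb": 0.09}
        usage = {resource: 0.0 for resource in RATES}

        def record(resource, amount):
            usage[resource] += amount     # metering visible to provider and consumer

        record("cpu_hours", 120)
        record("storage_gb_hours", 5000)
        record("bandwidth_gb", 42)

        bill = sum(usage[r] * RATES[r] for r in usage)
        print(f"usage: {usage}")
        print(f"bill:  ${bill:.2f}")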

    On-demand self-service

    On-demand self-service allows users to obtain, configure and deploy cloud services themselves using cloud service catalogues, without requiring the assistance of IT. This feature is listed by the National Institute of Standards and Technology (NIST) as a characteristic of cloud computing.

    The self-service requirement of cloud computing prompts infrastructure vendors to create cloud computing templates, which are obtained from cloud service catalogues. Manufacturers of such templates or blueprints include BMC Software, with Service Blueprints as part of its cloud management platform; Hewlett-Packard (HP), which calls its templates HP Cloud Maps; RightScale; and Red Hat, which names its templates CloudForms.

    The templates contain predefined configurations used by consumers to set up cloud services. The templates or blueprints provide the technical information necessary to build ready-to-use clouds. Each template includes specific configuration details for different cloud infrastructures, with information about servers for specific tasks such as hosting applications, databases, websites and so on. The templates also include predefined web services, operating systems, databases, security configurations and load balancing.

    Cloud computing consumers use cloud templates to move applications between clouds through a self-service portal. The predefined blueprints define all that an application requires to run in different environments. For example, a template could define how the same application could be deployed in cloud platforms based on Amazon Web Services, VMware or Red Hat. The user organization benefits from cloud templates because the technical aspects of cloud configurations reside in the templates, letting users deploy cloud services with the push of a button. Developers can use cloud templates to create a catalog of cloud services.
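    In spirit, such a template is a declarative description that a portal translates into provider-specific deployments. The sketch below invents a blueprint format purely for illustration; it is not any vendor's actual schema.

        # Hypothetical cloud template: one declarative blueprint, many target clouds.
        template = {
            "name": "three-tier-web-app",
            "web": {"instances": 2, "os": "linux", "service": "nginx"},
            "app": {"instances": 2, "runtime": "java"},
            "database": {"engine": "postgresql", "storage_gb": 100},
            "security": {"open_ports": [80, 443]},
            "load_balancer": True,
        }

        def deploy(blueprint, target):
            # Placeholder: a real portal would map the blueprint onto the
            # target cloud's own API calls (e.g. AWS, VMware or Red Hat).
            print(f"deploying {blueprint['name']} to {target} ...")

        deploy(template, "amazon-web-services")
        deploy(template, "vmware")       # same blueprint, different cloud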


    Cloud management

    Legacy management infrastructures, which are based on the concept of dedicated system relationships and architecture constructs, are not well suited to cloud environments where instances are continually launched and decommissioned. Instead, the dynamic nature of cloud computing requires monitoring and management tools that are adaptable, extensible and customizable.

    Cloud management challenges

    Cloud computing presents a number of management challenges. Companies using public clouds do not have ownership of the equipment hosting the cloud environment, and because the environment is not contained within their own networks, public cloud customers don’t have full visibility or control. Users of public cloud services must also integrate with an architecture defined by the cloud provider, using its specific parameters for working with cloud components. Integration includes tying into the cloud APIs for configuring IP addresses, subnets, firewalls and data service functions for storage. Because control of these functions is based on the cloud provider’s infrastructure and services, public cloud users must integrate with the cloud infrastructure management.
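    For example, opening a firewall port means calling the provider-defined API rather than configuring one's own equipment. The sketch below uses the boto3 library against AWS EC2; the security group ID and CIDR range are placeholders.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # The provider's API, not the customer's own network gear, is the
        # control point for firewall rules in a public cloud.
        ec2.authorize_security_group_ingress(
            GroupId="sg-0123456789abcdef0",                    # placeholder ID
            IpPermissions=[{
                "IpProtocol": "tcp",
                "FromPort": 443,
                "ToPort": 443,
                "IpRanges": [{"CidrIp": "203.0.113.0/24"}],    # placeholder CIDR
            }],
        )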

    Capacity management is a challenge for both public and private cloud environments because end users have the ability to deploy applications using self-service portals. Applications of all sizes may appear in the environment, consume an unpredictable amount of resources, then disappear at any time.

    Chargeback, or pricing resource use on a granular basis, is a challenge for both public and private cloud environments. Chargeback is a challenge for public cloud service providers because they must price their services competitively while still creating profit.[68] Users of public cloud services may find chargeback challenging because it is difficult for IT groups to assess actual resource costs on a granular basis, due to overlapping resources within an organization that may be paid for by an individual business unit, such as electrical power. For private cloud operators, chargeback is fairly straightforward, but the challenge lies in estimating resource allocation as closely as possible to actual resource usage to achieve the greatest operational efficiency. Exceeding budgets can be a risk.
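    A minimal chargeback sketch, with invented rates and figures: each business unit is billed for its metered usage, and a shared cost such as electrical power is apportioned by share of use.

        # Hypothetical chargeback: bill each unit for metered CPU hours and
        # apportion a centrally paid power bill by each unit's share of use.
        rate_per_cpu_hour = 0.04
        shared_power_cost = 900.0      # monthly, paid centrally

        cpu_hours = {"marketing": 1200, "engineering": 6800, "finance": 2000}
        total = sum(cpu_hours.values())

        for unit, hours in cpu_hours.items():
            direct = hours * rate_per_cpu_hour
            power_share = shared_power_cost * hours / total
            print(f"{unit:12s} direct=${direct:8.2f}  power=${power_share:7.2f}")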

    Hybrid cloud environments, which combine public and private cloud services, sometimes with traditional infrastructure elements, present their own set of management challenges. These include security concerns if sensitive data lands on public cloud servers, budget concerns around overuse of storage or bandwidth and proliferation of mismanaged images. Managing the information flow in a hybrid cloud environment is also a significant challenge. On-premises clouds must share information with applications hosted off-premises by public cloud providers, and this information may change constantly. Hybrid cloud environments also typically include a complex mix of policies, permissions and limits that must be managed consistently across both public and private clouds.


    Cloud clients

    Users access cloud computing using networked client devices, such as desktop computers, laptops, tablets and smartphones. Some of these devices - cloud clients - rely on cloud computing for all or a majority of their applications so as to be essentially useless without it. Examples are thin clients and the browser-based Chromebook. Many cloud applications do not require specific software on the client and instead use a web browser to interact with the cloud application. With Ajax and HTML5 these Web user interfaces can achieve a similar, or even better, look and feel to native applications. Some cloud applications, however, support specific client software dedicated to these applications (e.g., virtual desktop clients and most email clients). Some legacy applications (line of business applications that until now have been prevalent in thin client computing) are delivered via a screen-sharing technology.

    Deployment models

    Private cloud

    Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party and hosted internally or externally. Undertaking a private cloud project requires a significant level of engagement to virtualize the business environment, and requires the organization to re-evaluate decisions about existing resources. When done right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities.

    Private clouds have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially lacking "the economic model that makes cloud computing such an intriguing concept".


    Public cloud

    A cloud is called a 'public cloud' when the services are rendered over a network that is open for public use. Technically, there is no difference between public and private cloud architecture; however, security considerations may be substantially different for services (applications, storage, and other resources) that are made available by a service provider for a public audience and when communication is effected over a non-trusted network. Generally, public cloud service providers like Amazon AWS, Microsoft and Google own and operate the infrastructure and offer access only via the Internet (direct connectivity is not offered).


    Community cloud

    Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third-party and hosted internally or externally. The costs are spread over fewer users than a public cloud (but more than a private cloud), so only some of the cost savings potential of cloud computing are realized.

    Hybrid cloud

    Hybrid cloud is a composition of two or more clouds (private, community or public) that remain unique entities but are bound together, offering the benefits of multiple deployment models. Such composition expands deployment options for cloud services, allowing IT organizations to use public cloud computing resources to meet temporary needs. This capability enables hybrid clouds to employ cloud bursting for scaling across clouds.

    Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization only pays for extra compute resources when they are needed.

    Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and to use cloud resources from public or private clouds during spikes in processing demand.
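    In outline, cloud bursting is a conditional on capacity: serve from the private cloud until it is full, then overflow the excess to a public cloud. The threshold and numbers below are invented for illustration.

        PRIVATE_CAPACITY = 100      # requests/sec the in-house cloud can absorb

        def route(request_rate):
            """Split load between the private cloud and a public-cloud burst."""
            if request_rate <= PRIVATE_CAPACITY:
                return {"private": request_rate, "public": 0}
            # Pay for public capacity only for the excess, only while it lasts.
            return {"private": PRIVATE_CAPACITY,
                    "public": request_rate - PRIVATE_CAPACITY}

        for rate in [60, 95, 180, 70]:      # simulated demand over time
            print(rate, "->", route(rate))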

    By utilizing "hybrid cloud" architecture, companies and individuals are able to obtain degrees of fault tolerance combined with locally immediate usability without dependency on internet connectivity. Hybrid cloud architecture requires both on-premises resources and off-site (remote) server-based cloud infrastructure.

    Hybrid clouds lack the full flexibility, security and certainty of purely in-house applications, but they combine some of the flexibility of in-house applications with the fault tolerance and scalability of cloud-based services.

    Personal cloud

    Personal cloud is an application of cloud computing for individuals similar to a Personal Computer. While a vendor organization may help manage or maintain a personal cloud, it never takes possession of the data on the personal cloud, which remains under control of the individual.

    Distributed cloud

    Cloud computing can also be provided by a distributed set of machines that are running at different locations, while still connected to a single network or hub service. Older examples of this include distributed computing platforms such as BOINC and Folding@home, as well as newer crowd-sourced cloud providers such as Slicify.

    Cloud management strategies

    Public clouds are managed by public cloud service providers; this management covers the public cloud environment's servers, storage, networking and data center operations. Users of public cloud services can generally select from three basic categories:

    User self-provisioning: Customers purchase cloud services directly from the provider, typically through a web form or console interface. The customer pays on a per-transaction basis.

    Advance provisioning: Customers contract in advance for a predetermined amount of resources, which are prepared before the service begins. The customer pays a flat fee or a monthly fee.

    Dynamic provisioning: The provider allocates resources when the customer needs them, then decommissions them when they are no longer needed. The customer is charged on a pay-per-use basis.

    Managing a private cloud requires software tools to help create a virtualized pool of compute resources, provide a self-service portal for end users and handle security, resource allocation, tracking and billing. Management tools for private clouds tend to be service driven, as opposed to resource driven, because cloud environments are typically highly virtualized and organized in terms of portable workloads.

    In hybrid cloud environments, compute, network and storage resources must be managed across multiple domains, so a good management strategy should start by defining what needs to be managed, and where and how to do it. Policies to help govern these domains should include configuration and installation of images, access control, and budgeting and reporting.


    Aspects of cloud management systems

    A cloud management system is a combination of software and technologies designed to manage cloud environments. The industry has responded to the management challenges of cloud computing with cloud management systems. HP, Novell, Eucalyptus, OpenNebula and Citrix are among the vendors that offer management systems specifically for managing cloud environments.

    At a minimum, a cloud management solution should be able to manage a pool of heterogeneous compute resources, provide access to end users, monitor security, manage resource allocation and manage tracking.

    Enterprises with large-scale cloud implementations may require more robust cloud management tools with specific characteristics, such as the ability to manage multiple platforms from a single point of reference and intelligent analytics to automate processes like application lifecycle management. High-end cloud management tools should also be able to handle system failures automatically, with capabilities such as self-monitoring, an explicit notification mechanism, and failover and self-healing capabilities.


    Architecture

    (Figure: Cloud computing sample architecture)

    Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue. Elastic provision implies intelligence in the use of tight or loose coupling as applied to mechanisms such as these and others.
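    The snippet below sketches that loose coupling with Python's standard queue module: the producer and consumer share only the queue, never direct references to each other. A real cloud would use a network message broker rather than an in-process queue, but the decoupling principle is the same.

        import queue
        import threading

        tasks = queue.Queue()      # the only contract between the two components

        def producer():
            for i in range(5):
                tasks.put(f"job-{i}")   # publish work without knowing the consumer
            tasks.put(None)             # sentinel: no more work

        def consumer():
            while (job := tasks.get()) is not None:
                print("processing", job)    # consume without knowing the producer

        threading.Thread(target=producer).start()
        consumer()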

    The Intercloud

    The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet "network of networks" on which it is based.


    Cloud engineering

    Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to the high-level concerns of commercialisation, standardisation, and governance in conceiving, developing, operating and maintaining cloud computing systems. It is a multidisciplinary method encompassing contributions from diverse areas such as systems, software, web, performance, information, security, platform, risk, and quality engineering.


    Issues
    Threats and opportunities of the cloud

    Critics, including GNU Project initiator Richard Stallman and Oracle founder Larry Ellison, warned that the whole concept is rife with privacy and ownership concerns and may constitute merely a fad.

    However, cloud computing continues to gain steam, with 56% of major European technology decision-makers estimating that the cloud is a priority in 2013 and 2014, and cloud budgets possibly reaching 30% of the overall IT budget.

    According to the TechInsights Report 2013: Cloud Succeeds, which is based on a survey, cloud implementations generally meet or exceed expectations across major service models, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).

    Several deterrents to the widespread adoption of cloud computing remain. Among them are reliability, availability of services and data, security, complexity, costs, regulations and legal issues, performance, migration, reversion, the lack of standards, limited customization and issues of privacy. The cloud offers many strong points: infrastructure flexibility, faster deployment of applications and data, cost control, adaptation of cloud resources to real needs, improved productivity, etc. The early-2010s cloud market is dominated by software and services in SaaS mode and IaaS (infrastructure), especially the private cloud. PaaS and the public cloud are further back.

    Privacy

    Privacy advocates have criticized the cloud model for giving hosting companies greater ease to control, and thus to monitor at will, communication between the host company and the end user, and to access user data (with or without permission). Instances such as the secret NSA program, working with AT&T and Verizon, which recorded over 10 million telephone calls between American citizens, cause uncertainty among privacy advocates about the greater powers such arrangements give telecommunication companies to monitor user activity. A cloud service provider (CSP) can complicate data privacy because of the extent of virtualization (virtual machines) and cloud storage used to implement cloud services. Because of CSP operations, customer or tenant data may not remain on the same system, in the same data center, or even within the same provider's cloud; this can lead to legal concerns over jurisdiction. While there have been efforts (such as US-EU Safe Harbor) to "harmonise" the legal environment, providers such as Amazon still cater to major markets (typically the United States and the European Union) by deploying local infrastructure and allowing customers to select "availability zones". Cloud computing poses privacy concerns because the service provider can access the data that is on the cloud at any time; it could accidentally or deliberately alter or even delete information.

    Compliance

    To comply with regulations including FISMA, HIPAA, and SOX in the United States, the Data Protection Directive in the EU and the credit card industry's PCI DSS, users may have to adopt community or hybrid deployment modes that are typically more expensive and may offer restricted benefits. This is how Google is able to "manage and meet additional government policy requirements beyond FISMA" and Rackspace Cloud or QubeSpace are able to claim PCI compliance.

    Many providers also obtain a SAS 70 Type II audit, but this has been criticised on the grounds that the hand-picked set of goals and standards determined by the auditor and the auditee are often not disclosed and can vary widely. Providers typically make this information available on request, under non-disclosure agreement.

    Customers in the EU contracting with cloud providers outside the EU/EEA have to adhere to the EU regulations on the export of personal data. U.S. federal agencies have been directed by the Office of Management and Budget to use a process called FedRAMP (Federal Risk and Authorization Management Program) to assess and authorize cloud products and services. Federal CIO Steven VanRoekel issued a memorandum to federal agency Chief Information Officers on December 8, 2011 defining how federal agencies should use FedRAMP. FedRAMP consists of a subset of NIST Special Publication 800-53 security controls specifically selected to provide protection in cloud environments. A subset has been defined for the FIPS 199 low categorization and the FIPS 199 moderate categorization. The FedRAMP program has also established a Joint Authorization Board (JAB) consisting of Chief Information Officers from DoD, DHS and GSA. The JAB is responsible for establishing accreditation standards for third-party organizations that perform the assessments of cloud solutions. The JAB also reviews authorization packages and may grant provisional authorization (to operate). The federal agency consuming the service still retains final authority to operate.

    A multitude of laws and regulations have forced specific compliance requirements onto many companies that collect, generate or store data. These policies may dictate a wide array of data storage policies, such as how long information must be retained, the process used for deleting data, and even certain recovery plans. Below are some examples of compliance laws or regulations.

    In the United States, the Health Insurance Portability and Accountability Act (HIPAA) requires a contingency plan that includes data backups, data recovery, and data access during emergencies.

    The privacy laws of Switzerland demand that private data, including emails, be physically stored in Switzerland. In the United Kingdom, the Civil Contingencies Act of 2004 sets forth guidance for a business contingency plan that includes policies for data storage.

    In a virtualized cloud computing environment, customers may never know exactly where their data is stored. In fact, data may be stored across multiple data centers in an effort to improve reliability, increase performance, and provide redundancies. This geographic dispersion may make it more difficult to ascertain legal jurisdiction if disputes arise.


    Legal

    As with other changes in the landscape of computing, certain legal issues arise with cloud computing, including trademark infringement, security concerns and sharing of proprietary data resources.

    The Electronic Frontier Foundation has criticized the United States government during the Megaupload seizure process for considering that people lose property rights by storing data on a cloud computing service.

    One important but not often mentioned problem with cloud computing is the problem of who is in "possession" of the data. If a cloud company is the possessor of the data, the possessor has certain legal rights. If the cloud company is the "custodian" of the data, then a different set of rights would apply. The next problem in the legalities of cloud computing is the problem of legal ownership of the data. Many Terms of Service agreements are silent on the question of ownership.

    These legal issues are not confined to the time period in which the cloud based application is actively being used. There must also be consideration for what happens when the provider-customer relationship ends. In most cases, this event will be addressed before an application is deployed to the cloud. However, in the case of provider insolvencies or bankruptcy the state of the data may become blurred.


    Vendor lock-in

    Because cloud computing is still relatively new, standards are still being developed. Many cloud platforms and services are proprietary, meaning that they are built on the specific standards, tools and protocols developed by a particular vendor for its particular cloud offering. This can make migrating off a proprietary cloud platform prohibitively complicated and expensive.

    Three types of vendor lock-in can occur with cloud computing:

    Platform lock-in: cloud services tend to be built on one of several possible virtualization platforms, for example VMware or Xen. Migrating from a cloud provider using one platform to a cloud provider using a different platform could be very complicated.

    Data lock-in: since the cloud is still new, standards of ownership, i.e. who actually owns the data once it lives on a cloud platform, are not yet developed, which could make it complicated if cloud computing users ever decide to move data off of a cloud vendor's platform.

    Tools lock-in: if tools built to manage a cloud environment are not compatible with different kinds of both virtual and physical infrastructure, those tools will only be able to manage data or apps that live in the vendor's particular cloud environment.

    Heterogeneous cloud computing is described as a type of cloud environment that prevents vendor lock-in and aligns with enterprise data centers that are operating hybrid cloud models. The absence of vendor lock-in lets cloud administrators select their choice of hypervisor for specific tasks, or deploy virtualized infrastructures to other enterprises without needing to consider the flavor of hypervisor in the other enterprise.

    A heterogeneous cloud is considered one that includes on-premise private clouds, public clouds and software-as-a-service clouds. Heterogeneous clouds can work with environments that are not virtualized, such as traditional data centers. Heterogeneous clouds also allow for the use of piece parts, such as hypervisors, servers, and storage, from multiple vendors.

    Cloud piece parts, such as cloud storage systems, offer APIs, but these are often incompatible with each other. The result is complicated migration between back ends and difficulty integrating data spread across various locations. This has been described as a problem of vendor lock-in. One proposed solution is for clouds to adopt common standards.
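    One common mitigation is to wrap each provider's storage API behind a single interface so that application code never calls a vendor API directly. The sketch below invents two stand-in "vendor" adapters to show the idea.

        # Adapter sketch: one storage interface, swappable vendor back ends.
        class S3StyleAdapter:
            """Stand-in for one vendor's (incompatible) storage API."""
            def save(self, name, data):
                print(f"S3-style PUT {name} ({len(data)} bytes)")

        class SwiftStyleAdapter:
            """Stand-in for another vendor's storage API."""
            def save(self, name, data):
                print(f"Swift-style upload {name} ({len(data)} bytes)")

        def backup(storage, name, data):
            storage.save(name, data)    # application code sees one interface

        backup(S3StyleAdapter(), "report.csv", b"...")
        backup(SwiftStyleAdapter(), "report.csv", b"...")   # vendor swapped, code unchanged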

    Heterogeneous cloud computing differs from homogeneous clouds, which have been described as those using consistent building blocks supplied by a single vendor. Intel General Manager of high-density computing Jason Waxman is quoted as saying that a homogeneous system of 15,000 servers would cost $6 million more in capital expenditure and use 1 megawatt of power.


    Open source

    Open-source software has provided the foundation for many cloud computing implementations, prominent examples being the Hadoop framework and VMware's Cloud Foundry. In November 2007, the Free Software Foundation released the Affero General Public License, a version of GPLv3 intended to close a perceived legal loophole associated with free software designed to run over a network.

    Open standards

    Most cloud providers expose APIs that are typically well-documented (often under a Creative Commons license) but also unique to their implementation and thus not interoperable. Some vendors have adopted others' APIs, and there are a number of open standards under development, with a view to delivering interoperability and portability. As of November 2012, the open standard with the broadest industry support is probably OpenStack, founded in 2010 by NASA and Rackspace and now governed by the OpenStack Foundation. OpenStack supporters include AMD, Intel, Canonical, SUSE Linux, Red Hat, Cisco, Dell, HP, IBM, Yahoo and now VMware.

    Security

    As cloud computing is achieving increased popularity, concerns are being voiced about the security issues introduced through adoption of this new model. The effectiveness and efficiency of traditional protection mechanisms are being reconsidered as the characteristics of this innovative deployment model can differ widely from those of traditional architectures. An alternative perspective on the topic of cloud security is that this is but another, although quite broad, case of "applied security" and that similar security principles that apply in shared multi-user mainframe security models apply with cloud security.

    The relative security of cloud computing services is a contentious issue that may be delaying its adoption. Physical control of private cloud equipment is more secure than having the equipment off-site and under someone else's control. Physical control and the ability to visually inspect data links and access ports are required in order to ensure data links are not compromised. Issues barring the adoption of cloud computing are due in large part to the private and public sectors' unease surrounding the external management of security-based services. It is the very nature of cloud computing-based services, private or public, that they promote external management of provided services. This delivers great incentive to cloud computing service providers to prioritize building and maintaining strong management of secure services. Security issues have been categorised into sensitive data access, data segregation, privacy, bug exploitation, recovery, accountability, malicious insiders, management console security, account control, and multi-tenancy issues.[96] Solutions to various cloud security issues vary, from cryptography, particularly public key infrastructure (PKI), to the use of multiple cloud providers, standardisation of APIs, and improving virtual-machine and legal support.
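    As one example from the cryptography end of that list, data can be encrypted on the client before it ever reaches the provider, so the provider stores only ciphertext. The sketch uses the widely available Python cryptography package, with key handling simplified for illustration.

        from cryptography.fernet import Fernet

        # The key never leaves the customer; the cloud sees only ciphertext.
        key = Fernet.generate_key()      # in practice, kept in a local KMS or HSM
        f = Fernet(key)

        ciphertext = f.encrypt(b"sensitive customer record")
        # ... upload ciphertext to the cloud provider ...
        plaintext = f.decrypt(ciphertext)    # only possible with the local key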

    Cloud computing offers many benefits, but it is vulnerable to threats. As the use of cloud computing increases, it is likely that more criminals will find new ways to exploit system vulnerabilities. Many underlying challenges and risks in cloud computing increase the threat of data compromise. To mitigate the threat, cloud computing stakeholders should invest heavily in risk assessment to ensure that the system encrypts data for protection, establishes a trusted foundation to secure the platform and infrastructure, and builds higher assurance into auditing to strengthen compliance. Security concerns must be addressed to maintain trust in cloud computing technology.


    Sustainability

    Although cloud computing is often assumed to be a form of green computing, no published study substantiates this assumption. In areas where the climate favors natural cooling and renewable electricity is readily available, the environmental effects of cloud computing will be more moderate. (The same holds true for "traditional" data centers.) Thus countries with favorable conditions, such as Finland, Sweden and Switzerland, are trying to attract cloud computing data centers. Energy efficiency in cloud computing can result from energy-aware scheduling and server consolidation. However, in the case of clouds distributed over data centers with different sources of energy, including renewable sources, a small compromise on energy consumption reduction could result in a large reduction in carbon footprint.

    Abuse

    As with privately purchased hardware, customers can purchase the services of cloud computing for nefarious purposes. This includes password cracking and launching attacks using the purchased services. In 2009, a banking trojan illegally used the popular Amazon service as a command and control channel that issued software updates and malicious instructions to PCs that were infected by the malware.


    IT governance

    The introduction of cloud computing requires an appropriate IT governance model to ensure a secure computing environment and to comply with all relevant organizational information technology policies. As such, organizations need a set of capabilities that are essential for effectively implementing and managing cloud services, including demand management, relationship management, data security management, application lifecycle management, and risk and compliance management. A danger lies in the explosion of companies joining the growth of cloud computing by becoming providers. However, many of the infrastructural and logistical concerns regarding the operation of cloud computing businesses are still unknown. This over-saturation may have ramifications for the industry as a whole.

    Consumer end storage

    The increased use of cloud computing could lead to a reduction in demand for high-storage-capacity consumer devices, as cheaper low-storage devices that stream all content via the cloud become more popular. In a Wired article, Jake Gardner explains that while unregulated usage is beneficial for IT and tech moguls like Amazon, the anonymous nature of the cost of consumption of cloud usage makes it difficult for businesses to evaluate and incorporate it into their business plans. The popularity of the cloud and cloud computing in general is increasing so quickly among all sorts of companies that in May 2013 Amazon, through its subsidiary Amazon Web Services, started a certification program for cloud computing professionals.

    Ambiguity of terminology

    Outside of the information technology and software industry, the term "cloud" can be found referencing a wide range of services, some of which fall under the category of cloud computing, while others do not. The cloud is often used to refer to a product or service that is discovered, accessed and paid for over the Internet but is not necessarily a computing resource. Examples of services that are sometimes referred to as "the cloud" include, but are not limited to, crowdsourcing, cloud printing, crowdfunding and cloud manufacturing.

    Performance interference and noisy neighbors

    Due to its multi-tenant nature and resource sharing, cloud computing must also deal with the "noisy neighbor" effect. In essence, this effect means that in a shared infrastructure, the activity of a virtual machine on a neighboring core on the same physical host may lead to performance degradation of the other VMs on that host, due to issues such as cache contamination. Because neighboring VMs may be activated or deactivated at arbitrary times, the result is increased variation in the actual performance of cloud resources. The effect also seems to depend on the nature of the applications running inside the VMs, as well as other factors such as scheduling parameters, and careful selection may lead to an optimized assignment that minimizes the phenomenon. This has also led to difficulties in comparing various cloud providers on cost and performance using traditional benchmarks for service and application performance, as the time period and location in which the benchmark is performed can result in widely varied results.
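    A simple way to observe this variance is to run the same CPU-bound micro-benchmark repeatedly on a cloud VM and examine the spread, as in the sketch below. The absolute numbers are not comparable across providers, which is exactly the benchmarking difficulty described above.

        import statistics
        import time

        def workload():
            # Fixed CPU-bound task; on an idle dedicated host its runtime is stable.
            return sum(i * i for i in range(1_000_000))

        samples = []
        for _ in range(20):
            start = time.perf_counter()
            workload()
            samples.append(time.perf_counter() - start)

        mean = statistics.mean(samples)
        stdev = statistics.stdev(samples)
        # A high relative spread on a shared VM often points to contention
        # from co-located tenants ("noisy neighbors").
        print(f"mean={mean:.4f}s stdev={stdev:.4f}s spread={stdev / mean:.1%}")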

    Monopolies and privatization of cyberspace

    Philosopher Slavoj Žižek points out that, although cloud computing enhances content accessibility, this access is "increasingly grounded in the virtually monopolistic privatization of the cloud which provides this access". According to him, this access, necessarily mediated through a handful of companies, ensures a progressive privatization of global cyberspace. Žižek criticises the argument put forward by supporters of cloud computing that this phenomenon is part of the "natural evolution" of the Internet, arguing that the quasi-monopolies "set prices at will but also filter the software they provide to give its 'universality' a particular twist depending on commercial and ideological interests."

    Mobile cloud computing

    Mobile Cloud Computing (MCC) is a state-of-the-art mobile distributed computing paradigm comprising three heterogeneous domains (mobile computing, cloud computing, and wireless networks) and aiming to enhance the computational capabilities of resource-constrained mobile devices towards a rich user experience. MCC provides business opportunities for mobile network operators as well as cloud providers. More comprehensively, MCC can be defined as "a rich mobile computing technology that leverages unified elastic resources of varied clouds and network technologies toward unrestricted functionality, storage, and mobility to serve a multitude of mobile devices anywhere, anytime through the channel of Ethernet or Internet regardless of heterogeneous environments and platforms based on the pay-as-you-use principle." MCC realizes its vision by leveraging computational augmentation approaches through which resource-constrained mobile devices can utilize the computational resources of varied cloud-based resources. In MCC, there are four types of cloud-based resources, namely distant immobile clouds, proximate immobile computing entities, proximate mobile computing entities, and hybrid (a combination of the other three models). Giant clouds such as Amazon EC2 are in the distant immobile group, whereas cloudlets or surrogates are members of the proximate immobile computing entities. Smartphones, tablets, handheld devices, and wearable computing devices are part of the third group of cloud-based resources, proximate mobile computing entities.
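    The core augmentation decision is often modeled as a simple trade-off: offload a task only when shipping its input and computing remotely beats computing locally. The sketch below encodes that heuristic with illustrative, not measured, parameter values.

        # Classic offloading heuristic: offload when remote execution plus
        # transfer time is lower than local execution time. Values are invented.
        def should_offload(cycles, input_bytes,
                           local_speed=1e9,     # device CPU cycles/sec
                           cloud_speed=10e9,    # cloud CPU cycles/sec
                           bandwidth=1e6):      # uplink bytes/sec
            local_time = cycles / local_speed
            remote_time = input_bytes / bandwidth + cycles / cloud_speed
            return remote_time < local_time

        print(should_offload(cycles=5e9, input_bytes=2e5))   # heavy compute, small input -> True
        print(should_offload(cycles=1e8, input_bytes=5e7))   # light compute, large input -> False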

    Applications are run on a remote server and then sent to the user. Because of advances in mobile browsers driven by Apple, Google, Microsoft and Research in Motion, nearly every mobile device now has a suitable browser. This means developers have a much wider market and can bypass the restrictions created by mobile operating systems.

    Mobile cloud computing creates new business opportunities for mobile network providers. Several operators, such as Vodafone, Orange and Verizon, have started to offer cloud computing services for companies.

    (Figure: Mobile cloud architecture)
    Applications

    Mobile applications are a rapidly developing segment of the global mobile market. They consist of software that runs on a mobile device and performs certain tasks for the user of the mobile phone. As reported by World Mobile Applications Market, about 7 billion (free and paid) application downloads were made globally in 2009 alone from both native and third-party application stores, generating revenues of $3.9 billion in the same year. The global mobile application market is expected to be worth $24.4 billion in 2015, growing at a CAGR of 64% from 2009 to 2015. Apple is a typical example of the explosion of mobile applications: with more than 4 billion downloads to date, Apple commanded more than 90% of the application market share in 2009. The success of Apple's App Store has not only established the scalability of mobile applications, but has also shown that the best of these offer the potential to generate enormous revenues.

    Convenient Commerce

    The growth in the use of electronic commerce (e-commerce) by the business sector has been tremendous since its inception only a few years ago. E-commerce is the buying and selling of products or services over electronic systems such as the Internet and other computer networks. From governments to multinational companies to one-person start-ups, e-commerce is increasingly viewed as a key business modality of the future. Ease of transaction, widening markets, and decreased overheads are factors that make e-commerce solutions more and more attractive, as is evident from the growth of online sales.

    Augmented Reality

    A new class of mobile applications, augmented reality (AR), has started to draw users' attention. Wearable mobile devices, like the gestural interface SixthSense and Google's head-mounted display Project Glass, aim to blur the boundary between the cyber world and the real world. For example, SixthSense can project augmented live news onto a real-world newspaper; Google Glass can overlay wearers' vision with map directions, calendar reminders, text messages, and so on. Augmented reality is also incorporated into mobile games, where virtual objects are projected into the real world so that users can interact with them. Nevertheless, algorithms in augmented reality are mostly resource- and computation-intensive, posing challenges to resource-poor mobile devices. These applications can integrate the power of the cloud to handle the complex processing of augmented reality tasks. Specifically, data streams of the sensors on a mobile device can be directed to the cloud for processing, and the processed data streams are then redirected back to the device. It should be noted that AR applications demand low latency to provide a lifelike experience. In this sense, apart from exploiting cloud resources, a mobile device can also offload data processing to a nearby cloudlet or ad hoc mobile cloud, as elaborated earlier, to avoid unpredictable multihop network latencies.

    Mobile Learning

    Mobile learning today is becoming more popular, as many people use mobile devices to enhance their learning. Mobile learning (m-learning) is not just electronic learning (e-learning) but e-learning plus mobility. Learning via mobile clearly brings many benefits: it is convenient, since users can learn anywhere, at any time, from a portable device. However, some research points out restrictions of traditional mobile learning, such as expensive mobile devices, high network costs, poor network transmission rates, and limited educational resources. As a result, it is difficult for mobile learning to reach its full potential and become widely popular.

    Mobile Healthcare

    The development of telecommunication technology in the medical field has made diagnosis and treatment easier for many people. It helps patients regularly monitor their health and receive timely treatment. It also increases accessibility to healthcare providers, makes tasks and processes more efficient, and improves the quality of healthcare services. Nevertheless, it faces many challenges (e.g., physical storage issues, security and privacy, medical errors). Cloud computing is therefore introduced as a solution to these issues: it lets users access resources easily and quickly, and it offers services on demand over the network to perform operations that meet changing needs in electronic healthcare applications.

    Mobile Computing

    The analysis of the impact of mobile computing on various services shows how mobile computing has changed each of them. As mobile computing has become more popular over the past decade, it has been under continuous development, with advances in hardware, software, and networking. Mobile computing has various applications in our everyday life, and using this technology has become a fundamental skill. With mobile computing we can check our email, our bills, our bank accounts, and our other private information using just a mobile phone or laptop, anywhere. All of these functions require that every data exchange be kept safe and immune from attack. Mobile computing services have simplified our lives. Every day we get attached to a new device that offers a wealth of functionality and is based on mobile computing, for example the BlackBerry from RIM, the iPhone from Apple, or the netbook.

    MCC Challenges

    In the MCC landscape, the amalgam of mobile computing, cloud computing, and communication networks (to augment smartphones) creates several complex challenges, such as mobile computation offloading, seamless connectivity, long WAN latency, mobility management, context processing, energy constraints, vendor/data lock-in, security and privacy, and elasticity, that hinder MCC success and adoption.

    Open Research Issues

    MCC is an emerging research area with significant research opportunities. Although significant research and development in MCC is available in the literature, efforts are still lacking in the following domains:

    Architectural issues: A reference architecture for heterogeneous MCC environments is a crucial requirement for unleashing the power of mobile computing towards unrestricted ubiquitous computing.

    Energy-efficient transmission: MCC requires frequent transmissions between the cloud platform and mobile devices; given the stochastic nature of wireless networks, the transmission protocol should be carefully designed.

    Context-awareness issues: Context-aware and socially-aware computing are inseparable traits of contemporary handheld computers. To achieve the vision of mobile computing among heterogeneous converged networks and computing devices, designing resource-efficient, environment-aware applications is essential.

    Live VM migration issues: Executing resource-intensive mobile applications via Virtual Machine (VM) migration-based application offloading involves encapsulating the application in a VM instance and migrating it to the cloud, which is a challenging task due to the additional overhead of deploying and managing VMs on mobile devices.

    Mobile communication congestion issues: Mobile data traffic is rising tremendously as users increasingly exploit cloud resources, which puts pressure on mobile network operators and demands future efforts to enable smooth communication between mobile and cloud endpoints.
    Trust, security, and privacy issues: Trust is an essential factor for the success of the burgeoning MCC paradigm.

    Software as a Service :

    Software as a service

    Software as a service (SaaS, pronounced /sæs/ or /sɑːs/), sometimes referred to as "on-demand software" supplied by ISVs or "Application Service Providers" (ASPs), is a software delivery model in which software and associated data are centrally hosted on the cloud. SaaS is typically accessed by users via a thin client, using a web browser. SaaS has become a common delivery model for many business applications, including office and messaging software, DBMS software, management software, CAD software, development software, gamification, virtualization, accounting, collaboration, customer relationship management (CRM), management information systems (MIS), enterprise resource planning (ERP), invoicing, human resource management (HRM), content management (CM) and service desk management. SaaS has been incorporated into the strategy of all leading enterprise software companies. One of the biggest selling points for these companies is the potential to reduce IT support costs by outsourcing hardware and software maintenance and support to the SaaS provider.

    According to a Gartner Group estimate, SaaS sales in 2010 reached $10 billion, and were projected to increase to $12.1bn in 2011, up 20.7% from 2010. Gartner Group estimates that SaaS revenue will be more than double its 2010 numbers by 2015 and reach a projected $21.3bn. Customer relationship management (CRM) continues to be the largest market for SaaS. SaaS revenue within the CRM market was forecast to reach $3.8bn in 2011, up from $3.2bn in 2010.

    The term "software as a service" (SaaS) is considered to be part of the nomenclature of cloud computing, along with infrastructure as a service (IaaS), platform as a service (PaaS), desktop as a service (DaaS), backend as a service (BaaS), and information technology management as a service (ITMaaS).

    History

    Centralized hosting of business applications dates back to the 1960s. Starting in that decade, IBM and other mainframe providers conducted a service bureau business, often referred to as time-sharing or utility computing. Such services included offering computing power and database storage to banks and other large organizations from their worldwide data centers.

    The expansion of the Internet during the 1990s brought about a new class of centralized computing, called Application Service Providers (ASPs). ASPs provided businesses with the service of hosting and managing specialized business applications, with the goal of reducing costs through central administration and through the solution provider's specialization in a particular business application. Two of the world's pioneering and largest ASPs were USI, headquartered in the Washington, D.C. area, and Futurelink Corporation, headquartered in Orange County, California.

    Software as a service essentially extends the idea of the ASP model. The term Software as a Service (SaaS), however, is commonly used in more specific settings:
    • whereas most initial ASPs focused on managing and hosting third-party independent software vendors' software, as of 2012 SaaS vendors typically develop and manage their own software;

    • whereas many initial ASPs offered more traditional client-server applications, which require installation of software on users' personal computers, contemporary SaaS solutions rely predominantly on the Web and only require an internet browser to use;

    • whereas the software architecture used by most initial ASPs mandated maintaining a separate instance of the application for each business, as of 2012 SaaS solutions normally utilize a multi-tenant architecture, in which the application serves multiple businesses and users, and partitions its data accordingly.

    The SaaS acronym allegedly first appeared in an article called "Strategic Backgrounder: Software As A Service", internally published in February 2001 by the Software & Information Industry Association's (SIIA) eBusiness Division.

    DBaaS (Database as a Service) has emerged as a sub-variety of SaaS.

    Distribution

    The Cloud (or SaaS) model has no physical need for indirect distribution, since it is not distributed physically and is deployed almost instantaneously. The first wave of SaaS companies built their own economic model without including partner remuneration in their pricing structure (except when there were certain existing affiliations). It has not been easy for traditional software publishers to enter the SaaS model: firstly, because the SaaS model does not bring them the same income structure; secondly, because continuing to work with a distribution network decreased their profit margins and damaged the competitiveness of their product pricing. Today a landscape is taking shape with SaaS and managed service players who combine the indirect sales model with their own existing business model, and those who seek to redefine their role within the 3.0 IT economy.

    Pricing

    Unlike traditional software which is conventionally sold as a perpetual license with an up-front cost (and an optional ongoing support fee), SaaS providers generally price applications using a subscription fee, most commonly a monthly fee or an annual fee. Consequently, the initial setup cost for SaaS is typically lower than the equivalent enterprise software. SaaS vendors typically price their applications based on some usage parameters, such as the number of users using the application. However, because in a SaaS environment customers' data reside with the SaaS vendor, opportunities also exist to charge per transaction, event, or other unit of value.

    The relatively low cost for user provisioning (i.e., setting up a new customer) in a multi-tenant environment enables some SaaS vendors to offer applications using the freemium model. In this model, a free service is made available with limited functionality or scope, and fees are charged for enhanced functionality or larger scope. Some other SaaS applications are completely free to users, with revenue being derived from alternate sources such as advertising.

    A key driver of SaaS growth is SaaS vendors' ability to provide a price that is competitive with on-premises software. This is consistent with the traditional rationale for outsourcing IT systems, which involves applying economies of scale to application operation, i.e., an outside service provider may be able to offer better, cheaper, more reliable applications.

    Platform as a service :

    Platform as a service

    Platform as a service (PaaS) is a category of cloud computing services that provides a computing platform and a solution stack as a service. Along with software as a service (SaaS) and infrastructure as a service (IaaS), it is a service model of cloud computing. In this model, the consumer creates the software using tools and/or libraries from the provider. The consumer also controls software deployment and configuration settings. The provider supplies the networks, servers, storage, and other services.

    PaaS offerings facilitate the deployment of applications without the cost and complexity of buying and managing the underlying hardware and software and provisioning hosting capabilities.

    There are various types of PaaS vendors; however, all offer application hosting and a deployment environment, along with various integrated services. Services offer varying levels of scalability and maintenance.

    PaaS offerings may also include facilities for application design, application development, testing, and deployment as well as services such as team collaboration, web service integration, and marshalling, database integration, security, scalability, storage, persistence, state management, application versioning, application instrumentation, and developer community facilitation.


    Types

    Add-on development facilities

    These facilities allow customization of existing software-as-a-service (SaaS) applications, and in some ways are the equivalent of the macro-language customization facilities provided with packaged software applications such as Lotus Notes or Microsoft Word. Often these require PaaS developers and their users to purchase subscriptions to the co-resident SaaS application.

    Stand alone development environments

    Stand-alone PaaS environments do not include technical, licensing or financial dependencies on specific SaaS applications or web services, and are intended to provide a generalized development environment.

    Application delivery-only environments

    Delivery-only PaaS offerings do not include development, debugging and test capabilities as part of the service, though they may be supplied offline (via an Eclipse plugin for example). The services provided generally focus on security and on-demand scalability.

    Open platform as a service

    This type of PaaS does not include hosting as such, rather it provides open source software to allow a PaaS provider to run applications. For example, AppScale allows a user to deploy some applications written for Google App Engine to their own servers, providing datastore access from a standard SQL or NoSQL database. Some open platforms let the developer use any programming language, any database, any operating system, any server, etc. to deploy their applications.

    Network as a service :

    Network as a service

    Network as a service (NaaS) is a category of cloud services in which the capability provided to the cloud service user is to use network/transport connectivity services and/or inter-cloud network connectivity services. NaaS involves optimizing resource allocations by considering network and computing resources as a unified whole.

    Traditional NaaS services include flexible and extended VPN, and bandwidth on demand. NaaS concept materialization also includes the provision of a virtual network service by the owners of the network infrastructure to a third party (VNP - VNO).

    The term "Network as a service" (NaaS) is considered to be part of the nomenclature of cloud computing, along with infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).


    Most Common NaaS Service Models

    The NaaS service model varies depending on the network enabler, the network user, and the service provided. The most common models are VPNs, BoD services, and mobile virtualized networks such as MVNOs:

    Virtual Private Network (VPN): Extends a private network and the resources contained in the network across public networks like the Internet. It enables a host computer to send and receive data across shared or public networks as if it were on the private network, with all the functionality and policies of the private network.

    Bandwidth on Demand (BoD): A technique by which traffic bandwidth in an IT or telecom network is assigned based on requirements between different nodes or users. Under this model, link bandwidth is dynamically adapted to the instantaneous traffic demands of the nodes connected to the link.

    Mobile Network Virtualization: A model in which a telecom infrastructure manufacturer or independent network enabler builds and operates a telecom network (wireless or transport connectivity) and sells its communication access capabilities to third parties (commonly mobile operators), charging by capacity utilization. The most common implementation of a mobile virtual network is the Mobile Virtual Network Operator (MVNO), in which a mobile communications services provider does not own the radio spectrum or wireless network infrastructure over which it provides services to its customers. Commonly, an MVNO offers its communication services using the network infrastructure of an established mobile network operator.

    LaaS (Location-as-a-Service) :

    LaaS (Location-as-a-Service)

    Location-as-a-Service (LaaS) is a location-data delivery model in which privacy-protected physical location data, acquired from multiple sources including carriers, WiFi, IP addresses and landlines, is made available to enterprise customers through a simple API. Organizations can use the vast amount of location data provided in a LaaS model to realize greater operational efficiencies, increase security, reduce costs and optimize customer engagement while realizing rapid ROI.

    The Location Data hosted in a LaaS model is used to support location-based services, including geofencing, proximity marketing, location-based advertising, fraud management and asset tracking.

    LaaS services are used by multiple industries including Mobile Marketing, Retail, Financial Services, Mobile Gaming, M2M and Healthcare.

    Fluid Operations :

    Fluid Operations

    The German software company fluid Operations AG was founded in 2008 and specialises in cloud management and semantic technology. fluid Operations' product portfolio includes the eCloudManager and the Information Workbench. Additionally, fluid Operations offers open-source software, the VMFS Driver.

    Research Projects

    fluidOps is part of these research projects:

    NewProt : Development of an interactive Self-Service Portal for protein engineering software and databases. This project is funded by the European Union and started in 2011.

    Optique : Development of scalable end-user access to Big Data. The project is funded by the Seventh Framework Programme of the EU. Project partners are: the University of Oxford, the Technical University of Hamburg, the National and Kapodistrian University of Athens, the Sapienza University of Rome and the Free University of Bozen-Bolzano, as well as the companies Siemens and Statoil.

    CORA - abbreviation for Cloud Orchestration Appliance : Development of a planning and control system for the provisioning and operation of cloud-based data centers. The project is funded by the Federal Ministry of Economics and Technology; project partners are NetApp, the University of Bielefeld and Christmann Informationstechnik + Medien.

    StratusCloud : Integration of virtualized data sources in the cloud. Mainly service providers and service customers will benefit from the data analysis in enterprise networks. Collaboration partners are: the DHBW Mannheim, DHBW Mosbach and the company Harms&Wende.

    Durchblick : Development of a mobile conference assistance system for augmented reality devices such as Google Glass, based on the Conference Explorer. The project is funded by the Federal Ministry of Economics and Technology, and the project partner is the University of Freiburg.

    EUCLID - abbreviation for EdUcational Curriculum for the usage of Linked Data : Development, implementation and dissemination of Linked Data learning material and activities for data practitioners who work with Linked Data. fluidOps is developing a community portal based on the Information Workbench. Funded by the 7th Framework Programme of the European Union, the project runs for 24 months and started in May 2012. Project partners are KIT, The Open University, STI Research, Universidad Simon Bolivar, the University of Southampton and Ontotext.


    Awards

    fluid Operations AG: Gartner Cool Vendor in the SAP Ecosystem Report 2010

    Information Workbench Conference Explorer: Linked Data-a-Thon 2011 at the ISWC. 2nd Place at the WWW Metadata Challenge 2012.

    fluid Operations AG: Winner of the Computerwoche Best in Cloud Award 2012 in the category Infrastructure as a Service - Private Cloud

    Eucalyptus (computing) :

    Eucalyptus (computing)

    Eucalyptus is open-source computer software for building Amazon Web Services (AWS)-compatible private and hybrid cloud computing environments, marketed by the company Eucalyptus Systems. Eucalyptus enables pooling compute, storage, and network resources that can be dynamically scaled up or down as application workloads change. Eucalyptus Systems announced a formal agreement with AWS in March 2012 to maintain compatibility.

    History

    The software had its roots in the Virtual Grid Application Development Software project at Rice University and other institutions from 2003 to 2008. Jitendra Porwal led a group at the University of California, Santa Barbara, and became the chief technical officer at the company, headquartered in Goleta, California, before returning to teach at UCSB. Eucalyptus software was included in the Ubuntu 9.04 distribution in 2009. The company was formed in 2009 with $5.5 million in funding to commercialize the software.

    Software architecture

    Eucalyptus commands can manage either Amazon or Eucalyptus instances. Users can also move instances between a Eucalyptus private cloud and the Amazon Elastic Compute Cloud to create a hybrid cloud. Hardware virtualization isolates applications from computer hardware details.


    Eucalyptus uses the following terminology:

    Images - An image is a fixed collection of software modules, system software, application software, and configuration information that is started from a known baseline (immutable/fixed). When bundled and uploaded to the Eucalyptus cloud, this becomes a Eucalyptus machine image (EMI).

    Instances - When an image is put to use, it is called an instance. The configuration is executed at runtime: the Cloud Controller decides where the image will run, and storage and networking are attached to meet resource needs.

    IP addressing - Eucalyptus instances can have public and private IP addresses. An IP address is assigned to an instance when the instance is created from an image. For instances that require a persistent IP address, such as a web server, Eucalyptus supplies elastic IP addresses. These are pre-allocated by the Eucalyptus cloud and can be reassigned to a running instance.
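
    As a hypothetical illustration, an elastic IP can be allocated and attached through the EC2-compatible API, for example with boto3; the endpoint URL, credentials and instance ID below are placeholders, not values from the original text.

        # Sketch: elastic IP management against a Eucalyptus EC2-compatible endpoint.
        import boto3

        ec2 = boto3.client(
            "ec2",
            endpoint_url="http://eucalyptus.example.com:8773/services/Eucalyptus",  # placeholder
            aws_access_key_id="EUCA_ACCESS_KEY",        # placeholder
            aws_secret_access_key="EUCA_SECRET_KEY",    # placeholder
            region_name="eucalyptus",
        )

        addr = ec2.allocate_address()                   # pre-allocate an elastic IP
        ec2.associate_address(InstanceId="i-12345678",  # placeholder instance ID
                              PublicIp=addr["PublicIp"])  # re-point it at a running instance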

    Security - TCP/IP security groups share a common set of firewall rules; this mechanism firewalls off an instance using IP-address and port allow/block functionality. Instances are isolated at TCP/IP layer 2. If this were not so, a user could manipulate the networking of instances and gain access to neighboring instances, violating the basic cloud tenet of instance isolation and separation.

    Networking - There are three networking modes. In Managed Mode Eucalyptus manages a local network of instances, including security groups and IP addresses. In System Mode, Eucalyptus assigns a MAC address and attaches the instance's network interface to the physical network through the Node Controller's bridge. System Mode does not offer elastic IP addresses, security groups, or VM isolation. In Static Mode, Eucalyptus assigns IP addresses to instances. Static Mode does not offer elastic IPs, security groups, or VM isolation.

    Access Control - A user of Eucalyptus is assigned an identity, and identities can be grouped together for access control.


    Components

    Eucalyptus has six components:


    The Cloud Controller (CLC) is a Java program that offers EC2-compatible interfaces, as well as a web interface to the outside world. In addition to handling incoming requests, the CLC acts as the administrative interface for cloud management and performs high-level resource scheduling and system accounting. The CLC accepts user API requests from command-line interfaces like euca2ools or GUI-based tools like the Eucalyptus User Console and manages the underlying compute, storage, and network resources. Only one CLC can exist per cloud, and it handles authentication, accounting, reporting, and quota management.

    Walrus, also written in Java, is the Eucalyptus equivalent to AWS Simple Storage Service (S3). Walrus offers persistent storage to all of the virtual machines in the Eucalyptus cloud and can be used as a simple HTTP put/get storage as a service solution. There are no data type restrictions for Walrus, and it can contain images (i.e., the building blocks used to launch virtual machines), volume snapshots (i.e., point-in-time copies), and application data. Only one Walrus can exist per cloud.
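
    Because Walrus exposes an S3-style interface, an ordinary S3 client can talk to it. A hedged sketch with boto3 follows; the endpoint and credentials are placeholders.

        # Sketch: simple HTTP put/get storage against Walrus's S3-compatible API.
        import boto3

        s3 = boto3.client(
            "s3",
            endpoint_url="http://walrus.example.com:8773/services/Walrus",  # placeholder
            aws_access_key_id="EUCA_ACCESS_KEY",      # placeholder
            aws_secret_access_key="EUCA_SECRET_KEY",  # placeholder
        )

        s3.create_bucket(Bucket="backups")
        s3.put_object(Bucket="backups", Key="app-data-1", Body=b"...")  # put
        obj = s3.get_object(Bucket="backups", Key="app-data-1")         # get
        data = obj["Body"].read()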

    The Cluster Controller (CC) is written in C and acts as the front end for a cluster within a Eucalyptus cloud and communicates with the Storage Controller and Node Controller. It manages instance (i.e., virtual machines) execution and Service Level Agreements (SLAs) per cluster.

    The Storage Controller (SC) is written in Java and is the Eucalyptus equivalent to AWS EBS. It communicates with the Cluster Controller and Node Controller and manages Eucalyptus block volumes and snapshots for the instances within its specific cluster. If an instance requires writing persistent data to storage outside of the cluster, it would need to write to Walrus, which is available to any instance in any cluster.

    The VMware Broker is an optional component that provides an AWS-compatible interface for VMware environments and physically runs on the Cluster Controller. The VMware Broker overlays existing ESX/ESXi hosts and transforms Eucalyptus Machine Images (EMIs) to VMware virtual disks. The VMware Broker mediates interactions between the Cluster Controller and VMware and can connect directly to either ESX/ESXi hosts or to vCenter Server.

    The Node Controller (NC) is written in C and hosts the virtual machine instances and manages the virtual network endpoints. It downloads and caches images from Walrus as well as creates and caches instances. While there is no theoretical limit to the number of Node Controllers per cluster, performance limits do exist.

    Amazon Web Services compatibility


    Organizations can use or reuse AWS-compatible tools, images, and scripts to manage their own on-premises infrastructure as a service (IaaS) environments. The AWS API is implemented on top of Eucalyptus, so tools in the cloud ecosystem that can communicate with AWS can use the same API with Eucalyptus. In March 2012, Amazon Web Services and Eucalyptus announced details of the compatibility between AWS and Eucalyptus. As part of this agreement, AWS will support Eucalyptus as the two continue to extend compatibility with AWS APIs and customer use cases. Customers can run applications in their existing data centers that are compatible with Amazon Web Services such as Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3).

    In June 2013, Eucalyptus 3.3 was released, featuring a new series of AWS-compatible tools. These include:

    Auto-Scaling - Allows application developers to scale Eucalyptus cloud resources up or down in order to maintain performance and meet SLAs. With auto-scaling, developers can add instances and virtual machines as traffic demands increase. Auto-scaling policies for Eucalyptus are defined using Amazon EC2-compatible APIs and tools.

    Elastic Load Balancing - A service that distributes incoming application traffic and service calls across multiple Eucalyptus workload instances, providing greater application fault tolerance.

    CloudWatch - A monitoring tool similar to Amazon CloudWatch that monitors resources and applications on Eucalyptus clouds. Using CloudWatch, application developers and cloud administrators can program the collection of metrics, set alarms and identify trends that may be endangering workload operations, and take action to ensure their applications continue to run smoothly.

    Eucalyptus 3.3 is also the first private cloud platform to support Netflix's open source tools - including Chaos Monkey, Asgard, and Edda - through its API fidelity with AWS.

    Functionality

    The Eucalyptus User Console provides an interface for self-service provisioning and configuration of compute, network, and storage resources. Development and test teams can manage virtual instances using built-in key management and encryption capabilities. Access to virtual instances is available using familiar SSH and RDP mechanisms. Virtual instances with application configuration can be stopped and restarted using the encrypted boot-from-EBS capability.

    The IaaS service components (Cloud Controller, Cluster Controller, Walrus, Storage Controller, and VMware Broker) are configurable as redundant systems that are resilient to multiple types of failures. The cloud's management state is preserved and restored to normal operating conditions in the event of a hardware or software failure.

    Eucalyptus can run multiple versions of Windows and Linux virtual machine images. Users can build a library of Eucalyptus Machine Images (EMIs) with application metadata that are decoupled from infrastructure details to allow them to run on Eucalyptus clouds. Amazon Machine Images are also compatible with Eucalyptus clouds. VMware Images and vApps can be converted to run on Eucalyptus clouds and AWS public clouds.

    Eucalyptus user identity management can be integrated with existing Microsoft Active Directory or LDAP systems to have fine-grained role based access control over cloud resources.

    Eucalyptus supports storage area network devices to take advantage of storage arrays to improve performance and reliability. Eucalyptus Machine Images can be backed by EBS-like persistent storage volumes, improving the performance of image launch time and enabling fully persistent virtual machine instances. Eucalyptus also supports direct-attached storage.

    Eucalyptus 3.3 offers new features for AWS compatibility. These include resource tagging, which allows application developers and cloud administrators to assign customizable metadata tags to resources such as firewalls, load balancers, Web servers, and individual workloads to better identify them. Eucalyptus 3.3 also supports an expanded set of instance types to more closely align to instance types in Amazon EC2.

    Eucalyptus 3.3 also includes a new Maintenance Mode that allows cloud administrators to perform maintenance on Eucalyptus clouds with zero downtime to instances or cloud applications. It also includes new user console features such as a Magic Search Bar, and an easy option to allow users to change their password.

    Nimbus (cloud computing) :

    Nimbus (cloud computing)

    Nimbus is an open-source toolkit that, once installed on a cluster, provides an infrastructure-as-a-service cloud to its clients via WSRF-based or Amazon EC2 WSDL web service APIs.

    Nimbus supports the Xen hypervisor or KVM and virtual machine schedulers PBS and SGE. It allows deployment of self-configured virtual clusters via contextualization. It is configurable with respect to scheduling, networking leases, and usage accounting.

    Requirements
    • Xen 3.x or Kernel-based Virtual Machine (KVM)
    • Java 1.5+
    • Python (2.4+)
    • ebtables filtering tool for a bridging firewall
    • DHCP server


    Cloud.com :

    Cloud.com

    Cloud.com was a venture-backed software company based in Cupertino, California that developed open source software for the implementation of public and private cloud computing environments. Its software, CloudStack, is designed to make it easier for service providers and enterprises to build, manage and deploy offerings similar to Amazon EC2 and Amazon S3. CloudStack is available in three editions: the Enterprise Edition, the Service Provider Edition and the open-source Community Edition.

    In July 2011, Cloud.com was acquired by Citrix Systems. The CloudStack software then became available under the Apache Software License, with further development governed by the Apache Software Foundation.


    Features

    Cloud.com implements infrastructure as a service (IaaS) style private, public and hybrid clouds; the technologies can be deployed on-premises or as hosted cloud services. The platform provides an AJAX-based interface that lets users access computing infrastructure resources (machines, network, and storage) available in private and public cloud services.

    Cloud.com includes these features:
    • Multiple Hypervisor support from a single management pane
    • Support for Common Cloud APIs like Amazon Web Services API, the OpenStack API and the VMware vCloud API
    • Support for Linux and Windows virtual machines (VMs)
    • Multi-tenant support for both secure internal cloud deployments as well as service provider environments
    • Elastic IPs and Security Groups
    • Billing and chargeback integration
    • On-demand Virtual Datacenter Hosting
    • Integrated Cloud templates and libraries
    • In-browser console access
    • Virtual Machine snapshots and rollback
    • Virtual resource management and isolation


    History

    VMOps was founded by Sheng Liang, Shannon Williams, Alex Huang, Will Chan, and Chiradeep Vittal in 2008. The company raised a total of $17.6M in venture funding from Redpoint Ventures, Nexus Ventures and Index Ventures (Redpoint and Nexus led the initial Series A funding round).

    The company changed its name from VMOps to Cloud.com on May 4, 2010 when it emerged from stealth mode by announcing its product. In July 2010, Cloud.com became a founding member of the OpenStack initiative. In October 2010, Cloud.com announced a partnership with Microsoft to develop the code to provide integration and support of Windows Server 2008 R2 Hyper-V to the OpenStack project.

    In July 2011, Cloud.com was acquired by Citrix Systems.

    Products

    CloudStack Community Edition (CE) is available under the GNU General Public License. The Community Edition is based on the latest features the engineers are developing, with weekly builds as well as source access for developers, users and contributors.

    CloudStack 2.0 for Enterprises provides an integrated software solution to extend infrastructure investment into a highly scalable, on-premise cloud computing environment for enterprises.

    CloudStack Service Provider Edition (SPE) offers service providers management software and infrastructure technology to host their own public computing cloud. Core management functions include end-user self-administration, service offering management, cloud administration, and billing and reporting.

    Cloud Foundry :

    Cloud Foundry

    Cloud Foundry is an open-source cloud computing platform as a service (PaaS) developed by VMware and released under the terms of the Apache License 2.0. Cloud Foundry is part of the Pivotal Initiative, an independent entity funded by VMware and EMC. It is primarily written in Ruby. The source and development community for this software are available at cloudfoundry.org.

    PaaS service

    As well as being an Open Source project, Cloud Foundry is also a hosted service offered by VMware. This service can be accessed at cloudfoundry.com. As of September 2012, this service is still in beta and pricing is not yet determined. CloudFoundry.com runs on VMware's infrastructure and uses its vSphere virtualization product suite as infrastructure.
    Other companies also offer Platform as a service products using the Cloud Foundry platform. Examples are noted in Platform as a service.

    Licenses

    The source code is under an Apache 2.0 license, and contributions are governed by the VMware contributors' license for individuals and corporations. These licenses grant both copyright and patent access and protection to the VMware Corporation, which is the same model that VMware has followed with the Spring Framework from SpringSource, which VMware acquired in 2009.

    Supported Runtimes and Frameworks
    • Java : Java 6 and Java 7 runtimes; Spring Framework 3.1
    • Ruby : Ruby 1.8 and Ruby 1.9 runtimes; Rails, Sinatra
    • Node.js : Node.js runtime
    • Scala : Play 2.0 and Lift frameworks


    Supported Application Services

    The following services are available on the hosted CloudFoundry.com platform, with other Cloud Foundry-based PaaS providers and the Open Source codebase offering additional databases and other service integrations.

    • MySQL : The open-source relational database
    • vFabric Postgres : Relational database based on PostgreSQL
    • MongoDB : The scalable, open, document-based database
    • Redis : The open key-value data structure server
    • RabbitMQ : Reliable, scalable, and portable messaging for applications


    AppScale :

    AppScale

    AppScale is an open-source framework for running Google App Engine applications. The primary goal of AppScale is to allow developers to have application portability. It is a cloud computing platform (marketed as platform as a service), supporting Xen, Kernel-based Virtual Machine (KVM), Google Compute Engine, Amazon EC2, RackSpace, OpenStack, CloudStack, and Eucalyptus. It is developed and maintained by AppScale Systems, based in Santa Barbara, California. AppScale was initially funded by Google, IBM, the NSF, and NIH.

    AppScale supports the ability to host multiple App Engine applications with the ability to swap out distributed datastores such as HBase, Hypertable, and Apache Cassandra. It has support for Python, Go, and Java applications by implementing scalable services such as the datastore, memcache, blobstore, users API, and channel API.
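
    As a rough illustration of that portability, below is a minimal App Engine-style Python application of the kind AppScale can host unchanged; webapp2 and the memcache API are standard Google App Engine interfaces, and the code runs only inside an App Engine-compatible runtime such as AppScale's.

        # Minimal GAE-style app: a request handler plus the memcache service API.
        import webapp2
        from google.appengine.api import memcache

        class MainPage(webapp2.RequestHandler):
            def get(self):
                hits = memcache.get("hits") or 0      # scalable memcache service
                memcache.set("hits", hits + 1)
                self.response.headers["Content-Type"] = "text/plain"
                self.response.write("Hello! Visit number %d" % (hits + 1))

        app = webapp2.WSGIApplication([("/", MainPage)], debug=True)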

    Flexiant Limited :

    Flexiant Limited

    History

    The Flexiant heritage in the service provider and cloud industry stretches back to 1997, when founder Tony Lucas formed a hosting company called XCalibre Communications. XCalibre saw a need amongst its service provider customers for cloud management tools, so it began to develop Extility, a cloud orchestration software suite. After distributing Extility to its customer base, it became clear that the need for this type of software existed in the overall market as well. This led to the public release of Extility, and XCalibre officially became both a software provider and a hosting provider.

    In 2007, XCalibre built and launched FlexiScale, using Extility. FlexiScale was Europe's first public cloud platform, released a full nine months earlier than Amazon's European cloud platform. The platform consisted of pay-as-you-go virtual dedicated servers that customers themselves could set up in less than a minute, which meant web hosting companies were no longer needed to set up and provision dedicated servers.

    XCalibre, the web hosting business, was sold in 2009, and the remaining company was renamed Flexiant. Due to the success of FlexiScale, Flexiant dedicated its full focus to developing software solutions for other service providers in Europe.

    In 2010, Flexiant launched the first version of Extility Cloud Orchestration, and by 2011, 95 customers were using the software. 2012 then saw significant changes in the business. In January of that year, Flexiant secured additional funding and expanded its management team. Early May brought a change of name for the Extility software to its present name, Flexiant Cloud Orchestrator; Version 2.0 improved significantly upon the previous Extility software.

    Also with this launch, Flexiant began to expand their market reach further, picking up partners across Europe and the US, creating a test lab in Amsterdam and building their presence in North America with an office in New York.

    The company's most recent launch, in November 2012, brought Flexiant Cloud Orchestrator Version 3.0 to the market. This update has four editions: two targeting hosting providers and two targeting service providers and telecoms, meaning the Flexiant products are more closely tailored to those who use them.

    Moreover, in November 2012, Info-Tech Research Group awarded the Trendsetter Award to Flexiant for its Flexiant Cloud Orchestrator 2.0 cloud management solution.

    In January 2013, Flexiant took on their largest sponsorship to date as the Premier Sponsor of Cloud Expo Europe.

    In April 2013, Gartner recognised Flexiant as a Gartner Cool Vendor in Cloud Management.

    In May 2013, Flexiant released Flexiant Cloud Orchestrator V3.1. The latest version extends functionality and integration for metering and billing cloud services, simplifies product catalog management across reseller environments, and offers extensive customization with upgraded Flexiant APIs, Flexiant Query Language and a new Flexiant Development Language.


    Software overview

    From May 2012, Flexiant offered Flexiant Cloud Orchestrator version 2.0, a fully automated software suite that enabled managed service providers, hosting providers, data centre operators and enterprises to offer cloud-computing services to their customers.

    On 28 November 2012, Flexiant released the new Flexiant Cloud Orchestrator version 3.0, which marked a significant leap forward in technological capabilities and features. In addition, with this launch, Flexiant has changed their target market to include not only Managed Service Providers but also hosting providers and telecoms globally.

    Capabilities of Flexiant Cloud Orchestrator Version 3.1 include:

    • Universal Storage support (including local storage)
    • Universal Node support
    • Multiple Cluster support
    • Multi-currency, multi-language support
    • DHCP service
    • Integrated Routing Platform
    • Scalability Assurance
    • Smart Provisioning
    • Smart Dashboard
    • Full Multi-Hypervisor Support (VMware, KVM, Hyper-V, Xen 4)
    • Integrated metering, billing and invoicing
    • Integration with external billing/CRM systems
    • Multi-tier Reseller capabilities
    • Smart Filtering
    • Full customer, reseller, and administrator portals
    • Bento Boxes
    • Customer/Admin/System API


    OpenNebula :

    OpenNebula

    OpenNebula is an open-source cloud computing toolkit for managing heterogeneous distributed data center infrastructures. The OpenNebula toolkit manages a data center's virtual infrastructure to build private, public and hybrid implementations of infrastructure as a service.

    Description

    OpenNebula orchestrates storage, network, virtualization, monitoring, and security technologies to deploy multi-tier services (e.g. compute clusters) as virtual machines on distributed infrastructures, combining both data center resources and remote cloud resources, according to allocation policies. According to the European Commission's 2010 report "... only few cloud dedicated research projects in the widest sense have been initiated - most prominent amongst them probably OpenNebula ...".

    The toolkit includes features for integration, management, scalability, security and accounting. It also claims standardization, interoperability and portability, providing cloud users and administrators with a choice of several cloud interfaces (Amazon EC2 Query, OGF Open Cloud Computing Interface and vCloud) and hypervisors (Xen, KVM and VMware), and can accommodate multiple hardware and software combinations in a data center.
    OpenNebula was a mentoring organization in Google Summer of Code 2010.

    OpenNebula is sponsored by C12G.

    OpenNebula is used by hosting providers, telecom operators, IT services providers, supercomputing centers, research labs, and international research projects. Some other cloud solutions use OpenNebula as the cloud engine or kernel service.

    OpenQRM :

    OpenQRM

    OpenQRM is an open-source cloud computing management platform for managing heterogeneous data center infrastructures. The openQRM platform manages a data center's infrastructure to build private, public and hybrid IaaS (Infrastructure as a Service) clouds. OpenQRM orchestrates storage, network, virtualization, monitoring, and security technologies to deploy multi-tier services (e.g. compute clusters) as virtual machines on distributed infrastructures, combining both data center resources and remote cloud resources, according to allocation policies.

    The platform emphasizes a separation of hardware (physical servers and virtual machines) from software (operating system server-images). Hardware is treated agnostically as a computing resource which should be replaceable without the need to reconfigure the software.

    Supported virtualization technologies include VMware, Xen, KVM, Linux-VServer, and OpenVZ. Virtual machines of these types are managed transparently via openQRM.

    P2V (physical to virtual), V2P (virtual to physical), and V2V (virtual to virtual) migrations are possible, as is transitioning the same VM from one virtualization technology to another.

    OpenQRM is sponsored by OpenQRM Enterprise.


    History

    OpenQRM was initially released by the company Qlusters and went open-source in 2004. Qlusters later ceased operations, and openQRM was left in the hands of the openQRM community. In November 2008, the community released version 4.0, which included a complete port of the platform from Java to PHP/C/Perl/Shell.

    OpenShift :

    OpenShift

    OpenShift is a cloud computing platform as a service product from Red Hat. A version for private cloud is named OpenShift Enterprise.
    The software that runs the service is open-sourced under the name OpenShift Origin, and is available on GitHub. Developers can use Git to deploy web applications in different languages on the platform.

    OpenShift also supports binary programs that are web applications, so long as they can run on Red Hat Enterprise Linux. This allows the use of arbitrary languages and frameworks. OpenShift takes care of maintaining the services underlying the application and scaling the application as needed.

    Supported language environments
    • Node.js
    • Ruby
    • Python
    • PHP
    • Perl
    • Java


    Supported databases
    • MySQL
    • PostgreSQL
    • MongoDB


    Supported frameworks

    OpenShift supports web-application frameworks by supporting each language's preferred web-integration API, with no required changes to the actual framework code; a minimal WSGI example follows the list below.

    • Rack for Ruby
    • WSGI for Python
    • PSGI for Perl

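    As a sketch of what these integration APIs look like, here is a minimal WSGI application in Python; OpenShift's Python support conventionally looks for an "application" callable like this one (the local test server at the end is just for illustration).

        # Minimal WSGI app: any WSGI-speaking framework plugs in the same way.
        def application(environ, start_response):
            body = b"Hello from WSGI"
            start_response("200 OK", [("Content-Type", "text/plain"),
                                      ("Content-Length", str(len(body)))])
            return [body]

        if __name__ == "__main__":
            # Local test run; the platform itself invokes application() directly.
            from wsgiref.simple_server import make_server
            make_server("", 8051, application).serve_forever()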

    Some frameworks that work unmodified on OpenShift include:
    • CodeIgniter
    • CakePHP
    • Ruby on Rails
    • Django
    • Perl Dancer
    • Sinatra
    • Tornado
    • web2py


    OpenStack :

    OpenStack

    OpenStack is a cloud computing project to provide infrastructure as a service (IaaS). It is free and open-source software released under the terms of the Apache License. The project is managed by the OpenStack Foundation, a non-profit corporate entity established in September 2012 to promote OpenStack software and its community.

    More than 200 companies have joined the project, among them AMD, Brocade Communications Systems, Canonical, Cisco, Dell, EMC, Ericsson, Groupe Bull, HP, IBM, Inktank, Intel, NEC, Rackspace Hosting, Red Hat, SUSE Linux, VMware, and Yahoo!.

    The technology consists of a series of interrelated projects that control pools of processing, storage, and networking resources throughout a datacenter, all managed through a dashboard that gives administrators control while empowering its users to provision resources through a web interface.

    The OpenStack community collaborates around a six-month, time-based release cycle with frequent development milestones. During the planning phase of each release, the community gathers for the OpenStack Design Summit to facilitate developer working sessions and assemble plans.

    History


    In July 2010 Rackspace Hosting and NASA jointly launched an open-source cloud-software initiative known as OpenStack. The OpenStack project intended to help organizations offer cloud-computing services running on standard hardware. The community's first official release, code-named Austin, appeared four months later, with plans to release regular updates of the software every few months. The early code came from NASA's Nebula platform as well as from Rackspace's Cloud Files platform.

    In 2011 developers of the Ubuntu Linux distribution decided to adopt OpenStack.


    Components


    OpenStack has a modular architecture with various code names for its components.


    Compute (Nova)

    OpenStack Compute (Nova) is a cloud computing fabric controller (the main part of an IaaS system). It is written in Python and uses many external libraries such as Eventlet (for concurrent programming), Kombu (for AMQP communication), and SQLAlchemy (for database access). Nova's architecture is designed to scale horizontally on standard hardware with no proprietary hardware or software requirements and provide the ability to integrate with legacy systems and third party technologies. It is designed to manage and automate pools of computer resources and can work with widely available virtualization technologies, as well as bare metal and high-performance computing (HPC) configurations. KVM and XenServer are available choices for hypervisor technology, together with Hyper-V and Linux container technology such as LXC. In addition to different hypervisors, OpenStack runs on ARM.
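
    As a hedged sketch (assuming the classic python-novaclient credentials signature, with placeholder names throughout), booting an instance through Nova looks roughly like this:

        # Sketch: booting a server via the Nova API with python-novaclient.
        from novaclient import client as nova_client

        nova = nova_client.Client("2", "demo", "secret", "demo-project",
                                  "http://keystone.example.com:5000/v2.0")  # placeholders

        image = nova.images.find(name="ubuntu-12.04")   # placeholder image name
        flavor = nova.flavors.find(name="m1.small")
        server = nova.servers.create(name="web-1", image=image, flavor=flavor)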

    Object Storage (Swift)

    OpenStack Object Storage (Swift) is a scalable redundant storage system. Objects and files are written to multiple disk drives spread throughout servers in the data center, with the OpenStack software responsible for ensuring data replication and integrity across the cluster. Storage clusters scale horizontally simply by adding new servers. Should a server or hard drive fail, OpenStack replicates its content from other active nodes to new locations in the cluster. Because OpenStack uses software logic to ensure data replication and distribution across different devices, inexpensive commodity hard drives and servers can be used.
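
    Swift exposes a plain REST interface: a container and an object are each created with an authenticated PUT. The storage URL and token below are placeholders (see the Keystone example further down for how a token is obtained).

        # Sketch: Swift's HTTP interface driven with the requests library.
        import requests

        STORAGE_URL = "https://swift.example.com/v1/AUTH_demo"  # placeholder
        TOKEN = "..."                                           # placeholder token

        headers = {"X-Auth-Token": TOKEN}
        requests.put(STORAGE_URL + "/backups", headers=headers)   # create container
        requests.put(STORAGE_URL + "/backups/db.dump",            # upload object
                     headers=headers, data=b"...")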

    In August 2009, Rackspace started the development of Swift as a complete replacement for the Cloud Files product. The initial development team consisted of nine developers.

    Block Storage (Cinder)

    OpenStack Block Storage (Cinder) provides persistent block-level storage devices for use with OpenStack compute instances. The block storage system manages the creation, attaching and detaching of block devices to servers. Block storage volumes are fully integrated into OpenStack Compute and the Dashboard, allowing cloud users to manage their own storage needs. In addition to local Linux server storage, it can use storage platforms including Ceph, CloudByte, Coraid, EMC (VMAX and VNX), GlusterFS, IBM Storage (Storwize family, SAN Volume Controller, and XIV Storage System), Linux LIO, NetApp, Nexenta, Scality, SolidFire and HP (StoreVirtual and StoreServ 3Par families). Block storage is appropriate for performance-sensitive scenarios such as database storage, expandable file systems, or providing a server with access to raw block-level storage. Snapshot management provides powerful functionality for backing up data stored on block storage volumes. Snapshots can be restored or used to create a new block storage volume.
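
    A hedged sketch with python-cinderclient (v2 API, placeholder credentials) of creating a volume and the snapshot that backs it up:

        # Sketch: block volume and snapshot management via the Cinder API.
        from cinderclient.v2 import client as cinder_client

        cinder = cinder_client.Client("demo", "secret", "demo-project",
                                      "http://keystone.example.com:5000/v2.0")

        vol = cinder.volumes.create(size=10, name="db-volume")  # 10 GB block device
        snap = cinder.volume_snapshots.create(vol.id, name="db-volume-snap")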

    Networking (Neutron)

    OpenStack Networking (Neutron, formerly Quantum) is a system for managing networks and IP addresses. Like other aspects of the cloud operating system, it can be used by administrators and users to increase the value of existing datacenter assets. OpenStack Networking ensures the network will not be the bottleneck or limiting factor in a cloud deployment and gives users real self-service, even over their network configurations.

    OpenStack Neutron provides networking models for different applications or user groups. Standard models include flat networks or VLANs for separation of servers and traffic. OpenStack Networking manages IP addresses, allowing for dedicated static IPs or DHCP. Floating IPs allow traffic to be dynamically rerouted to any of your compute resources, which allows you to redirect traffic during maintenance or in the case of failure. Users can create their own networks, control traffic and connect servers and devices to one or more networks. Administrators can take advantage of software-defined networking (SDN) technology like OpenFlow to allow for high levels of multi-tenancy and massive scale. OpenStack Networking has an extension framework allowing additional network services, such as intrusion detection systems (IDS), load balancing, firewalls and virtual private networks (VPN) to be deployed and managed.
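
    Tenant self-service networking can be scripted as well; here is a hedged sketch with python-neutronclient (placeholder credentials and names) that creates a network and a subnet:

        # Sketch: user-created network and subnet via the Neutron API.
        from neutronclient.v2_0 import client as neutron_client

        neutron = neutron_client.Client(username="demo", password="secret",
                                        tenant_name="demo-project",
                                        auth_url="http://keystone.example.com:5000/v2.0")

        net = neutron.create_network({"network": {"name": "app-net",
                                                  "admin_state_up": True}})
        neutron.create_subnet({"subnet": {"network_id": net["network"]["id"],
                                          "cidr": "10.0.0.0/24",   # placeholder range
                                          "ip_version": 4}})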

    Dashboard (Horizon)

    OpenStack Dashboard (Horizon) provides administrators and users a graphical interface to access, provision and automate cloud-based resources. The design allows for third party products and services, such as billing, monitoring and additional management tools. The dashboard is also brandable for service providers and other commercial vendors who want to make use of it.

    The dashboard is just one way to interact with OpenStack resources. Developers can automate access or build tools to manage their resources using the native OpenStack API or the EC2 compatibility API.

    Identity Service (Keystone)

    OpenStack Identity (Keystone) provides a central directory of users mapped to the OpenStack services they can access. It acts as a common authentication system across the cloud operating system and can integrate with existing backend directory services like LDAP. It supports multiple forms of authentication including standard username and password credentials, token-based systems and AWS-style (i.e. Amazon Web Services) logins. Additionally, the catalog provides a queryable list of all of the services deployed in an OpenStack cloud in a single registry. Users and third-party tools can programmatically determine which resources they can access.
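
    A sketch of Keystone's v2.0 password authentication follows; the reply carries both a scoped token (passed to other services as X-Auth-Token) and the queryable service catalog. The URL and credentials are placeholders.

        # Sketch: obtaining a token and the service catalog from Keystone v2.0.
        import requests

        resp = requests.post(
            "http://keystone.example.com:5000/v2.0/tokens",
            json={"auth": {"passwordCredentials": {"username": "demo",
                                                   "password": "secret"},
                           "tenantName": "demo-project"}})
        access = resp.json()["access"]
        token = access["token"]["id"]        # use as X-Auth-Token elsewhere
        catalog = access["serviceCatalog"]   # deployed services, per region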

    Image Service (Glance)

    OpenStack Image Service (Glance) provides discovery, registration and delivery services for disk and server images. Stored images can be used as a template. It can also be used to store and catalog an unlimited number of backups. The Image Service can store disk and server images in a variety of back-ends, including OpenStack Object Storage. The Image Service API provides a standard REST interface for querying information about disk images and lets clients stream the images to new servers.


    Amazon Web Services compatibility

    OpenStack APIs are compatible with Amazon EC2 and Amazon S3, so client applications written for Amazon Web Services can be used with OpenStack with minimal porting effort.
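
    For instance (a hedged sketch with placeholder endpoint and credentials), an unmodified AWS client such as boto3 can be pointed at OpenStack's EC2-compatible endpoint:

        # Sketch: reusing an AWS client against OpenStack's EC2-compatible API.
        import boto3

        ec2 = boto3.client(
            "ec2",
            endpoint_url="http://openstack.example.com:8773/services/Cloud",  # placeholder
            aws_access_key_id="EC2_ACCESS_KEY",      # placeholder
            aws_secret_access_key="EC2_SECRET_KEY",  # placeholder
            region_name="RegionOne",
        )

        for reservation in ec2.describe_instances()["Reservations"]:
            for inst in reservation["Instances"]:
                print(inst["InstanceId"], inst["State"]["Name"])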

    Governance

    OpenStack is governed by a non-profit foundation and its board of directors, a technical committee and a user committee.

    The foundation's stated mission is to provide shared resources to help achieve the OpenStack mission by protecting, empowering, and promoting OpenStack software and the community around it, including users, developers and the entire ecosystem. The foundation, however, has little to do with the development of the software itself, which is managed by the technical committee, an elected group that represents the contributors to the project and has oversight over all technical matters.

    Jelastic :

    Jelastic

    Jelastic is a platform as a service (PaaS) cloud computing service that provides networks, servers, and storage solutions to software development clients. The company has developed technologies for moving Java-based applications onto the cloud. Originally based in Ukraine, the company's headquarters are now in Palo Alto, California.

    History

    In 2010, Ruslan Synytskyy, Constantin Alexandrov, and Alexey Skutin founded Jelastic (Hivext Technologies). The three were working together on a project remotely, but found they were spending large amounts of time on hosting and system administrative tasks. In late 2010, they began working full-time developing tools for simplifying and automating application deployment and hosting.

    In 2011, Jelastic launched as a PaaS provider for Java applications. In 2013, the company added PHP cloud hosting to their service options. Jelastic has data centers and hosting partners worldwide.


    According to company chief operating officer Dmitry Sotnikov, Jelastic's approach to the PaaS market is similar to Android's approach to the mobile phone OS market: instead of keeping the platform proprietary, the company partners with many hosting providers, including Elastx, Websolute, Tsukaeru, Layershift, Rusonyx, Host Europe, Dogado, Innofield, Planeetta, Info.nl, ServInt and others.

    In March 2013, Jelastic updated its pricing model to make scalability more affordable: customers get automatic discounts for buying larger blocks of resources and pay only for what they use. Jelastic offers cloud-based solutions that accommodate customer requirements and can scale on demand. Also in March 2013, in response to customer requests, Jelastic announced support for the Apache TomEE server stack.


    Funding

    In 2010, Runa Capital provided $500,000 in seed funding to Jelastic. Jelastic then closed a $2 million Series A funding round in 2012 from Almaz Capital Partners and Foresight Ventures. In July of that year, Jelastic received a $1 million grant from the Skolkovo Foundation for the development of its private cloud offering. As part of the grant, Jelastic set up an office in the Skolkovo Innovation Center in Moscow, also known as the Russian Silicon Valley.


    Services

    Jelastic is a PaaS provider for Java applications and also offers PHP cloud hosting. It has international hosting partners and data centers, and can add memory, CPU and disk space to meet customer needs. Its main competitors are Google App Engine, Amazon Elastic Beanstalk, Heroku, and Cloud Foundry. Jelastic distinguishes itself by imposing no platform-specific limitations or code-change requirements, and by offering automated vertical scaling, application lifecycle management, and availability from multiple hosting providers around the world.

    In May 2013, Jelastic announced a new plugin for integration with NetBeans IDE. Two weeks later, the company launched Jelastic PaaS 1.9.1, which included the latest versions of the software stacks (including PostgreSQL 9.2.4).

    On August 14, 2013, Jelastic version 1.9.2 was released, adding FTP and FTPS access to database servers, the ability to deploy PHP projects from a Git repository with submodules/dependencies, and PHP version 5.5.

    Supported development platforms
    • Java
    • PHP

    Web hosting partners

    Jelastic is deployed from multiple data centers worldwide and was launched in the US by ServInt, in Germany by dogado & Host Europe, in Japan by Tsukaeru, in Finland by Planeetta, in Brazil by Websolute, in Hong Kong by PacHosting, in Russia by Rusonyx, in Sweden by Elastx, in Switzerland by Innofield, in the Netherlands by Info.nl and in the UK by Layershift.


    Advisers

    Advisers to the Jelastic team include Serguei Beloussov, founder of Parallels, Inc.; Dmitry Chikhachev; Soeren von Varchmin; and Mark Zbikowski, who spearheaded efforts in MS-DOS and contributed to OS/2, Windows NT and Cairo.


    Awards and recognition

    After its launch, technology news site Informilo named Jelastic to its list of the Top 25 Hottest Russian Startups. To identify these companies, Informilo asked investors in Moscow, London, New York, Boston and Silicon Valley to nominate and vote on companies outside of their own portfolios.

    Jelastic won a Duke's Choice Award, which celebrates extreme innovation in the world of Java technology, and received Oracle's highest prize, the Technology Leader Award.
    In March 2012, Venture Village named Jelastic one of the Top 10 Russian Internet Startups.

    In December 2012, The Moscow Times named Jelastic one of the Top 10 Russian Internet Companies.

    OnApp :

    OnApp

    OnApp is a company that develops cloud management, CDN and storage software for service providers and enterprises. Its OnApp Cloud software enables IaaS on commodity datacenter infrastructure: using OnApp Cloud, a company can create its own cloud, presenting heterogeneous server and storage devices as a single pool of resources that can be provisioned on demand to clients or end users. OnApp has also been described as "Cloud On-Ramp" software.

    As well as creating cloud hosting infrastructure, OnApp Cloud software provides functionality for the management of cloud resources, including virtual machines, hypervisors, SANs and networks; end user accounts, permissions and limits; pricing and billing calculations for cloud resources; and failover between different hypervisors in the cloud.

    On 8 August 2011 OnApp launched OnApp CDN, a federated CDN that uses spare capacity in OnApp-powered clouds to provide a global CDN platform for service providers. OnApp CDN combines software and services from OnApp and Aflexi, which became part of OnApp on the same date. On 20 March 2012 OnApp announced OnApp Storage, a distributed block storage platform for cloud environments.

    On 9 May 2013 OnApp announced that it had collaborated with Dell to create three different pre-tested cloud offerings for service providers.

    Software overview

    OnApp Cloud software uses hardware virtualization/paravirtualization methods to enable the deployment of multiple types of cloud hosting infrastructure: public clouds, private clouds, hybrid clouds and VPS clouds. It also allows hosts to offer traditional VPS hosting with local storage. OnApp describes its software as a multi-tenant tool with multi-cloud and multi-hypervisor support, and multi-OS support for virtual machines. The features of OnApp Cloud include:

    • Rapid IaaS enablement: cloud infrastructure can be deployed "within a day or two"
    • Simultaneous hosting of multiple variants of x86 and x64 Windows and Linux virtual machines
    • Rapid virtual machine deployment using templates (a template in OnApp is a preconfigured OS image)
    • Xen, KVM and VMware hypervisor support
    • A GUI management interface, used by administrators to manage the cloud, and by customers to order and configure cloud resources
    • Automatic hypervisor failover
    • Utility (hourly) billing for CPU, RAM, storage, bandwidth, IOPS and IP resources, as well as plan-based billing (e.g. monthly)
    • Integration with popular hosting billing software, including HostBill, WHMCS and Ubersmith
    • A detailed permission, limits and user roles engine
    • Support for any storage that presents a block device, including RAID, LVM, iSCSI, Fiber and local storage
    • Hypervisor, data store and network zones, which can be used to create private clouds and availability zones
    • A RESTful XML and JSON API
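
    A speculative sketch of calling that JSON API with HTTP Basic authentication; the host, credentials and resource path are assumptions based on common REST conventions, not taken from OnApp documentation:

        import requests

        # Assumed OnApp control panel URL and admin credentials
        resp = requests.get('https://onapp.example.com/virtual_machines.json',
                            auth=('admin', 'password'),
                            headers={'Accept': 'application/json'})
        resp.raise_for_status()
        print(resp.json())  # list of virtual machine records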

    Version 2.2 of OnApp Cloud was announced on 21 July 2011; it added integrated autoscaling, load balancing, and support for virtual machines based on the FreeBSD operating system. Version 2.3 was announced on 7 October 2011, introducing IPv6 support, the ability to migrate virtual disks between SANs, and the integration of OnApp CDN.

    OnApp CDN has three main components: the OnApp CDN Stack, edge server software that deploys as a virtual appliance managed by an OnApp Controller server; OnApp CDNaaS (CDN as a Service), a global DNS redirection service for the OnApp CDN platform; and the OnApp CDN Federation, a marketplace where hosting providers can buy CDN bandwidth from Points of Presence around the world to build their CDN, and sell CDN bandwidth to other hosts.

    OnApp Storage is a distributed block storage platform announced on 20 March 2012, along with a public beta program. It creates a Storage Area Network (SAN) from local disks in hypervisors, and has a decentralized management model in which each disk has its own integrated I/O controller.





    Quality Service

    Quality in a service or product is not what you put into it. It is what the client or customer gets out of it.
    -Peter Drucker

    Intelligent Quotes

    A solid working knowledge of productivity software and other IT tools has become a basic foundation for success in virtually any career. Beyond that, however, I don't think you can overemphasise the importance of having a good background in maths and science.....
    "Every software system needs to have a simple yet powerful organizational philosophy (think of it as the software equivalent of a sound bite that describes the system's architecture)... A step in thr development process is to articulate this architectural framework, so that we might have a stable foundation upon which to evolve the system's function points. "
    "All architecture is design but not all design is architecture. Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change"
    "The ultimate measurement is effectiveness, not efficiency "
    "It is argued that software architecture is an effective tool to cut development cost and time and to increase the quality of a system. "Architecture-centric methods and agile approaches." Agile Processes in Software Engineering and Extreme Programming.
    "Java is C++ without the guns, knives, and clubs "
    "When done well, software is invisible"
    "Our words are built on the objects of our experience. They have acquired their effectiveness by adapting themselves to the occurrences of our everyday world."
    "I always knew that one day Smalltalk would replace Java. I just didn't know it would be called Ruby. "
    "The best way to predict the future is to invent it."
    "In 30 years Lisp will likely be ahead of C++/Java (but behind something else)"
    "Possibly the only real object-oriented system in working order. (About Internet)"
    "Simple things should be simple, complex things should be possible. "
    "Software engineering is the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines."
    "Model Driven Architecture is a style of enterprise application development and integration, based on using automated tools to build system independent models and transform them into efficient implementations. "
    "The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs. "
    "Software Engineering Economics is an invaluable guide to determining software costs, applying the fundamental concepts of microeconomics to software engineering, and utilizing economic analysis in software engineering decision making. "
    "Ultimately, discovery and invention are both problems of classification, and classification is fundamentally a problem of finding sameness. When we classify, we seek to group things that have a common structure or exhibit a common behavior. "
    "Perhaps the greatest strength of an object-oriented approach to development is that it offers a mechanism that captures a model of the real world. "
    "The entire history of software engineering is that of the rise in levels of abstraction. "
    "The amateur software engineer is always in search of magic, some sensational method or tool whose application promises to render software development trivial. It is the mark of the professional software engineer to know that no such panacea exist "


    Core Values ?

    Agile And Scrum Based Architecture

    Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration.....


    Core Values ?

    Total quality management

    Total Quality Management / TQM is an integrative philosophy of management for continuously improving the quality of products and processes. TQM is based on the premise that the quality of products and .....


    Core Values ?

    Design that Matters

    We are more than code junkies. We're a company that cares how a product works and what it says to its users. There is no reason why your custom software should be difficult to understand.....


    Core Values ?

    Expertise that is Second to None

    With extensive software development experience, our development team is up for any challenge within the Great Plains development environment. Our research work on IEEE international papers is considered....


    Core Values ?

    Solutions that Deliver Results

    We have a proven track record of developing and delivering solutions that have resulted in reduced costs, time savings, and increased efficiency. Our clients are very much ....


    Core Values ?

    Relentless Software Testing

    We simply don't release anything that isn't tested well. Tell us something can't be tested under automation, and we will go prove it can be. We create tests before we write the complementary production software......


    Core Values ?

    Unparalleled Technical Support

    If a customer needs technical support for one of our products, no one can do it better than us. Our offices are open from 9am until 9pm Monday to Friday, and soon to be 24 hours. Unlike many companies, you are able to....


    Core Values ?

    Impressive Results

    We have a reputation for process genius, fanatical testing, high quality, and software joy. Whatever your business, our methods will work well in your field. We have done work in ERP solutions, e-commerce, portal solutions, IEEE research....





    Why Choose Us ?

    Invest in Thoughts

    The intellectual commitment of our development team is central to Leonsoft's ability to achieve its mission: to develop principled, innovative thought leaders in global communities.

    From Idea to Enterprise

    Today's most successful enterprise applications were once nothing more than an idea in someone's head. While many of these applications are planned and budgeted from the beginning....

    Constant Innovation

    We constantly strive to redefine the standard of excellence in everything we do. We encourage both individuals and teams to constantly strive for developing innovative technologies....

    Utmost Integrity

    If our customers are the foundation of our business, then integrity is the cornerstone. Everything we do is guided by what is right. We live by the highest ethical standards.....
