Distributed Solutions

Distributed Computing

Distributed computing is a field of computer science that studies distributed systems. A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal. There are many alternatives for the message passing mechanism, including RPC-like connectors and message queues. Three significant characteristics of distributed systems are: concurrency of components, lack of a global clock, and independent failure of components. An important goal and challenge of distributed systems is location transparency. Examples of distributed systems vary from SOA-based systems to massively multiplayer online games to peer-to-peer applications.

A computer program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs.

Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other by message passing.


Introduction

The word distributed in terms such as "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area. The terms are nowadays used in a much wider sense, even referring to autonomous processes that run on the same physical computer and interact with each other by message passing. While there is no single definition of a distributed system, the following defining properties are commonly used:

There are several autonomous computational entities, each of which has its own local memory.

The entities communicate with each other by message passing.

In this article, the computational entities are called computers or nodes.

A distributed system may have a common goal, such as solving a large computational problem. Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.

Other typical properties of distributed systems include the following:

    • The system has to tolerate failures in individual computers.
    • The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program.
    • Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input.

    Figure: (a) a distributed system; (b) a parallel system.

    Parallel and Distributed Computing

    Distributed systems are groups of networked computers, which have the same goal for their work. The terms "concurrent computing", "parallel computing", and "distributed computing" have a lot of overlap, and no clear distinction exists between them. The same system may be characterised both as "parallel" and "distributed"; the processors in a typical distributed system run concurrently in parallel. Parallel computing may be seen as a particular tightly coupled form of distributed computing,[16] and distributed computing may be seen as a loosely coupled form of parallel computing. Nevertheless, it is possible to roughly classify concurrent systems as "parallel" or "distributed" using the following criteria:

    In parallel computing, all processors may have access to a shared memory to exchange information between processors.

    In distributed computing, each processor has its own private memory (distributed memory). Information is exchanged by passing messages between the processors.
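
    To make the contrast concrete, here is a minimal Java sketch (illustrative only; class and variable names are invented): two workers first cooperate through a shared counter, shared-memory style, and then exchange a value purely by message passing over an explicit channel.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative sketch only: two workers cooperating either through shared
    // memory (parallel style) or through message passing (distributed style).
    public class SharedVsMessagePassing {

        public static void main(String[] args) throws InterruptedException {
            // Parallel style: both threads update one shared counter directly.
            AtomicLong shared = new AtomicLong();
            Thread a = new Thread(() -> shared.addAndGet(40));
            Thread b = new Thread(() -> shared.addAndGet(2));
            a.start(); b.start(); a.join(); b.join();
            System.out.println("shared-memory result: " + shared.get());

            // Distributed style: the "nodes" have no shared state and exchange
            // values only through an explicit message channel.
            BlockingQueue<Long> channel = new ArrayBlockingQueue<>(1);
            Thread sender = new Thread(() -> {
                try { channel.put(40L); } catch (InterruptedException ignored) { }
            });
            Thread receiver = new Thread(() -> {
                try {
                    long received = channel.take();   // the message arrives here
                    System.out.println("message-passing result: " + (received + 2));
                } catch (InterruptedException ignored) { }
            });
            sender.start(); receiver.start(); sender.join(); receiver.join();
        }
    }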

    The figure on the right illustrates the difference between distributed and parallel systems. Figure (a) is a schematic view of a typical distributed system; as usual, the system is represented as a network topology in which each node is a computer and each line connecting the nodes is a communication link. Figure (b) shows the same distributed system in more detail: each computer has its own local memory, and information can be exchanged only by passing messages from one node to another by using the available communication links. Figure (c) shows a parallel system in which each processor has a direct access to a shared memory.

    The situation is further complicated by the traditional uses of the terms parallel and distributed algorithm that do not quite match the above definitions of parallel and distributed systems; see the section Theoretical foundations below for more detailed discussion. Nevertheless, as a rule of thumb, high-performance parallel computation in a shared-memory multiprocessor uses parallel algorithms while the coordination of a large-scale distributed system uses distributed algorithms.


    Theoretical Foundations

    Many tasks that we would like to automate by using a computer are of question-answer type: we would like to ask a question and the computer should produce an answer. In theoretical computer science, such tasks are called computational problems. Formally, a computational problem consists of instances together with a solution for each instance. Instances are questions that we can ask, and solutions are desired answers to these questions.

    Theoretical computer science seeks to understand which computational problems can be solved by using a computer (computability theory) and how efficiently (computational complexity theory). Traditionally, it is said that a problem can be solved by using a computer if we can design an algorithm that produces a correct solution for any given instance. Such an algorithm can be implemented as a computer program that runs on a general-purpose computer: the program reads a problem instance from input, performs some computation, and produces the solution as output. Formalisms such as random access machines or universal Turing machines can be used as abstract models of a sequential general-purpose computer executing such an algorithm.

    The field of concurrent and distributed computing studies similar questions in the case of either multiple computers, or a computer that executes a network of interacting processes: which computational problems can be solved in such a network and how efficiently? However, it is not at all obvious what is meant by "solving a problem" in the case of a concurrent or distributed system: for example, what is the task of the algorithm designer, and what is the concurrent or distributed equivalent of a sequential general-purpose computer?

    The discussion below focuses on the case of multiple computers, although many of the issues are the same for concurrent processes running on a single computer.


    Three viewpoints are commonly used:
    Parallel algorithms in shared-memory model

    All computers have access to a shared memory. The algorithm designer chooses the program executed by each computer. One theoretical model is the parallel random access machine (PRAM).[24] However, the classical PRAM model assumes synchronous access to the shared memory.

    A model that is closer to the behavior of real-world multiprocessor machines and takes into account the use of machine instructions, such as Compare-and-swap (CAS), is that of asynchronous shared memory. There is a wide body of work on this model, a summary of which can be found in the literature.


    Parallel algorithms in message-passing model

    The algorithm designer chooses the structure of the network, as well as the program executed by each computer. Models such as Boolean circuits and sorting networks are used. A Boolean circuit can be seen as a computer network: each gate is a computer that runs an extremely simple computer program. Similarly, a sorting network can be seen as a computer network: each comparator is a computer.

    Distributed algorithms in message-passing model

    The algorithm designer only chooses the computer program. All computers run the same program. The system must work correctly regardless of the structure of the network.

    A commonly used model is a graph with one finite-state machine per node.

    In the case of distributed algorithms, computational problems are typically related to graphs. Often the graph that describes the structure of the computer network is the problem instance. This is illustrated in the following example.
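
    The original example is not reproduced here; as a stand-in, the following hedged Java sketch simulates a synchronous message-passing network in which every node runs the same program and knows only its own identifier and its neighbours, and the network graph itself is the problem instance. The nodes collectively compute the maximum identifier by flooding.

    import java.util.*;

    // Hedged sketch: a round-based simulation of a distributed algorithm in the
    // message-passing model. Every node runs the same program and knows only its
    // own id and its neighbours; each node learns the maximum id in the network
    // by repeatedly forwarding the largest value it has seen so far.
    public class MaxIdFlooding {

        public static void main(String[] args) {
            // The problem instance is the network graph itself: adjacency lists.
            Map<Integer, int[]> graph = Map.of(
                    1, new int[]{2},
                    2, new int[]{1, 3},
                    3, new int[]{2, 4},
                    4, new int[]{3});

            Map<Integer, Integer> known = new HashMap<>();
            graph.keySet().forEach(id -> known.put(id, id)); // initially: own id

            // As many synchronous rounds as nodes is enough for the value to spread.
            for (int round = 0; round < graph.size(); round++) {
                Map<Integer, Integer> next = new HashMap<>(known);
                for (Map.Entry<Integer, int[]> node : graph.entrySet()) {
                    for (int neighbour : node.getValue()) {
                        // "Send" the locally known maximum to each neighbour.
                        next.merge(neighbour, known.get(node.getKey()), Math::max);
                    }
                }
                known = next;
            }
            System.out.println("every node now knows the maximum id: " + known);
        }
    }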


    Architectures

    Various hardware and software architectures are used for distributed computing. At a lower level, it is necessary to interconnect multiple CPUs with some sort of network, regardless of whether that network is printed onto a circuit board or made up of loosely coupled devices and cables. At a higher level, it is necessary to interconnect processes running on those CPUs with some sort of communication system.

    Distributed programming typically falls into one of several basic architectures or categories: client-server, 3-tier architecture, n-tier architecture, distributed objects, loose coupling, or tight coupling.

    Client-server: Smart client code contacts the server for data, then formats and displays it to the user. Input at the client is committed back to the server when it represents a permanent change.

    3-tier architecture: Three tier systems move the client intelligence to a middle tier so that stateless clients can be used. This simplifies application deployment. Most web applications are 3-Tier.

    n-tier architecture: n-tier refers typically to web applications which further forward their requests to other enterprise services. This type of application is the one most responsible for the success of application servers.

    Highly coupled (clustered): typically refers to a cluster of machines that work closely together, running a shared process in parallel. The task is subdivided into parts that are worked on individually by each machine and then combined to produce the final result.

    Peer-to-peer: an architecture where there is no special machine or machines that provide a service or manage the network resources. Instead all responsibilities are uniformly divided among all machines, known as peers. Peers can serve both as clients and servers.

    Space based: refers to an infrastructure that creates the illusion (virtualization) of one single address-space. Data are transparently replicated according to application needs. Decoupling in time, space and reference is achieved.

    Another basic aspect of distributed computing architecture is the method of communicating and coordinating work among concurrent processes. Through various message passing protocols, processes may communicate directly with one another, typically in a master/slave relationship. Alternatively, a "database-centric" architecture can enable distributed computing to be done without any form of direct inter-process communication, by utilizing a shared database.

    Swing (Java)

    Swing is the primary Java GUI widget toolkit. It is part of Oracle's Java Foundation Classes (JFC) — an API for providing a graphical user interface (GUI) for Java programs.

    Swing was developed to provide a more sophisticated set of GUI components than the earlier Abstract Window Toolkit (AWT). Swing provides a native look and feel that emulates the look and feel of several platforms, and also supports a pluggable look and feel that allows applications to have a look and feel unrelated to the underlying platform. It has more powerful and flexible components than AWT. In addition to familiar components such as buttons, check boxes and labels, Swing provides several advanced components such as tabbed panes, scroll panes, trees, tables, and lists.

    Unlike AWT components, Swing components are not implemented by platform-specific code. Instead they are written entirely in Java and therefore are platform-independent. The term "lightweight" is used to describe such an element.
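
    A minimal sketch of this lightweight approach (class and label names are invented): the window below is assembled entirely from Java-drawn Swing components and is created on the event dispatch thread.

    import javax.swing.JButton;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.SwingUtilities;
    import java.awt.FlowLayout;

    // Minimal sketch: a window built from lightweight, Java-drawn Swing
    // components, constructed on the event dispatch thread as Swing expects.
    public class HelloSwing {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Hello Swing");
                frame.setLayout(new FlowLayout());
                JLabel label = new JLabel("Not clicked yet");
                JButton button = new JButton("Click me");
                button.addActionListener(e -> label.setText("Clicked!"));
                frame.add(button);
                frame.add(label);
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.pack();
                frame.setVisible(true);
            });
        }
    }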


    Architecture

    Swing is a platform-independent, Model-View-Controller GUI framework for Java, which follows a single-threaded programming model. Additionally, this framework provides a layer of abstraction between the code structure and graphic presentation of a Swing-based GUI.

    Foundations

    Swing is platform-independent because it is completely written in Java. Complete documentation for all Swing classes can be found in the Java API Guide.


    Extensible

    Swing is a highly modular-based architecture, which allows for the "plugging" of various custom implementations of specified framework interfaces: Users can provide their own custom implementation(s) of these components to override the default implementations using Java's inheritance mechanism.


    Swing is a component-based framework, whose components are all ultimately derived from the javax.swing.JComponent class. Swing objects asynchronously fire events, have bound properties, and respond to a documented set of methods specific to the component. Swing components are Java Beans components, compliant with the Java Beans Component Architecture specifications.


    Customizable

    Given the programmatic rendering model of the Swing framework, fine control over the details of rendering of a component is possible. As a general pattern, the visual representation of a Swing component is a composition of a standard set of elements, such as a border, inset, decorations, and other properties. Typically, users will programmatically customize a standard Swing component (such as a JTable) by assigning specific borders, colors, backgrounds, opacities, etc. The core component will then use these properties to render itself. However, it is also completely possible to create unique GUI controls with highly customized visual representation.
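
    As a hedged illustration of that customization pattern, the following sketch assigns a border, grid colour and selection background to a standard JTable; the table data and column names are placeholders.

    import javax.swing.BorderFactory;
    import javax.swing.JScrollPane;
    import javax.swing.JTable;
    import java.awt.Color;

    // Sketch of the customization pattern described above: standard properties
    // (border, colours, opacity) are assigned and the component renders itself
    // with them. The data and column names are placeholders.
    public class CustomizedTable {
        static JScrollPane build() {
            Object[][] rows = { {"Ada", 36}, {"Grace", 45} };
            Object[] columns = { "Name", "Age" };
            JTable table = new JTable(rows, columns);
            table.setGridColor(Color.LIGHT_GRAY);
            table.setSelectionBackground(new Color(0xDDEEFF));
            table.setOpaque(true);
            table.setBorder(BorderFactory.createLineBorder(Color.DARK_GRAY));
            return new JScrollPane(table);   // the scroll pane supplies the header
        }
    }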

    Configurable

    Swing's heavy reliance on runtime mechanisms and indirect composition patterns allows it to respond at run time to fundamental changes in its settings. For example, a Swing-based application is capable of hot swapping its user-interface during runtime. Furthermore, users can provide their own look and feel implementation, which allows for uniform changes in the look and feel of existing Swing applications without any programmatic change to the application code.
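
    A small hedged sketch of such a run-time change: the standard UIManager and SwingUtilities calls below switch an open window to another installed look and feel. The Nimbus class name is an assumption about the running JDK; production code would query UIManager.getInstalledLookAndFeels() instead.

    import javax.swing.JFrame;
    import javax.swing.SwingUtilities;
    import javax.swing.UIManager;

    // Sketch of the "hot swap" described above: switch the installed look and
    // feel at run time and ask Swing to repaint an existing window with it.
    // The Nimbus class name is an assumption about the JDK in use.
    final class LookAndFeelSwitcher {
        static void switchToNimbus(JFrame openFrame) {
            try {
                UIManager.setLookAndFeel("javax.swing.plaf.nimbus.NimbusLookAndFeel");
                SwingUtilities.updateComponentTreeUI(openFrame);  // re-skin the live UI
                openFrame.pack();
            } catch (Exception e) {               // fall back to the current look and feel
                e.printStackTrace();
            }
        }
    }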

    Lightweight UI

    Swing's high level of flexibility is reflected in its inherent ability to override the native host operating system (OS)'s GUI controls for displaying itself. Swing "paints" its controls using the Java 2D APIs, rather than calling a native user interface toolkit. Thus, a Swing component does not have a corresponding native OS GUI component, and is free to render itself in any way that is possible with the underlying graphics APIs.

    However, at its core, every Swing component relies on an AWT container, since (Swing's) JComponent extends (AWT's) Container. This allows Swing to plug into the host OS's GUI management framework, including the crucial device/screen mappings and user interactions, such as key presses or mouse movements. Swing simply "transposes" its own (OS-agnostic) semantics over the underlying (OS-specific) components. So, for example, every Swing component paints its rendition on the graphic device in response to a call to component.paint(), which is defined in (AWT) Container. But unlike AWT components, which delegated the painting to their OS-native "heavyweight" widget, Swing components are responsible for their own rendering.

    This transposition and decoupling is not merely visual, and extends to Swing's management and application of its own OS-independent semantics for events fired within its component containment hierarchies. Generally speaking, the Swing architecture delegates the task of mapping the various flavors of OS GUI semantics onto a simple, but generalized, pattern to the AWT container. Building on that generalized platform, it establishes its own rich and complex GUI semantics in the form of the JComponent model.


    Loosely coupled and MVC

    The Swing library makes heavy use of the Model/View/Controller software design pattern, which conceptually decouples the data being viewed from the user interface controls through which it is viewed. Because of this, most Swing components have associated models (which are specified in terms of Java interfaces), and the programmers can use various default implementations or provide their own. The framework provides default implementations of model interfaces for all of its concrete components. The typical use of the Swing framework does not require the creation of custom models, as the framework provides a set of default implementations that are transparently, by default, associated with the corresponding JComponent child class in the Swing library. In general, only complex components, such as tables, trees and sometimes lists, may require the custom model implementations around the application-specific data structures. To get a good sense of the potential that the Swing architecture makes possible, consider the hypothetical situation where custom models for tables and lists are wrappers over DAO and/or EJB services.

    Typically, Swing component model objects are responsible for providing a concise interface defining events fired, and accessible properties for the (conceptual) data model for use by the associated JComponent. Given that the overall MVC pattern is a loosely coupled collaborative object relationship pattern, the model provides the programmatic means for attaching event listeners to the data model object. Typically, these events are model centric (ex: a "row inserted" event in a table model) and are mapped by the JComponent specialization into a meaningful event for the GUI component.

    For example, the JTable has a model called TableModel that describes an interface for how a table would access tabular data. A default implementation of this operates on a two-dimensional array.
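
    As a hedged sketch of that model/view split (the Person record and its fields are invented), a custom TableModel can expose application data to a JTable without copying it into a two-dimensional array:

    import javax.swing.JTable;
    import javax.swing.table.AbstractTableModel;
    import java.util.List;

    // Sketch of the model/view split described above: the JComponent (JTable) is
    // the view, while the application supplies the TableModel behind it.
    public class PersonTableModel extends AbstractTableModel {
        record Person(String name, int age) { }

        private final List<Person> people =
                List.of(new Person("Ada", 36), new Person("Grace", 45));

        @Override public int getRowCount()    { return people.size(); }
        @Override public int getColumnCount() { return 2; }
        @Override public String getColumnName(int col) {
            return col == 0 ? "Name" : "Age";
        }
        @Override public Object getValueAt(int row, int col) {
            Person p = people.get(row);
            return col == 0 ? p.name() : p.age();
        }

        // The view simply wraps the model; "row inserted" style events would be
        // fired via fireTableRowsInserted(...) if the list were mutable.
        static JTable asTable() { return new JTable(new PersonTableModel()); }
    }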

    The view component of a Swing JComponent is the object used to graphically represent the conceptual GUI control. A distinction of Swing, as a GUI framework, is in its reliance on programmatically rendered GUI controls (as opposed to the use of the native host OS's GUI controls). Prior to Java 6 Update 10, this distinction was a source of complications when mixing AWT controls, which use native controls, with Swing controls in a GUI (see Mixing AWT and Swing components).

    Finally, in terms of visual composition and management, Swing favors relative layouts (which specify the positional relationships between components) as opposed to absolute layouts (which specify the exact location and size of components). This bias towards "fluid" visual ordering is due to its origins in the applet operating environment that framed the design and development of the original Java GUI toolkit. (Conceptually, this view of the layout management is quite similar to that which informs the rendering of HTML content in browsers, and addresses the same set of concerns that motivated the former.)


    Relationship to AWT

    AWT and Swing class hierarchy

    Since early versions of Java, a portion of the Abstract Window Toolkit (AWT) has provided platform-independent APIs for user interface components. In AWT, each component is rendered and controlled by a native peer component specific to the underlying windowing system.

    By contrast, Swing components are often described as lightweight because they do not require allocation of native resources in the operating system's windowing toolkit. The AWT components are referred to as heavyweight components.

    Much of the Swing API is generally a complementary extension of the AWT rather than a direct replacement. In fact, every Swing lightweight interface ultimately exists within an AWT heavyweight component because all of the top-level components in Swing (JApplet, JDialog, JFrame, and JWindow) extend an AWT top-level container. Prior to Java 6 Update 10, the use of both lightweight and heavyweight components within the same window was generally discouraged due to Z-order incompatibilities. However, later versions of Java have fixed these issues, and both Swing and AWT components can now be used in one GUI without Z-order issues.

    The core rendering functionality used by Swing to draw its lightweight components is provided by Java 2D, another part of JFC.



    Relationship to SWT

    The Standard Widget Toolkit (SWT) is a competing toolkit originally developed by IBM and now maintained by the Eclipse community. SWT's implementation has more in common with the heavyweight components of AWT. This confers benefits such as more accurate fidelity with the underlying native windowing toolkit, at the cost of an increased exposure to the native platform in the programming model.

    There has been significant debate and speculation about the performance of SWT versus Swing; some hinted that SWT's heavy dependence on JNI would make it slower when the GUI component and Java need to communicate data, but faster at rendering when the data model has been loaded into the GUI, but this has not been confirmed either way. A fairly thorough set of benchmarks in 2005 concluded that neither Swing nor SWT clearly outperformed the other in the general case.

    SwingLabs

    SwingLabs is a Sun open source project proposing extensions to the Java Swing GUI toolkit.

    Available components include :

    • Sorting, filtering, highlighting for tables, trees, and lists
    • Find/search
    • Auto-completion
    • Login/authentication framework
    • TreeTable component
    • Collapsible panel component
    • Date picker component
    • Tip of the day component


    The aim of the project is to experiment with new or enhanced GUI functionality required by rich client applications. It acts as a testbed for ideas related to client-side technologies.


    Integration into Java API

    Successful project components are eventually incorporated into the core Swing toolkit for future Java versions, although API compatibility is not guaranteed. Examples of these are:

    • the GroupLayout manager in Java SE 6.
    • incorporation of the SystemTray in Java SE 6.
    • the new Desktop class in Java SE 6, which makes it easy to launch applications registered on the native desktop, for example: launching the user's default browser, launching the user's default mail client, or launching a registered application to open, edit or print a specified file.


    Sub-projects

    The swingLabs project is divided into several sub-projects. For example :

    • swingX: Provides extensions to the Java Swing GUI toolkit.
    • JDIC (JDesktop Integration Components): Aims to provide Java applications with seamless desktop integration without sacrificing platform independence.
    • nimbus: A Look and feel using synth.
    • swingLayout: Was the home of the GroupLayout manager before its inclusion in Java SE 6.
    • JDNC: Contained components to simplify the development of Swing-based rich client Java applications. This project has been replaced by the Swing Application Framework (JSR 296).
    • scenegraph: A library providing 2D scene graph functionality to Java 2D, including Swing widgets. This library is used internally by the JavaFX Script language.
    • PDFRenderer: A PDF viewing library written in pure Java.


    JFace

    JFace is defined by the Eclipse project as "a UI toolkit that provides helper classes for developing UI features that can be tedious to implement." SWT is an open source widget toolkit for Java designed to provide efficient, portable access to the user-interface facilities of the operating systems on which it is implemented.

    Structure

    It is a layer that sits on top of the raw widget system, and provides classes for handling common UI programming tasks. It brings model view controller programming to the Standard Widget Toolkit.

    • Provides Viewer classes that handle the tedious tasks of populating, sorting, filtering, and updating widgets
    • Provides Actions to allow users to define their own behavior and to assign that behavior to specific components, e.g. menu items, tool items, push buttons, etc.
    • Provides registries that hold Images and Fonts
    • Defines standard dialogs and wizards, and defines a framework for building complex interactions with the user

    Its primary goal is to free developers to focus on implementing their specific application, without having to be concerned with the underlying widget system or with solving problems that are common to almost all UI applications.

    A primary concern of the Eclipse group when developing JFace was that under no circumstances did they want to hide the SWT component implementation from the programmer. JFace is completely dependent on SWT, but SWT is not dependent on JFace. Furthermore, the Eclipse Workbench is built on both JFace and SWT; in some instances, it bypasses JFace and accesses SWT directly.
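
    A minimal hedged sketch of this layering, assuming the standard org.eclipse.jface.viewers classes: a JFace ListViewer populates and labels an SWT list widget from a plain Java collection, while the underlying SWT widget stays directly accessible.

    import org.eclipse.jface.viewers.ArrayContentProvider;
    import org.eclipse.jface.viewers.LabelProvider;
    import org.eclipse.jface.viewers.ListViewer;
    import org.eclipse.swt.SWT;
    import org.eclipse.swt.widgets.Display;
    import org.eclipse.swt.widgets.Shell;

    import java.util.List;

    // Hedged sketch of the viewer idea: JFace's ListViewer populates and updates
    // the underlying SWT List widget from a plain Java collection, while the raw
    // SWT widget remains reachable via viewer.getList().
    public class JFaceViewerExample {
        public static void main(String[] args) {
            Display display = new Display();
            Shell shell = new Shell(display);
            shell.setText("JFace ListViewer");

            ListViewer viewer = new ListViewer(shell, SWT.BORDER | SWT.V_SCROLL);
            viewer.setContentProvider(ArrayContentProvider.getInstance());
            viewer.setLabelProvider(new LabelProvider());       // toString() labels
            viewer.setInput(List.of("alpha", "beta", "gamma"));  // the model

            viewer.getList().setBounds(10, 10, 200, 150);        // raw SWT widget
            shell.setSize(240, 220);
            shell.open();
            while (!shell.isDisposed()) {
                if (!display.readAndDispatch()) display.sleep();
            }
            display.dispose();
        }
    }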


    Standard Widget Toolkit

    The Standard Widget Toolkit (SWT) is a graphical widget toolkit for use with the Java platform. It was originally developed by Stephen Northover at IBM and is now maintained by the Eclipse Foundation in tandem with the Eclipse IDE. It is an alternative to the Abstract Window Toolkit (AWT) and Swing Java GUI toolkits provided by Sun Microsystems as part of the Java Platform, Standard Edition.

    To display GUI elements, the SWT implementation accesses the native GUI libraries of the operating system using JNI (Java Native Interface) in a manner that is similar to those programs written using operating system-specific APIs. Programs that call SWT are portable, but the implementation of the toolkit, despite part of it being written in Java, is unique for each platform.

    The toolkit is licensed under the Eclipse Public License, an open source license approved by the Open Source Initiative.


    History

    AWT (the Abstract Window Toolkit) was the first Java GUI toolkit, introduced with JDK 1.0 as one component of the Sun Microsystems Java platform. The original AWT was a simple Java wrapper around native (operating system-supplied) widgets such as menus, windows and buttons.

    Swing was the next generation GUI toolkit introduced by Sun in J2SE 1.2. Swing was developed in order to provide a richer set of GUI components than AWT. Swing GUI elements are 100% Java with no native code: instead of wrapping native GUI components, Swing draws its own components by using Java2D to call low level operating system drawing routines.

    The roots of SWT go back to work that Object Technology International, or OTI, did in the 1990s when creating multiplatform, portable, native widget interfaces for Smalltalk (originally for OTI Smalltalk, which became IBM Smalltalk in 1993). IBM Smalltalk's Common Widget layer provided fast, native access to multiple platform widget sets while still providing a common API without suffering the "lowest common denominator" problem typical of other portable graphical user interface (GUI) toolkits. IBM was developing VisualAge, an integrated development environment (IDE) written in Smalltalk. They decided to open-source the project, which led to the development of Eclipse, intended to compete against other IDEs such as Microsoft Visual Studio. Eclipse is written in Java, and IBM developers, deciding that they needed a toolkit that had "native look and feel" and "native performance", created SWT as a Swing replacement.


    Design

    SWT is a wrapper around native code objects, such as GTK+ objects, Motif objects etc. Because of this, SWT widgets are often referred to as "heavyweight", evoking images of a light Java wrapper around a "heavy" native object. In cases where native platform GUI libraries do not support the functionality required for SWT, SWT implements its own GUI code in Java, similar to Swing. In essence, SWT is a compromise between the low level performance and look and feel of AWT and the high level ease of use of Swing.
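
    A minimal hedged sketch of this wrapping (window title and sizes are arbitrary): each SWT widget below is backed by a native control, and the application drives the platform's event loop explicitly.

    import org.eclipse.swt.SWT;
    import org.eclipse.swt.widgets.Button;
    import org.eclipse.swt.widgets.Display;
    import org.eclipse.swt.widgets.Shell;

    // Minimal sketch: the Button below is backed by a native widget on each
    // platform (GTK+, Win32, Cocoa, ...); the Java object is a thin wrapper.
    public class HelloSWT {
        public static void main(String[] args) {
            Display display = new Display();              // connection to the OS GUI
            Shell shell = new Shell(display);
            shell.setText("Hello SWT");

            Button button = new Button(shell, SWT.PUSH);
            button.setText("Press me");
            button.setBounds(20, 20, 120, 30);
            button.addListener(SWT.Selection, e -> System.out.println("pressed"));

            shell.setSize(200, 120);
            shell.open();
            while (!shell.isDisposed()) {                 // the SWT event loop
                if (!display.readAndDispatch()) display.sleep();
            }
            display.dispose();                            // free native resources
        }
    }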

    According to the Eclipse Foundation, "SWT and Swing are different tools that were built with different goals in mind. The purpose of SWT is to provide a common API for accessing native widgets across a spectrum of platforms. The primary design goals are high performance, native look and feel, and deep platform integration. Swing, on the other hand, is designed to allow for a highly customizable look and feel that is common across all platforms."

    It has been argued that SWT features a clean design, in part inspired by Erich Gamma of Design Patterns fame.

    SWT is a simpler toolkit than Swing, with less (possibly extraneous) functionality for the average developer. This has led some people to argue that SWT lacks functionality when compared to Swing.

    James Gosling, the creator of the Java language, has argued that SWT is too simple, and that SWT is a difficult toolkit to port to new platforms for the same reason that AWT used to have porting problems: that it is too simple, too low level, and too tied to the Win32 GUI API, leading to problems adapting the SWT API to other GUI toolkits, such as Motif and OS X Carbon.

    Although SWT does not implement the popular Model-View-Controller architecture used in Swing and many other high level GUI toolkits, the JFace library, which is developed as part of the same Eclipse project, does provide a platform-independent, higher-level Model-View-Controller abstraction on top of SWT. Developers may choose to use JFace to provide more flexible and abstract data models for complex SWT controls such as trees, tables and lists, or access those controls directly as needed.


    Look and Feel

    SWT widgets have the same "look and feel" as native widgets because they often are the same native widgets. This is in contrast to the Swing toolkit where all widgets are emulations of native widgets. In some cases the difference is distinguishable. For example, the Mac OS X tree widget features a subtle animation when a tree is expanded, and default buttons actually have an animated pulsing glow to focus the user's attention on them. The default Swing versions of these widgets do not animate.

    Since SWT is simply a wrapper around native GUI code, it does not require large numbers of updates when that native code is changed, providing that operating system vendors are careful not to break clients of their API when the operating systems are updated. The same cannot be said of Swing: Swing supports the ability to change the look and feel of the running application with "pluggable look and feels" which enable emulating the native platform user interface using themes, which must be updated to mirror operating system GUI changes (such as theme or other look and feel updates).

    SWT aims for "deep platform integration", the Eclipse reference to SWT's use of native widgets. According to Mauro Marinilli of developer.com, "whenever one needs a tight integration with the native platform, SWT can be a plus". This deep integration can be useful in a number of ways, for example enabling SWT to wrap ActiveX objects on Microsoft Windows.


    Platform support

    Vuze, a BitTorrent client which uses SWT, running in a GTK+ environment

    SWT must be ported to every new GUI library that needs supporting. Unlike Swing and AWT, SWT is not available on every Java-supported platform since SWT is not part of the Java release. There is also some evidence that the performance of SWT on platforms other than Windows is noticeably less efficient. Since SWT uses a different native library for each platform, SWT developers may be exposed to platform-specific bugs.

    SWT exposes developers to more low-level details than Swing. This is because SWT is technically just a layer over the GUI functionality provided by the native library; exposing the programmer to native GUI code is part of the design intent of SWT: "Its goal is not to provide a rich user-interface design framework but rather the thinnest possible user-interface API that can be implemented uniformly on the largest possible set of platforms while still providing sufficient functionality to build rich graphical user interface (GUI) applications."

    Since the SWT implementation is different for each platform, a platform-specific SWT library (JAR file) must be distributed with each application.

    As of March 2012 SWT supports the following platforms and/or GUI libraries:

    • Windows XP, Windows Vista, Windows 7: Win32; WPF (under development)
    • AIX, FreeBSD, Linux, HP-UX, Solaris: GTK+
    • Mac OS X: Cocoa
    • Pocket PC



    Performance

    SWT was designed to be a "high performance" GUI toolkit; faster, more responsive and lighter on system resource usage than Swing.

    There has been some attempted benchmarking of SWT and Swing, which concluded that SWT should be more efficient than Swing, although the applications benchmarked in this case were not complex enough to draw solid conclusions for all possible SWT or Swing uses. A fairly thorough set of benchmarks concluded that neither Swing nor SWT clearly outperformed the other in the general case.


    Extensibility and comparison to other Java code

    Due to the use of native code, SWT does not allow easy inheritance from its widget classes, which some people consider can hurt extensibility. This can make customizing existing widgets more difficult to achieve with SWT than if one were using Swing. Both toolkits support writing new widgets using only Java code; however, in SWT extra work is needed to make the new widget work on every platform.

    SWT, unlike almost any other Java toolkit, requires manual object deallocation, as opposed to the standard Java practice of automatic garbage collection. SWT objects must be explicitly deallocated using the ".dispose()" method, which is analogous to the C language's "free". If this is not done, memory leaks or other unintended behavior may result. On this matter, some have commented that "explicitly de-allocating the resources could be a step back in development time (and costs) at least for the average Java developer." and that "this is a mixed blessing. It means more control (and more complexity) for the SWT developer instead of more automation (and slowness) when using Swing." The need for manual object deallocation when using SWT is largely due to SWT's use of native objects. As these objects are not tracked by the Java JVM, the JVM is unable to ascertain whether or not these native objects are in use, and thus unable to garbage collect them at an appropriate time.

    In practice, the only SWT objects which a developer must explicitly dispose are the subclasses of Resource, such as Image, Color, and Font objects.
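
    A short hedged sketch of that rule: the Color and Font allocated below wrap native handles, so the code that created them disposes of them explicitly once the shell has closed.

    import org.eclipse.swt.SWT;
    import org.eclipse.swt.graphics.Color;
    import org.eclipse.swt.graphics.Font;
    import org.eclipse.swt.widgets.Display;
    import org.eclipse.swt.widgets.Label;
    import org.eclipse.swt.widgets.Shell;

    // Sketch of the manual-deallocation rule: Color and Font wrap native handles
    // the JVM cannot garbage-collect, so the code that creates them disposes them.
    public class DisposeExample {
        public static void main(String[] args) {
            Display display = new Display();
            Shell shell = new Shell(display);

            Color red  = new Color(display, 200, 30, 30);          // allocates an OS colour
            Font  bold = new Font(display, "Arial", 12, SWT.BOLD);  // allocates an OS font

            Label label = new Label(shell, SWT.NONE);
            label.setText("disposed explicitly");
            label.setForeground(red);
            label.setFont(bold);
            label.pack();

            shell.pack();
            shell.open();
            while (!shell.isDisposed()) {
                if (!display.readAndDispatch()) display.sleep();
            }
            red.dispose();     // widgets created on the Shell go away with it,
            bold.dispose();    // but Resources we allocated must be freed by hand
            display.dispose();
        }
    }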


    Development


    There is some activity to enable combining Swing and SWT. There are two different approaches being attempted:

    SwingWT is a project which intends to provide Swing developers with an alternative Swing implementation: one which uses an SWT back end to display its widgets, thus providing the native look and feel and performance advantages of SWT along with the same programming model as Swing.

    SWTSwing is a project which intends to provide a Swing back end for SWT. In effect, SWT could be run using "Swing native objects" instead of, for example, GTK or Windows native objects. This would enable SWT to work on every platform that Swing supports.

    Starting in 2006 there was a SWT-3.2 port to the D programming language called DWT. Since then the project supports Windows 32-bit and also Linux GTK 32-bit for SWT-3.4. The DWT project also has an addon package that contains a port of JFace and Eclipse Forms.

    Socket Programming

    A network socket is an endpoint of an inter-process communication flow across a computer network. Today, most communication between computers is based on the Internet Protocol; therefore most network sockets are Internet sockets.

    A socket API is an application programming interface (API), usually provided by the operating system, that allows application programs to control and use network sockets. Internet socket APIs are usually based on the Berkeley sockets standard.

    A socket address is the combination of an IP address and a port number, much like one end of a telephone connection is the combination of a phone number and a particular extension. Based on this address, internet sockets deliver incoming data packets to the appropriate application process or thread.


    Overview

    An Internet socket is characterized by a unique combination of the following:

    Local socket address: the local IP address and port number.

    Remote socket address: Only for established TCP sockets. As discussed in the client-server section below, this is necessary since a TCP server may serve several clients concurrently. The server creates one socket for each client, and these sockets share the same local socket address.

    Protocol: A transport protocol (e.g., TCP, UDP, raw IP, or others). TCP port 53 and UDP port 53 are consequently different, distinct sockets.

    Within the operating system and the application that created a socket, a socket is referred to by a unique integer number called socket identifier or socket number. The operating system forwards the payload of incoming IP packets to the corresponding application by extracting the socket address information from the IP and transport protocol headers and stripping the headers from the application data.

    In IETF Request for Comments, Internet Standards, in many textbooks, as well as in this article, the term socket refers to an entity that is uniquely identified by the socket number. In other textbooks, the socket term refers to a local socket address, i.e. a "combination of an IP address and a port number". In the original definition of socket given in RFC 147, as it was related to the ARPA network in 1971, "the socket is specified as a 32 bit number with even sockets identifying receiving sockets and odd sockets identifying sending sockets." Today, however, socket communications are bidirectional.

    On Unix-like and Microsoft Windows based operating systems the netstat command line tool may be used to list all currently established sockets and related information.


    Socket types

    There are several Internet socket types available:

    Datagram sockets, also known as connectionless sockets, which use User Datagram Protocol (UDP)

    Stream sockets, also known as connection-oriented sockets, which use Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP).

    Raw sockets (or Raw IP sockets), typically available in routers and other network equipment. Here the transport layer is bypassed, and the packet headers are made accessible to the application.

    There are also non-Internet sockets, implemented over other transport protocols, such as Systems Network Architecture (SNA). See also Unix domain sockets (UDS), for internal inter-process communication.


    Socket states and the client-server model

    Computer processes that provide application services are called servers, and create sockets on startup that are in the listening state. These sockets wait for initiatives from client programs.

    A TCP server may serve several clients concurrently, by creating a child process for each client and establishing a TCP connection between the child process and the client. Unique dedicated sockets are created for each connection. These are in established state, when a socket-to-socket virtual connection or virtual circuit (VC), also known as a TCP session, is established with the remote socket, providing a duplex byte stream.

    A server may create several concurrently established TCP sockets with the same local port number and local IP address, each mapped to its own server-child process, serving its own client process. They are treated as different sockets by the operating system, since the remote socket address (the client IP address and/or port number) is different; i.e. since they have different socket pair tuples (see below).
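
    A hedged Java sketch of this server behaviour (the port number and echo protocol are arbitrary): one listening socket accepts connections, and each accepted client gets its own established socket, handled here in its own thread rather than a child process.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Sketch: one listening socket, and a new established socket (same local
    // address, different remote address) for every accepted client.
    public class EchoServer {
        public static void main(String[] args) throws Exception {
            try (ServerSocket listener = new ServerSocket(7777)) {   // listening state
                while (true) {
                    Socket client = listener.accept();               // established state
                    new Thread(() -> handle(client)).start();
                }
            }
        }

        private static void handle(Socket client) {
            try (client;
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) {
                    out.println("echo: " + line);                    // duplex byte stream
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }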

    For further details on TCP sockets, including other states of TCP sockets, see Transmission Control Protocol.

    A UDP socket cannot be in an established state, since UDP is connectionless. Therefore, netstat does not show the state of a UDP socket. A UDP server does not create new child processes for every concurrently served client, but the same process handles incoming data packets from all remote clients sequentially through the same socket. It implies that UDP sockets are not identified by the remote address, but only by the local address, although each message has an associated remote address.
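
    For contrast, a hedged sketch of the connectionless case (again with an arbitrary port): a single DatagramSocket serves every client through the same socket, and the remote address is taken from each incoming datagram.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;

    // Sketch of the connectionless case: a single UDP socket, never "established",
    // serves every client sequentially; the remote address arrives with each
    // datagram rather than being part of the socket's identity.
    public class UdpEchoServer {
        public static void main(String[] args) throws Exception {
            try (DatagramSocket socket = new DatagramSocket(7778)) { // arbitrary port
                byte[] buffer = new byte[1024];
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                    socket.receive(packet);                          // from any client
                    // Reply to whichever remote address this particular datagram had.
                    DatagramPacket reply = new DatagramPacket(
                            packet.getData(), packet.getLength(),
                            packet.getAddress(), packet.getPort());
                    socket.send(reply);
                }
            }
        }
    }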


    Socket pairs

    Communicating local and remote sockets are called socket pairs. Each socket pair is described by a unique 4-tuple consisting of source and destination IP addresses and port numbers, i.e. of local and remote socket addresses. As seen in the discussion above, in the TCP case, each unique socket pair 4-tuple is assigned a socket number, while in the UDP case, each unique local socket address is assigned a socket number.


    Implementations

    Sockets are usually implemented by an API library such as Berkeley sockets, first introduced in 1983. Most implementations are based on Berkeley sockets, for example Winsock introduced in 1991. Other socket API implementations exist, such as the STREAMS-based Transport Layer Interface (TLI).

    Development of application programs that utilize this API is called socket programming or network programming.

    Early Implementations
    • 1983 Berkeley sockets (also known as the BSD socket API) originated with the 4.2BSD Unix operating system (released in 1983) as an API. Only in 1989, however, could UC Berkeley release versions of its operating system and networking library free from the licensing constraints of AT&T's copyright-protected Unix.
    • 1987 Transport Layer Interface (TLI) was the networking API provided by AT&T UNIX System V Release 3 (SVR3) in 1987 and continued into Release 4 (SVR4).
    • Other early implementations were written for TOPS-20, MVS, VM, and IBM-DOS (PCIP).



    Sockets in network equipment

    The socket is primarily a concept used in the Transport Layer of the Internet model. Networking equipment such as routers and switches do not require implementations of the Transport Layer, as they operate on the Link Layer level (switches) or at the Internet Layer (routers). However, stateful network firewalls, network address translators, and proxy servers keep track of active socket pairs. Also in fair queuing, layer 3 switching and quality of service (QoS) support in routers, packet flows may be identified by extracting information about the socket pairs. Raw sockets are typically available in network equipment and are used for routing protocols such as IGRP and OSPF, and in Internet Control Message Protocol (ICMP).

    .NET Remoting

    .NET Remoting is a Microsoft application programming interface (API) for interprocess communication released in 2002 with the 1.0 version of .NET Framework. It is one in a series of Microsoft technologies that began in 1990 with the first version of Object Linking and Embedding (OLE) for 16-bit Windows. Intermediate steps in the development of these technologies were Component Object Model (COM), released in 1993 and updated in 1995 as COM-95, Distributed Component Object Model (DCOM), released in 1997 (and renamed ActiveX), and COM+ with its Microsoft Transaction Server (MTS), released in 2000. It is now superseded by Windows Communication Foundation (WCF), which is part of the .NET Framework 3.0.

    Like its family members and similar technologies such as Common Object Request Broker Architecture (CORBA) and Java's remote method invocation (RMI), .NET Remoting is complex, yet its essence is straightforward. With the assistance of operating system and network agents, a client process sends a message to a server process and receives a reply.


    Overview

    .NET Remoting allows an application to make an object (termed a remotable object) available across remoting boundaries, which include different appdomains, processes or even different computers connected by a network. The .NET Remoting runtime hosts the listener for requests to the object in the appdomain of the server application. At the client end, any requests to the remotable object are proxied by the .NET Remoting runtime over Channel objects, which encapsulate the actual transport mode, including TCP streams, HTTP streams and named pipes. As a result, by instantiating proper Channel objects, a .NET Remoting application can be made to support different communication protocols without recompiling the application. The runtime itself manages the act of serialization and marshalling of objects across the client and server appdomains.

    .NET Remoting makes a reference to a remotable object available to a client application, which then instantiates and uses it as if it were a local object. However, the actual code execution happens on the server side. A remotable object is identified by an Activation URL and is instantiated by a connection to that URL. A listener for the object is created by the remoting runtime when the server registers the channel that is used to connect to the remotable object. At the client side, the remoting infrastructure creates a proxy that stands in as a pseudo-instantiation of the remotable object. It does not implement the functionality of the remotable object, but presents a similar interface. As such, the remoting infrastructure needs to know the public interface of the remotable object beforehand.

    Any method calls made against the object, including the identity of the method and any parameters passed, are serialized to a byte stream and transferred over a communication-protocol-dependent Channel to a recipient proxy object at the server side ("marshalled"), by writing to the Channel's transport sink. At the server side, the proxy reads the stream off the sink and makes the call to the remotable object on behalf of the client. The results are serialized and transferred over the sink to the client, where the proxy reads the result and hands it over to the calling application.

    If the remotable object needs to make a callback to a client object for some services, the client application must mark it as remotable and have a remoting runtime host a listener for it. The server can connect to it over a different Channel, or over the existing one if the underlying connection supports bidirectional communication. A channel can be composed of a number of different Channel objects, possibly with heterogeneous transports. Thus, remoting can also work across systems separated by an interconnection of heterogeneous networks, including the internet. Type safety is enforced by the CTS and the .NET Remoting runtime. Remote method calls are inherently synchronous; asynchronous calls can be implemented using threading libraries. Authentication and access control can be implemented for clients either by using custom Channels or by hosting the remotable objects in IIS and then using the IIS authentication system.

    Dynamic Data Exchange

    Dynamic Data Exchange (DDE) is a technology for interprocess communication under Microsoft Windows or OS/2.


    Overview

    Dynamic Data Exchange was first introduced in 1987 with the release of Windows 2.0 as a method of interprocess communication so that one program can communicate with or control another program, somewhat like Sun's RPC (Remote Procedure Call). It used the "Windows Messaging Layer" functionality within Windows. Therefore, DDE continues to work even in modern versions of Windows. DDE has been superseded by newer technologies. Windows for Workgroups introduced a remoting version called NetDDE. OLE and OLE Automation were more advanced, but proved to be bulky and difficult to code; OLE was GUI intensive, but when stripped down it revealed COM. Its remoting version that works between networked machines is DCOM (Distributed COM). .NET Remoting provides a layered architecture for interprocess communication in the .NET Framework. However, legacy DDE is still used in several places inside Windows, e.g. for Shell file associations and for the copy, cut and paste functions.

    The primary function of DDE is to allow Windows applications to share data. For example, a cell in Microsoft Excel could be linked to a value in another application and when the value changed, it would be automatically updated in the Excel spreadsheet. The data communication was established by a simple, three-segment model. Each program was known to DDE by its "application" name. Each application could further organize information by groups known as "topic" and each topic could serve up individual pieces of data as an "item". For example, if a user wanted to pull a value from Microsoft Excel which was contained in a spreadsheet called "Book1.xls" in the cell in the first row and first column, the application would be "Excel", the topic "Book1.xls" and the item "r1c1".

    A common use of DDE is for custom-developed applications to control off-the-shelf software. For example, a custom in-house application might use DDE to open a Microsoft Excel spreadsheet and fill it with data, by opening a DDE conversation with Excel and sending it DDE commands. Today, however, one could also use the Excel object model with OLE Automation (part of COM). The technique is, however, still in use, particularly for distribution of financial data. DDE has also been widely used in the SAS programming language for manipulating Excel and transferring data between SAS and Excel and can be used to format Excel workbooks from within a SAS program.

    While newer technologies like COM offer features DDE doesn't have, there are also issues with regard to configuration that can make COM more difficult to use than DDE. Also, DDE is a generic protocol that allows any application to monitor changing data provided by any other application, while to achieve similar results in COM one would generally need to know details of the application that is either to produce or consume the data. For example, a single DDE financial data distribution application can provide live prices to either Excel or a financial charting application without needing to know which it is doing, while to achieve the same results with COM would usually require the distribution application's authors to write custom code for each use scenario.


    NetDDE

    California-based company Wonderware developed an extension for DDE called NetDDE that could be used to initiate and maintain the network connections needed for DDE conversations between DDE-aware applications running on different computers in a network and transparently exchange data. A DDE conversation is an interaction between client and server applications. NetDDE could be used along with DDE and the DDE management library (DDEML) in applications.

    Microsoft licensed a basic (NetBIOS Frames protocol only) version of the product for inclusion in various versions of Windows from Windows for Workgroups to Windows XP. In addition, Wonderware also sold an enhanced version of NetDDE to their own customers that included support for TCP/IP. The technology is extensively used in the SCADA field. Basic Windows applications using NetDDE are Clipbook Viewer, WinChat and Microsoft Hearts.

    NetDDE was still included with Windows Server 2003 and Windows XP Service Pack 2, although it was disabled by default. It has been removed entirely in Windows Vista. However, this will not prevent existing versions of NetDDE from being installed and functioning on later versions of Windows.

    Java Remote Method Invocation (RMI)

    A typical implementation model of Java-RMI using stub and skeleton objects. Java 2 SDK, Standard Edition, v1.2 removed the need for a skeleton.

    The Java Remote Method Invocation Application Programming Interface (API), or Java RMI, is a Java API that performs the object-oriented equivalent of remote procedure calls (RPC), with support for direct transfer of serialized Java objects and distributed garbage collection.

    The original implementation depends on Java Virtual Machine (JVM) class representation mechanisms and it thus only supports making calls from one JVM to another. The protocol underlying this Java-only implementation is known as Java Remote Method Protocol (JRMP).

    In order to support code running in a non-JVM context, a CORBA version was later developed.

    Usage of the term RMI may denote solely the programming interface or may signify both the API and JRMP, whereas the term RMI-IIOP (read: RMI over IIOP) denotes the RMI interface delegating most of the functionality to the supporting CORBA implementation.


    Generalized Code

    The programmers of the original RMI API generalized the code somewhat to support different implementations, such as an HTTP transport. Additionally, the ability to pass arguments "by value" was added to CORBA in order to support the RMI interface. Still, the RMI-IIOP and JRMP implementations do not have fully identical interfaces.

    RMI functionality comes in the package java.rmi, while most of Sun's implementation is located in the sun.rmi package. Note that with Java versions before Java 5.0 developers had to compile RMI stubs in a separate compilation step using rmic. Version 5.0 of Java and beyond no longer require this step.
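
    A compact hedged sketch of the java.rmi API described above (the interface name, registry name and port are illustrative): a remote interface, a server that exports an implementation and binds it in an RMI registry, and a client that looks the stub up and calls it.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Hedged sketch of java.rmi: names, port and the greet method are invented.
    public class RmiSketch {

        public interface Greeter extends Remote {
            String greet(String who) throws RemoteException;
        }

        public static class GreeterImpl implements Greeter {
            @Override public String greet(String who) { return "Hello, " + who; }
        }

        // --- Server side ------------------------------------------------------
        public static void startServer() throws Exception {
            Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new GreeterImpl(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("greeter", stub);                      // publish the stub
        }

        // --- Client side ------------------------------------------------------
        public static void callServer() throws Exception {
            Registry registry = LocateRegistry.getRegistry("localhost", 1099);
            Greeter greeter = (Greeter) registry.lookup("greeter");
            System.out.println(greeter.greet("world"));            // remote call
        }

        public static void main(String[] args) throws Exception {
            startServer();
            callServer();
        }
    }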


    Jini Version

    Jini offers a more advanced version of RMI in Java. It functions similarly but provides more advanced searching capabilities and mechanisms for distributed object applications.

    Jini

    Jini (pronounced like genie, i.e. /ˈdʒiːniː/), also called Apache River, is a network architecture for the construction of distributed systems in the form of modular co-operating services.

    Originally developed by Sun, Jini was released under an open source license (Apache license). Responsibility for Jini has been transferred to Apache under the project name "River".


    History

    Sun introduced Jini in July 1998. In November 1998, Sun announced that a number of firms were supporting Jini. The Jini team at Sun Microsystems has always stated that Jini is not an acronym. Some have joked that it meant "Jini Is Not Initials", but it has always been just Jini. The word 'jini' means "the devil" in Swahili; this is a loan from an Arabic word for a mythological spirit, which is also the origin of the English word 'genie'.


    Using a service

    The first step in creating a Jini service is for the service to find the lookup service (LUS) - a process called discovery. Once the LUS is found, it returns a Service Registrar object to the service, which is used to register the service in the lookup (the join process). This involves providing information about the service to be provided, such as the ID of the service, the object which actually implements it and other attributes of the service.

    When a client wishes to make use of a service, it too uses discovery to find the LUS - either by unicast interaction, when it knows the actual location of the LUS, or by dynamic multicast discovery. After contacting the LUS, the client is returned a Service Registrar object, which it uses to look up a particular service. It does this by consulting the lookup catalog on the LUS and searching based on the type, name or description of a service. The LUS will return a Java proxy, specifying how to connect directly to the service. This is one of the ways in which Jini is more powerful than Java remote method invocation, which requires the service to know the location of the remote service in advance.
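
    The following is a hedged client-side sketch of that flow, assuming the standard net.jini.* classes from the Apache River distribution; the PrintService interface is invented, and security configuration and lease handling are omitted.

    import net.jini.core.lookup.ServiceRegistrar;
    import net.jini.core.lookup.ServiceTemplate;
    import net.jini.discovery.DiscoveryEvent;
    import net.jini.discovery.DiscoveryListener;
    import net.jini.discovery.LookupDiscovery;

    // Hedged sketch of the client-side flow described above: multicast discovery
    // of the lookup service, then lookup of a service proxy by interface type.
    public class JiniClientSketch {

        public interface PrintService { /* the service's interface (invented) */ }

        public static void main(String[] args) throws Exception {
            LookupDiscovery discovery =
                    new LookupDiscovery(LookupDiscovery.ALL_GROUPS);   // multicast discovery
            discovery.addDiscoveryListener(new DiscoveryListener() {
                @Override public void discovered(DiscoveryEvent event) {
                    for (ServiceRegistrar registrar : event.getRegistrars()) {
                        try {
                            // Ask the LUS for any service implementing PrintService.
                            ServiceTemplate template = new ServiceTemplate(
                                    null, new Class[] { PrintService.class }, null);
                            Object proxy = registrar.lookup(template);
                            if (proxy != null) {
                                System.out.println("got service proxy: " + proxy);
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                }
                @Override public void discarded(DiscoveryEvent event) { }
            });
            Thread.sleep(10_000);   // wait for multicast responses, then exit
            discovery.terminate();
        }
    }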


    Limitations

    Jini uses a lookup service to broker communication between the client and service. This appears to be a centralized model (though the communication between client and service can be seen as decentralized) that does not scale well to very large systems. However, the lookup service can be horizontally scaled by running multiple instances that listen to the same multicast group.

    Apache Axis :
    Apache Axis

    Apache Axis (Apache eXtensible Interaction System) is an open source, XML-based Web service framework. It consists of a Java and a C++ implementation of the SOAP server, and various utilities and APIs for generating and deploying Web service applications. Using Apache Axis, developers can create interoperable, distributed computing applications. Axis is developed under the auspices of the Apache Software Foundation.


    Axis for Java

    When using the Java version of Axis there are two ways to expose Java code as a Web service. The easiest one is to use Axis native JWS (Java Web Service) files. Another way is to use custom deployment, which enables you to customize the resources that should be exposed as a Web service.
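
    A minimal sketch of the JWS approach: the file below (the name Calculator.jws is illustrative) is simply dropped into the Axis web application, e.g. webapps/axis/Calculator.jws; Axis compiles it on first access and exposes its public methods as SOAP operations.

        // Calculator.jws - no deployment descriptor is needed for this style.
        public class Calculator {

            public int add(int a, int b) {
                return a + b;
            }

            public int subtract(int a, int b) {
                return a - b;
            }
        }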


    Apache Axis2

    Apache Axis2 is a core engine for Web services. It is a complete re-design and re-write of the widely used Apache Axis SOAP stack. Implementations of Axis2 are available in Java and C.

    Axis2 not only provides the capability to add Web services interfaces to Web applications, but can also function as a standalone server application.


    Why Apache Axis2 ?

    A new architecture for Axis2 was introduced during the August 2004 Axis2 Summit in Colombo, Sri Lanka. The new architecture on which Axis2 is based is more flexible, efficient and configurable in comparison to the Axis 1.x architecture. Some well-established concepts from Axis 1.x, such as handlers, have been preserved in the new architecture.

    Apache Axis2 not only supports SOAP 1.1 and SOAP 1.2, but it also has integrated support for the widely popular REST style of Web services. The same business-logic implementation can offer both a WS-* style interface as well as a REST/POX style interface simultaneously.

    • Axis2/Java has support for Spring Framework.
    • Axis2/C appears to have been abandoned in 2009.
    • Axis2 comes with many new features, enhancements and industry specification implementations. Key features offered include:


    Axis2 Features
    • Apache Axis2 includes support for the following standards:
    • WS-ReliableMessaging - Via Apache Sandesha2
    • WS-Coordination - Via Apache Kandula2
    • WS-SecurityPolicy - Via Apache Rampart
    • WS-Security - Via Apache Rampart
    • WS-Trust - Via Apache Rampart
    • WS-SecureConversation - Via Apache Rampart
    • SAML 1.1 - Via Apache Rampart
    • SAML 2.0 - Via Apache Rampart
    • WS-Addressing - Module included as part of Axis2 core


    Further, Axis2 offers the following features and characteristics.

    Speed - Axis2 uses its own object model and StAX (Streaming API for XML) parsing to achieve significantly greater speed than earlier versions of Apache Axis.

    Low memory footprint - Axis2 was designed from the ground up with a low memory footprint in mind.

    AXIOM - Axis2 comes with its own light-weight object model, AXIOM, for message processing which is extensible, optimized for performance, and simplified for developers.

    Hot Deployment - Axis2 is equipped with the capability of deploying Web services and handlers while the system is up and running. In other words, new services can be added to the system without having to shut down the server. Simply drop the required Web service archive into the services directory in the repository, and the deployment model will automatically deploy the service and make it available for use.

    Asynchronous Web services - Axis2 now supports asynchronous Web services and asynchronous Web services invocation using non-blocking clients and transports.

    MEP Support - Axis2 supports Message Exchange Patterns (MEPs), with built-in support for the basic MEPs defined in WSDL 2.0.

    Flexibility - The Axis2 architecture gives the developer complete freedom to insert extensions into the engine for custom header processing, system management, and anything else you can imagine.

    Stability - Axis2 defines a set of published interfaces which change relatively slowly compared to the rest of Axis.

    Component-oriented Deployment - You can easily define reusable networks of Handlers to implement common patterns of processing for your applications, or to distribute to partners.

    Transport Framework - We have a clean and simple abstraction for integrating and using Transports (i.e., senders and listeners for SOAP over various protocols such as SMTP, FTP, message-oriented middleware, etc.), and the core of the engine is completely transport-independent.

    WSDL support - Axis2 supports the Web Services Description Language, versions 1.1 and 2.0, which allows you to easily build stubs to access remote services, and also to automatically export machine-readable descriptions of your deployed services from Axis2.

    Add-ons - Several Web services specifications have been incorporated, including WSS4J for security (Apache Rampart), Sandesha for reliable messaging, and Kandula, which is an encapsulation of WS-Coordination, WS-AtomicTransaction and WS-BusinessActivity.

    Composition and Extensibility - Modules and phases improve support for composability and extensibility. Modules support composability and can also support new WS-* specifications in a simple and clean manner. They are however not hot deployable as they change the overall behavior of the system.

    Enterprise JavaBeans (EJB) :
    Enterprise JavaBeans

    Enterprise JavaBeans (EJB) is a managed, server-side component architecture for modular construction of enterprise applications.

    The EJB specification is one of several Java APIs in the Java EE specification. EJB is a server-side model that encapsulates the business logic of an application. The EJB specification was originally developed in 1997 by IBM and later adopted by Sun Microsystems (EJB 1.0 and 1.1) in 1999 and enhanced under the Java Community Process as JSR 19 (EJB 2.0), JSR 153 (EJB 2.1), JSR 220 (EJB 3.0), JSR 318 (EJB 3.1) and JSR 345 (EJB 3.2).

    The EJB specification intends to provide a standard way to implement the back-end 'business' code typically found in enterprise applications (as opposed to 'front-end' interface code). Such code addresses the same types of problems, and solutions to these problems are often repeatedly re-implemented by programmers. Enterprise JavaBeans are intended to handle such common concerns as persistence, transactional integrity, and security in a standard way, leaving programmers free to concentrate on the particular problem at hand.


    General Responsibilities

    The EJB specification details how an application server provides the following responsibilities:

    • Transaction processing
    • Integration with the persistence services offered by the Java Persistence API (JPA)
    • Concurrency control
    • Event-driven programming using Java Message Service and Java EE Connector Architecture
    • Asynchronous method invocation
    • Job scheduling
    • Naming and directory services (JNDI)
    • Interprocess Communication using RMI-IIOP and Web services
    • Security (JCE and JAAS)
    • Deployment of software components in an application server


    Additionally, the Enterprise JavaBean specification defines the roles played by the EJB container and the EJBs as well as how to deploy the EJBs in a container. Note that the current EJB 3.2 specification does not detail how an application server provides persistence (a task delegated to the JPA specification), but instead details how business logic can easily integrate with the persistence services offered by the application server.


    Reinventing EJBs

    Gradually an industry consensus emerged that the original EJB specification's primary virtue - enabling transactional integrity over distributed applications - was of limited use to most enterprise applications, and the functionality delivered by simpler frameworks like Spring and Hibernate was more useful.

    Accordingly, the EJB 3.0 specification (JSR 220) was a radical departure from its predecessors, following this new paradigm. It shows a clear influence from Spring in its use of plain Java objects, and its support for dependency injection to simplify configuration and integration of heterogeneous systems. Gavin King, the creator of Hibernate, participated in the EJB 3.0 process and is an outspoken advocate of the technology. Many features originally in Hibernate were incorporated in the Java Persistence API, the replacement for entity beans in EJB 3.0. The EJB 3.0 specification relies heavily on the use of annotations (a feature added to the Java language with its 5.0 release) and convention over configuration to enable a much less verbose coding style.

    Accordingly, in practical terms EJB 3.0 is much more lightweight and nearly a completely new API, bearing little resemblance to the previous EJB specifications.


    Types of Enterprise Beans

    An EJB container holds two major types of beans:

    Session Beans that can be either "Stateful", "Stateless" or "Singleton" and can be accessed via either a Local (same JVM) or Remote (different JVM) interface or directly without an interface, in which case local semantics apply. All session beans support asynchronous execution for all views (local/remote/no-interface).

    Message Driven Beans (MDBs, also known as Message Beans). MDBs also support asynchronous execution, but via a messaging paradigm.


    Session beans

    Stateful Session Beans are business objects having state: that is, they keep track of which calling client they are dealing with throughout a session and thus access to the bean instance is strictly limited to only one client at a time. If concurrent access to a single bean is attempted anyway the container serializes those requests, but via the @AccessTimeout annotation the container can instead throw an exception. Stateful session beans' state may be persisted (passivated) automatically by the container to free up memory after the client hasn't accessed the bean for some time. The JPA extended persistence context is explicitly supported by Stateful Session Beans.


    Stateless Session Beans

    Stateless Session Beans are business objects that do not have state associated with them. However, access to a single bean instance is still limited to only one client at a time; concurrent access to the bean is prohibited. If concurrent access to a single bean is attempted, the container simply routes each request to a different instance. This makes a stateless session bean automatically thread-safe. Instance variables can be used during a single method call from a client to the bean, but the contents of those instance variables are not guaranteed to be preserved across different client method calls. Instances of Stateless Session beans are typically pooled. If a second client accesses a specific bean right after a method call on it made by a first client has finished, it might get the same instance. The lack of overhead to maintain a conversation with the calling client makes them less resource-intensive than stateful beans.
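
    As a minimal sketch (assuming the javax.ejb annotations of EJB 3.x and an illustrative OrderService class), a stateless session bean is just an annotated POJO that the container pools and proxies:

        import javax.ejb.Stateless;

        // The container may route each call to any pooled instance, so no
        // conversational state is kept in instance variables between calls.
        @Stateless
        public class OrderService {

            public double priceOf(String itemId, int quantity) {
                return lookupUnitPrice(itemId) * quantity;
            }

            private double lookupUnitPrice(String itemId) {
                return 9.99; // placeholder for a real price lookup
            }
        }

        // A client (for example a servlet or another bean) obtains a reference by injection:
        //     @EJB
        //     private OrderService orderService;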


    Singleton Session Beans

    Singleton Session Beans are business objects having a global shared state within a JVM. Concurrent access to the one and only bean instance can be controlled by the container (Container-managed concurrency, CMC) or by the bean itself (Bean-managed concurrency, BMC). CMC can be tuned using the @Lock annotation, that designates whether a read lock or a write lock will be used for a method call. Additionally, Singleton Session Beans can explicitly request to be instantiated when the EJB container starts up, using the @Startup annotation.
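
    A brief sketch of the annotations mentioned above (class and method names are illustrative): @Startup requests eager instantiation when the container starts, and @Lock tunes container-managed concurrency per method.

        import javax.ejb.Lock;
        import javax.ejb.LockType;
        import javax.ejb.Singleton;
        import javax.ejb.Startup;

        // One shared instance per application, created when the container starts.
        @Singleton
        @Startup
        public class CacheBean {

            private final java.util.Map<String, Object> cache = new java.util.HashMap<String, Object>();

            @Lock(LockType.READ)   // concurrent readers are allowed
            public Object get(String key) {
                return cache.get(key);
            }

            @Lock(LockType.WRITE)  // writers get exclusive access (the default)
            public void put(String key, Object value) {
                cache.put(key, value);
            }
        }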


    Message driven beans

    Message Driven Beans are business objects whose execution is triggered by messages instead of by method calls. The Message Driven Bean is used, among other things, to provide a high-level, ease-of-use abstraction over the lower-level JMS (Java Message Service) specification. It may subscribe to JMS message queues or message topics, which typically happens via the activationConfig attribute of the @MessageDriven annotation. MDBs were added in EJB to allow event-driven processing. Unlike session beans, an MDB does not have a client view (Local/Remote/No-interface), i.e., clients cannot look up an MDB instance. An MDB simply listens for incoming messages on, for example, a JMS queue or topic and processes them automatically. Only JMS support is required by the Java EE spec, but Message Driven Beans can support other messaging protocols. Such protocols may be asynchronous but can also be synchronous. Since session beans can also be synchronous or asynchronous, the prime difference between session beans and message-driven beans is not the synchronicity, but the difference between (object-oriented) method calling and messaging.
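
    The sketch below shows the shape of such a bean (the destination property names and JNDI name are illustrative and partly container-specific); the container invokes onMessage for every delivery on the configured queue.

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.JMSException;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.TextMessage;

        @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Queue"),
            @ActivationConfigProperty(propertyName = "destination",
                                      propertyValue = "jms/OrderQueue")
        })
        public class OrderListener implements MessageListener {

            public void onMessage(Message message) {
                try {
                    if (message instanceof TextMessage) {
                        String body = ((TextMessage) message).getText();
                        // process the order carried in the message body
                    }
                } catch (JMSException e) {
                    throw new RuntimeException(e);
                }
            }
        }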


    Entity beans (deprecated)

    Previous versions of EJB also used a type of bean known as an Entity Bean. These were distributed objects having persistent state. Beans in which their container managed the persistent state were said to be using Container-Managed Persistence (CMP), whereas beans that managed their own state were said to be using Bean-Managed Persistence (BMP). In EJB 3.0, Entity Beans were replaced by the Java Persistence API, which was separated into its own specification to allow the EJB specification to focus only on the "core session bean and message-driven bean component models and their client API". Entity Beans were still available in EJB 3.1 for backwards compatibility, but they had been officially proposed to be removed from the specification (via a process called "pruning"). In EJB 3.2 Entity Beans have indeed been removed (officially, "made optional").

    Other types of Enterprise Beans had been proposed. For instance, Enterprise Media Beans (JSR 86) would address the integration of multimedia objects in Java EE applications.

    Java Message Service (JMS) :
    Java Message Service

    The Java Message Service (JMS) API is a Java Message Oriented Middleware (MOM) API for sending messages between two or more clients. JMS is a part of the Java Platform, Enterprise Edition, and is defined by a specification developed under the Java Community Process as JSR 914. It is a messaging standard that allows application components based on the Java Enterprise Edition (JEE) to create, send, receive, and read messages. It allows the communication between different components of a distributed application to be loosely coupled, reliable, and asynchronous.


    General idea of messaging

    Messaging is a form of loosely coupled distributed communication, where in this context the term 'communication' can be understood as an exchange of messages between software components. Message-oriented technologies attempt to relax tightly coupled communication (such as TCP network sockets, CORBA or RMI) by the introduction of an intermediary component. This approach allows software components to communicate 'indirectly' with each other. Benefits of this include message senders not needing to have precise knowledge of their receivers.

    The advantages of messaging include the ability to integrate heterogeneous platforms, reduce system bottlenecks, increase scalability, and respond more quickly to change.


    Elements


    The following are JMS elements:

    JMS provider
    An implementation of the JMS interface for a Message Oriented Middleware (MOM). Providers are implemented as either a Java JMS implementation or an adapter to a non-Java MOM.

    JMS client
    An application or process that produces and/or receives messages.

    JMS producer/publisher
    A JMS client that creates and sends messages.

    JMS consumer/subscriber
    A JMS client that receives messages.

    JMS message
    An object that contains the data being transferred between JMS clients.

    JMS Queue
    A staging area that contains messages that have been sent and are waiting to be read (by only one consumer). Note that, contrary to what the name queue suggests, messages don't have to be delivered in the order sent. A JMS queue only guarantees that each message is processed only once.

    JMS Topic
    A distribution mechanism for publishing messages that are delivered to multiple subscribers.

    Models :
    The JMS API supports two models:
    • Point-to-point
    • Publish and subscribe



    Point-to-point model

    In a point-to-point messaging system, messages are routed to an individual consumer which maintains a queue of "incoming" messages. This messaging type is built on the concept of message queues, senders, and receivers. Each message is addressed to a specific queue, and the receiving clients extract messages from the queues established to hold their messages. While any number of producers can send messages to the queue, each message is guaranteed to be delivered to, and consumed by, only one consumer. Queues retain all messages sent to them until the messages are consumed or until the messages expire. If no consumers are registered to consume the messages, the queue holds them until a consumer registers to consume them.


    Publish/subscribe model

    The publish/subscribe model supports publishing messages to a particular message topic. Subscribers may register interest in receiving messages on a particular message topic. In this model, neither the publisher nor the subscriber knows about each other. A good analogy for this is an anonymous bulletin board.

    Zero or more consumers will receive the message.

    There is a timing dependency between publishers and subscribers. The publisher has to create a message topic for clients to subscribe to. The subscriber has to remain continuously active to receive messages, unless it has established a durable subscription. In that case, messages published while the subscriber is not connected will be redistributed whenever it reconnects.

    JMS provides a way of separating the application from the transport layer that provides the data. The same Java classes can be used to communicate with different JMS providers by using the Java Naming and Directory Interface (JNDI) information for the desired provider. The classes first use a connection factory to connect to the queue or topic, and then populate and send or publish the messages. On the receiving side, the clients then receive or subscribe to the messages.
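
    As a rough illustration of that flow, the sketch below sends a text message to a queue using the JMS 1.1 API; the JNDI names are illustrative and depend on the provider's configuration.

        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.MessageProducer;
        import javax.jms.Queue;
        import javax.jms.Session;
        import javax.jms.TextMessage;
        import javax.naming.InitialContext;

        public class JmsSender {
            public static void main(String[] args) throws Exception {
                // Look up the provider's connection factory and destination via JNDI.
                InitialContext jndi = new InitialContext();
                ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
                Queue queue = (Queue) jndi.lookup("jms/OrderQueue");

                Connection connection = factory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    TextMessage message = session.createTextMessage("order #42");
                    producer.send(message);
                } finally {
                    connection.close();
                }
            }
        }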


    Provider Implementations

    To use JMS, one must have a JMS provider that can manage the sessions and queues. Starting from Java EE version 1.4, a JMS provider has to be included in all Java EE application servers. This can be implemented using the message inflow management of the Java EE Connector Architecture, which was first made available in that version.


    The following is a list of JMS providers:

    • Apache ActiveMQ
    • Apache Qpid, using AMQP
    • Oracle Weblogic (part of the Fusion Middleware suite) and Oracle AQ from Oracle
    • EMS from TIBCO
    • FFMQ, GNU LGPL licensed
    • JBoss Messaging and HornetQ from JBoss
    • JORAM, from the OW2 Consortium
    • Open Message Queue, from Oracle
    • OpenJMS, from The OpenJMS Group
    • Solace JMS from Solace Systems
    • RabbitMQ by Rabbit Technologies Ltd., acquired by SpringSource
    • SAP Process Integration ESB
    • SonicMQ from Progress Software
    • SwiftMQ
    • Tervela
    • Ultra Messaging from 29 West (acquired by Informatica)
    • webMethods from Software AG
    • WebSphere Application Server from IBM, which provides an inbuilt default messaging provider known as the Service Integration Bus (SIBus), or which can connect to WebSphere MQ as a JMS provider
    • WebSphere MQ (formerly MQSeries) from IBM


    Common Object Request Broker Architecture (CORBA) :
    Common Object Request Broker Architecture


    The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG) that enables software components written in multiple computer languages and running on multiple computers to work together (i.e., it supports multiple platforms).

    Overview

    CORBA enables separate pieces of software written in different languages and running on different computers to work with each other like a single application or set of services. More specifically, CORBA is a mechanism in software for normalizing the method-call semantics between application objects residing either in the same address space (application) or remote address space (same host, or remote host on a network). Version 1.0 was released in October 1991. CORBA uses an interface definition language (IDL) to specify the interfaces which objects present to the outer world. CORBA then specifies a mapping from IDL to a specific implementation language like C++ or Java. Standard mappings exist for Ada, C, C++, C++11, Lisp, Ruby, Smalltalk, Java, COBOL, PL/I and Python. There are also non-standard mappings for Perl, Visual Basic, Erlang, and Tcl implemented by object request brokers (ORBs) written for those languages.

    The CORBA specification dictates there shall be an ORB through which an application would interact with other objects. In practice, the application simply initializes the ORB, and accesses an internal Object Adapter, which maintains things like reference counting, object (and reference) instantiation policies, and object lifetime policies. The Object Adapter is used to register instances of the generated code classes. Generated code classes are the result of compiling the user IDL code, which translates the high-level interface definition into an OS- and language-specific class base for use by the user application. This step is necessary in order to enforce CORBA semantics and provide a clean user process for interfacing with the CORBA infrastructure.

    Some IDL mappings are more difficult to use than others. For example, due to the nature of Java, the IDL-Java mapping is rather straightforward and makes usage of CORBA very simple in a Java application. This is also true of the IDL to Python mapping. The C++ mapping is notoriously difficult; the mapping requires the programmer to learn complex and confusing datatypes that predate the C++ Standard Template Library (STL). By contrast, the C++11 mapping is very easy to use, as it uses Standard Template Library (STL) heavily. Since the C language is not object-oriented, the IDL to C mapping requires a C programmer to manually emulate object-oriented features.

    A language mapping requires the developer to create IDL code that represents the interfaces to their objects. Typically, a CORBA implementation comes with a tool called an IDL compiler which converts the user's IDL code into some language-specific generated code. A traditional compiler then compiles the generated code to create the linkable-object files for the application. This diagram illustrates how the generated code is used within the CORBA infrastructure:


    Illustration of the autogeneration of the infrastructure code from an interface defined using the CORBA IDL

    This illustrates the high-level paradigm for remote interprocess communications using CORBA. Issues not addressed here, yet accounted for in the CORBA specification, include data typing, exceptions, network protocols, communication timeouts, etc. For example: Normally the server side has the Portable Object Adapter (POA) that redirects calls either to the local servants or (to balance the load) to the other servers. Also, both server and client parts often have interceptors that are described below. Issues CORBA (and thus this figure) does not address, but that all distributed systems must address, include object lifetimes, redundancy/fail-over, naming semantics (beyond a simple name), memory management, dynamic load balancing, separation of model between display/data/control semantics, etc.
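
    As a rough client-side sketch of this paradigm, the fragment below initializes an ORB and resolves an object from the CORBA Naming Service; the name "Hello" is illustrative, and the returned reference would normally be narrowed with the Helper class that the IDL compiler generated for the interface.

        import org.omg.CORBA.ORB;
        import org.omg.CosNaming.NamingContextExt;
        import org.omg.CosNaming.NamingContextExtHelper;

        public class CorbaClientSketch {
            public static void main(String[] args) throws Exception {
                // The naming service host/port can be supplied via -ORBInitialHost / -ORBInitialPort.
                ORB orb = ORB.init(args, null);

                // Locate the Naming Service and narrow it to its typed interface.
                NamingContextExt naming = NamingContextExtHelper.narrow(
                        orb.resolve_initial_references("NameService"));

                // Resolve a server object by name; an IDL-generated XxxHelper.narrow(obj)
                // call would then produce a typed stub for making remote calls.
                org.omg.CORBA.Object obj = naming.resolve_str("Hello");
                System.out.println("Resolved object reference: " + obj);
            }
        }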

    In addition to providing users with a language and a platform-neutral remote procedure call (RPC) specification, CORBA defines commonly needed services such as transactions and security, events, time, and other domain-specific interface models.

    OMG trademarks

    CORBA, IIOP and OMG are registered trademarks of the Object Management Group and should be used with care. However, GIOP (General Inter-ORB Protocol) is not a registered OMG trademark. Hence in some cases it may be more appropriate just to say that the application uses or implements the GIOP-based architecture.


    Objects By Reference

    A client gains access to a CORBA object through an object reference. This reference is either acquired through a stringified Uniform Resource Identifier (URI), a NameService lookup (similar to Domain Name System (DNS)), or passed in as a method parameter during a call.

    Object references are lightweight objects matching the interface of the real object (remote or local). Method calls on the reference result in subsequent calls to the ORB and blocking on the thread while waiting for a reply, success or failure. The parameters, return data (if any), and exception data are marshaled internally by the ORB according to the local language and OS mapping.


    Data By Value

    The CORBA Interface Definition Language provides the language- and OS-neutral inter-object communication definition. CORBA Objects are passed by reference, while data (integers, doubles, structs, enums, etc.) are passed by value. The combination of Objects-by-reference and data-by-value provides the means to enforce strong data typing while compiling clients and servers, yet preserve the flexibility inherent in the CORBA problem-space.

    Objects By Value (OBV)

    Apart from remote objects, CORBA and RMI-IIOP define the concept of objects by value (OBV) and valuetypes. The code inside the methods of valuetype objects is executed locally by default. If the OBV has been received from the remote side, the needed code must be either known a priori by both sides or dynamically downloaded from the sender. To make this possible, the record defining an OBV contains the Code Base, a space-separated list of URLs from which this code should be downloaded. An OBV can also have remote methods.


    CORBA Component Model (CCM)

    CORBA Component Model (CCM) is an addition to the family of CORBA definitions. It was introduced with CORBA 3 and describes a standard application framework for CORBA components. Though not tied to the language-specific Enterprise JavaBeans (EJB), it is a more general form of EJB, providing four component types instead of the two that EJB defines. It provides an abstraction of entities that can provide and accept services through well-defined named interfaces called ports.

    The CCM has a component container, where software components can be deployed. The container offers a set of services that the components can use. These services include (but are not limited to) notification, authentication, persistence and transaction processing. These are the most-used services any distributed system requires, and, by moving the implementation of these services from the software components to the component container, the complexity of the components is dramatically reduced.


    Portable interceptors

    Portable interceptors are the "hooks", used by CORBA and RMI-IIOP to mediate the most important functions of the CORBA system. The CORBA standard defines the following types of interceptors:

    IOR interceptors mediate the creation of the new references to the remote objects, presented by the current server.

    Client interceptors usually mediate the remote method calls on the client (caller) side. If the object's servant exists on the same server where the method is invoked, they also mediate local calls.

    Server interceptors mediate the handling of the remote method calls on the server (handler) side.

    The interceptors can attach specific information to the messages being sent and to the IORs being created. This information can later be read by the corresponding interceptor on the remote side. Interceptors can also throw forwarding exceptions, redirecting the request to another target.


    General InterORB Protocol (GIOP)

    The GIOP is an abstract protocol by which Object request brokers (ORBs) communicate. Standards associated with the protocol are maintained by the Object Management Group (OMG). The GIOP architecture provides several concrete protocols, including:

    Internet InterORB Protocol (IIOP) - The Internet Inter-ORB Protocol is an implementation of the GIOP for use over the Internet, and provides a mapping between GIOP messages and the TCP/IP layer.

    SSL InterORB Protocol (SSLIOP) - SSLIOP is IIOP over SSL, providing encryption and authentication.

    HyperText InterORB Protocol (HTIOP) - HTIOP is IIOP over HTTP, providing transparent proxy bypassing.

    Zipped IOP (ZIOP) - A zipped version of GIOP that reduces bandwidth usage.


    Corba Location (CorbaLoc)

    Corba Location (CorbaLoc) refers to a stringified object reference for a CORBA object that looks similar to a URL. All CORBA products must support two OMG-defined URLs: "corbaloc:" and "corbaname:". The purpose of these is to provide a human readable and editable way to specify a location where an IOR can be obtained.
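
    For illustration, the fragment below turns such URLs into object references with the standard ORB string_to_object call; the host, port and names are illustrative.

        import org.omg.CORBA.ORB;

        public class CorbalocExample {
            public static void main(String[] args) {
                ORB orb = ORB.init(args, null);

                // corbaloc names the host, port and object key directly.
                org.omg.CORBA.Object nameService =
                        orb.string_to_object("corbaloc::ns.example.com:2809/NameService");

                // corbaname additionally resolves a name within that naming context.
                org.omg.CORBA.Object service =
                        orb.string_to_object("corbaname::ns.example.com:2809#Hello");

                System.out.println(nameService + " / " + service);
            }
        }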


    Spring Remoting :
    Spring Framework

    The Spring Framework is an open source application framework and inversion of control container for the Java platform.

    The first version was written by Rod Johnson, who released the framework with the publication of his book Expert One-on-One J2EE Design and Development in October 2002. The framework was first released under the Apache 2.0 license in June 2003. The first milestone release, 1.0, was released in March 2004, with further milestone releases in September 2004 and March 2005. The Spring 1.2.6 framework won a Jolt productivity award and a JAX Innovation Award in 2006. Spring 2.0 was released in October 2006, Spring 2.5 in November 2007, Spring 3.0 in December 2009, and Spring 3.1 in December 2011. The current version is 3.2.3, which was released in May 2013. Spring Framework 4.0 is expected by the end of 2013, with plans to support Java SE 8, Groovy 2, some aspects of Java EE7, and WebSockets.

    The core features of the Spring Framework can be used by any Java application, but there are extensions for building web applications on top of the Java EE platform. Although the Spring Framework does not impose any specific programming model, it has become popular in the Java community as an alternative to, replacement for, or even addition to the Enterprise JavaBean (EJB) model.


    Modules

    The Spring Framework includes several modules that provide a range of services :

    Inversion of control container: configuration of application components and lifecycle management of Java objects, done mainly via dependency injection

    Aspect-oriented programming: enables implementing cross-cutting concerns.

    Data access: working with relational database management systems on the Java platform using JDBC and object-relational mapping tools and with NoSQL databases

    Transaction management: unifies several transaction management APIs and coordinates transactions for Java objects

    Model-view-controller: an HTTP- and servlet-based framework providing hooks for extension and customization for web applications and RESTful web services.

    Remote access framework: configurative RPC-style marshalling of Java objects over networks supporting RMI, CORBA and HTTP-based protocols including web services (SOAP)

    Convention over configuration: a rapid application development solution for Spring-based enterprise applications is offered in the Spring Roo module

    Authentication and authorization: configurable security processes that support a range of standards, protocols, tools and practices via the Spring Security sub-project (formerly Acegi Security System for Spring).

    Remote management: configurative exposure and management of Java objects for local or remote configuration via JMX

    Messaging: configurative registration of message listener objects for transparent message-consumption from message queues via JMS, improvement of message sending over standard JMS APIs

    Testing: support classes for writing unit tests and integration tests


    Remote access framework

    Spring's Remote Access framework is an abstraction for working with various RPC-based technologies available on the Java platform both for client connectivity and marshalling objects on servers. The most important feature offered by this framework is to ease configuration and usage of these technologies as much as possible by combining inversion of control and AOP.

    The framework also provides fault-recovery (automatic reconnection after connection failure) and some optimizations for client-side use of EJB remote stateless session beans.

    Spring provides support for these protocols and products out of the box:

    HTTP-based protocols

    Hessian: binary serialization protocol, open-sourced and maintained by Caucho Technology

    RMI- and CORBA-based protocols
    RMI (1): method invocations using RMI infrastructure yet specific to Spring
    RMI (2): method invocations using RMI interfaces complying with regular RMI usage

    RMI-IIOP (CORBA): method invocations using RMI-IIOP/CORBA

    Enterprise JavaBean client integration
    Local EJB stateless session bean connectivity: connecting to local stateless session beans
    Remote EJB stateless session bean connectivity: connecting to remote stateless session beans
    SOAP

    Integration with the Apache Axis web services framework
    Apache CXF provides integration with the Spring Framework for RPC-style exporting of objects on the server side.

    Both client and server setup for all RPC-style protocols and products supported by the Spring Remote access framework (except for the Apache Axis support) is configured in the Spring Core container.

    There is an alternative open-source implementation (Cluster4Spring) of the remoting subsystem included in the Spring Framework, which is intended to support various schemes of remoting (1-1, 1-many, dynamic service discovery).
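
    As a minimal sketch of how Spring's RMI support is typically configured (the AccountService interface, bean names and URL are illustrative), RmiServiceExporter publishes an existing bean over RMI and RmiProxyFactoryBean gives clients a local-looking proxy:

        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;
        import org.springframework.remoting.rmi.RmiProxyFactoryBean;
        import org.springframework.remoting.rmi.RmiServiceExporter;

        // Service contract shared by client and server.
        interface AccountService {
            double balanceOf(String accountId);
        }

        @Configuration
        class RmiServerConfig {

            // Server side: export an existing AccountService bean under the name "AccountService".
            @Bean
            public RmiServiceExporter accountServiceExporter(AccountService accountService) {
                RmiServiceExporter exporter = new RmiServiceExporter();
                exporter.setServiceName("AccountService");
                exporter.setServiceInterface(AccountService.class);
                exporter.setService(accountService);
                exporter.setRegistryPort(1099);
                return exporter;
            }
        }

        @Configuration
        class RmiClientConfig {

            // Client side: the generated proxy implements AccountService and forwards calls over RMI.
            @Bean
            public RmiProxyFactoryBean accountService() {
                RmiProxyFactoryBean proxy = new RmiProxyFactoryBean();
                proxy.setServiceUrl("rmi://server-host:1099/AccountService");
                proxy.setServiceInterface(AccountService.class);
                return proxy;
            }
        }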


    Convention-over-configuration rapid application development

    Spring Roo is Spring's convention-over-configuration solution for rapidly building applications in Java. It currently supports Spring Framework, Spring Security and Spring Web Flow, with remaining Spring projects scheduled to be added in due course. Roo differs from other rapid application development frameworks by focusing on:

    • Java platform productivity (as opposed to other languages)
    • Usability (particularly via the shell features and usage patterns)
    • Runtime avoidance (with associated deployment advantages)
    • Lock-in avoidance (Roo can be removed within a few minutes from any application)
    • Extensibility (via add-ons)


    Batch framework

    Spring Batch is a framework for batch processing that provides reusable functions that are essential in processing large volumes of records, including:

    • logging/tracing
    • transaction management
    • job restart
    • skip
    • resource management


    It also provides more advanced technical services and features that enable extremely high-volume and high-performance batch jobs through optimizations and partitioning techniques.


    Integration Framework

    Spring Integration is a framework for enterprise application integration that provides reusable functions essential in messaging or event-driven architectures, including:

    • routers
    • transformers
    • adapters to integrate with other technologies and systems (HTTP, AMQP, JMS, XMPP, SMTP, IMAP, FTP (as well as FTPS/SFTP), file systems, etc.)
    • filters
    • service activators
    • management and auditing

    Spring Integration supports pipe-and-filter based architectures.


    OSGi Service Platform :
    OSGi Service Platform

    The OSGi framework is a module system and service platform for the Java programming language that implements a complete and dynamic component model, something that does not exist in standalone Java/VM environments. Applications or components (coming in the form of bundles for deployment) can be remotely installed, started, stopped, updated, and uninstalled without requiring a reboot; management of Java packages/classes is specified in great detail. Application life cycle management (start, stop, install, etc.) is done via APIs that allow for remote downloading of management policies. The service registry allows bundles to detect the addition of new services, or the removal of services, and adapt accordingly.

    The OSGi specifications have moved beyond the original focus of service gateways, and are now used in applications ranging from mobile phones to the open source Eclipse IDE. Other application areas include automobiles, industrial automation, building automation, PDAs, grid computing, entertainment, fleet management and application servers.


    OSGi Service Gateway Architecture

    Any framework that implements the OSGi standard provides an environment for the modularization of applications into smaller bundles. Each bundle is a tightly coupled, dynamically loadable collection of classes, jars, and configuration files that explicitly declare their external dependencies (if any).

    The framework is conceptually divided into the following areas:

    Bundles :
    Bundles are normal jar components with extra manifest headers.

    Services :
    The services layer connects bundles in a dynamic way by offering a publish-find-bind model for Plain Old Java Interfaces (POJI) or Plain Old Java Objects (POJO).

    Services Registry :
    The API for management services (ServiceRegistration, ServiceTracker and ServiceReference).

    Life-Cycle :
    The API for life cycle management (install, start, stop, update, and uninstall) for bundles.

    Modules :
    The layer that defines encapsulation and declaration of dependencies (how a bundle can import and export code).

    Security :
    The layer that handles the security aspects by limiting bundle functionality to pre-defined capabilities.

    Execution Environment :
    Defines what methods and classes are available in a specific platform. There is no fixed list of execution environments, since it is subject to change as the Java Community Process creates new versions and editions of Java. However, the following set is currently supported by most OSGi implementations:

    • CDC-1.0/Foundation-1.0
    • CDC-1.1/Foundation-1.1
    • OSGi/Minimum-1.1
    • JRE-1.1
    • From J2SE-1.2 up to J2SE-1.6


    OSGi Life-Cycle

    The Life Cycle layer adds bundles that can be dynamically installed, started, stopped, updated and uninstalled. Bundles rely on the module layer for class loading but add an API to manage the modules at run time. The life cycle layer introduces dynamics that are normally not part of an application. Extensive dependency mechanisms are used to assure the correct operation of the environment. Life cycle operations are fully protected with the security architecture.

    • INSTALLED : The bundle has been successfully installed.
    • RESOLVED : All Java classes that the bundle needs are available. This state indicates that the bundle is either ready to be started or has stopped.
    • STARTING : The bundle is being started, the BundleActivator.start method will be called, and this method has not yet returned. When the bundle has an activation policy, the bundle will remain in the STARTING state until the bundle is activated according to its activation policy.
    • ACTIVE : The bundle has been successfully activated and is running; its Bundle Activator start method has been called and returned.
    • STOPPING : The bundle is being stopped. The BundleActivator.stop method has been called but the stop method has not yet returned.
    • UNINSTALLED : The bundle has been uninstalled. It cannot move into another state.

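
    As a small sketch of how a bundle participates in this life cycle and in the service registry (the GreetingService interface is illustrative, and the bundle's manifest would name the activator via the Bundle-Activator header):

        import java.util.Hashtable;

        import org.osgi.framework.BundleActivator;
        import org.osgi.framework.BundleContext;
        import org.osgi.framework.ServiceRegistration;

        // Illustrative service published by this bundle.
        interface GreetingService {
            String greet(String name);
        }

        public class Activator implements BundleActivator {

            private ServiceRegistration registration;

            // Called while the bundle is in the STARTING state.
            public void start(BundleContext context) {
                GreetingService service = new GreetingService() {
                    public String greet(String name) { return "Hello, " + name; }
                };
                // Publish the service so other bundles can discover and bind to it.
                registration = context.registerService(
                        GreetingService.class.getName(), service, new Hashtable<String, String>());
            }

            // Called while the bundle is in the STOPPING state.
            public void stop(BundleContext context) {
                registration.unregister();
            }
        }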





    Quality Service

    If the pros at Sun had had a chance to fix Java, the world would be a much more pleasant place. This is not secret knowledge. It’s just secret to this pop culture.
    -Alan Kay

    Intelligent Quotes

    A solid working knowledge of productivity software and other IT tools has become a basic foundation for success in virtually any career. Beyond that, however, I don't think you can overemphasise the importance of having a good background in maths and science.....
    "Every software system needs to have a simple yet powerful organizational philosophy (think of it as the software equivalent of a sound bite that describes the system's architecture)... A step in thr development process is to articulate this architectural framework, so that we might have a stable foundation upon which to evolve the system's function points. "
    "All architecture is design but not all design is architecture. Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change"
    "The ultimate measurement is effectiveness, not efficiency "
    "It is argued that software architecture is an effective tool to cut development cost and time and to increase the quality of a system. "Architecture-centric methods and agile approaches." Agile Processes in Software Engineering and Extreme Programming.
    "Java is C++ without the guns, knives, and clubs "
    "When done well, software is invisible"
    "Our words are built on the objects of our experience. They have acquired their effectiveness by adapting themselves to the occurrences of our everyday world."
    "I always knew that one day Smalltalk would replace Java. I just didn't know it would be called Ruby. "
    "The best way to predict the future is to invent it."
    "In 30 years Lisp will likely be ahead of C++/Java (but behind something else)"
    "Possibly the only real object-oriented system in working order. (About Internet)"
    "Simple things should be simple, complex things should be possible. "
    "Software engineering is the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines."
    "Model Driven Architecture is a style of enterprise application development and integration, based on using automated tools to build system independent models and transform them into efficient implementations. "
    "The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs. "
    "Software Engineering Economics is an invaluable guide to determining software costs, applying the fundamental concepts of microeconomics to software engineering, and utilizing economic analysis in software engineering decision making. "
    "Ultimately, discovery and invention are both problems of classification, and classification is fundamentally a problem of finding sameness. When we classify, we seek to group things that have a common structure or exhibit a common behavior. "
    "Perhaps the greatest strength of an object-oriented approach to development is that it offers a mechanism that captures a model of the real world. "
    "The entire history of software engineering is that of the rise in levels of abstraction. "
    "The amateur software engineer is always in search of magic, some sensational method or tool whose application promises to render software development trivial. It is the mark of the professional software engineer to know that no such panacea exist "


    Core Values ?

    Agile And Scrum Based Architecture

    Agile software development is a group of software development methods based on iterative and incremental development, where requirements and solutions evolve through collaboration.....

    more

    Core Values ?

    Total quality management

    Total Quality Management / TQM is an integrative philosophy of management for continuously improving the quality of products and processes. TQM is based on the premise that the quality of products and .....

    more

    Core Values ?

    Design that Matters

    We are more than code junkies. We're a company that cares how a product works and what it says to its users. There is no reason why your custom software should be difficult to understand.....

    more

    Core Values ?

    Expertise that is Second to None

    With extensive software development experience, our development team is up for any challenge within the Great Plains development environment. Our research work on IEEE international papers is considered....

    more

    Core Values ?

    Solutions that Deliver Results

    We have a proven track record of developing and delivering solutions that have resulted in reduced costs, time savings, and increased efficiency. Our clients are very much ....

    more

    Core Values ?

    Relentless Software Testing

    We simply don't release anything that isn't tested well. Tell us something can't be tested under automation, and we will go prove it can be. We create tests before we write the complementary production software......

    more

    Core Values ?

    Unparalleled Technical Support

    If a customer needs technical support for one of our products, no-one can do it better than us. Our offices are open from 9am until 9pm Monday to Friday, and soon to be 24 hours. Unlike many companies, you are able to....

    more

    Core Values ?

    Impressive Results

    We have a reputation for process genius, fanatical testing, high quality, and software joy. Whatever your business, our methods will work well in your field. We have done work in ERP solutions, e-commerce, portal solutions, and IEEE research....

    more

     
     

    Why Choose Us ?

    Invest in Thoughts

    The intellectual commitment of our development team is central to leonsoft's ability to achieve its mission: to develop principled, innovative thought leaders in global communities.

    Read More
    From Idea to Enterprise

    Today's most successful enterprise applications were once nothing more than an idea in someone's head. While many of these applications are planned and budgeted from the beginning.

    Read More
    Constant Innovation

    We constantly strive to redefine the standard of excellence in everything we do. We encourage both individuals and teams to constantly strive for developing innovative technologies....

    Read More
    Utmost Integrity

    If our customers are the foundation of our business, then integrity is the cornerstone. Everything we do is guided by what is right. We live by the highest ethical standards.....

    Read More