Sunday, July 11, 2010

Chapter 1 Report Summary

The first group to report Module 1 was composed of my classmates Michael George Guanzon, John Cesar Manlangit, Jerusalem Alvaira, and Edsa Fe Esio (for the application part of the report) and Karen Palero, Athina Alorro, Emilio Jopia, Jr., Felix Sumalinog and Franz Cie Suico (for the theory part of the report). Their topic was entitled “Moving to Design”.

The first slide of their slide presentation talked about Analysis versus Design. The objectives of the activities of analysis were to understand: business events and processes, system activities and processing requirements, and information storage requirements. The Design activities’ objective was to define, organize, and structure the components of the final solution system that will serve as the blueprint for construction.

As for me, both are very important for the development of a system. Each has its own tasks that the system needs in order to be fully functional. We first have to analyze the environment, the platform, the specifications of the computers, the processes, and much more, right? And we also have to make an interface for the user to interact with and to show the user what the workflow of the system is. So neither must be taken for granted, because both Analysis and Design play a major role in systems development.

*********************************************

Elements of Design

Design is the process of describing, organizing, and structuring system components at the architectural design level and the detailed design level

· Focused on preparing for construction

· Like developing blueprints

Three questions

· What components require systems design?

· What are inputs to and outputs of design process?

· How is systems design done?

These questions were later answered by them on their following slides.

******************************************

Components Requiring Systems Design

n User interface design defines the forms, reports, and controls of inputs and outputs.

-It is important to have a very appealing user interface design, because much of the time a user's reason for using a system is that its interface looks good and is easy to use. But having a good and sleek design must not compromise the functionality of the system. If you have a good interface design, you have to pair it with equally good functionality, so the system will be used more. I also found some primary information on the topic of user interface design from the Internet.

“User interface design or user interface engineering is the design of computers, appliances, machines, mobile communication devices, software applications, and websites with the focus on the user's experience and interaction. The goal of user interface design is to make the user's interaction as simple and efficient as possible, in terms of accomplishing user goals—what is often called user-centered design. Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design may be utilized to support its usability. The design process must balance technical functionality and visual elements (e.g., mental model) to create a system that is not only operational but also usable and adaptable to changing user needs.

Interface design is involved in a wide range of projects from computer systems, to cars, to commercial planes; all of these projects involve much of the same basic human interactions yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered around their expertise, whether that be software design, user research, web design, or industrial design.

There are several phases and processes in the user interface design, some of which are more demanded upon than others, depending on the project. (Note: for the remainder of this section, the word system is used to denote any project whether it is a web site, application, or device.)

* Functionality requirements gathering – assembling a list of the functionality required by the system to accomplish the goals of the project and the potential needs of the users.

* User analysis – analysis of the potential users of the system either through discussion with people who work with the users and/or the potential users themselves. Typical questions involve:

o What would the user want the system to do?

o How would the system fit in with the user's normal workflow or daily activities?

o How technically savvy is the user and what similar systems does the user already use?

o What interface look & feel styles appeal to the user?

* Information architecture – development of the process and/or information flow of the system (i.e. for phone tree systems, this would be an option tree flowchart and for web sites this would be a site flow that shows the hierarchy of the pages).

* Prototyping – development of wireframes, either in the form of paper prototypes or simple interactive screens. These prototypes are stripped of all look & feel elements and most content in order to concentrate on the interface.

* Usability testing – testing of the prototypes on an actual user—often using a technique called think aloud protocol where you ask the user to talk about their thoughts during the experience.

* Graphic Interface design – actual look & feel design of the final graphical user interface (GUI). It may be based on the findings developed during the usability testing if usability is unpredictable, or based on communication objectives and styles that would appeal to the user. In rare cases, the graphics may drive the prototyping, depending on the importance of visual form versus function. If the interface requires multiple skins, there may be multiple interface designs for one control panel, functional feature or widget. This phase is often a collaborative effort between a graphic designer and a user interface designer, or handled by one who is proficient in both disciplines.

User interface design requires a good understanding of user needs.”

n Network design specifies the hardware and middleware to link the system together

-Networking is also considered important for systems analysis and design. Developers and makers of the system have to consider every aspect of the development and the implementation of their system, as this can affect its performance. They have to consider what kind of networking scheme they are going to implement in deploying the system. Networking topologies also come into play here, because sometimes the proposed system may be deployed only through an intranet, and with the right topology and the right network infrastructure, the system may run smoothly over the network.

n Application design describes the computer programs and modules

-Application design is important because we have to know what the computer programs are and how the modules and interfaces work with each other. There might be times when a specific module does not work with another module because of some error in the coding or whatnot. That could be very crucial to the whole system, because each function must work as intended, and there must be no room for errors so that the client is satisfied. Therefore, another task to think about is the application design of the system.

n System interface design describes the communications going to other systems

n Database design specifies the structure of the underlying database.

“Database design is the process of producing a detailed data model of a database. This logical data model contains all the needed logical and physical design choices and physical storage parameters needed to generate a design in a Data Definition Language, which can then be used to create a database. A fully attributed data model contains detailed attributes for each entity.

The term database design can be used to describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data. In the relational model these are the tables and views. In an object database the entities and relationships map directly to object classes and named relationships. However, the term database design could also be used to apply to the overall process of designing, not just the base data structures, but also the forms and queries used as part of the overall database application within the database management system (DBMS).

The process of doing database design generally consists of a number of steps which will be carried out by the database designer. Usually, the designer must:

* Determine the relationships between the different data elements.

* Superimpose a logical structure upon the data on the basis of these relationships.

ER Diagram (Entity-relationship model)

Database design also includes the ER (entity-relationship model) diagram. An ER diagram is a diagram that helps design a database in an effective and efficient way.

Attributes in ER diagrams are usually modeled as an oval with the name of the attribute, linked to the entity or relationship that contains the attribute.

Within the relational model the final step can generally be broken down into two further steps, that of determining the grouping of information within the system, generally determining what are the basic objects about which information is being stored, and then determining the relationships between these groups of information, or objects. This step is not necessary with an Object database.

The Design Process

The design process consists of the following steps:

1. Determine the purpose of your database - This helps prepare you for the remaining steps.

2. Find and organize the information required - Gather all of the types of information you might want to record in the database, such as product name and order number.

3. Divide the information into tables - Divide your information items into major entities or subjects, such as Products or Orders. Each subject then becomes a table.

4. Turn information items into columns - Decide what information you want to store in each table. Each item becomes a field, and is displayed as a column in the table. For example, an Employees table might include fields such as Last Name and Hire Date.

5. Specify primary keys - Choose each table’s primary key. The primary key is a column that is used to uniquely identify each row. An example might be Product ID or Order ID.

6. Set up the table relationships - Look at each table and decide how the data in one table is related to the data in other tables. Add fields to tables or create new tables to clarify the relationships, as necessary.

7. Refine your design - Analyze your design for errors. Create the tables and add a few records of sample data. See if you can get the results you want from your tables. Make adjustments to the design, as needed.

8. Apply the normalization rules - Apply the data normalization rules to see if your tables are structured correctly. Make adjustments to the tables, as needed.
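The table, column, primary-key, and relationship steps above can be sketched in SQL. This is a minimal illustration using Python's built-in sqlite3 module; the Products and Orders tables and their columns are hypothetical examples drawn from the step descriptions, not part of the original text.

```python
import sqlite3

# In-memory database for illustration (steps 3-7 of the design process)
conn = sqlite3.connect(":memory:")

# Steps 3-4: divide information into tables, turn items into columns
# Step 5: choose a primary key for each table
conn.execute("""
    CREATE TABLE Products (
        ProductID   INTEGER PRIMARY KEY,
        ProductName TEXT NOT NULL
    )""")

# Step 6: relate tables with a foreign key column
conn.execute("""
    CREATE TABLE Orders (
        OrderID   INTEGER PRIMARY KEY,
        ProductID INTEGER NOT NULL REFERENCES Products(ProductID)
    )""")

# Step 7: add a few records of sample data and check the results
conn.execute("INSERT INTO Products VALUES (1, 'Widget')")
conn.execute("INSERT INTO Orders VALUES (100, 1)")
row = conn.execute("""
    SELECT o.OrderID, p.ProductName
    FROM Orders o JOIN Products p ON o.ProductID = p.ProductID
""").fetchone()
print(row)  # (100, 'Widget')
```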

Determining data to be stored

In a majority of cases, a person who is doing the design of a database is a person with expertise in the area of database design, rather than expertise in the domain from which the data to be stored is drawn e.g. financial information, biological information etc. Therefore the data to be stored in the database must be determined in cooperation with a person who does have expertise in that domain, and who is aware of what data must be stored within the system.

This process is one which is generally considered part of requirements analysis, and requires skill on the part of the database designer to elicit the needed information from those with the domain knowledge. This is because those with the necessary domain knowledge frequently cannot express clearly what their system requirements for the database are as they are unaccustomed to thinking in terms of the discrete data elements which must be stored. Data to be stored can be determined by Requirement Specification.

Normalization

The process of applying the rules to your database design is called normalizing the database, or just normalization. Normalization is most useful after you have represented all of the information items and have arrived at a preliminary design. The idea is to help you ensure that you have divided your information items into the appropriate tables. What normalization cannot do is ensure that you have all the correct data items to begin with. You apply the rules in succession, at each step ensuring that your design arrives at one of what is known as the "normal forms." Five normal forms are widely accepted — the first normal form through the fifth normal form. This article expands on the first three, because they are all that is required for the majority of database designs.

First normal form

First normal form states that at every row and column intersection in the table there exists a single value, and never a list of values. For example, you cannot have a field named Price in which you place more than one Price. If you think of each intersection of rows and columns as a cell, each cell can hold only one value.
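As a sketch of that rule: stuffing several prices into one field forces string parsing later, while one value per cell keeps the data queryable. The table and column names here are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Violates first normal form: one cell holding a list of values, e.g.
#   Price = "9.99, 8.99, 7.99"

# 1NF-compliant alternative: one price per row
conn.execute("CREATE TABLE ProductPrice (ProductID INTEGER, Price REAL)")
conn.executemany("INSERT INTO ProductPrice VALUES (?, ?)",
                 [(1, 9.99), (1, 8.99), (1, 7.99)])

# Because each cell holds a single value, SQL can work with prices directly
lowest = conn.execute(
    "SELECT MIN(Price) FROM ProductPrice WHERE ProductID = 1").fetchone()[0]
print(lowest)  # 7.99
```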

Second normal form

Second normal form requires that each non-key column be fully dependent on the entire primary key, not on just part of the key. This rule applies when you have a primary key that consists of more than one column. For example, suppose you have a table containing the following columns, where Order ID and Product ID form the primary key:

Order ID (primary key)

Product ID (primary key)

Product Name

This design violates second normal form, because Product Name is dependent on Product ID, but not on Order ID, so it is not dependent on the entire primary key. You must remove Product Name from the table. It belongs in a different table (Products).
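The decomposition described above can be shown concretely. A minimal sketch, assuming the same three columns: Product Name moves to a Products table keyed by Product ID alone, so every non-key column depends on its table's entire key.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# After the 2NF fix: Product Name lives in Products, keyed by ProductID alone
conn.execute("""
    CREATE TABLE Products (
        ProductID   INTEGER PRIMARY KEY,
        ProductName TEXT
    )""")

# OrderDetails keeps only the composite key; the name is not repeated here
conn.execute("""
    CREATE TABLE OrderDetails (
        OrderID   INTEGER,
        ProductID INTEGER REFERENCES Products(ProductID),
        PRIMARY KEY (OrderID, ProductID)
    )""")

conn.execute("INSERT INTO Products VALUES (7, 'Stapler')")
conn.executemany("INSERT INTO OrderDetails VALUES (?, ?)",
                 [(100, 7), (101, 7)])

# The name is stored once and joined back in when needed
names = conn.execute("""
    SELECT DISTINCT p.ProductName
    FROM OrderDetails d JOIN Products p ON d.ProductID = p.ProductID
""").fetchall()
print(names)  # [('Stapler',)]
```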

Third normal form

Third normal form requires that not only every non-key column be dependent on the entire primary key, but that non-key columns be independent of each other. Another way of saying this is that each non-key column must be dependent on the primary key and nothing but the primary key. For example, suppose you have a table containing the following columns:

ProductID (primary key)

Name

SRP

Discount

Assume that Discount depends on the suggested retail price (SRP). This table violates third normal form because a non-key column, Discount, depends on another non-key column, SRP. Column independence means that you should be able to change any non-key column without affecting any other column. If you change a value in the SRP field, the Discount would change accordingly, thus violating that rule. In this case Discount should be moved to another table that is keyed on SRP.
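One way to sketch that fix: since Discount is determined by SRP, it moves to its own table keyed on SRP and is looked up through a join. The values below are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# 3NF-compliant: Discount is moved out of Products into a table keyed on SRP
conn.execute("CREATE TABLE Products (ProductID INTEGER PRIMARY KEY, "
             "Name TEXT, SRP REAL)")
conn.execute("CREATE TABLE DiscountBySRP (SRP REAL PRIMARY KEY, Discount REAL)")

conn.execute("INSERT INTO Products VALUES (1, 'Chair', 200.0)")
conn.execute("INSERT INTO DiscountBySRP VALUES (200.0, 0.10)")

# Changing a product's SRP no longer leaves a stale Discount value behind;
# the discount is found through the join instead
discount = conn.execute("""
    SELECT d.Discount
    FROM Products p JOIN DiscountBySRP d ON p.SRP = d.SRP
    WHERE p.ProductID = 1
""").fetchone()[0]
print(discount)  # 0.1
```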

Types of Database design:

Conceptual schema

Once a database designer is aware of the data which is to be stored within the database, they must then determine where dependency is within the data. Sometimes when data is changed you can be changing other data that is not visible. For example, in a list of names and addresses, assuming a situation where multiple people can have the same address, but one person cannot have more than one address, the address is dependent upon the name: given a person's name, there is only one associated address. The other way around is different, because many names can share one address, so knowing the address does not determine the name.

(NOTE: A common misconception is that the relational model is so called because of the stating of relationships between data elements therein. This is not true. The relational model is so named because it is based upon the mathematical structures known as relations.)

Logically structuring data

Once the relationships and dependencies amongst the various pieces of information have been determined, it is possible to arrange the data into a logical structure which can then be mapped into the storage objects supported by the database management system. In the case of relational databases the storage objects are tables which store data in rows and columns.

Each table may represent an implementation of either a logical object or a relationship joining one or more instances of one or more logical objects. Relationships between tables may then be stored as links connecting child tables with parents. Since complex logical relationships are themselves tables they will probably have links to more than one parent.

In an Object database the storage objects correspond directly to the objects used by the Object-oriented programming language used to write the applications that will manage and access the data. The relationships may be defined as attributes of the object classes involved or as methods that operate on the object classes.

Physical database design

The physical design of the database specifies the physical configuration of the database on the storage media. This includes detailed specification of data elements, data types, indexing options and other parameters residing in the DBMS data dictionary. It is the detailed design of a system that includes modules & the database's hardware & software specifications of the system.
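Indexing options, one of the physical-design parameters mentioned above, can be sketched as DDL. A small illustration via Python's sqlite3 module; the table and index names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Logical design: the table, its columns, and their data types
conn.execute("""
    CREATE TABLE Employees (
        EmployeeID INTEGER PRIMARY KEY,
        LastName   TEXT,
        HireDate   TEXT
    )""")

# Physical design choice: an index to speed up lookups by last name
conn.execute("CREATE INDEX idx_employees_lastname ON Employees(LastName)")

conn.execute("INSERT INTO Employees VALUES (1, 'Reyes', '2010-06-01')")

# EXPLAIN QUERY PLAN confirms the index is used for this search
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM Employees WHERE LastName = 'Reyes'
""").fetchall()
print(plan)
```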

************************************************

Inputs for System Design

n Design

n Converts functional models from analysis into models that represent the solution

n Focused on technical issues

n Requires less user involvement than analysis

n Design may use structured or OO approaches

**************************************************

DESIGN PHASE ACTIVITIES

Design Phase

n How a system will work using a particular technology.

n Each of the activities develops a specific portion of the final set of design documents.

n Various systems design documents also must be consistent and integrated to provide a comprehensive set of specifications for the complete system.

**************************************************

Design Phase Activity

Design and integrate the network

n Have we specified in detail how the various parts of the system will communicate with each other throughout the organization?

Design the application architecture

n Have we specified in detail how each system activity is actually carried out by the people and computers?

Design the user interface(s)

n Have we specified in detail how all users will interact with the system?

Design the system interface(s)

n Have we specified in detail how the system will work with all the other systems inside and outside the organization?

Design and integrate the database

n Have we specified in detail how and where the system will store all of the information needed by the organization?

Prototype for design details

n Have we created prototypes to ensure all detailed design decisions have been fully understood?

Design and integrate the system controls

n Have we specified in detail how we can ensure that the system operates correctly and that the data maintained by the system are safe and secure?

*****************************************************

Design and Integrate the Network

n Network Specialists have established the network based on an overall strategic plan.

n Designers choose an alternative that fits the existing network.

n Important technical issues:

¨ Reliability

¨ Security

¨ Throughput

¨ Synchronization

**************************************************

Design the Application Architecture

n How all system activities will actually be carried out.

n Define the automation boundary (separates the manual work done by people from the automated work done by computers)

n Varies depending on the development and deployment environments.

n Some activities are carried out by people, so manual procedures need to be designed.

**************************************************

Design the User Interface(s)

n How the user will interact with the system

n Something to be considered throughout the development process

**************************************************

Design the System Interfaces

n A new information system will affect many other information systems.

n The component that enables systems to share information; each needs to be designed in detail.

**************************************************

Design and Integrate the Database

n The data model is used to create the physical model of the database

n Database performance (e.g., response time)

n New databases are properly integrated with existing databases.

**************************************************

Prototype for Design Details

n Continue creating and evaluating prototypes

n Used to confirm design choices about user interfaces, the database, network architecture, and controls.

n To understand a variety of design decisions.

**************************************************

Design and Integrate the System Controls

n Ensuring that the system has adequate safeguards (system controls) to protect organizational assets.

n User Interface: Limit access to the system to authorized users.

n System Interface: Ensure that other systems cause no harm to this system.

n Application: Ensure that transactions recorded and other work done by the system are done correctly.

n Database: Ensure that data are protected from unauthorized access and from accidental loss due to software or hardware failure.

n Network: Ensure that communications through networks are protected.

**************************************************

To sum it up, here is a summary for the design phase of an IT Project.

“The purpose of the design phase of an IT project life cycle is to plan out a system that meets the requirements defined in the analysis phase. In the design phase, the project team defines the means of implementing the project solution—how the product will be created. To do this, the project team uses the inputs and tools to conduct the key activities, create the outputs, and meet the milestones for this phase.

The purpose of the design phase is to provide the project team with a means for assessing the quality of the solution before it has been implemented, when changes are still easy to make and are less costly. This phase includes the following elements.

* The inputs required for this phase are the corporate standards, business process prototype, and requirements analysis.

* Only standardized tools found in all offices, such as word processing software, spreadsheets, and presentation software, are used in the design phase.

* The key activities for the design phase are to review the end user interface design, create the technical design, and perform the quality verification and validation.

* The single output for the design phase is the design document.

* The milestones for the design phase are the architecture assessment deliverable, design sign-off, and lifecycle assessment complete deliverable.

During the design phase, the project manager enlists the help of the designer, the technical architect, and a representative of the end users. The key activities that must be conducted during the design phase are listed below.

1. Review the end user interface design.

Before the project manager can conduct the first key activity, the designer uses the project's storyboards to create the end user interface in terms of appearance, layout, and interaction techniques as seen by the end users.

The PM evaluates the design to ensure that it meets the users' needs and the corporate standards. The PM then meets with the designer and explains what's wrong and what's missing from the proposed design. If the PM is not satisfied with the design, the designer applies the necessary corrections, and the PM reviews it again.

When a PM looks at a design, he or she compares it to the inputs listed below. This evaluation is important because the developer will not be able to properly test the completed product during the testing phase if the design doesn't meet these requirements. If testing can't be conducted properly, the product can't move into the rollout phase.

* User needs. Does the design include each of the user requirements? Are each of the user requirements correctly incorporated into the design?

* Corporate standards. Does the design include all the standards that specify products or technologies that the development team will use? Are the standards properly implemented in the design?

2. Create the technical design.

Although PMs do not conduct the last two key activities of the design phase, it is important that they understand each activity. The designer and technical architect conduct the technical design activity to decide how they will implement the project design. By using the requirements specifications input as a guide, they create a document that includes the sections listed below. The technical design is then given to the PM to serve as the blueprint for the project.

* Framework. The framework is a design that can be reused on a software development project. For example, the text entry field used on a user interface framework can be reused in a database design.

* Coding standards. Programmers have to follow these rules to ensure consistency in the programming code since usually more than one programmer will work on a project. A sample coding standard might read, "Do not use capital letters for file names when coding."

* Task breakdown. Development is divided into tasks created separately and then integrated into a final product. Task breakdown makes it easier for a PM to assign work during the construction phase. For example, three tasks might be the user interface, the database, and the help feature.

* Project timeline. A timeline for completion is estimated for each task required to complete the project. For example, it will take four days to create the user interface, three days for the database, and two and a half days for the help feature, for a total of nine and a half days.

3. Perform the quality verification and validation.

As in the technical design activity, the project manager does not actually have a part in conducting the final key activity of the design phase, except to receive a sign-off form. During quality verification and validation, end users and technical personnel verify and validate that the proposed design meets the user and quality requirements.

Upon completion of the review, an approval form is given to the project manager, indicating approval of the completed system design. This sign-off then becomes one of the milestones for the design phase.

Remember, it's easier and less costly to make changes in the design phase, before you implement the actual solution for the project. Take your time and design it right!”

**************************************************

APPLICATION ARCHITECTURE

Client/Server Architecture

n Client-Server divides programs into two types

n Server – manages information system resources or provides well defined services for client

n Client – communicates with server to request resources or services

n Advantage – Deployment flexibility

¨ Location, scalability, maintainability

n Disadvantage – Potential performance, security, and reliability issues from network communication
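The server-provides/client-requests split in the bullets above can be sketched with Python's standard socket module. This is a toy service (the server upper-cases whatever the client sends), not a production design; the host, port choice, and message are all illustrative.

```python
import socket
import threading

# Server: provides one well-defined service (upper-casing text) to a client
def serve(sock):
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)      # request arrives from the client
        conn.sendall(data.upper())  # the well-defined service: upper-case it

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Client: communicates with the server over the network to request the service
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello server")
reply = client.recv(1024)
client.close()
server.close()
print(reply)  # b'HELLO SERVER'
```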

*******************************************************

Three-Layer Client/Server Architecture

The data layer is the layer in a client-server configuration that manages stored data, implemented as one or more databases

“A Data Access Layer (DAL) is a layer of a computer program which provides simplified access to data stored in persistent storage of some kind, such as an entity-relational database.

For example, the DAL might return a reference to an object (in terms of object-oriented programming) complete with its attributes instead of a row of fields from a database table. This allows the client (or user) modules to be created with a higher level of abstraction. This kind of model could be implemented by creating a class of data access methods that directly reference a corresponding set of database stored procedures. Another implementation could potentially retrieve or write records to or from a file system. The DAL hides this complexity of the underlying data store from the external world.

For example, instead of using commands such as insert, delete, and update to access a specific table in a database, a class and a few stored procedures could be created in the database. The procedures would be called from a method inside the class, which would return an object containing the requested values. Or, the insert, delete and update commands could be executed within simple functions like registeruser or loginuser stored within the data access layer.

Also, business logic methods from an application can be mapped to the Data Access Layer. So, for example, instead of making a query into a database to fetch all users from several tables the application can call a single method from a DAL which abstracts those database calls.

Applications using a data access layer can be either database server dependent or independent. If the data access layer supports multiple database types, the application becomes able to use whatever databases the DAL can talk to. In either circumstance, having a data access layer provides a centralized location for all calls into the database, and thus makes it easier to port the application to other database systems (assuming that 100% of the database interaction is done in the DAL for a given application).

Object-Relational Mapping tools provide data layers in this fashion, following the active record model. The ORM/active-record model is popular with web frameworks, but it has not been proven to be better than a straightforward approach of implementing a collection of domain-specific data access functions.”
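A minimal sketch of the DAL idea from the quoted passage, assuming a sqlite3 backing store and a hypothetical users table: callers use register_user and get_user and never see SQL or the storage engine.

```python
import sqlite3

class UserDAL:
    """Data access layer: hides the SQL and the storage engine from callers."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    def register_user(self, name):
        # A simple function wrapping an insert, as the quoted text suggests
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid

    def get_user(self, user_id):
        # Returns a plain dict instead of a raw database row
        row = self.conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)).fetchone()
        return {"id": row[0], "name": row[1]} if row else None

dal = UserDAL()
uid = dal.register_user("Karen")
print(dal.get_user(uid))  # {'id': 1, 'name': 'Karen'}
```

Because every database call goes through this one class, swapping sqlite3 for another store would only touch the DAL, which is exactly the portability point the quoted passage makes.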

The business logic layer contains the programs that implement the rules and procedures of business processing (or program logic of the application)

-“ A business logic layer (BLL), also known as the domain layer, is a software engineering practice of compartmentalizing. The business logic layer is usually one of the tiers in a multitier architecture. It separates the business logic from other modules, such as the data access layer and user interface. By doing this, the business logic of an application can often withstand modifications or replacements of other tiers. For example, in an application with a properly separated business logic layer and data access layer, the data access layer could be rewritten to retrieve data from a different database, without affecting any of the business logic. This practice allows software application development to be more effectively split into teams, with each team working on a different tier simultaneously.

Within a BLL objects can further be partitioned into business processes (business activities) and business entities. Business process objects typically implement the controller pattern, i.e. they contain no data elements but have methods that orchestrate interaction among business entities. Business entities typically correspond to entities in the logical domain model, rather than the physical database model. Domain entities are a super-set of data layer entities or data transfer objects, and may aggregate zero or more DLEs/DTOs.”
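The business-process/business-entity split described above can be sketched like this; the order-processing names are hypothetical. The process object holds no data and only orchestrates the entities, per the controller pattern the quote mentions.

```python
# Business entities: correspond to the logical domain model and hold the data
class Product:
    def __init__(self, name, price):
        self.name = name
        self.price = price

class Order:
    def __init__(self):
        self.lines = []  # (product, quantity) pairs

    def total(self):
        return sum(p.price * qty for p, qty in self.lines)

# Business process object: contains no data elements, only methods that
# orchestrate interaction among the business entities
class OrderProcess:
    def add_line(self, order, product, quantity):
        order.lines.append((product, quantity))

    def checkout(self, order):
        # The business rule lives here, not in the data or view layers
        return round(order.total(), 2)

process = OrderProcess()
order = Order()
process.add_line(order, Product("Pen", 1.50), 4)
amount = process.checkout(order)
print(amount)  # 6.0
```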

The view layer contains the user interface and other components to access the system (accepts user input, and formats and displays processing results)

In software engineering, multi-tier architecture (often referred to as n-tier architecture) is a client–server architecture in which the presentation, the application processing, and the data management are logically separate processes. For example, an application that uses middleware to service data requests between a user and a database employs multi-tier architecture. The most widespread use of multi-tier architecture is the three-tier architecture.

N-tier application architecture provides a model for developers to create a flexible and reusable application. By breaking up an application into tiers, developers only have to modify or add a specific layer, rather than have to rewrite the entire application over. There should be a presentation tier, a business or data access tier, and a data tier.

The concepts of layer and tier are often used interchangeably. However, one fairly common point of view is that there is indeed a difference, and that a layer is a logical structuring mechanism for the elements that make up the software solution, while a tier is a physical structuring mechanism for the system infrastructure.

Three-tier[2] is a client–server architecture in which the user interface, functional process logic ("business rules"), computer data storage and data access are developed and maintained as independent modules, most often on separate platforms. It was developed by John J. Donovan in Open Environment Corporation (OEC), a tools company he founded in Cambridge, MA.

The three-tier model is a software architecture and a software design pattern.

Apart from the usual advantages of modular software with well-defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently as requirements or technology change. For example, a change of operating system in the presentation tier would only affect the user interface code.

Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface, functional process logic may consist of one or more separate modules running on a workstation or application server, and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may be multi-tiered itself (in which case the overall architecture is called an "n-tier architecture").

Three-tier architecture has the following three tiers:

Presentation tier

This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with the other tiers by sending results to the browser/client and to the other tiers in the network.

Application tier (business logic, logic tier, data access tier, or middle tier)

The logic tier is pulled out from the presentation tier and, as its own layer, it controls an application’s functionality by performing detailed processing.

Data tier

This tier consists of database servers. Here information is stored and retrieved. This tier keeps data neutral and independent from application servers or business logic. Giving data its own tier also improves scalability and performance.
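A compressed, single-process sketch may make the direction of the calls concrete. All names below are invented for illustration; in a real deployment each tier would run as a separate process or on a separate platform:

```python
# Data tier: stores and retrieves information, knows nothing about the callers.
class DataTier:
    def __init__(self):
        self._rows = {"sku-1": {"name": "Widget", "price": 9.99}}
    def fetch(self, key):
        return self._rows[key]

# Application (logic) tier: performs the detailed processing.
class LogicTier:
    def __init__(self, data):
        self._data = data
    def price_with_tax(self, sku, rate=0.10):
        row = self._data.fetch(sku)
        return round(row["price"] * (1 + rate), 2)

# Presentation tier: formats results; it never touches DataTier directly.
class PresentationTier:
    def __init__(self, logic):
        self._logic = logic
    def render(self, sku):
        return f"Total: {self._logic.price_with_tax(sku)}"

ui = PresentationTier(LogicTier(DataTier()))
print(ui.render("sku-1"))   # Total: 10.99
```

Note how the presentation object only holds a reference to the logic tier, which in turn holds the only reference to the data tier: the linear chain the next section contrasts with MVC.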

Comparison with the MVC architecture

At first glance, the three tiers may seem similar to the model-view-controller (MVC) concept; however, topologically they are different. A fundamental rule in a three-tier architecture is the client tier never communicates directly with the data tier; in a three-tier model all communication must pass through the middleware tier. Conceptually the three-tier architecture is linear. However, the MVC architecture is triangular: the view sends updates to the controller, the controller updates the model, and the view gets updated directly from the model.

From a historical perspective the three-tier architecture concept emerged in the 1990s from observations of distributed systems (e.g., web applications) in which the client, middleware and data tiers ran on physically separate platforms. MVC, by contrast, comes from the previous decade (work at Xerox PARC in the late 1970s and early 1980s) and is based on observations of applications that ran on a single graphical workstation; MVC was applied to distributed applications much later in its history (see Model 2).
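The triangular MVC flow described above, in which the controller updates the model and the view is updated directly by the model, can be sketched as a tiny observer arrangement (all names here are hypothetical):

```python
# Model: holds state and notifies attached views directly when it changes.
class Model:
    def __init__(self):
        self._observers = []
    def attach(self, view):
        self._observers.append(view)
    def set(self, value):
        for view in self._observers:   # third side of the triangle
            view.refresh(value)

# View: receives updates from the model, not from the controller.
class View:
    def __init__(self):
        self.shown = None
    def refresh(self, value):
        self.shown = value

# Controller: receives user input and updates the model.
class Controller:
    def __init__(self, model):
        self._model = model
    def user_input(self, value):
        self._model.set(value)

model, view = Model(), View()
model.attach(view)
Controller(model).user_input(42)
print(view.shown)   # 42
```

In a strict three-tier system the equivalent of `view.refresh` being called straight from the model would be forbidden: the result would have to travel back through the middle tier.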

Web development usage

In the web development field, three-tier is often used to refer to websites, commonly electronic commerce websites, which are built using three tiers:

1. A front-end web server serving static content, and potentially some cached dynamic content. In a web-based application, the front end is the content rendered by the browser; that content may be static or generated dynamically.

2. A middle dynamic-content processing and generation level application server, for example a Java EE, ASP.NET, or PHP platform.

3. A back-end database, comprising both data sets and the database management system or RDBMS software that manages and provides access to the data.

Other considerations

Data transfer between tiers is part of the architecture. Protocols involved may include one or more of SNMP, CORBA, Java RMI, .NET Remoting, Windows Communication Foundation, sockets, UDP, web services or other standard or proprietary protocols. Often middleware is used to connect the separate tiers. Separate tiers often (but not necessarily) run on separate physical servers, and each tier may itself run on a cluster.

Traceability

End-to-end traceability of n-tier systems is a challenging task that becomes more important as systems increase in complexity. The Application Response Measurement standard defines concepts and APIs for measuring performance and correlating transactions between tiers.

(The Three Layer Architecture image)

*******************************************************

Web Services Architecture

A client/server architecture

Packages software functionality into server processes (“services”)

Makes services available to applications via Web protocols

Web services are available to internal and external applications

* Developers can assemble an application using existing Web services
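As a rough illustration of packaging functionality into a server process and consuming it via Web protocols, the sketch below uses Python's standard-library XML-RPC support. The `add` service and the use of a local loopback address are invented for the example:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Package a function as a "service" exposed by a server process.
# Port 0 asks the OS for any free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]

# Run the server loop in the background so the client can call it.
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client assembles its application from the remote service; the call
# travels as XML over HTTP, a Web protocol.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)

server.shutdown()
server.server_close()
print(result)   # 5
```

The same `ServerProxy` call would work unchanged if the service moved to another machine, which is the point of exposing functionality through Web protocols.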

“Client–server model of computing is a distributed application structure that partitions tasks or workloads between service providers, called servers, and service requesters, called clients. Often clients and servers communicate over a computer network on separate hardware, but both client and server may reside in the same system. A server machine is a host that is running one or more server programs which share its resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers which await (listen for) incoming requests.

The client–server characteristic describes the relationship of cooperating programs in an application. The server component provides a function or service to one or many clients, which initiate requests for such services.

Functions such as email exchange, web access and database access, are built on the client–server model. For example, a web browser is a client program running on a user's computer that may access information stored on a web server on the Internet. Users accessing banking services from their computer use a web browser client to send a request to a web server at a bank. That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve the account information. The balance is returned to the bank database client, which in turn serves it back to the web browser client displaying the results to the user.

The client–server model has become one of the central ideas of network computing. Many business applications being written today use the client–server model. So do the Internet's main application protocols, such as HTTP, SMTP, Telnet, and DNS. In marketing, the term has been used to distinguish distributed computing by smaller dispersed computers from the "monolithic" centralized computing of mainframe computers. But this distinction has largely disappeared as mainframes and their applications have also turned to the client–server model and become part of network computing.

Each instance of the client software can send data requests to one or more connected servers. In turn, the servers can accept these requests, process them, and return the requested information to the client. Although this concept can be applied for a variety of reasons to many different kinds of applications, the architecture remains fundamentally the same.

The most basic type of client–server architecture employs only two types of hosts: clients and servers. This type of architecture is sometimes referred to as two-tier: the client acts as one tier, and the application in combination with the server acts as the other. The Internet increasingly uses a three-tier architecture, in which the server side consists of an application server (such as a web server) and a database server (such as a SQL server). The three tiers are thus client, application server, and database. All three tiers are relatively independent; for example, you can switch to a different web server while maintaining the integrity of the model.

The interaction between client and server is often described using sequence diagrams. Sequence diagrams are standardized in the Unified Modeling Language.

Specific types of clients include web browsers, email clients, and online chat clients.

Specific types of servers include web servers, ftp servers, application servers, database servers, name servers, mail servers, file servers, print servers, and terminal servers. Most web services are also types of servers.
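A minimal request/response exchange over a local TCP socket illustrates the roles just described: the server listens and awaits requests, the client initiates the session. The echo behaviour and message content are invented for the example:

```python
import socket
import threading

# Server side: bind, listen, and await an incoming request.
server = socket.socket()
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()        # server waits for a client to connect
    request = conn.recv(1024)
    conn.sendall(b"echo: " + request)  # share its "service" with the client
    conn.close()

threading.Thread(target=serve_once).start()

# Client side: initiate the communication session and send a request.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())   # echo: hello
```

Everything above HTTP, SMTP, and the other protocols named in the quoted text ultimately rests on exchanges of this shape.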

Advantages

* In most cases, a client–server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network. This creates an additional advantage to this architecture: greater ease of maintenance. For example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware and unaffected by that change.

* All data is stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.

* Since data storage is centralized, updates to that data are far easier to administer in comparison to a P2P paradigm. In the latter, data updates may need to be distributed and applied to each peer in the network, which is both time-consuming and error-prone, as there can be thousands or even millions of peers.

* Many mature client–server technologies are already available which were designed to ensure security, friendliness of the user interface, and ease of use.

* It works with multiple clients of different capabilities.

Disadvantages

* As the number of simultaneous client requests to a given server increases, the server can become overloaded. Contrast that to a P2P network, where its aggregated bandwidth actually increases as nodes are added, since the P2P network's overall bandwidth can be roughly computed as the sum of the bandwidths of every node in that network.

* The client–server paradigm lacks the robustness of a good P2P network. Under client–server, should a critical server fail, clients’ requests cannot be fulfilled. In P2P networks, resources are usually distributed among many nodes. Even if one or more nodes depart and abandon a downloading file, for example, the remaining nodes should still have the data needed to complete the download.

*******************************************************

Internet and Web-Based Application Architecture

· Web is a complex example of client/server architecture

· Can use Web protocols and browsers as application interfaces

· Benefits

* Accessibility

* Low-cost communication

* Widely implemented standards

“World Wide Web Protocols

Technically the World-Wide Web hinges on three enabling protocols: the HyperText Markup Language (HTML), which specifies a simple markup language for describing hypertext pages; the Hypertext Transfer Protocol (HTTP), which is used by web browsers to communicate with web servers; and Uniform Resource Locators (URLs), which are used to specify the links between documents.

HyperText Markup Language

The hypertext pages on the web are all written using the Hypertext Markup Language (HTML), a simple language consisting of a small number of tags to delineate logical constructs within the text. Unlike a procedural language such as PostScript (move 1 inch to the right, 2 inches down, and create a green WWW in 15-point bold Helvetica font), HTML deals with higher-level constructs such as "headings," "lists," "images," etc. This leaves individual browsers free to format text in the most appropriate way for their particular environment; for example, the same document can be viewed on a Mac, on a PC, or on a line-mode terminal, and while the content of the document remains the same, the precise way in which it is displayed will vary between the different environments.

The earliest version of HTML (subsequently labeled HTML1) was deliberately kept very simple to make the task of browser developers easier. Subsequent versions of HTML will allow more advanced features. HTML2 (approximately what most browsers support today) includes the ability to embed images in documents, lay out fill-in forms, and nest lists to arbitrary depths. HTML3 (currently being defined) will allow still more advanced features such as mathematical equations, tables, and figures with captions and flow-around text.
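The "logical constructs" point can be illustrated with Python's standard-library HTML parser: the parser (like a browser) sees only the tags, and how each heading or list item is displayed is left entirely to the rendering environment. The sample page below is invented:

```python
from html.parser import HTMLParser

# Collect the logical constructs (tag names) a browser would encounter.
class Outline(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

page = "<h1>Title</h1><ul><li>first</li><li>second</li></ul>"
parser = Outline()
parser.feed(page)
print(parser.tags)   # ['h1', 'ul', 'li', 'li']
```

Nothing in the markup says how large the heading is or what symbol the list items get; those decisions belong to the browser.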

Hypertext Transfer Protocol

Although most Web browsers are able to communicate using a variety of protocols, such as FTP, Gopher and WAIS, the most common protocol in use on the Web is that designed specifically for the WWW project, the HyperText Transfer Protocol. In order to give the fast response time needed for hypertext applications, a very simple protocol which uses a single round trip between the client and the server is used.

In the first phase of an HTTP transfer the browser sends a request for a document to the server. Included in this request is the description of the document being requested, as well as a list of document types that the browser is capable of handling. The Multipurpose Internet Mail Extensions (MIME) standard is used to specify the document types that the browser can handle, typically a variety of video, audio, and image formats in addition to plain text and HTML. The browser is able to specify weights for each document type, in order to inform the server about the relative desirability of different document types.

In response to a query the server returns the document to the browser using one of the formats acceptable to the browser. If necessary the server can translate the document from the format it is stored in into a format acceptable to the browser. For example, the server might have an image stored in the highly compressed JPEG image format; if a browser capable of displaying JPEG images requests the image, it is returned in that format. However, if a browser that can display images only in GIF format requests the same document, the server can translate the image and return the (larger) GIF version. This provides a way of introducing more sophisticated document formats in the future while still enabling older or less advanced browsers to access the same information.

In addition to the basic "GET" transaction described above, HTTP also supports a number of other transaction types, such as "POST" for sending the data from fill-out forms back to the server and "PUT", which might be used in the future to allow authors to save modified versions of documents back to the server.
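For illustration, a simplified version of the first-phase request described above might look like this on the wire. The host, path, and weight values are invented for the example:

```python
# A minimal HTTP GET request as a raw string: the request line, a Host
# header, and an Accept header listing MIME types the browser can handle.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.org\r\n"
    "Accept: text/html, image/jpeg;q=0.8, image/gif;q=0.5\r\n"
    "\r\n"
)
# The q= parameters are the weights telling the server that this browser
# prefers JPEG images over GIF when both are available.
print(request.splitlines()[0])   # GET /index.html HTTP/1.1
```

A server receiving this request would pick the representation with the highest acceptable weight, exactly the JPEG-versus-GIF negotiation the text describes.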

Uniform Resource Locators

The final key to the World-Wide Web is the URL, which allows hypertext documents to point to other documents located anywhere on the web. A URL consists of three major components:

<protocol>://<node>/<document location>

The first component specifies the protocol to be used to access the document, for example HTTP, FTP, or Gopher. The second component specifies the node on the network from which the document is to be obtained, and the third component specifies the location of the document on the remote machine. The third component of the URL is passed without modification by the browser to the server, and the interpretation of this component is performed by the server; so while a document's location is often specified as a Unix-like file specification, there is no requirement that this is how it is actually interpreted by the server.”
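Python's standard library can split a URL into exactly these three components; the URL below is only an example:

```python
from urllib.parse import urlparse

parts = urlparse("http://www.example.org/docs/page.html")
print(parts.scheme)   # http             (the protocol)
print(parts.netloc)   # www.example.org  (the node on the network)
print(parts.path)     # /docs/page.html  (the location on the remote machine)
```

As the quoted text notes, only the first two parts mean anything to the browser; the path is handed to the server to interpret however it likes.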

*******************************************************

Network Design

Key design issues are as follows:

· Integrate network needs of new system with existing network infrastructure

· Describe processing activity and network connectivity at each system location

· Describe communications protocols and middleware that connect layers

· Ensure that network capacity is sufficient

* Data size per access type and average

* Peak number of accesses per minute or hour
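A back-of-the-envelope capacity estimate along these lines might look like the following sketch. Every figure here is invented for illustration; the required bandwidth is roughly data size per access times peak accesses per second, plus headroom:

```python
# Assumed inputs, of the kind the activity-data matrices would supply.
peak_accesses_per_hour = 3600
avg_bytes_per_access = 50_000      # 50 KB per access (assumed)
headroom = 2.0                     # 100% safety margin (assumed)

peak_per_second = peak_accesses_per_hour / 3600
# Multiply by 8 to convert bytes to bits, then apply the safety margin.
required_bps = peak_per_second * avg_bytes_per_access * 8 * headroom
print(f"{required_bps / 1e6:.1f} Mbit/s")   # 0.8 Mbit/s
```

The same arithmetic, repeated per LAN or WAN link, tells the designer whether existing capacity is sufficient or must be upgraded.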

*******************************************************

Network Integration

Modern organizations rely on networks to support many different applications. Thus, most new systems must be integrated into existing networks without disrupting existing applications. Network design and management are highly technical tasks, and most organizations have permanent in-house staff, contractors, or consultants to handle network administration.

*******************************************************

Network Description

Location-related information gathered during analysis may have been documented using location diagrams, activity location matrices, and activity-data matrices. During network design, the analyst expands the information content of these documents to include processing locations, communication protocols, middleware, and communication capacity.

*******************************************************

Communication Protocols and Middleware

The network diagram is also a starting point for specifying protocol and middleware requirements. For example, the private WAN connections must support protocols required to process Microsoft Active Directory logins and queries. If the WAN fails, messages are routed through encrypted (VPN) connections over the Internet, so those connections must support the same protocols as the private WAN.

“Middleware is computer software that connects software components or applications. The software consists of a set of services that allows multiple processes running on one or more machines to interact. This technology evolved to provide for interoperability in support of the move to coherent distributed architectures, which are most often used to support and simplify complex distributed applications. It includes web servers, application servers, and similar tools that support application development and delivery. Middleware is especially integral to modern information technology based on XML, SOAP, Web services, and service-oriented architecture.

Middleware sits "in the middle" between application software that may be working on different operating systems. It is similar to the middle layer of a three-tier single system architecture, except that it is stretched across multiple systems or applications. Examples include EAI software, telecommunications software, transaction monitors, and messaging-and-queueing software.

The distinction between operating system and middleware functionality is, to some extent, arbitrary. While core kernel functionality can only be provided by the operating system itself, some functionality previously provided by separately sold middleware is now integrated in operating systems. A typical example is the TCP/IP stack for telecommunications, nowadays included in virtually every operating system.

In simulation technology, middleware is generally used in the context of the high level architecture (HLA) that applies to many distributed simulations. It is a layer of software that lies between the application code and the run-time infrastructure. Middleware generally consists of a library of functions, and enables a number of applications – simulations or federates in HLA terminology – to page these functions from the common library rather than re-create them for each application.

Middleware can also be described simply as software that provides a link between separate software applications. Middleware is sometimes called plumbing because it connects two applications and passes data between them. Middleware allows data contained in one database to be accessed through another. This definition would fit enterprise application integration and data integration software.

ObjectWeb defines middleware as: "The software layer that lies between the operating system and applications on each side of a distributed computing system in a network."


Middleware is a relatively new addition to the computing landscape. It gained popularity in the 1980s as a solution to the problem of how to link newer applications to older legacy systems, although the term had been in use since 1968.[2] It also facilitated distributed processing, the connection of multiple applications to create a larger application, usually over a network.

IBM, Red Hat, and Oracle Corporation are major vendors providing middleware software. Vendors such as Axway, SAP, TIBCO, Informatica, Pervasive and webMethods were specifically founded to provide Web-oriented middleware tools. Groups such as the Apache Software Foundation, OpenSAF and the ObjectWeb Consortium encourage the development of open source middleware. Microsoft .NET “Framework” architecture is essentially “Middleware” with typical middleware functions distributed between the various products, with most inter-computer interaction by industry standards, open APIs or RAND software licence.

Middleware services provide a more functional set of application programming interfaces to allow an application to:

* Locate transparently across the network, thus providing interaction with another service or application

* Filter data to make it readily usable, or to make it public via an anonymization process for privacy protection (for example)

* Be independent from network services

* Be reliable and always available

* Add complementary attributes like semantics

when compared to the operating system and network services.

Middleware offers some unique technological advantages for business and industry. For example, traditional database systems are usually deployed in closed environments where users access the system only via a restricted network or intranet (e.g., an enterprise’s internal network). With the phenomenal growth of the World Wide Web, users can access virtually any database for which they have proper access rights from anywhere in the world. Middleware addresses the problem of varying levels of interoperability among different database structures. Middleware facilitates transparent access to legacy database management systems (DBMSs) or applications via a web server without regard to database-specific characteristics [3].

Businesses frequently use middleware applications to link information from departmental databases, such as payroll, sales, and accounting, or databases housed in multiple geographic locations [4]. In the highly competitive healthcare community, laboratories make extensive use of middleware applications for data mining, laboratory information system (LIS) backup, and to combine systems during hospital mergers. Middleware helps bridge the gap between separate LISs in a newly formed healthcare network following a hospital buyout.

Wireless networking developers can use middleware to meet the challenges associated with wireless sensor network (WSN) technologies. Implementing a middleware application allows WSN developers to integrate operating systems and hardware with the wide variety of applications that are currently available.

Middleware can help software developers avoid having to write application programming interfaces (APIs) for every control program, by serving as an independent programming interface for their applications. For Future Internet network operation through traffic monitoring in multi-domain scenarios, using mediator tools (middleware) is a powerful help, since they allow operators, researchers and service providers to supervise quality of service and analyse eventual failures in telecommunication services (see MOMENT[1] for example).

Finally, e-commerce uses middleware to assist in handling rapid and secure transactions over many different types of computer environments. In short, middleware has become a critical element across a broad range of industries, thanks to its ability to bring together resources across dissimilar networks or computing platforms.

In 2004 members of the European Broadcasting Union (EBU) carried out a study of middleware with respect to system integration in broadcast environments. This involved system design engineering experts from 10 major European broadcasters working over a 12-month period to understand the effect of predominantly software-based products on media production and broadcasting system design techniques. The resulting reports were published and are freely available from the EBU web site: Tech 3300 and Tech 3300s.”
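The "plumbing" role described in the quoted material can be sketched as a tiny in-process message bus that connects two applications which never call each other directly. All names below are invented for illustration:

```python
from queue import Queue

# Hypothetical middleware: a bus that passes data between applications,
# so the producer and the consumer need no knowledge of each other.
class MessageBus:
    def __init__(self):
        self._queue = Queue()
    def publish(self, message):
        self._queue.put(message)
    def consume(self):
        return self._queue.get()

bus = MessageBus()
bus.publish({"event": "order_created", "id": 7})   # application A sends
received = bus.consume()                           # application B receives
print(received["event"])   # order_created
```

Real messaging-and-queueing middleware does the same job across machines and networks, with delivery guarantees this sketch omits.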

*******************************************************

Network Capacity

Information from the activity-location and activity-data matrices is the starting point for estimating communication capacity requirements for the various LAN, WAN, and internet connections.
