Unit-5 Software Architecture

Here are some important questions asked in university examinations for Unit 5, Software Architecture.


Que 5. Describe cohesion and coupling. Explain how functional independence is measured using them.

The purpose of the design phase in the Software Development Life Cycle is to produce a solution to the problem given in the SRS (Software Requirements Specification) document. The output of the design phase is the Software Design Document (SDD).

Coupling and Cohesion are two key concepts in software engineering that are used to measure the quality of a software system’s design.

Coupling refers to the degree of interdependence between software modules. High coupling means that modules are closely connected and changes in one module may affect other modules. Low coupling means that modules are independent and changes in one module have little impact on other modules.

Cohesion refers to the degree to which elements within a module work together to fulfill a single, well-defined purpose. High cohesion means that elements are closely related and focused on a single purpose, while low cohesion means that elements are loosely related and serve multiple purposes.

Both coupling and cohesion are important factors in determining the maintainability, scalability, and reliability of a software system. High coupling and low cohesion can make a system difficult to change and test, while low coupling and high cohesion make a system easier to maintain and improve.

Basically, design is a two-part iterative process. The first part is conceptual design, which tells the customer what the system will do. The second is technical design, which allows the system builders to understand the actual hardware and software needed to solve the customer's problem.

Coupling: Coupling is the measure of the degree of interdependence between modules. Good software has low coupling. Functional independence is measured using these two criteria together: a module is functionally independent when it exhibits high cohesion and low coupling, since it then performs one well-defined task and requires minimal interaction with other modules.

Types of Coupling: 

  • Data Coupling: If the dependency between modules is based on the fact that they communicate by passing only data, the modules are said to be data coupled. In data coupling, the components are independent of each other and communicate through data. Module communications don't contain tramp data. Example: a customer billing system.
  • Stamp Coupling: In stamp coupling, a complete data structure is passed from one module to another, so it involves tramp data. It may be necessary for efficiency reasons; this choice is made by an insightful designer, not a lazy programmer.
  • Control Coupling: If the modules communicate by passing control information, they are said to be control coupled. It can be bad if parameters indicate completely different behavior, and good if parameters allow factoring and reuse of functionality. Example: a sort function that takes a comparison function as an argument.
  • External Coupling: In external coupling, modules depend on other modules external to the software being developed, or on a particular type of hardware. Examples: a protocol, an external file, a device format.
  • Common Coupling: The modules share data such as global data structures. A change to global data means tracing back to all modules that access that data to evaluate the effect of the change. Its disadvantages include difficulty in reusing modules, reduced ability to control data accesses, and reduced maintainability.
  • Content Coupling: In content coupling, one module can modify the data of another module, or control flow is passed from one module to the other. This is the worst form of coupling and should be avoided.
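The contrast between the best and worst cases above can be sketched in a few lines of Python. The function and class names are illustrative, not from any real system: the first pair of functions is data coupled (only the values they need cross the module boundary), while `apply_penalty` is content coupled to `Account` because it reaches into the object's internal state.

```python
# Data coupling: modules communicate by passing only the data they need.
def compute_tax(amount, rate):
    return amount * rate

def compute_bill(amount, rate):
    # Only plain values cross the module boundary.
    return amount + compute_tax(amount, rate)

# Content coupling (the worst form): one module modifies another
# module's internal data directly, bypassing its interface.
class Account:
    def __init__(self, balance):
        self._balance = balance  # intended to be private

def apply_penalty(account):
    account._balance -= 50  # reaches into Account's representation

print(compute_bill(100.0, 0.5))  # 150.0
```

A change to how `Account` stores its balance would silently break `apply_penalty`, which is exactly why content coupling should be avoided.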

Cohesion: Cohesion is a measure of the degree to which the elements of the module are functionally related. It is the degree to which all elements directed towards performing a single task are contained in the component. Basically, cohesion is the internal glue that keeps the module together. A good software design will have high cohesion. 

Types of Cohesion: 

  • Functional Cohesion: Every essential element for a single computation is contained in the component. A functionally cohesive module performs one and only one task; this is the ideal situation.
  • Sequential Cohesion: An element outputs data that becomes the input for another element, i.e., data flows between the parts. It occurs naturally in functional programming languages.
  • Communicational Cohesion: Two elements operate on the same input data or contribute towards the same output data. Example: update a record in the database and send it to the printer.
  • Procedural Cohesion: Elements of procedural cohesion ensure the order of execution. Actions are still weakly connected and unlikely to be reusable. Example: calculate student GPA, print student record, calculate cumulative GPA, print cumulative GPA.
  • Temporal Cohesion: The elements are related by timing: all tasks in the module must be executed in the same time span. Such a module typically contains the code for initializing all parts of the system, so many different activities occur at one time.
  • Logical Cohesion: The elements are logically related, not functionally. Example: a component reads inputs from tape, disk, and network, and all the code for these functions is in the same component. The operations are related, but the functions are significantly different.
  • Coincidental Cohesion: The elements are unrelated; they have no conceptual relationship other than their location in the source code. It is accidental and the worst form of cohesion. Example: printing the next line and reversing the characters of a string in a single component.
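A minimal Python sketch of the two extremes; the functions are hypothetical. `gpa` is functionally cohesive (every statement serves one computation), while `misc_utils` is coincidentally cohesive, bundling the unrelated actions from the example above.

```python
# Functional cohesion: the module does exactly one thing.
def gpa(grades):
    """Compute a grade-point average and nothing else."""
    return sum(grades) / len(grades)

# Coincidental cohesion (the worst form): unrelated actions kept
# together only because they share a location in the source code.
def misc_utils(line, text):
    print(line)        # "print next line"...
    return text[::-1]  # ...and also reverse a string

print(gpa([3.0, 4.0]))  # 3.5
```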

Advantages and Disadvantages:

Advantages of low coupling:

  • Improved maintainability: Low coupling reduces the impact of changes in one module on other modules, making it easier to modify or replace individual components without affecting the entire system.
  • Enhanced modularity: Low coupling allows modules to be developed and tested in isolation, improving the modularity and reusability of code.
  • Better scalability: Low coupling facilitates the addition of new modules and the removal of existing ones, making it easier to scale the system as needed.

Advantages of high cohesion:

  • Improved readability and understandability: High cohesion results in clear, focused modules with a single, well-defined purpose, making it easier for developers to understand the code and make changes.
  • Better error isolation: High cohesion reduces the likelihood that a change in one part of a module will affect other parts, making it easier to isolate and fix errors.
  • Improved reliability: High cohesion leads to modules that are less prone to errors and function more consistently, leading to an overall improvement in the reliability of the system.

Disadvantages of high coupling:

  • Increased complexity: High coupling increases the interdependence between modules, making the system more complex and difficult to understand.
  • Reduced flexibility: High coupling makes it more difficult to modify or replace individual components without affecting the entire system.
  • Decreased modularity: High coupling makes it more difficult to develop and test modules in isolation, reducing the modularity and reusability of code.

Disadvantages of low cohesion:

  • Increased code duplication: Low cohesion can lead to the duplication of code, as elements that belong together are split into separate modules.
  • Reduced functionality: Low cohesion can result in modules that lack a clear purpose and contain elements that don’t belong together, reducing their functionality and making them harder to maintain.
  • Difficulty in understanding the module: Low cohesion can make it harder for developers to understand the purpose and behavior of a module, leading to errors and a lack of clarity.

Que 6. Explain the features for Program Design Languages.

Program design languages are used to create algorithms and programs for software applications. These languages provide a set of features that allow programmers to design, implement, and test software programs. Here are some of the key features of program design languages:

  1. Syntax: Program design languages have their own set of rules and guidelines for writing code. Syntax refers to the rules that govern how the code is written, such as the use of punctuation, keywords, and data types. Proper syntax is essential for programs to be compiled and executed correctly.
  2. Data types: Program design languages support different types of data, including integers, floating-point numbers, characters, and strings. Data types allow programmers to manipulate data and perform operations on them.
  3. Control structures: Control structures are used to control the flow of a program. They include conditional statements, such as if-else and switch-case statements, and loops, such as for and while loops. Control structures enable programmers to write programs that make decisions and repeat tasks.
  4. Functions: Functions are used to break down programs into smaller, manageable parts. They allow programmers to reuse code and write modular programs. Functions take input, perform a set of operations, and return output.
  5. Libraries: Program design languages come with a set of libraries that provide additional functionality. Libraries are collections of functions and pre-written code that can be used to perform complex tasks. They help programmers to write efficient and reliable code.
  6. Debugging: Debugging is the process of identifying and fixing errors in code. Program design languages provide debugging tools that help programmers to find and fix errors in their code.
  7. Portability: Program design languages are portable, meaning that programs written in one language can be executed on different platforms and operating systems. This portability allows programmers to write code once and run it on different machines.
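The features above can be seen together in one short sketch. Any modern language would do; Python is used here, and the names are illustrative: a function with declared data types, a conditional control structure, and a call into a standard library.

```python
from statistics import mean  # library feature: pre-written, reusable code

def letter_grade(score: float) -> str:  # function with typed data
    # control structure: conditional flow
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    else:
        return "C"

scores = [92.0, 85.0, 71.0]        # a list of floating-point data
print(letter_grade(mean(scores)))  # the average is about 82.7, so "B"
```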

Overall, program design languages provide a set of features that enable programmers to write efficient and reliable software programs. These languages continue to evolve to meet the changing needs of the software development industry.


Que 7. Explain the principles of data design.

Data design (data architecting) creates a model of data and/or information that is
represented at a high level of abstraction.

At program component level – the design of data structures and the associated algorithms required to manipulate them is essential to the creation of high-quality applications.

At the application level – the translation of a data model into a database is
pivotal to achieving the business objectives of a system.

At the business level – the collection of information stored in disparate databases and reorganized into a “data warehouse” enables data mining or knowledge discovery that can have an impact on the success of the business itself.

Principles of data design :

1. The systematic analysis principles applied to function and behavior should also be applied to data. We spend much time and effort deriving, reviewing, and specifying functional requirements and preliminary design. Representations of data flow and content should also be developed and reviewed, data objects should be identified, alternative data organizations should be considered, and the impact of data modeling on software design should be evaluated. For example, the specification of a multiringed linked list may nicely satisfy data requirements but lead to an unwieldy software design. An alternative data organization may lead to better results.

2. All data structures and the operations to be performed on each should be identified. The design of an efficient data structure must take the operations to be performed on the data structure into account. For example, consider a data structure made up of a set of diverse data elements that is to be manipulated in a number of major software functions. Upon evaluation of the operations performed on the data structure, an abstract data type is defined for use in subsequent software design. Specification of the abstract data type may simplify software design considerably.

3. A data dictionary should be established and used to define both data and program design. The concept of a data dictionary was introduced in Chapter 12. A data dictionary explicitly represents the relationships among data objects and the constraints on the elements of a data structure. Algorithms that must take advantage of specific relationships can be more easily defined if a dictionary-like data specification exists.

4. Low-level data design decisions should be deferred until late in the design process. A process of stepwise refinement may be used for the design of data. That is, overall data organization may be defined during requirements analysis, refined during data design work, and specified in detail during component level design. The top-down approach to data design provides benefits that are analogous to a top-down approach to software design—major structural attributes are designed and evaluated first so that the architecture of the data
may be established.

5. The representation of data structure should be known only to those modules that must make direct use of the data contained within the structure. The concept of information hiding and the related concept of coupling provide important insight into the quality of a software design. This principle alludes to the importance of these concepts as well as “the importance of separating the logical view of a data object from its physical view”.

6. A library of useful data structures and the operations that may be applied to them should be developed. Data structures and operations should be viewed as a resource for software design. Data structures can be designed for
reusability. A library of data structure templates (abstract data types) can reduce both specification and design effort for data.

7. A software design and programming language should support the specification and realization of abstract data types. The implementation of a sophisticated data structure can be made exceedingly difficult if no means for direct specification of the structure exists in the programming language chosen for implementation.

These principles form a basis for a component-level data design approach that can
be integrated into both the analysis and design activities.
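Principles 5 and 6 in particular can be illustrated with a small abstract data type. This is a generic sketch, not from the text: clients of `Stack` use only its operations and never see the underlying list, so the representation could later change (say, to a linked list) without affecting them.

```python
class Stack:
    """A reusable ADT: the representation is hidden from clients."""
    def __init__(self):
        self._items = []          # physical view: hidden inside the module

    def push(self, item):         # logical view: the only access paths
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push("a")
s.push("b")
print(s.pop())  # b
```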

Que 8. What is an architectural style? Explain taxonomy of style and patterns.

Software needs architectural design to represent its overall structure. IEEE defines architectural design as "the process of defining a collection of hardware and software components and their interfaces to establish the framework for the development of a computer system." Software built for computer-based systems can exhibit one of many architectural styles.

1] Data centered architectures: 

  • A data store will reside at the center of this architecture and is accessed frequently by the other components that update, add, delete or modify the data present within the store.
  • The figure illustrates a typical data-centered style. Client software accesses a central repository. A variation of this approach transforms the repository into a blackboard that notifies client software whenever data related to, or of interest to, a client changes.
  • This data-centered architecture promotes integrability: existing components can be changed and new client components can be added to the architecture without concern for other clients.
  • Data can be passed among clients using blackboard mechanism.
Data-centered architecture

Advantage of Data centered architecture

  • The repository of data is independent of the clients.
  • Clients work independently of each other.
  • It may be simple to add additional clients.
  • Modification can be very easy.

2] Data flow architectures: 

  • This kind of architecture is used when input data is transformed into output data through a series of computational or manipulative components.
  • The figure represents a pipe-and-filter architecture: a set of components called filters connected by pipes.
  • Pipes are used to transmit data from one component to the next.
  • Each filter works independently and is designed to take data input of a certain form and produce data output of a specified form for the next filter. The filters don't require any knowledge of the workings of neighboring filters.
  • If the data flow degenerates into a single line of transforms, it is termed batch sequential. This structure accepts a batch of data and then applies a series of sequential components to transform it.
Data flow architectures
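A pipe-and-filter flow can be sketched with Python generators (an illustrative choice; the filter names are invented). Each filter consumes one stream and yields another without any knowledge of its neighbors, and the generator connections play the role of pipes.

```python
def read_lines(text):        # source filter: produces the initial stream
    for line in text.splitlines():
        yield line

def strip_blanks(lines):     # transform filter: drops empty lines
    for line in lines:
        if line.strip():
            yield line

def to_upper(lines):         # transform filter: uppercases each line
    for line in lines:
        yield line.upper()

# The "pipes" are just the generator connections between filters.
pipeline = to_upper(strip_blanks(read_lines("alpha\n\nbeta")))
print(list(pipeline))  # ['ALPHA', 'BETA']
```

Because each filter only depends on the shape of its input stream, filters can be reordered, replaced, or run concurrently, which is exactly the advantage claimed for this style.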

Advantages of Data Flow architecture

  • It encourages maintenance, reuse, and modification.
  • Concurrent execution is supported.

Disadvantages of Data Flow architecture

  • It frequently degenerates to a batch sequential system.
  • Data flow architecture does not suit applications that require greater user engagement.
  • It is not easy to coordinate two different but related streams.

3] Call and Return architectures: It is used to create a program that is easy to scale and modify. Many sub-styles exist within this category. Two of them are explained below. 

  • Remote procedure call architecture: The components of a main program or subprogram architecture are distributed across multiple computers on a network.
  • Main program or subprogram architectures: The main program structure decomposes into a number of subprograms or functions arranged in a control hierarchy. The main program invokes a number of subprograms, which can in turn invoke other components.

4] Object Oriented architecture: The components of a system encapsulate data and the operations that must be applied to manipulate the data. The coordination and communication between the components are established via the message passing.

Characteristics of  Object Oriented architecture

  • Objects protect the system's integrity.
  • An object is unaware of the representation of other objects.

Advantage of Object Oriented architecture

  • It enables the designer to separate a challenge into a collection of autonomous objects.
  • Other objects are unaware of the implementation details of an object, allowing changes to be made without having an impact on other objects.

5] Layered architecture: 

  • A number of different layers are defined, with each layer performing a well-defined set of operations. Moving inward, each layer performs operations that are progressively closer to the machine instruction set.
  • At the outer layer, components receive user interface operations; at the inner layers, components perform operating system interfacing (communication and coordination with the OS).
  • Intermediate layers provide utility services and application software functions.
  • One common example of this architectural style is the OSI-ISO (Open Systems Interconnection / International Organization for Standardization) communication system.
Layered architecture

Que 9. Explain
1. Graphical Design Notation.
2. Tabular Design Notation.
3. Transform mapping
4. Layered Architecture.

1] Graphical Design Notation

Obviously, graphical tools such as flowcharts or box diagrams provide useful pictorial patterns that readily depict procedural detail. However, if graphical tools are misused, the wrong picture may lead to the wrong software.

A flowchart is quite simple pictorially. A box is used to indicate a processing step, a diamond represents a logical condition, and arrows show the flow of control. The figure illustrates three structured constructs:

  1. The sequence is represented as two processing boxes connected by a line (arrow) of control.
  2. Condition, also called if-then-else, is depicted as a decision diamond that, if true, causes then-part processing to occur and, if false, invokes else-part processing.
  3. Repetition is represented using two slightly different forms:
  • The do-while form tests a condition and executes a loop task repetitively as long as the condition holds true.
  • The repeat-until form executes the loop task first, then tests a condition and repeats the task until the condition fails.
    The selection (or select-case) construct shown in the figure is actually an extension of the if-then-else: a parameter is tested by successive decisions until a true condition occurs, and a case-part processing path is executed.
  • The structured constructs may be nested within one another. By nesting constructs in this manner, a complex logical schema may be developed. It should be noted that any one of the nested constructs may be a reference to another module, thereby accomplishing the procedural layering implied by program structure.
  • In general, the dogmatic use of only the structured constructs can introduce inefficiency when an escape from a set of nested loops or nested conditions is required. More importantly, the additional complication of all logical tests along the path of escape can cloud software control flow, increase the possibility of error, and have a negative impact on readability and maintainability.
  • Another graphical design tool, the box diagram, evolved from a desire to develop a procedural design representation that would not allow violation of the structured constructs.
  • Developed by Nassi and Shneiderman, these diagrams (also called Nassi-Shneiderman charts or N-S charts) have the following characteristics:
    • The functional domain (the scope of repetition or if-then-else) is well defined and clearly visible as a pictorial representation.
    • Arbitrary transfer of control is impossible.
    • The scope of local and/or global data can be easily determined.
    • Recursion is easy to represent.
  • The graphical representation of structured constructs using the box diagram is illustrated in Figure. The fundamental element of the diagram is a box.
    • To represent sequence, two boxes are connected bottom to top.
    • To represent if-then-else, a condition box is followed by a then-part and else-part box.
    • Repetition is depicted with a bounding pattern that encloses the process (do-while part or repeat-until part) to be repeated.
    • Finally, selection is represented using the graphical form shown at the bottom of the figure.
  • Like flowcharts, a box diagram is layered on multiple pages as processing elements of a module are refined. A “call” to a subordinate module can be represented within a box by specifying the module name enclosed by an oval.
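The two repetition forms described above are easy to confuse, so here is a small Python sketch (Python has `while` but no native repeat-until, so the second form is emulated with a `break`); the function names are invented.

```python
# do-while: the condition is tested BEFORE each pass, so the body
# may execute zero times.
def sum_up_to(limit):
    total, i = 0, 1
    while i <= limit:
        total += i
        i += 1
    return total

# repeat-until: the body executes at least once, and the condition
# is tested AFTER each pass.
def read_until_zero(values):
    it = iter(values)
    read = []
    while True:
        read.append(next(it))   # body runs first
        if read[-1] == 0:       # "until" condition
            break
    return read

print(sum_up_to(3))                   # 6
print(read_until_zero([5, 2, 0, 9]))  # [5, 2, 0]
```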

2] Tabular Design Notation

  • In many software applications, a module may be required to evaluate a complex combination of conditions and select appropriate actions based on these conditions.
  • Decision tables provide a notation that translates actions and conditions into a tabular form.
  • The table is difficult to misinterpret and may even be used as machine-readable input to a table-driven algorithm.
  • Decision tables are an excellent design tool. They preceded software engineering by nearly a decade, but fit so well with software engineering that they might have been designed for that purpose.
  • Decision table organization is illustrated in Figure 14.4. Referring to the figure, the table is divided into four sections.
    • The upper left-hand quadrant contains a list of all conditions.
    • The lower left-hand quadrant contains a list of all actions that are possible based on combinations of conditions.
    • The right-hand quadrants form a matrix that indicates condition combinations and the corresponding actions that will occur for a specific combination. Therefore, each column of the matrix may be interpreted as a processing rule.
  • The following steps are applied to develop a decision table:
    1. List all actions that can be associated with a specific procedure (or module).
    2. List all conditions (or decisions made) during execution of the procedure.
    3. Associate specific sets of conditions with specific actions, eliminating impossible combinations of conditions; alternatively, develop every possible permutation of conditions.
    4. Define rules by indicating what action(s) occurs for a set of conditions.
  • The figure illustrates a decision table representation of the example. Each of the five rules indicates one of five viable condition combinations (e.g., a T (true) in both the fixed-rate and variable-rate accounts makes no sense in the context of this procedure, so this combination is omitted). As a general rule, the decision table can be used effectively to supplement other procedural design notation.
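A decision table maps naturally onto a table-driven dispatch. The sketch below uses hypothetical condition and action names loosely modeled on the fixed-rate/variable-rate example; each dictionary entry is one processing rule (one column of the matrix), and the impossible (True, True) combination is simply omitted.

```python
# Rules: (fixed_rate, variable_rate) -> list of actions to perform.
RULES = {
    (True,  False): ["apply_fixed_interest"],
    (False, True):  ["apply_variable_interest"],
    (False, False): ["flag_account_for_review"],
    # (True, True) is an impossible combination and is omitted.
}

def decide(fixed_rate, variable_rate):
    # Look up the rule; unknown combinations fall through to no action.
    return RULES.get((fixed_rate, variable_rate), ["no_action"])

print(decide(True, False))  # ['apply_fixed_interest']
```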

Program Design Language (PDL)

  1. Program design language (PDL), also called structured English or pseudocode, is "a pidgin language in that it uses the vocabulary of one language (i.e., English) and the overall syntax of another (i.e., a structured programming language)".
  2. PDL is used as a generic reference for a design language. PDL looks like a modern programming language.
  3. The difference between PDL and a real programming language lies in the use of narrative text (e.g., English) embedded directly within PDL statements.
  4. Given the use of narrative text embedded directly into a syntactical structure, PDL cannot be compiled.
    However, PDL tools currently exist to translate PDL into a programming language “skeleton” and/or a graphical representation (e.g., a flowchart) of design.
  5. These tools also produce nesting maps, a design operation index, cross-reference tables, and a variety of other information.
  6. A program design language may be a simple transposition of a language such as Ada or C. Alternatively, it may be a product purchased specifically for procedural design.
  7. Regardless of origin, a design language should have the following characteristics
    • A fixed syntax of keywords that provide for all structured constructs, data declaration, and modularity characteristics.
    • A free syntax of natural language that describes processing features.
    • Data declaration facilities that should include both simple (scalar, array) and complex (linked list or tree) data structures.
    • Subprogram definition and calling techniques that support various modes of interface description.
  8. A basic PDL syntax should include constructs for subprogram definition, interface description, data declaration, techniques for block structuring condition constructs, repetition constructs, and I/O constructs.
  9. The following PDL defines an elaboration of the procedural design:
     PROCEDURE security.Monitor;
     INTERFACE RETURNS system.Status;
     TYPE signal IS STRUCTURE DEFINED
       name IS STRING LENGTH VAR;
       address IS HEX device location;
       bound value IS upper bound SCALAR;
       message IS STRING LENGTH VAR;
     END signal TYPE;
  10. It should be noted that PDL can be extended to include keywords for multitasking and/or concurrent processing, interrupt handling, inter-process synchronization, and many other features.
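As item 6 notes, PDL is meant to transpose readily into a real language. A rough Python transposition of the `security.Monitor` fragment above might look like the following; the field types and the threshold check are illustrative assumptions, since the PDL gives only declarations.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    # Field names follow the PDL structure; the types are assumptions.
    name: str           # name IS STRING LENGTH VAR
    address: int        # address IS HEX device location
    bound_value: float  # bound value IS upper bound SCALAR
    message: str        # message IS STRING LENGTH VAR

def security_monitor(signal):
    """PROCEDURE security.Monitor; INTERFACE RETURNS system.Status."""
    # Hypothetical status logic: alarm if the signal exceeds its bound.
    return "alarm" if signal.bound_value > 100.0 else "ok"

s = Signal("smoke", 0x1A, 120.0, "smoke detected")
print(security_monitor(s))  # alarm
```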

3] Transform Mapping

  • Recalling the fundamental system model (level 0 data flow diagram), information must enter and exit the software in an "external world" form. For example, data typed on a keyboard, tones on a telephone line, and video images in a multimedia application are all forms of external-world information. Such externalized data must be converted into an internal form for processing.
  • Information enters the system along paths that transform external data into an internal form. These paths are identified as incoming flow.
  • At the kernel of the software, a transition occurs. Incoming data are passed through a transform center and begin to move along paths that now lead “out” of the software. Data moving along these paths are called outgoing flow.
  • The overall flow of data occurs in a sequential manner and follows one, or only a few, “straight line” paths. When a segment of a data flow diagram exhibits these characteristics, transform flow is present.
  • Transform mapping is a set of design steps that allows a DFD with transform flow characteristics to be mapped into a specific architectural style. Transform mapping is described by applying the design steps to an example system, a portion of the SafeHome security software.

Design Steps

The steps begin with a re-evaluation of work done during requirements analysis and then move to the design of the software architecture.

  1. Review the Fundamental System Model: The fundamental system model encompasses the level 0 DFD and supporting information. In actuality, the design step begins with an evaluation of both the System Specification and the Software Requirements Specification.
  2. Review and Refine Data Flow Diagrams for the Software: Information obtained from the analysis models contained in the Software Requirements Specification is refined to produce greater detail. At the higher levels, each transform in the data flow diagram exhibits relatively high cohesion; that is, the process implied by a transform performs a single, distinct function that can be implemented as a module in the SafeHome software.
  3. Determine Whether the DFD has Transform or Transaction Flow Characteristics: In general, information flow within a system can always be represented as transform flow. However, when an obvious transaction characteristic is encountered, a different design mapping is recommended. In this step, the designer selects global (software-wide) flow characteristics based on the prevailing nature of the DFD. In addition, local regions of transform or transaction flow are isolated. These subflows can be used to refine the program architecture derived from the global characteristic described previously.
  4. Isolate the Transform center by Specifying Incoming and outgoing flow Boundaries: The incoming flow is as a path in which information is converted from external to internal form; outgoing flow converts from internal to external form. Incoming and outgoing flow boundaries are open to interpretation. That is, different designers may select slightly different points in the flow as boundary locations. In fact, alternative design solutions can be derived by varying the placement of flow boundaries. Although care should be taken when boundaries are selected, a variance of one bubble along a flow path will generally have little impact on the final program structure.
  5. Perform First-Level Factoring: Program structure shows (represents) a top-down distribution of control. Factoring results in a program structure in which top- level modules perform decision making and low-level modules perform most input, computation, and output work. Middle-level modules perform some control & do moderate amounts of work. When transform flow is encountered, a DFD is mapped to a specific structure (a call & return architecture) that provides control for incoming, transform, and outgoing information processing.
  6. Perform Second-Level Factoring: Second-level factoring is accomplished by mapping individual transforms (bubbles) of a DFD into appropriate modules within the architecture. Beginning at the transform center boundary and moving outward along the incoming and then the outgoing paths, transforms are mapped into subordinate levels of the software structure. Factoring is accomplished by moving outward from the transform center boundary, first on the incoming flow side. For each module, a brief processing narrative describes:
    • Information that passes into and out of the module (an interface description).
    • Information that is retained by a module, such as data stored in a local data structure.
    • A procedural narrative that indicates major decision points and tasks.
    • A brief discussion of restrictions and special features (e.g., file I/O, hardware dependent characteristics, special timing requirements).
  7. Refine the First-Iteration Architecture Using Design Heuristics for Improved Software Quality: A first-iteration architecture can always be refined by applying the concepts of module independence. Modules are exploded or imploded to produce sensible factoring, good cohesion, minimal coupling, and, most important, a structure that can be implemented without difficulty, tested without confusion, and maintained without grief.
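The factored structure the steps above produce can be sketched in code. This is a minimal illustration with invented module names (not from the source): a top-level controller makes decisions, first-level controllers manage the incoming, transform, and outgoing regions, and low-level worker functions (the mapped DFD bubbles) do the actual input, computation, and output work.

```python
# Hypothetical call-and-return structure produced by transform mapping.
# Worker modules (mapped from DFD bubbles):

def read_input(raw):
    # Incoming flow: convert external form (raw text) to internal form.
    return raw.split(",")

def validate(fields):
    # Incoming flow worker: clean and filter the internal data.
    return [f.strip() for f in fields if f.strip()]

def compute(fields):
    # Transform center: the core computation of the system.
    return [f.upper() for f in fields]

def format_output(results):
    # Outgoing flow: convert internal form back to external form.
    return "; ".join(results)

# First-level factoring: controllers for each flow region.

def incoming_controller(raw):
    return validate(read_input(raw))

def outgoing_controller(results):
    return format_output(results)

def main_controller(raw):
    # Top-level module: decision making and coordination only.
    fields = incoming_controller(raw)
    results = compute(fields)
    return outgoing_controller(results)

print(main_controller("a, b,,c"))  # → A; B; C
```

Note how control flows top-down: `main_controller` never touches data details itself, which is exactly the distribution of work that first- and second-level factoring aim for.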

4] Layered Architecture

  • A number of different layers are defined, with each layer performing a well-defined set of operations. Moving inward, each layer performs operations that come progressively closer to the machine instruction set.
  • At the outer layer, components service user interface operations; at the inner layer, components perform operating system interfacing (communication and coordination with the OS).
  • Intermediate layers provide utility services and application software functions.
  • A common example of this architectural style is the OSI-ISO (Open Systems Interconnection, International Organisation for Standardisation) communication model.
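As a small sketch of this style (the four layer names and the file-logging task are invented for illustration, not from the source), each layer exposes a narrow interface and calls only the layer directly beneath it, so a user-level operation moves step by step closer to the "machine":

```python
# Hypothetical four-layer architecture. Each layer depends only on the
# layer directly below it; the innermost layer stands in for OS calls.

class OsInterfaceLayer:
    """Innermost layer: operating system interfacing."""
    def __init__(self):
        self.device = []            # stands in for a real OS write call
    def write(self, data):
        self.device.append(data)

class UtilityLayer:
    """Shared utility services (e.g., formatting before output)."""
    def __init__(self):
        self.os = OsInterfaceLayer()
    def save_text(self, text):
        self.os.write(text + "\n")

class ApplicationLayer:
    """Application software functions (business rules)."""
    def __init__(self):
        self.util = UtilityLayer()
    def record_order(self, order_id):
        self.util.save_text(f"order={order_id}")

class UserInterfaceLayer:
    """Outermost layer: receives user interface operations."""
    def __init__(self):
        self.app = ApplicationLayer()
    def submit_order(self, order_id):
        self.app.record_order(order_id)
        return f"Order {order_id} submitted"

print(UserInterfaceLayer().submit_order(42))  # → Order 42 submitted
```

Because each layer only knows its immediate neighbor, any layer can be replaced (e.g., swapping the fake device for real file I/O) without touching the layers above it, which is the central benefit of the style.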

Que 10. Explain user interface design process.

The design process for user interfaces is iterative and can be represented using a spiral model.

[Figure 13.1: The user interface design process, shown as a spiral]

The user interface design process encompasses four distinct framework activities:

  1. User, task, and environment analysis and modeling
  2. Interface design
  3. Interface construction
  4. Interface validation.

1. User, Task and Environment Analysis and Modelling

  • The spiral shown in Figure 13.1 implies that each of user’s tasks will occur more than once, with each pass around the spiral representing additional elaboration of requirements and the resultant design.
  • In most cases, the implementation activity involves prototyping, the only practical way to validate what has been designed.
  • The initial analysis activity focuses on the profile of the users who will interact with the system. Skill level, business understanding, and general receptiveness to the new system are recorded; and different user categories are defined.
  • For each user category, requirements are elicited. In essence, the software engineer attempts to understand the system perception for each class of users.
  • Once general requirements have been defined, a more detailed task analysis is conducted. Those tasks that the user performs to accomplish the goals of the system are identified, described, and elaborated over a number of iterative passes through the spiral.
  • The analysis of the user environment focuses on the physical work environment. Among the questions to be asked are: Where will the interface be located physically? Will the user be sitting, standing, or performing other tasks unrelated to the interface?

2. Interface Design

  • Using this model as a basis, the design activity commences. The goal of interface design is to define a set of interface objects and actions that enable a user to perform all defined tasks in a manner that meets every usability goal defined for the system.
  • The implementation activity normally begins with the creation of a prototype that enables usage scenarios to be evaluated.

3. Interface Construction

  • As the iterative design process continues, a user interface tool kit may be used to complete the construction of the interface.
  • The interface design must also consider how the interface will be implemented, the environment that will be used, and other elements of the application that “sit behind” the interface.

4. Interface Validation

Validation focuses on:-

  • The ability of the interface to implement every user task correctly, to accommodate all task variations, and to achieve all general user requirements;
  • The degree to which the interface is easy to use and easy to learn; and
  • The user’s acceptance of the interface as a useful tool in their work.

Que 11. Outline the design documentation specification.

The design model can be represented as a pyramid. The symbolism of this shape is important: a pyramid is an extremely stable object with a wide base and a low center of gravity. Like the pyramid, we want to create a software design that is stable, by establishing a broad foundation using data design, a stable mid-region with architectural and interface design, and a sharp point by applying component-level design.

Design Documentation

  1. The Design Specification addresses different aspects of the design model and is completed as the designer refines his or her representation of the software.
  2. First, the overall scope of the design effort is described. Much of the information presented here is derived from the System Specification and the analysis model.
  3. Next, the data design is specified. Database structure, any external file structures, internal data structures, and a cross reference that connects data objects to specific files are all defined.
  4. The architectural design indicates how the program architecture has been derived from the analysis model. In addition, structure charts are used to represent the module hierarchy.
  5. The design of external and internal program interfaces is represented and a detailed design of the human/machine interface is described.
  6. Components, the separately addressable elements of software such as subroutines, functions, or procedures, are also covered in a separate section.
  7. Once program structure and interfaces have been established, we can develop guidelines for testing of individual modules and integration of the entire package in the testing & integration section.
  8. The final section of the Design Specification contains supplementary data. Algorithm descriptions, alternative procedures, tabular data, excerpts from other documents, and other relevant information are presented as a special note or as a separate appendix. It may be advisable to develop a Preliminary Operations/Installation Manual and include it as an appendix to the design document.

Que 12. Describe Data Structures, Database and Data Ware house with its levels in software development.

The data objects defined during software requirements analysis are modeled using entity/relationship diagrams and the data dictionary. The data design activity translates these elements of the requirements model into data structures at the software component level and, when necessary, a database architecture at the application level.
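As a hedged illustration of this translation (the "Customer" entity and its attributes are invented, not from the source), a data object from the entity/relationship model might become a component-level data structure plus, when a database is needed, an application-level table definition:

```python
from dataclasses import dataclass

# Component-level data structure derived from a hypothetical
# "Customer" entity in the E/R diagram and data dictionary.
@dataclass
class Customer:
    customer_id: int
    name: str
    city: str

# Application-level database architecture for the same entity;
# the DDL is kept as a string, since any relational DBMS would do.
CREATE_CUSTOMER_TABLE = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    city        TEXT
);
"""

c = Customer(1, "Acme Ltd", "Pune")
print(c.name)  # → Acme Ltd
```

The point is that the same requirements-model entity drives both levels, so attribute names and types stay consistent between the in-memory structure and the database schema.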

In years past, data architecture was generally limited to data structures at the program level and databases at the application level. But today, businesses large and small are awash in data. It is not unusual for even a moderately sized business to have dozens of databases serving many applications encompassing hundreds of gigabytes of data. The challenge for a business has been to extract useful information from this data environment, particularly when the information desired is cross functional (e.g., information that can be obtained only if specific marketing data are cross-correlated with product engineering data).

To solve this challenge, the business IT community has developed data mining techniques, also called knowledge discovery in databases (KDD), that navigate through existing databases in an attempt to extract appropriate business-level information. However, the existence of multiple databases, their different structures, the degree of detail contained within the databases, and many other factors make data mining difficult within an existing database environment. An alternative solution, called a data warehouse, adds an additional layer to the data architecture.

A data warehouse is a separate data environment that is not directly integrated with day-to-day applications but encompasses all data used by a business. In a sense, a data warehouse is a large, independent database that encompasses some, but not all, of the data that are stored in databases that serve the set of applications required by a business. But many characteristics differentiate a data warehouse from the typical database:

Subject orientation: A data warehouse is organized by major business subjects, rather than by business process or function. This leads to the exclusion of data that may be necessary for a particular business function but is generally not necessary for data mining.

Integration: Regardless of the source, the data exhibit consistent naming conventions, units and measures, encoding structures, and physical attributes, even when inconsistency exists across different application-oriented databases.

Time variancy: For a transaction-oriented application environment, data are accurate at the moment of access and for a relatively short time span (typically 60 to 90 days) before access. For a data warehouse, however, data can be accessed at a specific moment in time (e.g., customers contacted on the date that a new product was announced to the trade press). The typical time horizon for a data warehouse is five to ten years.

Nonvolatility: Unlike typical business application databases that undergo a continuing stream of changes (inserts, deletes, updates), data are loaded into the warehouse, but after the original transfer, the data do not change.

These characteristics present a unique set of design challenges for the data architect.
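The nonvolatility and time variancy described above can be contrasted in a tiny sketch (the customer data and dates are invented): an operational database updates rows in place, while a warehouse only appends dated snapshots, so historical states survive and can be queried "as of" a moment in time.

```python
from datetime import date

# Operational database: rows are updated in place (volatile).
operational = {"C001": {"city": "Pune"}}

# Data warehouse: each load appends a dated snapshot. After loading,
# rows are never updated or deleted (nonvolatile, time-variant).
warehouse = []

def load_snapshot(as_of, source):
    for key, row in source.items():
        warehouse.append({"as_of": as_of, "id": key, **row})

load_snapshot(date(2023, 1, 1), operational)
operational["C001"]["city"] = "Mumbai"      # operational update overwrites
load_snapshot(date(2023, 6, 1), operational)

# Both historical states survive in the warehouse:
history = [r["city"] for r in warehouse if r["id"] == "C001"]
print(history)  # → ['Pune', 'Mumbai']
```

In the operational store only "Mumbai" remains after the update; the warehouse retains both values, each tied to its load date, which is what makes time-based analysis possible.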

