Modularity

Investigating software modularity using class and module level metrics

Michael English , ... J.J. Collins , in Software Quality Assurance, 2016

Abstract

Modularity is at the core of software quality. It is an attribute which reflects the complexity of software systems and their ability to evolve. In previous metric-based research, modularity has been predominantly assessed at the class level, but this level seems inappropriate for the large-scale software systems of today due to information overload. More recently, work has begun to focus on the assessment of modularity at higher levels of abstraction for these types of software systems.

In moving to assess such systems at the module rather than the class level, the first question that arises is how to define the nature of a module. In previous research, the concept of a module has many definitions, some of which are ambiguous. In this chapter we investigate whether metrics for higher level abstractions can help to inform on the composition of High Level Modules (HLMs). Another interesting question is whether class level modularity metrics in object-oriented systems reflect module level modularity metrics in those systems. In other words, do relationships exist between metrics extracted at different levels of abstraction in systems?

This chapter probes these two issues by reviewing the relevant literature and performing a preliminary empirical study that aims to identify candidate HLMs and assesses the ability of modularity metrics at that level to inform on modularity issues at lower levels of abstraction in the system. It proposes a simple metric-based characterization of HLMs and suggests that metric correlations, at different levels of abstraction, do exist.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128023013000089

13th International Symposium on Process Systems Engineering (PSE 2018)

Feng Hua, ... Tong Qiu, in Computer Aided Chemical Engineering, 2018

2.2 Modularity of the network

The modularity of the substrate graph is measured using the community detection method (Blondel et al., 2008). The modularity is defined as a scalar value between -1 and 1 that measures the density of links inside a community compared to links between communities. If the number of within-community edges is the same as expected at random, we get Q = 0. Q increases with stronger community structure. Through analysis, the modularity of the substrate graph is 0.194. The high modularity can be considered as the result of the similarity of reactions. Reactions of the same mechanism connect the relevant components in the same pattern, resulting in the clustered 'cliques' in this network. The wide existence of communities suggests that the local features of the network topology might be extracted by a learning CNN architecture.
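To make this concrete, the following is a minimal sketch of measuring the modularity of a graph via community detection, in the spirit of the Blondel et al. (2008) method cited above. It uses the Louvain implementation in networkx (assumed version 2.8 or later), and the graph is a small illustrative stand-in, not the substrate graph discussed in the chapter.

```python
# Minimal sketch: community detection followed by a modularity measurement.
# The graph is a placeholder, not the chapter's substrate graph.
import networkx as nx

G = nx.karate_club_graph()  # illustrative graph only

# Louvain-style community detection (Blondel et al., 2008 heuristic).
communities = nx.community.louvain_communities(G, seed=42)

# Modularity Q of the resulting partition: Q = 0 means no better than random;
# larger Q indicates stronger community structure.
Q = nx.community.modularity(G, communities)
print(f"{len(communities)} communities, modularity Q = {Q:.3f}")
```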

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B978044464241750135X

Object-Oriented Programming

Raymond Greenlaw , Y. Daniel Liang , in Encyclopedia of Information Systems, 2003

II.B. Modularity

Modularity is a cardinal principle of programming. It is intended to control the complexity of a software system through the use of the divide-and-conquer approach. A complex system can be decomposed into a set of loosely coupled but cohesive modules.

The concept of modularity was developed for programming using procedural programming languages, in which the modules take the form of procedures and functions. The same concept applies to OOP, in which modules take the form of classes. Decomposition of a software system into smaller modules in an object-oriented system means designing classes to model the system.

A class should be cohesive and describe a single entity or a set of similar operations. You can use a class for students, for example, but do not combine students and staff in the same class, since students and staff will have different sets of operations. A single entity with too many responsibilities can be broken into several classes to separate responsibilities.
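As a small illustration of this advice, the sketch below keeps student and staff responsibilities in separate, cohesive classes rather than one mixed class. The class, attribute, and method names are hypothetical, chosen only for illustration.

```python
# Illustrative sketch: one cohesive class per entity instead of a mixed class.
class Student:
    def __init__(self, name: str, program: str):
        self.name = name
        self.program = program

    def register_for_course(self, course: str) -> str:
        # Operation meaningful only for students.
        return f"{self.name} registered for {course}"


class Staff:
    def __init__(self, name: str, department: str):
        self.name = name
        self.department = department

    def assign_teaching_duty(self, course: str) -> str:
        # Operation meaningful only for staff.
        return f"{self.name} assigned to teach {course}"
```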

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B0122272404001246

Graph Creation and Analysis for Linking Actors: Application to Social Data

Charles Perez , Rony Germon , in Automating Open Source Intelligence, 2016

Modularity

The modularity is a number that indicates how much a given graph may be organized into communities. The modularity (Newman, 2006) captures how good a given partition is compared with a randomly wired network. The random network is here calculated based on a randomization of the original graph, while keeping the degree of each node unchanged. Under this constraint, the probability of observing a link between nodes v and w equals k_v × k_w / (2L). Modularity Q, as expressed in the equation below, increases as the number of observed edges (stored in the adjacency matrix A) becomes significantly higher than the expected random ratio over the nodes that belong to the same community.

Q = \frac{1}{2L} \sum_{v,w} \left[ A_{vw} - \frac{k_v \, k_w}{2L} \right] \delta(c_v, c_w)

where:

δ is the Kronecker delta; it equals 1 if v and w belong to the same community and 0 otherwise.

k_v is the degree of node v

L is the number of edges in the graph

A_vw is the element located at row v and column w of the adjacency matrix A
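The following is a minimal sketch (an illustration, not taken from the chapter) of computing Q directly from the equation above, given an adjacency matrix and a community label for every node; numpy is assumed.

```python
# Minimal sketch: modularity Q computed directly from the equation above.
import numpy as np

def modularity(A: np.ndarray, labels) -> float:
    L = A.sum() / 2.0              # number of edges (A symmetric, unweighted)
    k = A.sum(axis=1)              # node degrees
    n = A.shape[0]
    Q = 0.0
    for v in range(n):
        for w in range(n):
            if labels[v] == labels[w]:              # Kronecker delta term
                Q += A[v, w] - k[v] * k[w] / (2.0 * L)
    return Q / (2.0 * L)

# Toy example: two triangles joined by one edge, split into two communities.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
print(round(modularity(A, [0, 0, 0, 1, 1, 1]), 3))
```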

This value can be used as a reference for clustering (Clauset, Newman, & Moore, 2004; Shiokawa, Fujiwara, & Onizuka, 2013) by successively merging the communities that yield the best increases in modularity.

The iterative process has the following five steps (a short code sketch follows the list):

1.

Each node belongs to a unique community.

2.

Consider each community pair, and evaluate the modularity score Q that could be obtained by merging them.

3.

Merge the communities that yield the highest variation in modularity (ΔQ).

4.

Repeat steps (2 and 3) until only one community remains.

5.

Return the partition that yielded the highest modularity score.
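This iterative merging is essentially the greedy agglomerative algorithm of Clauset, Newman, & Moore (2004) cited above. The sketch below relies on the implementation shipped with networkx (assumed available); the graph is illustrative only.

```python
# Sketch of the greedy, modularity-driven merging process described above.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.les_miserables_graph()          # illustrative graph only

# Start from singleton communities and repeatedly merge the pair of
# communities giving the largest modularity increase; the best partition
# found during the process is returned.
partition = greedy_modularity_communities(G)

print("communities:", len(partition))
print("modularity Q:", round(modularity(G, partition), 3))
```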

Note that many clustering approaches exist (e.g., the Girvan–Newman algorithm, the CFinder algorithm, the Markov Cluster Algorithm). When trying to identify communities, one should consider the benefits and drawbacks of each method in order to use the most appropriate one. Examples of comparison criteria are: the computation cost and the capacity of the algorithm to scale to large datasets; the capacity of the approach to identify the best number of communities; the possibility of identifying overlapping communities; etc.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128029169000075

From genotype to phenotype: looking into the black box

Jessica A. Bolker , in On Growth, Form and Computers, 2003

4.2.3 Patterns of change in modular systems

Modularity is important not just as a descriptor of phenotypes, but also as an influence on their evolution. A system built of interconnected modules is both economical of information (in whatever form: genetic or software code) and capable of particular kinds of change based on changes in the number, connectivity and context-dependent function of its parts. Raff (1996) describes dissociation, duplication and divergence, and co-option as evolutionary processes that apply to developmental modules, and that generate 'nonrandom variation within the existing modules that can lead to new internal patterns of order' (Raff, 1996, p. 325). The following discussion closely follows his.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124287655500372

Parallel and Distributed Systems

Dan C. Marinescu , in Cloud Computing (Second Edition), 2018

4.8 Soft Modularity versus Enforced Modularity

The progress made in system design is notable not in the least due to a number of principles guiding the design of parallel and distributed systems. One of these principles is specialization; this means that a number of functions are identified and an adequate number of system components are configured to provide these functions. For example, data storage is an intrinsic function and storage servers are a ubiquitous presence in most systems. This brings us to the modularity concept.

Modularity allows us to build a complex software system from a set of components built and tested independently. A requirement for modularity is to clearly define the interfaces between modules and enable the modules to work together. The steps involved in the transfer of the flow of control between the caller and the callee are:

1.

The caller saves its state, including the registers, the arguments, and the return address, on the stack.

2.

The callee loads the arguments from the stack, carries out the calculations, and then transfers control back to the caller.

3.

The caller adjusts the stack, restores its registers, and continues its processing.

Soft modularity. We distinguish soft modularity from enforced modularity. The former implies dividing a program into modules which call each other and communicate using shared memory or follow the procedure-call convention.

Soft modularity hides the details of the implementation of a module and has many advantages: once the interfaces of the modules are defined, the modules can be developed independently; a module can be replaced with a more elaborate, or with a more efficient one, as long as its interfaces with the other modules are not changed. The modules can be written using different programming languages and can be tested independently.

Soft modularity presents a number of challenges. It increases the difficulty of debugging; for instance, a call to a module with an infinite loop will never return. There could be naming conflicts and incorrect context specifications. The caller and the callee are in the same address space and may misuse the stack, e.g., the callee may use registers that the caller has not saved on the stack, and so on.

Strongly-typed languages may enforce soft modularity by ensuring type safety at compile time or at run time; they may reject operations or function calls which disregard the data types, or they may not allow class instances to have their class altered. Soft modularity may be affected by errors in the run-time system, errors in the compiler, or by the fact that different modules are written in different programming languages.
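The following is a small sketch of soft modularity, with Python standing in for whatever language the modules are written in: the caller depends only on an interface, so a module can later be replaced by a more efficient one without any other changes. The names used are illustrative only.

```python
# Sketch of soft modularity: caller and callee share an address space and
# interact through an ordinary procedure call against a defined interface.
from typing import Protocol

class Store(Protocol):
    def put(self, key: str, value: str) -> None: ...
    def get(self, key: str) -> str: ...

class SimpleStore:
    """One possible implementation; it could be swapped for another Store."""
    def __init__(self) -> None:
        self._data = {}
    def put(self, key: str, value: str) -> None:
        self._data[key] = value
    def get(self, key: str) -> str:
        return self._data[key]

def caller(store: Store) -> str:
    # Ordinary procedure calls; no message passing is involved.
    store.put("greeting", "hello")
    return store.get("greeting")

print(caller(SimpleStore()))
```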

Enforced modularity. The ubiquitous client–server paradigm is based on enforced modularity; this means that the modules are forced to interact only by sending and receiving messages. This paradigm leads to a more robust design; the clients and the servers are independent modules and may fail separately.

Moreover, the servers are stateless; they do not have to maintain state information. A server may fail and then come back up without the clients being affected, or even noticing the failure of the server. The system is more robust as it does not allow errors to propagate. Enforced modularity makes an attack less likely because it is difficult for an intruder to guess the format of the messages or the sequence numbers of segments when messages are transported by TCP.

Last but not least, resources can be managed more efficiently. For example, a server typically consists of an ensemble of systems: a front-end system which dispatches the requests to multiple back-end systems which process the requests. Such an architecture exploits the elasticity of a computer cloud infrastructure: the larger the request rate, the larger the number of back-end systems activated.

The client–server paradigm. This paradigm allows systems with different processor architectures, e.g., 32-bit or 64-bit, with different operating systems, e.g., multiple versions of operating systems such as Linux, Mac OS, or Microsoft Windows, and with different libraries and other system software, to cooperate. The client–server paradigm increases flexibility and choice; the same service could be available from multiple providers, a server may use services provided by other servers, a client may use multiple servers, and so on.

Heterogeneity of systems based on the client–server paradigm is less of a blessing; the issues it creates outweigh its appeal. Heterogeneity adds to the complexity of the interactions between a client and a server, as it may require conversion from one data format to another, e.g., from little-endian to big-endian or vice versa, or conversion to a canonical data representation. There is also uncertainty in terms of response time, as some servers may be more performant than others or may have a lower workload.

A major difference between the basic models of grid and cloud computing is that the former does not impose any restrictions regarding heterogeneity of the computing platforms, whereas homogeneity used to be a basic tenet of computer cloud infrastructure. Originally, a computer cloud was a collection of homogeneous systems, systems with the same architecture and running under the same or very similar system software. We have already seen in Section 2.4 that present-day computer clouds exhibit some level of heterogeneity.

The clients and the servers communicate through a network that can be congested. Transferring large volumes of data through the network can be time-consuming; this is a major concern for data-intensive applications in cloud computing. Communication through the network adds additional delay to the response time. Security becomes a major concern, as the traffic between a client and a server can be intercepted.

Remote Procedure Call (RPC). RPCs were introduced in the early 1970s by Bruce Nelson and used for the first time at PARC (Palo Alto Research Center). PARC is credited with many innovative ideas in distributed systems, including the development of the Ethernet, GUI interfaces, bitmap displays, and the Alto system.

RPC is often used for the implementation of client–server interactions. For example, the Network File System (NFS) introduced in 1984 was based on Sun's RPC. Many programming languages support RPCs. For example, Java Remote Method Invocation (Java RMI) provides functionality similar to that of UNIX RPC methods; XML-RPC uses XML to encode HTTP-based calls. The RPC standard is described in RFC 1831.

To use an RPC, a process may use the special services PORTMAP or RPCBIND, available at port 111, to register and for service lookup. RPC messages must be well-structured; they identify the RPC and are addressed to an RPC daemon listening at an RPC port. XDR is a machine-independent representation standard for RPC.

RPCs reduce the fate sharing between caller and callee. RPCs take longer than local calls due to communication delays. Several RPC semantics are used to overcome potential communication issues:

At least once: a message is resent several times and an answer is expected. The server may end up executing a request more than once, but an answer may never be received. This semantics is suitable for operations free of side effects.

At most once: a message is acted upon at most once. The sender sets up a timeout for receiving the response. When the timeout expires, an error code is delivered to the caller. This semantics requires the sender to keep a history of the time-stamps of all messages, as messages may arrive out of order. This semantics is suitable for operations which have side effects.

Exactly once: it implements the at most once semantics and requests an acknowledgment from the server.
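As a concrete illustration of the RPC mechanics, the sketch below uses Python's standard xmlrpc modules (XML-RPC is one of the RPC flavors mentioned above). The host, port, and function names are illustrative; the retry and timeout policies required by the semantics listed above are omitted.

```python
# Minimal RPC sketch using Python's standard library XML-RPC support.

# --- server side ---
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add, "add")
# server.serve_forever()   # blocks; run in a separate process in practice

# --- client side ---
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
# print(proxy.add(2, 3))   # the remote call looks like a local procedure call
```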

Applications of the client–server paradigm. The large spectrum of applications attests to the role played by the client–server paradigm in the modern computing landscape. Examples of popular applications of the client–server paradigm are numerous and include: the World Wide Web, the Domain Name System (DNS), X-windows, electronic mail (see Figure 4.6A), event services (see Figure 4.6B), and so on.

Figure 4.6

Figure 4.6. (A) Email service; the sender and the receiver communicate asynchronously using inboxes and outboxes. Mail daemons run at each site. (B) An event service supports coordination in a distributed system environment. The service is based on the publish-subscribe paradigm; an event producer publishes events and an event consumer subscribes to events. The server maintains queues for each event and delivers notifications to clients when an event occurs.

The World Wide Web illustrates the power of the client–server paradigm and its effects on society. As of June 2011 there were close to 350 million web sites; in 2017 there are around one billion web sites. The web allows users to access resources such as text, images, digital music, and any imaginable type of data previously stored in a digital format. A web page is created using a description language called HTML (Hypertext Markup Language). The information in each web page is encoded and formatted according to some standard, e.g., GIF or JPEG for images, MPEG for videos, MP3 or MP4 for audio, and so on.

The web is based upon a "pull" paradigm; the resources are stored at the server's site and the client pulls them from the server. Some web pages are created "on the fly," others are fetched from the disk. The client, called a web browser, and the server communicate using an application-level protocol called HTTP (HyperText Transfer Protocol) built on top of the TCP transport protocol.

The web server, also called an HTTP server, listens at a well-known port, port 80, for connections from clients. Figure 4.7 shows the sequence of events when a client browser sends an HTTP request to a server to retrieve some information, the server constructs the page on the fly, and then the browser sends another HTTP request for an image stored on the disk. First, a TCP connection between the client and the server is established using a process called a three-way handshake. The client provides an arbitrary initial sequence number in a special segment with the SYN control bit on; then the server acknowledges the segment and adds its own arbitrarily chosen initial sequence number; finally, the client sends its own acknowledgment ACK as well as the HTTP request, and the connection is established. The time elapsed from the initial request until the server's acknowledgment reaches the client is called the RTT (Round-Trip Time).

Figure 4.7

Figure 4.7. Client–server communication, the World Wide Web. The three-way handshake involves the first three messages exchanged between the client browser and the server. Once the TCP connection is established, the HTTP server takes its time to construct the page to respond to the first request; to satisfy the second request, the HTTP server must retrieve an image from the disk. The response time includes the RTT, the server residence time, and the data transmission time.

The response time, defined as the time from the instant the first bit of the request is sent until the last bit of the response is received, consists of several components: the RTT, the server residence time (the time it takes the server to construct the response), and the data transmission time. RTT depends on the network latency, the time it takes a packet to cross the network from the sender to the receiver. The data transmission time is determined by the network bandwidth. In turn, the server residence time depends on the server load.
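A short back-of-the-envelope sketch of this decomposition follows; all of the numbers are made up purely for illustration.

```python
# Illustrative response-time decomposition: RTT + residence + transmission.
rtt = 0.040                 # round-trip time in seconds (network latency)
server_residence = 0.015    # time the server needs to construct the response
response_size = 250_000     # bytes to transfer
bandwidth = 10_000_000      # bytes per second available on the path

transmission = response_size / bandwidth
response_time = rtt + server_residence + transmission
print(f"response time ~ {response_time * 1000:.1f} ms")   # ~80 ms here
```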

Often, the client and the server do not communicate directly, but through a proxy server, as shown in Figure 4.8. Proxy servers can provide multiple functions; for example, they may filter client requests and determine whether or not to forward the request based on some filtering rules. A proxy server may redirect a request to a server in close proximity to the client or to a less loaded server. A proxy can also act as a cache and provide a local copy of a resource, rather than forward the request to the server.

Figure 4.8

Figure 4.8. A client can communicate directly with the server, it can communicate through a proxy, or it may use tunneling to cross the network.

Another type of client–server communication is HTTP tunneling, used most often as a means of communication from network locations with restricted connectivity. Tunneling means encapsulation of a network protocol; in our example HTTP acts as a wrapper for the communication channel between the client and the server, see Figure 4.8.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128128107000054

Modularity versus Interactive Processing, Psychology of

R.E. Alterman , in International Encyclopedia of the Social & Behavioral Sciences, 2001

Theories of modularity function to root the structural elements of language and thought in special-purpose, neurologically hardwired faculties of mind. Interaction accounts complement modularity stories about biologically produced structures with stories about external structures used during, and created by, activity. They also invite skepticism about the assumptions made in the modularity arguments. There are three points of contrast between the assumptions underlying modularity and interactionism. Modularity accounts assume as a basic unit of analysis a reduction of mind to what goes on in the head; interactionist accounts assume interaction, particularly social interaction, as the basic unit of analysis. A second point of contrast concerns the difference between structures as biologically determined and external structures that emerge as a product of human activity. The final point concerns the historical aspects of cognition. These differences in assumption lead to some critical differences in viewpoint. Whether language and its structure can be reduced to an analysis independent of semantics and the social interaction in which language (and language learning) occurs is a point of debate. With regard to Fodor's version of modularity, the issue pivots around the difference between a 'language of thought' and the structure of thinking, which depends on the history of such an activity within a community of actors. Additionally, many interactionists argue against the notion of internal representations of the sort supported by a 'language of thought' argument.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B0080430767015436

Rewriting

Nachum Dershowitz, David A. Plaisted, in Handbook of Automated Reasoning, 2001

10 Programming

Rewrite systems are readily used as a programming language. If one requires of the programmer that all programs be terminating, then rewriting may be used as is to compute normal forms. With ground confluence, one is assured of their uniqueness.
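To make "rewriting as a programming language" concrete, here is a small illustrative sketch (not from the Handbook) of computing normal forms. Terms are nested tuples, strings starting with "?" are rule variables, and the example rules implement Peano addition, a terminating and confluent system, so the normal form is unique; an innermost strategy is used, which suffices because the rules terminate.

```python
# Toy rewriting engine: compute the normal form of a term under a rule set.

def match(pattern, term, subst):
    # Try to extend substitution subst so that pattern matches term.
    if isinstance(pattern, str) and pattern.startswith("?"):
        if pattern in subst:
            return subst if subst[pattern] == term else None
        return {**subst, pattern: term}
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and pattern[0] == term[0] and len(pattern) == len(term):
        for p, t in zip(pattern[1:], term[1:]):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(term, subst):
    if isinstance(term, str) and term.startswith("?"):
        return subst[term]
    if isinstance(term, tuple):
        return (term[0],) + tuple(substitute(a, subst) for a in term[1:])
    return term

def normalize(term, rules):
    # Normalize subterms first, then rewrite at the root and repeat.
    if isinstance(term, tuple):
        term = (term[0],) + tuple(normalize(a, rules) for a in term[1:])
    for lhs, rhs in rules:
        s = match(lhs, term, {})
        if s is not None:
            return normalize(substitute(rhs, s), rules)
    return term

# add(0, y) -> y ;  add(s(x), y) -> s(add(x, y))
RULES = [
    (("add", ("0",), "?y"), "?y"),
    (("add", ("s", "?x"), "?y"), ("s", ("add", "?x", "?y"))),
]

two = ("s", ("s", ("0",)))
one = ("s", ("0",))
print(normalize(("add", two, one), RULES))   # s(s(s(0))), i.e. 3
```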

Modularity is critical in the programming context. The idea of modularity is to infer properties of a combination of two rewrite systems from properties of their parts:

10.1. Theorem Toyama 1987

The union of two confluent rewrite systems sharing no function symbols or constants is also confluent.

An example showing that confluence is not preserved when a constructor is shared is:

f(x, x) → a
f(x, c(x)) → b
e → c(e)

In the combined nonterminating, non-left-linear system, f(e, e) reduces both to a and to b.

10.2. Theorem Toyama, Klop and Barendregt 1995

The union of two convergent left-linear rewrite systems sharing no function symbols or constants is also convergent.

For a proof, see [Marchiori 1995].

These results unfortunately do not carry over to the prevalent situation of shared constructors. One result that does is:

10.3. Theorem Gramlich 1995, Dershowitz 1995

The union of two convergent rewrite systems, sharing only constructor symbols and all of whose critical pairs are overlays, is convergent.

This is because innermost termination of such systems implies termination, while innermost termination is preserved by such unions [Kurihara and Kaji 1990].

10.4. Definition

We say that two rewrite systems R and S are mutually orthogonal (symbolized R ⊥ S) if there are no non-trivial critical pairs between rules of the different systems.

As a corollary of Theorem 5.15, we have:

10.5. Theorem

The union of two mutually-orthogonal rewrite systems is confluent if it is terminating.

Analogous to Theorem 5.23, we have:

10.6. Theorem Raoult and Vuillemin 1980

The union of two left-linear confluent mutually orthogonal rewrite systems is confluent.

The related study of properties of combinations of algebraic rewriting with versions of the lambda calculus began with [Breazu-Tannen and Gallier, to appear].

Many programs (interpreters, for example) do not always terminate. Nevertheless, we would want to compute normal forms whenever they exist. Confluent systems have at most one normal form per input term, and orthogonal systems are confluent. The left-linearity restriction for orthogonal systems is reasonable in the programming context, since the formal parameters of procedure definitions are distinct. It is also convenient for efficiency of pattern matching. To check if a term f(s, t) is an instance of a left-hand side f(x, x), it is necessary to check that s and t are identical, which can require time proportional to the size of s or t. (Of course, there are also cases where it is very convenient to use non-left-linear rules.)

To find the unique normal form for orthogonal systems, when it exists, one can use the following strategy for choosing the redex at which to apply a rule:

10.7. Definition Outermost Rewriting

A rewriting step s → t is outermost with respect to some rewrite system if no rule applies at a symbol closer to the root symbol (in the tree representation of terms).

10.8. Theorem

Outermost Normalization [O'Donnell 1977]

For any orthogonal system, if no outermost step is perpetually ignored, the normal form, if there is one, will be reached.

Outermost rewriting of expressions is similarly used to compute normal forms in combinatory logic and head normal forms in the lambda calculus.

In this way, orthogonal systems provide a simple, pattern-directed (first-order) functional programming language, in which the orthogonal conditional operator

if(T, x, y) → x
if(F, x, y) → y

can also conveniently be incorporated. Various strategies have been developed for efficient computation in special cases. Moreover, orthogonal systems lend themselves easily to parallel evaluation schemes.

Huet and Lévy [1991] developed a theory of "needed redexes" and optimal derivations for orthogonal systems. The need for a redex is, however, undecidable, except in special cases [Hoffmann and O'Donnell 1982, Huet and Lévy 1991]. Chew [1980] used congruence-closure techniques to cache results of prior sequences of orthogonal rewrites and improve performance; this idea was extended to a class of non-orthogonal convergent systems in [Verma 1995].

Since programs are often nonterminating, techniques for showing confluence of nonterminating conditional rewrite systems are useful:

10.9. Definition

Conditional Orthogonality [Bergstra and Klop 1986]

A conditional rewrite system is orthogonal if

1.

every variable occurring on the right side or in a condition also appears on the left,

2.

each variable occurs at most once in a left-hand side of a rule,

3.

one side of each condition is a ground normal form,

4.

no left-hand side unifies with a renamed nonvariable subterm of any other left-hand side or with a proper subterm of itself, and

5.

no left-hand side is merely a variable.

10.10. Theorem Bergstra and Klop 1986

Every orthogonal conditional rewrite system is confluent.

This definition of orthogonality could be weakened to allow overlaps when the conjunction of the conditions of the overlapping rules cannot be satisfied by the rules of the system. This is the case with the Conditional Append example, since only the last two rules overlap, but null(ϵ) can never be F.

As indicated earlier, there are various methods of defining the semantics of conditional rewrite systems. For example, if we have arbitrary conditions as in

p(c) | a → b
¬p(c) | a → b

can we rewrite a to b? We might say yes, since either p(c) is true or ¬p(c) is. We might say no, since neither condition can be proved. For discussions of logic-based semantics and alternative operational semantics for conditional systems, see [Brand et al. 1979, Plaisted 1987, Dershowitz and Plaisted 1988, Dershowitz and Okada 1990].

Conditional equations provide a natural bridge between functional programming, based on equational semantics, and logic programming, based on Horn clauses. Note that the above rules can be expressed as

p(c) = T | a → b
¬p(c) = T | a → b

In this fashion, we can convert conditions involving arbitrary formulae to conditions involving equations. However, the law of the excluded middle no longer holds; we do not have x = T or x = F for all x. This changes the semantics, of course. Interpreting definite Horn clauses p ∨ ¬q_1 ∨ … ∨ ¬q_n as conditional rewrite rules, q_1 = T ∧ ⋯ ∧ q_n = T | p → T, gives a system satisfying the constraints of Theorem 9.3, because predicate symbols are never nested in the "head" p of a clause. Furthermore, all critical pairs are joinable, since all right-hand sides are just T.

Nonetheless, logic programming permits variables to be bound by unification, whereas conditional rewriting typically uses matching instead, which is more restrictive. To simulate a language like Prolog, something like "conditional narrowing" is needed. See [Dershowitz and Plaisted 1988] for one approach to conditional narrowing. (See [Baader and Snyder 2001, page 495] in Chapter 8 of this Handbook for the definition of narrowing and related equation-solving methods.) Solving existential queries for conditional equations corresponds to the logic-programming capability of resolution-based languages like Prolog. Goals of the form s =? t can be solved by a linear restriction of paramodulation akin to narrowing (for unconditional equations) and to the selected linear strategy for Horn-clause logic. If s and t are unifiable, then the goal is satisfied by any instance of their most general unifier. Alternatively, if there is a (renamed) conditional rule p | l → r such that l unifies with a nonvariable (selected) subterm of s via most general unifier μ, then the conditions in pμ are solved, say via substitution ρ, and the new goal becomes sμρ =? tμρ.

Suppose we wish to solve

append(x, y) =? x

using Conditional Append (9.1). To apply the conditional rule, we need first to solve null(x) =? F using the (renamed) rule null(u : v) → F, thereby narrowing the original to

head(u : v) : append(tail(u : v), y) =? u : v

Straightforward rewriting reduces this to

u : append(v, y) =? u : v

to which the first rule for append applies (letting v be ϵ), giving a new goal u : y =? u : v. Since the two terms are now unifiable, this process has produced the solution x ↦ u : ϵ and y, v ↦ ϵ.

For ground confluent conditional systems, any equationally satisfiable goal can be solved by the method outlined above. Some recent proposals for logic programming languages, incorporating equality, adopt such an operational mechanism. The idea of adding rewrite-based equation solving to rewriting to provide a functional-logic language originated with [Dershowitz 1985, Fribourg 1985, Goguen and Meseguer 1986, Dershowitz and Plaisted 1988]. A number of experimental languages combine narrowing with outermost ("lazy") evaluation to add goal-solving capabilities within functional languages. See [Reddy 1986, Hanus 1994].

Simplification via terminating rules is a very powerful feature, particularly when defined function symbols are allowed to be arbitrarily nested in left-hand sides (which is not permitted with orthogonal rules). Assuming ground convergence, any strategy can be used for simplification, and completeness of the goal-solving process is preserved. One way negation can be handled is by incorporating negative information in the form of rewrite rules which are then used to simplify subgoals to F. Combined with eager simplification, this approach has the advantage of allowing unsatisfiable goals to be pruned, thereby avoiding some potentially infinite paths. Various techniques are also available to help avoid some superfluous paths that cannot lead to solutions.

The semantics of rewriting with infinite structures was explored in [Dershowitz, Kaplan and Plaisted 1991, Kennaway, Klop, Sleep and de Vries 1995].

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780444508133500114

Introduction

David Wall , in Multi-Tier Awarding Programming with PHP, 2004

1.2.2 Reliability

With modularity also comes reliability. It's a basic tenet of software engineering that incremental development is generally good. You get the basic framework going, and then add and test one feature at a time. You test recursively to verify that newly added features haven't broken old ones. A multi-tier software system in which separate programs on each layer handle specific tasks is inherently compliant with this principle. When you add a class, you can (and should) test it, to make sure that it doesn't break what was working before.
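A minimal sketch of this add-a-class-then-test-it habit follows, using Python's standard unittest framework as a stand-in for the book's PHP setting; the class and test names are illustrative only.

```python
# Sketch: add one class, then add tests that cover the new feature and
# re-check previously working behavior.
import unittest

class Cart:
    def __init__(self):
        self.items = []
    def add(self, item: str) -> None:
        self.items.append(item)
    def count(self) -> int:
        return len(self.items)

class CartTest(unittest.TestCase):
    def test_newly_added_feature(self):
        cart = Cart()
        cart.add("book")
        self.assertEqual(cart.count(), 1)    # new feature works

    def test_old_behavior_still_intact(self):
        self.assertEqual(Cart().count(), 0)  # previously working behavior

if __name__ == "__main__":
    unittest.main()
```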

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780127323503500016

Case Studies

Israel Koren , C. Mani Krishna , in Fault-Tolerant Systems, 2007

7.1.1 Architecture

The NonStop systems have followed four key design principles, listed beneath.

Modularity. The hardware and software are constructed of modules of fine granularity. These modules constitute units of failure, diagnosis, service, and repair. Keeping the modules as decoupled as possible reduces the probability that a fault in one module will affect the operation of another.

Fail-Fast Operation. A fail-fast module either works properly or stops. Thus, each module is self-checking and stops upon detecting a failure. Hardware checks (through error-detecting codes; see Chapter 3) and software consistency tests (see Chapter 5) support fail-fast operation.

Single Failure Tolerance. When a single module (hardware or software) fails, another module immediately takes over. For processors, this means that a second processor is available. For storage modules, it means that the module and the path to it are duplicated.

Online Maintenance. Hardware and software modules can be diagnosed, disconnected for repair, and then reconnected, without disrupting the entire system's operation.

We next discuss briefly the original architecture of the NonStop systems, focusing on the fault-tolerance features. In the next two sections, the maintenance aids and software support for fault tolerance are presented. Finally, we describe the modifications which have been made to the original architecture.

Although there have been several generations of NonStop systems, many of the underlying principles remain the same and are illustrated in Figure 7.1. The system consists of clusters of computers, in which a cluster may include up to 16 processors. Each custom-designed processor has a CPU, a local memory containing its own copy of the operating system, a bus control unit, and an I/O channel. The CPU differs from standard designs in its extensive error detection capabilities to support the fail-fast mode of operation. Error detection on the datapath is achieved through parity checking and prediction, whereas the control part is checked using parity, detection of illegal states, and specially designed self-checking logic (the description of which is beyond the scope of this book, but a pointer to the literature is provided in the Further Reading section). In addition, the design includes several serial-scan shift registers, allowing fast testing to isolate faults in field-replaceable units.

Figure 7.1. Original NonStop system architecture.

The memory is protected with a Hamming code capable of single-error correction and double-error detection (see Section 3.1). The address is protected with a single-error-detection parity code.

The cache has been designed to perform retries to take care of transient faults. There is also a spare memory module that can be switched in if permanent failures occur. The cache supports a write-through policy, guaranteeing the existence of a valid copy of the data in the main memory. A parity error in the cache will force a cache miss followed by refetching of the data from the main memory.

Parity checking is not limited to memory units but is also used internally in the processor. All units that do not change the data, such as buses and registers, propagate the parity bits. Other units that alter the data, such as arithmetic units and counters, require special circuits that predict the parity bits based on the data and parity inputs. The predicted parity bits can then be compared to the parity bits generated from the produced outputs, and any mismatch between the two will raise a parity error indication. This technique is discussed in Chapter 9 and is very suitable for adders. Extending it to multipliers would result in a very complicated circuit, and consequently, a different technique to detect faults in the multiplier has been followed. After each multiply operation, a second multiplication is performed with the two operands exchanged and one of them shifted prior to the operation. Since the correlation between the results of the two multiplications is small, a simple circuit can detect faults in the multiply operation. Note that even a permanent fault will be detected because the same multiplication is not repeated. This error detection scheme is similar to the recomputation with shifted operands technique for detecting faults in arithmetic operations (see Section 5.2.4).
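The following is a software sketch of the idea behind this multiplier check: recompute with the operands exchanged and one of them shifted, then compare after undoing the shift. The actual NonStop check is performed in hardware and its details differ; this toy version only illustrates the principle.

```python
# Sketch of recomputation with exchanged/shifted operands for fault detection.
def checked_multiply(a: int, b: int) -> int:
    result = a * b                    # primary multiplication
    recomputed = b * (a << 1)         # operands exchanged, one shifted left
    if recomputed != (result << 1):   # undo the shift and compare
        raise RuntimeError("multiplier fault detected")
    return result

print(checked_multiply(6, 7))   # 42
```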

Note the absence of a shared memory in Figure 7.1. A shared memory can simplify the communication among processors but may become a single point of failure. The 16 (or fewer) processors operate independently and asynchronously and communicate with each other through messages sent over the dual Dynabuses. The Dynabus interface is designed such that a single processor failure will not disable both buses. Similar duplication is also followed in the I/O systems, in which a group of disks is controlled by dual-ported controllers connected to I/O buses from two different processors. One of the two ports is designated as the primary. If the processor (or its associated I/O bus) that is connected to the primary port fails, the controller switches to the secondary/backup port. With dual-ported controllers and dual-ported I/O devices, four separate paths run to each device. All data transfers are parity-checked, and a watchdog timer detects if a controller stops responding or if a nonexistent controller was addressed.

The above design allows the system to continue its operation despite the failure of any single module. To further support this goal, the power, cabling and packaging were also carefully designed. Parts of the system are redundantly powered from two different power supplies, allowing them to tolerate a power supply failure. In addition, battery backups are provided so that the system state can be preserved in case of a power failure.

The controllers have a fail-fast requirement similar to the processors. This is achieved through the use of dual lock-stepped microprocessors (executing the same instructions in a fully synchronized fashion) with comparison circuits to detect errors in their operation, and self-checking logic to detect errors in the remaining circuitry within the controller. The two independent ports within the controller are implemented using physically separated circuits to prevent a fault in one from affecting the other.

The system supports disk mirroring (see Section 3.2), which, when used, provides eight paths for data read and write operations. Disk mirroring is further discussed in Section 7.1.3. The disk data is protected by end-to-end checksums (see Section 3.1). For each data block, the processor calculates a checksum and appends it to the data written to the disk. This checksum is verified by the processor when the data block is read from the disk. The checksum is used for error detection, whereas the disk mirroring is used for data recovery.
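A brief sketch of the end-to-end checksum idea follows: append a checksum on write and verify it on read. CRC-32 from Python's zlib is used here purely as a stand-in for whatever checksum the NonStop systems actually compute.

```python
# Sketch of end-to-end block checksums: detect corruption on read-back.
import zlib

def write_block(data: bytes) -> bytes:
    crc = zlib.crc32(data).to_bytes(4, "big")
    return data + crc                       # block as stored on disk

def read_block(stored: bytes) -> bytes:
    data, crc = stored[:-4], stored[-4:]
    if zlib.crc32(data).to_bytes(4, "big") != crc:
        raise IOError("checksum mismatch: block corrupted")  # error detection
    return data   # recovery itself would come from the mirror disk

block = write_block(b"payroll records")
assert read_block(block) == b"payroll records"
```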

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780120885251500109